aid (string) | mid (string) | abstract (string) | related_work (string) | ref_abstract (dict) | title (string) | text_except_rw (string) | total_words (int64)
---|---|---|---|---|---|---|---
1906.07138
|
2900464026
|
Mapping road networks today is labor-intensive. As a result, road maps have poor coverage outside urban centers in many countries. Systems to automatically infer road network graphs from aerial imagery and GPS trajectories have been proposed to improve coverage of road maps. However, because of high error rates, these systems have not been adopted by mapping communities. We propose machine-assisted map editing, where automatic map inference is integrated into existing, human-centric map editing workflows. To realize this, we build Machine-Assisted iD (MAiD), where we extend the web-based OpenStreetMap editor, iD, with machine-assistance functionality. We complement MAiD with a novel approach for inferring road topology from aerial imagery that combines the speed of prior segmentation approaches with the accuracy of prior iterative graph construction methods. We design MAiD to tackle the addition of major, arterial roads in regions where existing maps have poor coverage, and the incremental improvement of coverage in regions where major roads are already mapped. We conduct two user studies and find that, when participants are given a fixed time to map roads, they are able to add as much as 3.5x more roads with MAiD.
|
propose improving the segmentation output by using a conditional generative adversarial network @cite_13 . They train the segmentation CNN not only to output the ground truth labels (with a mean-squared-error loss), but also to fool a discriminator CNN that is trained to distinguish between the ground truth labels and the segmentation CNN outputs.
|
{
"abstract": [
"Road detection with high-precision from very high resolution remote sensing imagery is very important in a huge variety of applications. However, most existing approaches do not automatically extract the road with a smooth appearance and accurate boundaries. To address this problem, we proposed a novel end-to-end generative adversarial network. In particular, we construct a convolutional network based on adversarial training that could discriminate between segmentation maps coming either from the ground truth or generated by the segmentation model. The proposed method could improve the segmentation result by finding and correcting the difference between ground truth and result output by the segmentation model. Extensive experiments demonstrate that the proposed method outperforms the state-of-the-art methods greatly on the performance of segmentation map."
],
"cite_N": [
"@cite_13"
],
"mid": [
"2768705966"
]
}
|
Machine-Assisted Map Editing
|
In many countries, road maps have poor coverage outside urban centers. For example, in Indonesia, roads in the OpenStreetMap dataset [9] cover only 55% of the country's road infrastructure; the closest mapped road to a small village may be tens of miles away. Map coverage improves slowly because mapping road networks is very labor-intensive. For example, when adding roads visible in aerial imagery, users need to perform repeated clicks to draw lines corresponding to road segments.
This issue has motivated significant interest in automatic map inference. Several systems have been proposed for automatically constructing road maps from aerial imagery [6,11] and GPS trajectories [4,15]. Yet, despite over a decade of research in this space, these systems have not gained traction in OpenStreetMap and other mapping communities. Indeed, OpenStreetMap contributors continue to add roads solely by tracing them by hand.
Fundamentally, high error rates make full automation impractical. Even state-of-the-art automatic map inference approaches have error rates between 5% and 10% [2,15]. Navigating the road network using road maps with such high frequencies of errors would be virtually impossible.
Thus, we believe that automatic map inference can only be useful when it is integrated with existing, human-centric map editing workflows. In this paper, we propose machine-assisted map editing to do exactly that.
Our primary contribution is the design and development of Machine-Assisted iD (MAiD), where we integrate machine-assistance functionality into iD, a web-based OpenStreetMap editor. At its core, MAiD replaces manual tracing of roads with human validation of automatically inferred road segments. We designed MAiD with a holistic view of the map editing process, focusing on the parts of the workflow that can benefit substantially from machine-assistance. Specifically, MAiD accelerates map editing in two ways.
In regions where the map has low coverage, MAiD focuses the user's effort on validation of major, arterial roads that form the backbone of the road network. Incorporating these roads into the map is very useful since arterial roads are crucial to many routes. At the same time, because major roads span large distances, validating automatically inferred segments covering major roads is significantly faster than tracing the roads manually. However, road networks inferred by map inference methods include both major and minor roads. Thus, we propose a novel shortest-path-based pruning scheme that operates on an inferred road network graph to retain only inferred segments that correspond to major roads.
In regions where the map has high coverage, further improving map coverage requires users to painstakingly scan the aerial imagery and other data sources for unmapped roads. We reduce this scanning time by adding a "teleport" feature that immediately pans the user to an inferred road segment. Because many inferred segments correspond to service roads and residential roads that are not crucial to the road network, we design a segment ranking scheme to prioritize segments that are more useful.
We find that existing schemes to automatically infer roads from aerial imagery are not suitable for the interactive workflow in MAiD. Segmentation-based approaches [6,11,14], which apply a CNN to label pixels in the imagery as "road" or "non-road", have low accuracy because they require an error-prone post-processing stage to extract a road network graph from the pixel labels. Iterative graph construction (IGC) approaches [2,17] improve accuracy by extracting road topology directly from the CNN, but have execution times six times slower than segmentation, which is too slow for interactivity.
To facilitate machine-assisted interactive mapping, we develop a novel method for extracting road topology from aerial imagery that combines the speed of segmentation-based approaches with the high accuracy of iterative graph construction (IGC) approaches. Our method adapts the IGC process to use a CNN that outputs road directions for all pixels in one shot; this substantially reduces the number of CNN evaluations, cutting inference time for IGC by almost 8x with near-identical accuracy. Furthermore, in contrast to prior work, our approach infers not only unmapped roads, but also their connections to an existing road network graph.
To evaluate MAiD, we conduct two user studies where we compare the mapping productivity of our validation-based editor (coupled with our map inference approach) to an editor that requires manual tracing. In the first study, we ask participants to map roads in an area of Indonesia with no coverage in OpenStreetMap, with the goal of maximizing the percentage of houses covered by the mapped road network. We find that, given a fixed time to map roads, participants are able to produce road network graphs with 1.7x the coverage and comparable error when using MAiD. In the second study, participants add roads in an area of Washington where major roads are already mapped. With MAiD, participants add 3.5x more roads with comparable error.
In summary, the contributions of this paper are:
• We develop MAiD, a machine-assisted map editing tool that enables efficient human validation of automatically inferred roads.
• We propose a novel pruning algorithm and teleport feature that focus validation efforts on tasks where machine-assisted editing offers the greatest improvement in mapping productivity.
• We develop an approach for inferring road topology from aerial imagery that complements MAiD by improving on prior work.
• We conduct user studies to evaluate MAiD in realistic editing scenarios, where we use the current state of OpenStreetMap, and find that MAiD improves mapping productivity by as much as 3.5x.
The remainder of this paper is organized as follows. In Section 2, we discuss related work. Then, in Section 3, we detail the machine-assisted map editing features that we develop to incorporate automatic map inference into the map editing process. In Section 4, we introduce our novel approach for map inference from aerial imagery. Finally, we evaluate MAiD and our map inference algorithm in Section 5, and conclude in Section 6.
UI for Validation
We build MAiD, where we incorporate our machine-assistance features into iD, a web-based OpenStreetMap editor.
A road network graph is a graph where vertices are annotated with spatial coordinates (latitude and longitude) and edges correspond to straight-line road segments. MAiD inputs an existing road network graph G 0 = (V 0 , E 0 ) containing roads already incorporated in the map. To use MAiD, users first select a region of interest for improving map coverage. MAiD runs an automatic map inference approach in this region to obtain an inferred road network graph G = (V , E) containing inferred segments corresponding to unmapped roads. G should satisfy E 0 ∩ E = ∅; however, G and G 0 share vertices at the points where inferred segments connect with the existing map.
To make validation of automatically inferred segments intuitive, MAiD then produces a yellow overlay that highlights inferred segments in G over the aerial imagery. Although the overlay is partially transparent, in some cases it is nevertheless difficult to verify the position of the road in the imagery when the overlay is active; thus, users can press and hold a key to temporarily hide the overlay so that they can consult the imagery.
After verifying that an inferred segment is correct, users can left-click the segment to incorporate it into the map. Existing functionality in the editor can then be used to adjust the geometry or topology of the road. If an inferred segment is erroneous, users can either ignore the segment, or right-click on the segment to hide it. Figure 2 shows the MAiD editing workflow.
Mapping Major Roads
However, we find that this validation-based UI alone does not significantly increase mapping productivity. To address this, we first consider adding roads in regions where the map has low coverage.
In practice, when mapping these regions, users typically focus on tracing major, arterial roads that form the backbone of the road network. More precisely, major roads connect centers of activity within a city, or link towns and villages outside cities; in OpenStreetMap, these roads are labelled "primary", "secondary", or "tertiary". Users skip short, minor roads because they are not useful until these important links are mapped. Because major roads span large distances, though, tracing them is slow. Thus, validation can substantially reduce the mapping time for these roads.
Supporting efficient validation of major roads requires pruning the inferred segments that correspond to minor roads. However, automatically distinguishing major roads is difficult. Often, major roads have the same width and appearance as minor roads in aerial imagery. Similarly, while major roads in general have higher coverage by GPS trajectories, more trips may traverse minor roads in population centers than major roads in rural regions.
Rather than detecting major roads from the data source, we propose a shortest-path-based pruning scheme that operates on an inferred road network graph to retain only inferred segments that correspond to major roads. Intuitively, major roads are related to shortest paths: because major roads offer fast connections between far apart locations, they should appear on shortest paths between such locations.
We initially applied betweenness centrality [8], a measure of edge importance based on shortest paths. The betweenness centrality of an edge is the number of shortest paths between unique origin-destination pairs that pass through the edge. (When computing shortest paths in the road network graph, the length of an edge is simply the distance between its endpoints.) Formally, for a road network graph G = (V , E), the betweenness centrality of an edge e is:
g(e) = Σ_{s,t ∈ V} I[e ∈ shortest-path(s, t)]
We can then filter edges in the graph by thresholding based on the betweenness centrality scores.
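As a concrete illustration of this baseline, the sketch below computes unnormalized edge betweenness with networkx and thresholds on it. The library choice, the `length` edge attribute, and the function name are our assumptions for illustration; the paper does not specify an implementation.

```python
# A minimal sketch of the initial betweenness-centrality filter (illustrative;
# library choice and names are ours, not the paper's implementation).
import networkx as nx

def filter_by_betweenness(G: nx.Graph, threshold: float) -> nx.Graph:
    # Unnormalized edge betweenness; edge 'length' is the shortest-path weight.
    scores = nx.edge_betweenness_centrality(G, normalized=False, weight="length")
    kept = [e for e, score in scores.items() if score >= threshold]
    return G.edge_subgraph(kept).copy()
```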
However, we find that segments with high betweenness centrality often do not correspond to important links in the road network. When using a high threshold, the segments produced after thresholding cover major roads connecting dense clusters in the original graph, but miss connections to smaller clusters. When using a low threshold, most major roads are retained, but minor roads in dense clusters are also retained. Additionally, different regions require very different thresholds. Figure 3 shows an example of this issue: grey segments are pruned to produce a road network graph containing the blue segments; on the left, a high threshold misses the road to the eastern cluster, while on the right, a low threshold includes small roads in the northern and southern clusters.
Thus, we propose an adaptation of betweenness centrality for our pruning problem.
Pruning Minor Roads
Fundamentally, betweenness centrality fails to consider the overall spatial distribution of vertices in the road network graph. Dense but compact clusters in the road network should not have an undue influence on the pruning process.
Our pruning approach builds on our earlier intuition that major roads connect far apart locations. Thus, rather than considering all shortest paths in the graph, we focus on long shortest paths. Additionally, we observe that a shortest path may use minor roads near its source and destination, but edges in the middle of the path are more likely to be major roads.
We first cluster the vertices of the road network. Then, we compute shortest paths between cluster centers that are at least a minimum radius R apart. Rather than computing a score and then thresholding on it, we build a set of edges E major containing the edges corresponding to major roads that we will retain. For each shortest path, we trim a fixed distance from the ends of the path, and add all edges in the remaining middle of the path to E major . We prune any edge that does not appear in E major . Figure 4 illustrates our approach. We find that our approach is robust to the choice of the clustering algorithm; clustering is primarily used to avoid placing cluster centers at vertices that are at the end of a long road that only connects a small number of destinations (and, thus, isn't a major road). In our implementation, we use a simple grid-based clustering scheme: we divide the road network into a grid of r × r cells, remove cells that contain fewer than a minimum number of vertices, and then place cluster centers at the mean position of vertices in the remaining cells. We use r = 1 km and R = 5 km.
In practice, we find that for constant R, the runtime of our approach scales linearly with the length of the input road network.
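The following is a minimal sketch of this pruning scheme, assuming a networkx graph whose vertices carry projected coordinates in meters under `x`/`y` attributes and whose edges carry a `length` attribute. The trim distance and the snapping of cluster centers to the vertex nearest each cell mean are our adaptations for illustration, not the paper's exact implementation.

```python
# A sketch of the shortest-path-based pruning scheme. Assumptions: vertices
# carry projected coordinates in meters under 'x'/'y', edges carry 'length',
# and cluster centers are snapped to the vertex nearest each cell mean.
import math
from itertools import combinations
import networkx as nx

def grid_cluster_centers(G, r=1000.0, min_vertices=5):
    # Bucket vertices into r x r cells, drop sparse cells, and return one
    # center (the vertex nearest the cell mean) per remaining cell.
    cells = {}
    for v in G.nodes:
        x, y = G.nodes[v]["x"], G.nodes[v]["y"]
        cells.setdefault((int(x // r), int(y // r)), []).append(v)
    centers = []
    for members in cells.values():
        if len(members) < min_vertices:
            continue
        mx = sum(G.nodes[v]["x"] for v in members) / len(members)
        my = sum(G.nodes[v]["y"] for v in members) / len(members)
        centers.append(min(members, key=lambda v: (G.nodes[v]["x"] - mx) ** 2
                                                + (G.nodes[v]["y"] - my) ** 2))
    return centers

def prune_minor_roads(G, r=1000.0, R=5000.0, trim=1000.0):
    # Retain only edges on the middles of long shortest paths (E_major).
    def dist(u, v):
        return math.hypot(G.nodes[u]["x"] - G.nodes[v]["x"],
                          G.nodes[u]["y"] - G.nodes[v]["y"])
    e_major = set()
    for s, t in combinations(grid_cluster_centers(G, r), 2):
        if dist(s, t) < R:
            continue  # only consider far-apart cluster centers
        try:
            path = nx.shortest_path(G, s, t, weight="length")
        except nx.NetworkXNoPath:
            continue
        edges = list(zip(path, path[1:]))
        total = sum(G.edges[e]["length"] for e in edges)
        pos = 0.0
        for e in edges:
            l = G.edges[e]["length"]
            # Keep the edge only if it lies in the middle of the path,
            # i.e., at least `trim` meters from both path endpoints.
            if pos >= trim and pos + l <= total - trim:
                e_major.add(frozenset(e))
            pos += l
    kept = [e for e in G.edges if frozenset(e) in e_major]
    return G.edge_subgraph(kept).copy()
```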
MAiD Implementation. We add a button to toggle between an overlay containing all inferred roads, and an overlay after pruning. Figure 5 shows an example of pruning in Indonesia.
Teleporting to Unmapped Roads
In regions where the map already has high coverage, further improving the map coverage is tedious. Because most roads already appear in the map, users need to slowly scan the aerial imagery to identify unmapped roads in a very time-consuming process.
To address this, we add a teleport capability into the map editor, which pans the editor viewport directly to an area with unmapped roads. Specifically, we identify connected components in the inferred road network G, and pan to a connected component. This functionality enables a user to teleport to an unmapped component, add the roads, and then immediately teleport to another component. By eliminating the time cost of searching for unmapped roads in the imagery, we speed up the mapping process significantly.
However, there may be hundreds of thousands of connected components, and validating all of the components may not be practical. Thus, we propose a prioritization scheme so that longer roads that offer more alternate connections between points on the existing road network are validated first.
Let area(C) be the area of a convex hull containing the edges of a connected component C in G, and let conn(C) be the number of vertices that appear in both C and G 0 , i.e., the number of connections between the existing road network and the inferred component C. We rank connected components by score(C) = area(C) + λconn(C), for a weighting factor λ.
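A minimal sketch of this ranking follows, assuming shapely for the convex hull and the same `x`/`y` vertex attributes as above; the value of λ is illustrative.

```python
# A minimal sketch of the teleport prioritization score(C) = area(C) + λ·conn(C).
# shapely is our choice for the convex hull; λ's value is illustrative.
import networkx as nx
from shapely.geometry import MultiPoint

def rank_components(G_inferred, existing_vertices, lam=1000.0):
    # existing_vertices: vertices shared with the existing map G_0.
    scored = []
    for comp in nx.connected_components(G_inferred):
        pts = [(G_inferred.nodes[v]["x"], G_inferred.nodes[v]["y"]) for v in comp]
        area = MultiPoint(pts).convex_hull.area               # area(C)
        conn = sum(1 for v in comp if v in existing_vertices)  # conn(C)
        scored.append((area + lam * conn, comp))
    # Highest-scoring components are teleported to first.
    return sorted(scored, key=lambda s: s[0], reverse=True)
```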
FAST, ACCURATE MAP INFERENCE
In the map inference problem, given an existing road network graph G 0 = (V 0 , E 0 ), we want to produce an inferred road network graph G = (V , E) where each edge in E corresponds to a road segment visible in the imagery but missing from the existing map.
Prior work on extracting road topology from aerial imagery generally employs a two-stage segmentation-based architecture. First, a convolutional neural network (CNN) is trained to label pixels in the aerial imagery as either "road" or "non-road". To extract a road network graph, the CNN output is passed through a heuristic post-processing pipeline that begins with thresholding, morphological thinning [20], and Douglas-Peucker simplification [7]. However, robustly extracting a graph from the CNN output is challenging, and the post-processing pipeline is error-prone; often, noise in the CNN output is amplified in the final road network graph [2].
Rather than segmenting the imagery, RoadTracer [2] and IDL [17] propose an iterative graph construction (IGC) approach that improves accuracy by deriving the road network graph more directly from the CNN. IGC uses a step-by-step process to construct the graph, where each step contributes a short segment of road to a partial graph. To decide where to place this segment, IGC queries the CNN, which outputs the most likely direction of an unexplored road. Because we query the CNN on each step, though, IGC requires an order of magnitude more inference steps than segmentation-based approaches. We find that IGC is over six times slower than segmentation.
Thus, existing map inference methods are not suitable for the interactive nature of MAiD.
We combine the two-stage architecture of segmentation-based approaches with the road-direction output and iterative search process of IGC to achieve a high-speed, high-accuracy approach. In the first stage, rather than labeling pixels as road or non-road, we apply a CNN on the aerial imagery to annotate each pixel in the imagery with the direction of roads near that pixel. Figure 6 shows an example of these annotations. In the second stage, we iteratively construct a road network graph by following these directions in a search process.
Ground Truth Direction Labels
We first describe how we obtain the per-pixel road-direction information shown in Figure 6 from a ground truth road network G * = (V * , E * ). For each pixel (i, j), we compute a set of angles A * i, j . If there are no edges in G * within a matching threshold of (i, j), A * i, j = ∅. Otherwise, suppose e is the closest edge to (i, j), and let p be the closest point on e computed by projecting (i, j) onto e. Let P i, j be the set of points in G * that are a fixed distance D from p; put another way, P i, j contains each point p ′ such that p ′ falls on some edge e ′ ∈ E * , and the shortest distance from p to p ′ in G * is D.
Then, A * i, j = {angle(p ′ − (i, j)) | p ′ ∈ P i, j }, i.e., A * i, j contains the angle from (i, j) to each point in P i, j . Figure 7 shows an example of computing A * i, j .
Representing Road Directions. We represent A * as a 3-dimensional matrix U * that can be output by a CNN. We discretize the space of angles corresponding to road directions into b = 64 buckets, where the kth bucket covers the range of angles from 2kπ/b to 2(k+1)π/b. We then convert each set of road directions A * i, j to a b-vector u * (i, j), where u * (i, j) k = 1 if there is some angle in A * i, j falling into the kth angle bucket, and u * (i, j) k = 0 otherwise. Then, U * i, j,k = u * (i, j) k .
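A short sketch of this encoding step is below; the container for the per-pixel angle sets and the function name are our assumptions.

```python
# A short sketch of the direction-label encoding, with b = 64 as in the text.
# The angle_sets container and function name are our assumptions.
import numpy as np

def encode_directions(angle_sets, w, h, b=64):
    # angle_sets maps (i, j) -> iterable of angles in [0, 2π); pixels with
    # no nearby road (A*_{i,j} = ∅) simply stay all-zero.
    U = np.zeros((w, h, b), dtype=np.float32)
    for (i, j), angles in angle_sets.items():
        for theta in angles:
            k = int((theta % (2 * np.pi)) / (2 * np.pi) * b)  # kth bucket
            U[i, j, min(k, b - 1)] = 1.0
    return U
```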
CNN Architecture. Our CNN model inputs the RGB channels from the w × h aerial imagery, and outputs a w × h × b matrix U . We apply 16 convolutional layers in a U-Net-like configuration [12], where the first 11 layers downsample to 1/32 of the input resolution, and the last 5 layers upsample back to 1/4 of the input resolution. We use 3 × 3 kernels in all layers. We use sigmoid activation in the output layer, and rectified linear activation in all other layers. We use batch normalization in the 14 intermediate layers between the input and output layers.
We train the CNN on random 256 × 256 crops of the imagery with a mean-squared-error loss, Σ_{i,j,k} (U i, j,k − U * i, j,k ) 2 , and use the ADAM gradient descent optimizer [10].
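The PyTorch sketch below approximates a network in this spirit: 16 convolutional layers with 3 × 3 kernels, five stride-2 stages down to 1/32 resolution, upsampling back to 1/4 resolution, batch normalization in the 14 intermediate layers, and a sigmoid output over b channels. The channel widths, stride placement, and absence of skip connections are our guesses; the text specifies only layer counts, kernel size, activations, and the resolution schedule.

```python
# A PyTorch approximation of a network in this spirit; an illustrative
# sketch, not the paper's exact architecture.
import torch
import torch.nn as nn

def conv(cin, cout, stride=1, bn=True):
    layers = [nn.Conv2d(cin, cout, 3, stride=stride, padding=1)]
    if bn:
        layers.append(nn.BatchNorm2d(cout))
    layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

class DirectionNet(nn.Module):
    def __init__(self, b=64):
        super().__init__()
        # 11 downsampling-path layers; five stride-2 stages reach 1/32 resolution.
        self.down = nn.Sequential(
            conv(3, 32, bn=False), conv(32, 64, stride=2),
            conv(64, 64), conv(64, 128, stride=2),
            conv(128, 128), conv(128, 256, stride=2),
            conv(256, 256), conv(256, 256, stride=2),
            conv(256, 256), conv(256, 512, stride=2), conv(512, 512),
        )
        # 5 upsampling-path layers back to 1/4 resolution; sigmoid in forward().
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2), conv(512, 256),
            nn.Upsample(scale_factor=2), conv(256, 128),
            nn.Upsample(scale_factor=2), conv(128, 64),
            conv(64, 64),
            nn.Conv2d(64, b, 3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.up(self.down(x)))

# Training: mean-squared error against U*, optimized with Adam, e.g.
#   loss = ((model(crops) - U_star) ** 2).mean()
```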
Search Process. At inference time, after applying the CNN on aerial imagery to obtain U , we perform a search process using the predicted road directions in U to derive a road network graph. We adapt the search process from IGC. Essentially, the search iteratively follows directions in U to construct the graph, adding a fixed-length road segment on each step.
We assume that a set of points V init known to be on the road network are provided. If there is an existing map G 0 , we will show later how to derive V init from G 0 . Otherwise, V init may be derived from peaks in the two-dimensional matrix m(U ) i, j = max k U i, j,k . We initialize a road network graph G and a vertex stack S, and populate both with vertices at the points in V init .
Let S top be the vertex at the head of S, and let u top = U (S top ) be the vector in U corresponding to the position of S top . For an angle bucket a, u top,a is the predicted likelihood that there is a road in the direction corresponding to a from S top . On each step of the search, we use u top to decide whether there is a road segment adjacent to S top that hasn't yet been mapped in G, and if there is such a segment, what direction that segment extends in.
We first mask out directions in u top corresponding to roads already incorporated into G to obtain a masked vector mask(u top ); we will discuss the masking procedure later. Masking ensures that we do not add a road segment that duplicates a road that we captured earlier in the search process. Then, mask(u top ) a is the likelihood that there is an unexplored road in the direction a.
If the maximum likelihood after masking, max a mask(u top ) a , exceeds a threshold T , then we decide to add a road segment. Let a best = argmax a mask(u top ) a be the direction with highest likelihood after masking, and let w a best be a unit-vector corresponding to angle bucket a best . We add a vertex v at S top + Dw a best , i.e., at the point D away from S top in the direction indicated by a best . We then add an edge (S top , v), and push v onto S.
Otherwise, if max a mask(u top ) a < T , we stop searching from S top (since there are no unexplored directions with a high enough confidence in U ) by popping S top from S. On the next search step, we will return to the previous vertex in S.
Figure 8 illustrates the search process. At the top, we show three search iterations, where we add a segment, stop, and then add another segment. At the bottom, we show the fourth iteration in detail. Likelihoods in u top peak to the left, top-left, and right. After masking, only the blue bars pointing right remain, since the left and top-left directions correspond to roads that we already mapped. We take the maximum of these remaining likelihoods and compare it to the threshold T to decide whether to add a segment from S top or stop.
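A minimal sketch of one search iteration follows, under stated assumptions: the graph is a dict from vertex to neighbor set, vertices are (x, y) positions in pixel coordinates so that U can be indexed directly, and the values of D and T are illustrative. The helper `mask_explored` is sketched after the masking discussion below.

```python
# One iteration of the search, as a sketch under the assumptions above.
import math
import numpy as np

def search_step(G, S, U, D=20.0, T=0.4, b=64):
    s_top = S[-1]                         # head of the vertex stack
    i, j = int(s_top[0]), int(s_top[1])
    u_top = mask_explored(U[i, j].copy(), G, s_top, b)
    a_best = int(np.argmax(u_top))
    if u_top[a_best] < T:
        S.pop()                           # nothing unexplored: backtrack
        return
    theta = 2 * math.pi * (a_best + 0.5) / b   # center of the best bucket
    v = (s_top[0] + D * math.cos(theta), s_top[1] + D * math.sin(theta))
    G.setdefault(s_top, set()).add(v)     # add edge (S_top, v) ...
    G.setdefault(v, set()).add(s_top)
    S.append(v)                           # ... and push v onto the stack
```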
When searching, we may need to merge the current search path with other parts of the graph. For example, in the fourth iteration of Figure 8, we approach an intersection on the right where the perpendicular road was already added to G earlier in the search. We handle merging with a simple heuristic that avoids creating spurious loops. Let N k (S top ) be the set of vertices within k edges of S top . If S top is within distance 2D of another vertex v in G such that v ∉ N 5 (S top ), then we add an edge (S top , v).
Masking Explored Roads. If we do not mask during the search, then we would repeatedly explore the same road in a loop. Masking out directions corresponding to roads that were explored earlier in the search ensures that roads are not duplicated in G.
We first mask out directions that are similar to the angle of edges incident to S top . For each edge e incident to S top , if the angle of e falls in bucket a, we set mask(u top ) a+k = 0 for all k with −5 ≤ k ≤ 5.
However, this is not sufficient. In the fourth iteration of Figure 8, there is an explored road to the north of S top , but that road is connected to a neighbor west of S top rather than directly to S top . Thus, we also mask directions that are similar to the angle from S top to any vertex in N 5 (S top ).
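Here is a sketch of the masking helper used above, under the same assumptions. For simplicity it masks the direction toward every vertex in N 5 (S top ), which subsumes the incident-edge case when edges are straight; the text describes these as two separate steps.

```python
# The masking helper used in search_step, as a sketch.
import math

def nearby_vertices(G, s_top, hops=5):
    # Vertices reachable from s_top within `hops` edges (BFS), i.e. N_5(S_top).
    frontier, seen = {s_top}, {s_top}
    for _ in range(hops):
        frontier = {w for v in frontier for w in G.get(v, ())} - seen
        seen |= frontier
    return seen - {s_top}

def mask_explored(u_top, G, s_top, b=64, halfwidth=5):
    for v in nearby_vertices(G, s_top):
        theta = math.atan2(v[1] - s_top[1], v[0] - s_top[0]) % (2 * math.pi)
        a = int(theta / (2 * math.pi) * b)
        # Zero out the bucket and its ±5 neighbors (indices wrap around).
        for k in range(-halfwidth, halfwidth + 1):
            u_top[(a + k) % b] = 0.0
    return u_top
```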
Extending an Existing Map. We now show how to apply our map inference approach to improve an existing road network graph G 0 . Our key insight is that we can use points on G 0 as starting locations for the search process. Then, when new road segments are inferred, these points inform the connectivity between the new segments and G 0 .
We first preprocess G 0 to derive a densified existing map G ′ 0 . Densification is necessary because there may not be a vertex at the point where an unmapped road branches off from a road in the existing map. To densify G 0 = (V 0 , E 0 ), for each e ∈ E 0 where length(e) > D, we add ⌊length(e)/D⌋ evenly spaced vertices between the endpoints of e, and replace e with edges between those vertices. This densification preprocessing produces a base map G ′ 0 where the distance between adjacent vertices is at most D.
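A minimal sketch of this densification step, representing the map as a list of straight-line segments; the value of D matches the search step length and is illustrative here.

```python
# Split any edge longer than D into evenly spaced sub-edges so that
# adjacent vertices end up at most D apart.
import math

def densify(edges, D=20.0):
    out = []
    for p, q in edges:
        length = math.hypot(q[0] - p[0], q[1] - p[1])
        if length <= D:
            out.append((p, q))
            continue
        n = int(length // D)  # number of evenly spaced vertices to insert
        pts = [p] + [(p[0] + (q[0] - p[0]) * t / (n + 1),
                      p[1] + (q[1] - p[1]) * t / (n + 1))
                     for t in range(1, n + 1)] + [q]
        out.extend(zip(pts, pts[1:]))  # replace e with the sub-edges
    return out
```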
To initialize the search, we set G = G ′ 0 , and add vertices in G ′ 0 to S. We then run the search process to termination. The search produces a merged road network graph G that contains both segments in the existing map and inferred segments. We extract the inferred road network graph by removing the edges of G ′ 0 from this output graph G.
EVALUATION
To evaluate MAiD, we perform two user studies. In Section 5.1, we consider a region of Indonesia where OpenStreetMap has poor coverage to evaluate our pruning approach. In Section 5.2, we turn to a region of Washington where major roads are already mapped to evaluate the teleport functionality.
In Section 5.3, we compare our map inference scheme against prior work in map inference from aerial imagery on the RoadTracer dataset [2]. We show qualitative results when using MAiD with our map inference approach in Section 5.4.
Indonesia Region: Low Coverage
We first conduct a user study to evaluate mapping productivity when adding roads in a small area of Indonesia with no coverage in OSM. With MAiD, the interface includes a yellow overlay of automatically inferred roads; to obtain these roads, we generate an inferred graph from aerial imagery using our map inference method, and then apply our pruning algorithm to retain only the major roads. After validating the geometry of a road, the user can click it to incorporate the road into the map. In the baseline unmodified editor, users manually trace roads by performing repeated clicks along the road in the imagery.
Procedure. The task is to map major roads in a region using the imagery, with the goal of maximizing coverage in terms of the percentage of houses within 1000 ft of the road network. Users are also asked to produce a connected road network, and to minimize the distance between road segments and the road position in the imagery. We define two metrics to measure this distance: road geometry error (RGE), the average distance between road segments that the participants add and a ground truth map that we hand label, and max-RGE, the maximum distance.
Ten volunteers, all graduate and postdoctoral students age 20-30, participate in our study. We use a within-subjects design; five participants perform the task first on the baseline editor, and then on MAiD, and five participants proceed in the opposite order.
Participants perform the experiment in a twenty-minute session. We select three regions from the unmapped area: an example region, a training region, and a test region. We first introduce participants to the iD editor, and enumerate the editor features as they add one road. We then describe the task, and show them the example region where the task has already been completed. Participants briefly practice the task on the training region, and then have four minutes to perform the task on a test region. We repeat the training and testing for both editors.
We choose the test region so that it is too large to map within the allotted four minutes. We then evaluate the road network graphs that the participants produce using each editor in terms of coverage (percentage of houses covered), RGE, and max-RGE.
Results. We report the mean and standard error of the percentage of houses covered by the participants with the two editors in Figure 9. We find that MAiD improves the mean percentage covered by 1.7x (from 17% to 29%). While manually tracing a major road may take 15-30 clicks, the road can be captured with one click in MAiD after the geometry of an inferred segment is verified.
RGE and max-RGE are comparable for both editors, although there is more variance between participants with the baseline editor.
Washington Region: High Coverage
Next, we evaluate mapping productivity in a high-coverage region of rural Washington. With MAiD, users can press a Teleport button to immediately pan to a group of unmapped roads. A yellow overlay includes all inferred segments covering those roads; we do not use our pruning approach for this study. With the baseline editor, users need to pan around the imagery to find unmapped roads. After finding an unmapped road, users manually trace it.
Procedure. The task is to add roads that are visible in the aerial imagery but not yet covered by the map. Because major roads in this region are already mapped, rather than measuring house coverage, we ask users to add as much length of unmapped roads as possible. We again ask users to minimize the distance between road segments and the road position in the imagery, and to ensure that new segments are connected to the existing map.
Ten volunteers (consisting of graduate students, postdoctoral students, and professional software engineers all age 20-30) participate in our study. We again use a within-subjects design and counterbalance the order of the baseline editor and MAiD.
Participants perform the experiment in a fifteen-to-twenty minute session. For each editing interface, we first provide instructions on the task and editor functionality (accompanied by a 30-second video where we use the editor), and show images of example unmapped roads. Participants then practice the task on a training region in a warm-up phase, with a suggested duration of two to three minutes. After participants finish the warm-up, they are given three minutes to perform the task on a test region. As before, we repeat training and testing for both interfaces.
We evaluate the road network graphs that the participants produce in terms of total road length, RGE, and max-RGE.
Results. We report the mean and standard error of total road length added by the participants in Figure 10. MAiD improves mapping productivity in terms of road length by 3.5x (from 25 km to 88 km). Most of this improvement can be attributed to the teleport functionality eliminating the need for panning around the imagery to find unmapped roads. Additionally, though, because teleport prioritizes large unmapped components with many connections to the existing road network, validating these components is much faster than manually tracing them.
As before, RGE and max-RGE are comparable for the two editors. Mean and standard error of RGE is 7.0 m ± 0.7 m with the baseline editor, and 5.3 m ± 0.1 m with MAiD. For max-RGE, it is 53 m ± 14 m with the baseline, and 39 m ± 4 m with MAiD.
Automatic Map Inference
Dataset. We evaluate our approach for inferring road topology from aerial imagery on the RoadTracer dataset [2], which contains imagery and ground truth road network graphs from forty cities. The data is split into a training set and a test set; the test set includes data for a 16 sq km region around the city centers of 15 cities, while the training set contains data from 25 other cities. Imagery is from Google Maps, and road network data is from OpenStreetMap.
The test set includes 9 cities in the U.S., 3 in Canada, and 1 in each of France, the Netherlands, and Japan.
Baselines. We compare against the baseline segmentation approach and the IGC implementation from [2]. The segmentation approach applies a 13-layer CNN, and then extracts a road network graph using thresholding, thinning, and refinement. The IGC approach, RoadTracer, trains a CNN using a supervised dynamic labels procedure that resembles reinforcement learning. This approach achieves state-of-the-art performance on the dataset, on which DeepRoadMapper [11] has also been evaluated.
Metrics. We evaluate the road network graphs output by the map inference schemes on the TOPO metric [3], which is commonly used in the automatic road map inference literature [1]. TOPO evaluates both the geometrical accuracy (how closely the inferred segments align with the actual road) and the topological accuracy (correct connectivity) of an inferred map. It simulates an agent traveling on the road network from an origin location, and compares the destinations that can be reached within a fixed radius in the inferred map with those that can be reached in the ground truth map. This comparison is repeated over a large number of randomly selected origins to obtain an average precision and recall. We also evaluate the execution time of the schemes on an AWS p2.xlarge instance with an NVIDIA Tesla K80 GPU.
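The rough sketch below conveys the idea of a TOPO-style comparison: sample origins, compute the sets of vertices reachable within a fixed radius in each graph, and compare them through a vertex matching. Real TOPO implementations include map-matching and sampling details omitted here; the `match` input and all parameter values are our assumptions.

```python
# A rough, illustrative rendition of a TOPO-style precision/recall comparison.
import random
import networkx as nx

def topo_sketch(G_inferred, G_truth, match, n_origins=1000, radius=300.0):
    # match: inferred vertex -> nearby ground-truth vertex, or None.
    tp = fp = fn = 0
    origins = random.sample(list(match), min(n_origins, len(match)))
    for o in origins:
        if match[o] is None:
            continue
        reach_inf = set(nx.single_source_dijkstra_path_length(
            G_inferred, o, cutoff=radius, weight="length"))
        reach_gt = set(nx.single_source_dijkstra_path_length(
            G_truth, match[o], cutoff=radius, weight="length"))
        matched = {match[v] for v in reach_inf if match.get(v) is not None}
        tp += len(matched & reach_gt)
        fp += len(matched - reach_gt)
        fn += len(reach_gt - matched)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```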
Results. We show TOPO precision-recall curves obtained by varying parameter choices in Figure 11, and average execution time over the fifteen 16 sq km test regions for parameters that correspond to a 10% error rate in Table 1. We find that our approach exhibits both the high accuracy of IGC and the speed of segmentation methods.
Our map inference approach has comparable TOPO performance to IGC, while outperforming the segmentation approach on error rate by up to 1.6x. This improvement in error rate is crucial for machine-assisted map editing as it reduces the time users spend validating incorrect inferred segments.
On execution time, our approach performs comparably to the segmentation approach, while IGC is almost 8x slower. A low execution time is crucial to MAiD's interactive workflow. Users can explore a new region for two to three minutes while the automatic map inference approach runs; however, a fifteen-minute runtime breaks the workflow.
Qualitative Results
In Figure 12, we show qualitative results from MAiD when using segments inferred by our map inference algorithm.
CONCLUSION
Full automation for building road maps has proven infeasible due to high error rates in automatic map inference methods. We instead propose machine-assisted map editing, where we integrate automatically inferred road segments into the existing map editing process by having humans validate these segments before the segments are incorporated into the map. Our map editor, Machine-Assisted iD (MAiD), improves mapping productivity by as much as 3.5x by focusing on tasks where machine-assistance provides the most benefit. We believe that by improving mapping productivity, MAiD has the potential to substantially improve coverage in road maps.

Figure 12: Qualitative results from MAiD with our map inference algorithm. Segments in the existing map are in white. We show our pruning approach applied on a region of Indonesia in the top image, with pruned roads in purple and retained roads in yellow. The middle and bottom images show connected components of inferred segments that the teleport feature pans the user to, in Washington and Bangkok respectively.
| 5,944 |
1906.07138
|
2900464026
|
Mapping road networks today is labor-intensive. As a result, road maps have poor coverage outside urban centers in many countries. Systems to automatically infer road network graphs from aerial imagery and GPS trajectories have been proposed to improve coverage of road maps. However, because of high error rates, these systems have not been adopted by mapping communities. We propose machine-assisted map editing, where automatic map inference is integrated into existing, human-centric map editing workflows. To realize this, we build Machine-Assisted iD (MAiD), where we extend the web-based OpenStreetMap editor, iD, with machine-assistance functionality. We complement MAiD with a novel approach for inferring road topology from aerial imagery that combines the speed of prior segmentation approaches with the accuracy of prior iterative graph construction methods. We design MAiD to tackle the addition of major, arterial roads in regions where existing maps have poor coverage, and the incremental improvement of coverage in regions where major roads are already mapped. We conduct two user studies and find that, when participants are given a fixed time to map roads, they are able to add as much as 3.5x more roads with MAiD.
|
DeepRoadMapper adds an additional post-processing step to infer missing connections in the initial extracted road network @cite_10 . Candidate missing connections are generated by performing a shortest path search on a graph defined by the segmentation probabilities. Then, a separate CNN is trained to identify correct missing connections.
|
{
"abstract": [
"Creating road maps is essential for applications such as autonomous driving and city planning. Most approaches in industry focus on leveraging expensive sensors mounted on top of a fleet of cars. This results in very accurate estimates when exploiting a user in the loop. However, these solutions are very expensive and have small coverage. In contrast, in this paper we propose an approach that directly estimates road topology from aerial images. This provides us with an affordable solution with large coverage. Towards this goal, we take advantage of the latest developments in deep learning to have an initial segmentation of the aerial images. We then propose an algorithm that reasons about missing connections in the extracted road topology as a shortest path problem that can be solved efficiently. We demonstrate the effectiveness of our approach in the challenging TorontoCity dataset [23] and show very significant improvements over the state-of-the-art."
],
"cite_N": [
"@cite_10"
],
"mid": [
"2780861787"
]
}
|
Machine-Assisted Map Editing
|
In many countries, road maps have poor coverage outside urban centers. For example, in Indonesia, roads in the OpenStreetMap dataset [9] cover only 55% of the country's road infrastructure; the closest mapped road to a small village may be tens of miles away. Map coverage improves slowly because mapping road networks is very labor-intensive. For example, when adding roads visible in aerial imagery, users need to perform repeated clicks to draw lines corresponding to road segments.
This issue has motivated significant interest in automatic map inference. Several systems have been proposed for automatically constructing road maps from aerial imagery [6,11] and GPS trajectories [4,15]. Yet, despite over a decade of research in this space, these systems have not gained traction in OpenStreetMap and other mapping communities. Indeed, OpenStreetMap contributors continue to add roads solely by tracing them by hand.
Fundamentally, high error rates make full automation impractical. Even state-of-the-art automatic map inference approaches have error rates between 5% and 10% [2,15]. Navigating the road network using road maps with such high frequencies of errors would be virtually impossible.
Thus, we believe that automatic map inference can only be useful when it is integrated with existing, human-centric map editing workflows. In this paper, we propose machine-assisted map editing to do exactly that.
Our primary contribution is the design and development of Machine-Assisted iD (MAiD), where we integrate machine-assistance functionality into iD, a web-based OpenStreetMap editor. At its core, MAiD replaces manual tracing of roads with human validation of automatically inferred road segments. We designed MAiD with a holistic view of the map editing process, focusing on the parts of the workflow that can benefit substantially from machine-assistance. Specifically, MAiD accelerates map editing in two ways.
In regions where the map has low coverage, MAiD focuses the user's effort on validation of major, arterial roads that form the backbone of the road network. Incorporating these roads into the map is very useful since arterial roads are crucial to many routes. At the same time, because major roads span large distances, validating automatically inferred segments covering major roads is significantly faster than tracing the roads manually. However, road networks inferred by map inference methods include both major and minor roads. Thus, we propose a novel shortest-path-based pruning scheme that operates on an inferred road network graph to retain only inferred segments that correspond to major roads.
In regions where the map has high coverage, further improving map coverage requires users to painstakingly scan the aerial imagery and other data sources for unmapped roads. We reduce this scanning time by adding a "teleport" feature that immediately pans the user to an inferred road segment. Because many inferred segments correspond to service roads and residential roads that are not crucial to the road network, we design a segment ranking scheme to prioritize segments that are more useful.
We find that existing schemes to automatically infer roads from aerial imagery are not suitable for the interactive workflow in MAiD. Segmentation-based approaches [6,11,14], which apply a CNN to label pixels in the imagery as "road" or "non-road", have low accuracy because they require an error-prone post-processing stage to extract a road network graph from the pixel labels. Iterative graph construction (IGC) approaches [2,17] improve accuracy by extracting road topology directly from the CNN, but have execution times six times slower than segmentation, which is too slow for interactivity.
To facilitate machine-assisted interactive mapping, we develop a novel method for extracting road topology from aerial imagery that combines the speed of segmentation-based approaches with the high accuracy of iterative graph construction (IGC) approaches. Our method adapts the IGC process to use a CNN that outputs road directions for all pixels in one shot; this substantially reduces the number of CNN evaluations, cutting inference time for IGC by almost 8x with near-identical accuracy. Furthermore, in contrast to prior work, our approach infers not only unmapped roads, but also their connections to an existing road network graph.
To evaluate MAiD, we conduct two user studies where we compare the mapping productivity of our validation-based editor (coupled with our map inference approach) to an editor that requires manual tracing. In the first study, we ask participants to map roads in an area of Indonesia with no coverage in OpenStreetMap, with the goal of maximizing the percentage of houses covered by the mapped road network. We find that, given a fixed time to map roads, participants are able to produce road network graphs with 1.7x the coverage and comparable error when using MAiD. In the second study, participants add roads in an area of Washington where major roads are already mapped. With MAiD, participants add 3.5x more roads with comparable error.
In summary, the contributions of this paper are:
• We develop MAiD, a machine-assisted map editing tool that enables efficient human validation of automatically inferred roads.
• We propose a novel pruning algorithm and teleport feature that focus validation efforts on tasks where machine-assisted editing offers the greatest improvement in mapping productivity.
• We develop an approach for inferring road topology from aerial imagery that complements MAiD by improving on prior work.
• We conduct user studies to evaluate MAiD in realistic editing scenarios, where we use the current state of OpenStreetMap, and find that MAiD improves mapping productivity by as much as 3.5x.
The remainder of this paper is organized as follows. In Section 2, we discuss related work. Then, in Section 3, we detail the machine-assisted map editing features that we develop to incorporate automatic map inference into the map editing process. In Section 4, we introduce our novel approach for map inference from aerial imagery. Finally, we evaluate MAiD and our map inference algorithm in Section 5, and conclude in Section 6.
UI for Validation
We build MAiD, where we incorporate our machine-assistance features into iD, a web-based OpenStreetMap editor.
A road network graph is a graph where vertices are annotated with spatial coordinates (latitude and longitude) and edges correspond to straight-line road segments. MAiD inputs an existing road network graph G 0 = (V 0 , E 0 ) containing roads already incorporated in the map. To use MAiD, users first select a region of interest for improving map coverage. MAiD runs an automatic map inference approach in this region to obtain an inferred road network graph G = (V , E) containing inferred segments corresponding to unmapped roads. G should satisfy E 0 ∩ E = ∅; however, G and G 0 share vertices at the points where inferred segments connect with the existing map.
To make validation of automatically inferred segments intuitive, MAiD then produces a yellow overlay that highlights inferred segments in G over the aerial imagery. Although the overlay is partially transparent, in some cases it is nevertheless difficult to verify the position of the road in the imagery when the overlay is active; thus, users can press and hold a key to temporarily hide the overlay so that they can consult the imagery.
After verifying that an inferred segment is correct, users can left-click the segment to incorporate it into the map. Existing functionality in the editor can then be used to adjust the geometry or topology of the road. If an inferred segment is erroneous, users can either ignore the segment, or right-click on the segment to hide it. Figure 2 shows the MAiD editing workflow.
Mapping Major Roads
However, we find that this validation-based UI alone does not significantly increase mapping productivity. To address this, we first consider adding roads in regions where the map has low coverage.
In practice, when mapping these regions, users typically focus on tracing major, arterial roads that form the backbone of the road network. More precisely, major roads connect centers of activity within a city, or link towns and villages outside cities; in OpenStreetMap, these roads are labelled "primary", "secondary", or "tertiary". Users skip short, minor roads because they are not useful until these important links are mapped. Because major roads span large distances, though, tracing them is slow. Thus, validation can substantially reduce the mapping time for these roads.
Supporting efficient validation of major roads requires pruning the inferred segments that correspond to minor roads. However, automatically distinguishing major roads is difficult. Often, major roads have the same width and appearance as minor roads in aerial imagery. Similarly, while major roads in general have higher coverage by GPS trajectories, more trips may traverse minor roads in population centers than major roads in rural regions.
Rather than detecting major roads from the data source, we propose a shortest-path-based pruning scheme that operates on an inferred road network graph to retain only inferred segments that correspond to major roads. Intuitively, major roads are related to shortest paths: because major roads offer fast connections between far apart locations, they should appear on shortest paths between such locations.
We initially applied betweenness centrality [8], a measure of edge importance based on shortest paths. The betweenness centrality of an edge is the number of shortest paths between unique origin-destination pairs that pass through the edge. (When computing shortest paths in the road network graph, the length of an edge is simply the distance between its endpoints.) Formally, for a road network graph G = (V , E), the betweenness centrality of an edge e is:
g(e) = Σ_{s,t ∈ V} I[e ∈ shortest-path(s, t)]
We can then filter edges in the graph by thresholding based on the betweenness centrality scores.
However, we find that segments with high betweenness centrality often do not correspond to important links in the road network. When using a high threshold, the segments produced after thresholding cover major roads connecting dense clusters in the original graph, but miss connections to smaller clusters. When using a low threshold, most major roads are retained, but minor roads in dense clusters are also retained. Additionally, different regions require very different thresholds. Figure 3 shows an example of this issue: grey segments are pruned to produce a road network graph containing the blue segments; on the left, a high threshold misses the road to the eastern cluster, while on the right, a low threshold includes small roads in the northern and southern clusters.
Thus, we propose an adaptation of betweenness centrality for our pruning problem.
Pruning Minor Roads
Fundamentally, betweenness centrality fails to consider the overall spatial distribution of vertices in the road network graph. Dense but compact clusters in the road network should not have an undue influence on the pruning process.
Our pruning approach builds on our earlier intuition that major roads connect far apart locations. Thus, rather than considering all shortest paths in the graph, we focus on long shortest paths. Additionally, we observe that a shortest path may use minor roads near its source and destination, but edges in the middle of the path are more likely to be major roads.
We first cluster the vertices of the road network. Then, we compute shortest paths between cluster centers that are at least a minimum radius R apart. Rather than computing a score and then thresholding on it, we build a set of edges E major containing the edges corresponding to major roads that we will retain. For each shortest path, we trim a fixed distance from the ends of the path, and add all edges in the remaining middle of the path to E major . We prune any edge that does not appear in E major . Figure 4 illustrates our approach. We find that our approach is robust to the choice of the clustering algorithm; clustering is primarily used to avoid placing cluster centers at vertices that are at the end of a long road that only connects a small number of destinations (and, thus, isn't a major road). In our implementation, we use a simple grid-based clustering scheme: we divide the road network into a grid of r × r cells, remove cells that contain fewer than a minimum number of vertices, and then place cluster centers at the mean position of vertices in the remaining cells. We use r = 1 km and R = 5 km.
In practice, we find that for constant R, the runtime of our approach scales linearly with the length of the input road network.
MAiD Implementation. We add a button to toggle between an overlay containing all inferred roads, and an overlay after pruning. Figure 5 shows an example of pruning in Indonesia.
Teleporting to Unmapped Roads
In regions where the map already has high coverage, further improving the map coverage is tedious. Because most roads already appear in the map, users need to slowly scan the aerial imagery to identify unmapped roads in a very time-consuming process.
To address this, we add a teleport capability into the map editor, which pans the editor viewport directly to an area with unmapped roads. Specifically, we identify connected components in the inferred road network G, and pan to a connected component. This functionality enables a user to teleport to an unmapped component, add the roads, and then immediately teleport to another component. By eliminating the time cost of searching for unmapped roads in the imagery, we speed up the mapping process significantly.
However, there may be hundreds of thousands of connected components, and validating all of the components may not be practical. Thus, we propose a prioritization scheme so that longer roads that offer more alternate connections between points on the existing road network are validated first.
Let area(C) be the area of a convex hull containing the edges of a connected component C in G, and let conn(C) be the number of vertices that appear in both C and G 0 , i.e., the number of connections between the existing road network and the inferred component C. We rank connected components by score(C) = area(C) + λconn(C), for a weighting factor λ.
FAST, ACCURATE MAP INFERENCE
In the map inference problem, given an existing road network graph G 0 = (V 0 , E 0 ), we want to produce an inferred road network graph G = (V , E) where each edge in E corresponds to a road segment visible in the imagery but missing from the existing map.
Prior work on extracting road topology from aerial imagery generally employs a two-stage segmentation-based architecture. First, a convolutional neural network (CNN) is trained to label pixels in the aerial imagery as either "road" or "non-road". To extract a road network graph, the CNN output is passed through a heuristic post-processing pipeline that begins with thresholding, morphological thinning [20], and Douglas-Peucker simplification [7]. However, robustly extracting a graph from the CNN output is challenging, and the post-processing pipeline is error-prone; often, noise in the CNN output is amplified in the final road network graph [2].
Rather than segmenting the imagery, RoadTracer [2] and IDL [17] propose an iterative graph construction (IGC) approach that improves accuracy by deriving the road network graph more directly from the CNN. IGC uses a step-by-step process to construct the graph, where each step contributes a short segment of road to a partial graph. To decide where to place this segment, IGC queries the CNN, which outputs the most likely direction of an unexplored road. Because we query the CNN on each step, though, IGC requires an order of magnitude more inference steps than segmentation-based approaches. We find that IGC is over six times slower than segmentation.
Thus, existing map inference methods are not suitable for the interactive nature of MAiD.
We combine the two-stage architecture of segmentation-based approaches with the road-direction output and iterative search process of IGC to achieve a high-speed, high-accuracy approach. In the first stage, rather than labeling pixels as road or non-road, we apply a CNN on the aerial imagery to annotate each pixel in the imagery with the direction of roads near that pixel. Figure 6 shows an example of these annotations. In the second stage, we iteratively construct a road network graph by following these directions in a search process.
Ground Truth Direction Labels. We first describe how we obtain the per-pixel road-direction information shown in Figure 6 from a ground truth road network G* = (V*, E*). For each pixel (i, j), we compute a set of angles A*_{i,j}. If there are no edges in G* within a matching threshold of (i, j), A*_{i,j} = ∅. Otherwise, suppose e is the closest edge to (i, j), and let p be the closest point on e, computed by projecting (i, j) onto e. Let P_{i,j} be the set of points in G* that are a fixed distance D from p; put another way, P_{i,j} contains each point p′ such that p′ falls on some edge e′ ∈ E*, and the shortest distance from p to p′ in G* is D.
Then, A*_{i,j} = {angle(p′ − (i, j)) | p′ ∈ P_{i,j}}, i.e., A*_{i,j} contains the angle from (i, j) to each point in P_{i,j}. Figure 7 shows an example of computing A*_{i,j}.
Representing Road Directions. We represent A* as a 3-dimensional matrix U* that can be output by a CNN. We discretize the space of angles corresponding to road directions into b = 64 buckets, where the kth bucket covers the range of angles from 2kπ/b to 2(k+1)π/b. We then convert each set of road directions A*_{i,j} to a b-vector u*(i, j), where u*(i, j)_k = 1 if there is some angle in A*_{i,j} falling into the kth angle bucket, and u*(i, j)_k = 0 otherwise. Then, U*_{i,j,k} = u*(i, j)_k.
CNN Architecture. Our CNN model inputs the RGB channels from the w × h aerial imagery, and outputs a w × h × b matrix U.
We apply 16 convolutional layers in a U-Net-like configuration [12], where the first 11 layers downsample to 1/32 the input resolution, and the last 5 layers upsample back up to 1/4 the input resolution. We use 3 × 3 kernels in all layers. We use sigmoid activation in the output layer, and rectified linear activation in all other layers. We use batch normalization in the 14 intermediate layers between the input and output layers.
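As an illustration of this architecture, here is a hedged PyTorch sketch. The exact channel widths are not specified in the text, and the U-Net skip connections are omitted for brevity, so treat this purely as a shape-level outline: 11 downsampling-path layers to 1/32 resolution, 5 upsampling-path layers back to 1/4, 3 × 3 kernels, sigmoid output, and batch normalization on the 14 intermediate layers.

```python
import torch.nn as nn

def conv(cin, cout, stride=1, bn=True, act=True):
    layers = [nn.Conv2d(cin, cout, kernel_size=3, stride=stride, padding=1)]
    if bn:
        layers.append(nn.BatchNorm2d(cout))  # BN on intermediate layers only
    if act:
        layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

class DirectionNet(nn.Module):
    """Shape-level sketch; channel widths are illustrative, skips omitted."""
    def __init__(self, b=64):
        super().__init__()
        self.down = nn.Sequential(            # 11 layers; 5 strided convs: 1/32
            conv(3, 32, bn=False), conv(32, 64, stride=2), conv(64, 64),
            conv(64, 128, stride=2), conv(128, 128), conv(128, 256, stride=2),
            conv(256, 256), conv(256, 512, stride=2), conv(512, 512),
            conv(512, 512, stride=2), conv(512, 512))
        self.up = nn.Sequential(              # 5 layers; 3 upsamplings: back to 1/4
            nn.Upsample(scale_factor=2), conv(512, 256),
            nn.Upsample(scale_factor=2), conv(256, 128),
            nn.Upsample(scale_factor=2), conv(128, 64), conv(64, 64),
            conv(64, b, bn=False, act=False))
        self.sigmoid = nn.Sigmoid()           # per-pixel, per-bucket likelihoods

    def forward(self, x):                     # x: (N, 3, h, w) RGB crops
        return self.sigmoid(self.up(self.down(x)))  # (N, b, h/4, w/4)
```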
We train the CNN on random 256 × 256 crops of the imagery with a mean-squared-error loss, Σ_{i,j,k} (U_{i,j,k} − U*_{i,j,k})², and use the ADAM gradient descent optimizer [10].
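For concreteness, a small numpy sketch of building the per-pixel target vectors u*(i, j) from a set of ground truth angles; this is our reading of the construction above, not the authors' code.

```python
import numpy as np

B = 64  # number of angle buckets

def encode_directions(angles_ij):
    """Convert a set of road-direction angles (radians) into the b-vector u*(i, j)."""
    u = np.zeros(B, dtype=np.float32)
    for a in angles_ij:
        # Bucket k covers [2*k*pi/B, 2*(k+1)*pi/B).
        k = int((a % (2 * np.pi)) / (2 * np.pi) * B)
        u[min(k, B - 1)] = 1.0
    return u

# A pixel on a straight east-west road sees roads at angles 0 and pi, so
# buckets 0 and B/2 are set; an empty set A*_{i,j} yields the zero vector.
assert encode_directions({0.0, np.pi})[[0, B // 2]].tolist() == [1.0, 1.0]
```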
Search Process. At inference time, after applying the CNN on aerial imagery to obtain U , we perform a search process using the predicted road directions in U to derive a road network graph. We adapt the search process from IGC. Essentially, the search iteratively follows directions in U to construct the graph, adding a fixed-length road segment on each step.
We assume that a set of points V_init known to be on the road network are provided. If there is an existing map G_0, we will show later how to derive V_init from G_0. Otherwise, V_init may be derived from peaks in the two-dimensional matrix m(U)_{i,j} = max_k U_{i,j,k}. We initialize a road network graph G and a vertex stack S, and populate both with vertices at the points in V_init.
Let S_top be the vertex at the head of S, and let u_top = U(S_top) be the vector in U corresponding to the position of S_top. For an angle bucket a, u_top,a is the predicted likelihood that there is a road in the direction corresponding to a from S_top. On each step of the search, we use u_top to decide whether there is a road segment adjacent to S_top that has not yet been mapped in G, and if there is such a segment, what direction that segment extends in.
We first mask out directions in u_top corresponding to roads already incorporated into G to obtain a masked vector mask(u_top); we will discuss the masking procedure later. Masking ensures that we do not add a road segment that duplicates a road that we captured earlier in the search process. Then, mask(u_top)_a is the likelihood that there is an unexplored road in the direction a.
If the maximum likelihood after masking, max_a mask(u_top)_a, exceeds a threshold T, then we decide to add a road segment. Let a_best = argmax_a mask(u_top)_a be the direction with highest likelihood after masking, and let w_{a_best} be a unit vector corresponding to angle bucket a_best. We add a vertex v at S_top + D·w_{a_best}, i.e., at the point D away from S_top in the direction indicated by a_best. We then add an edge (S_top, v), and push v onto S.
Otherwise, if max_a mask(u_top)_a < T, we stop searching from S_top (since there are no unexplored directions with a high enough confidence in U) by popping S_top from S. On the next search step, we will return to the previous vertex in S. Figure 8 illustrates the search process. At the top, we show three search iterations, where we add a segment, stop, and then add another segment. At the bottom, we show the fourth iteration in detail. Likelihoods in u_top peak to the left, top-left, and right. After masking, only the blue bars pointing right remain, since the left and top-left directions correspond to roads that we already mapped. We take the maximum of these remaining likelihoods and compare to the threshold T to decide whether to add a segment from S_top or stop.
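Putting these rules together, one iteration of the search can be sketched as follows. D and T are the step length and threshold defined above; mask_explored, add_vertex, and add_edge are hypothetical helpers (mask_explored is sketched in the masking discussion below), and vertices are assumed to carry both pixel coordinates (i, j) and spatial coordinates (x, y).

```python
import numpy as np

def search_step(G, S, U, D, T):
    """One search iteration: extend from the stack head, or backtrack."""
    s_top = S[-1]
    u_top = U[s_top.i, s_top.j]              # b-vector of direction likelihoods
    masked = mask_explored(G, s_top, u_top)  # zero out already-mapped directions
    if masked.max() >= T:
        a_best = int(np.argmax(masked))
        theta = 2 * np.pi * (a_best + 0.5) / len(masked)  # bucket center angle
        v = add_vertex(G, s_top.x + D * np.cos(theta),
                          s_top.y + D * np.sin(theta))
        add_edge(G, s_top, v)
        S.append(v)    # keep exploring from the new vertex
    else:
        S.pop()        # no unexplored direction above threshold: backtrack
```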
When searching, we may need to merge the current search path with other parts of the graph. For example, in the fourth iteration of Figure 8, we approach an intersection on the right where the perpendicular road was already added to G earlier in the search. We handle merging with a simple heuristic that avoids creating spurious loops. Let N_k(S_top) be the set of vertices within k edges of S_top. If S_top is within 2D of another vertex v in G such that v ∉ N_5(S_top), then we add an edge (S_top, v).
Masking Explored Roads. If we do not mask during the search, then we would repeatedly explore the same road in a loop. Masking out directions corresponding to roads that were explored earlier in the search ensures that roads are not duplicated in G.
We first mask out directions that are similar to the angle of edges incident to S_top. For each edge e incident to S_top, if the angle of e falls in bucket a, we set mask(u_top)_{a+k} = 0 for all k, −5 ≤ k ≤ 5.
However, this is not sufficient. In the fourth iteration of Figure 8, there is an explored road to the north of S_top, but that road is connected to a neighbor west of S_top rather than directly to S_top. Thus, we also mask directions that are similar to the angle from S_top to any vertex in N_5(S_top).
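Both masking rules can be sketched together. Here edge_angle, point_angle, and within_k_edges are hypothetical helpers, and wrapping the zeroed window around the circle is our assumption.

```python
import numpy as np

def mask_explored(G, s_top, u_top, width=5):
    """Zero out direction buckets corresponding to already-explored roads."""
    masked = u_top.copy()
    b = len(u_top)
    # Rule 1: angles of edges incident to s_top.
    angles = [edge_angle(s_top, n) for n in G.neighbors(s_top)]
    # Rule 2: angles toward vertices within 5 edges of s_top, which handles
    # explored roads that are near s_top but not directly incident to it.
    angles += [point_angle(s_top, v) for v in within_k_edges(G, s_top, k=5)]
    for a in angles:
        bucket = int((a % (2 * np.pi)) / (2 * np.pi) * b)
        for k in range(-width, width + 1):   # mask(u_top)_{a+k} = 0, -5 <= k <= 5
            masked[(bucket + k) % b] = 0.0
    return masked
```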
Extending an Existing Map. We now show how to apply our map inference approach to improve an existing road network graph G_0. Our key insight is that we can use points on G_0 as starting locations for the search process. Then, when new road segments are inferred, these points inform the connectivity between the new segments and G_0.
We first preprocess G_0 to derive a densified existing map G′_0. Densification is necessary because there may not be a vertex at the point where an unmapped road branches off from a road in the existing map. To densify G_0 = (V_0, E_0), for each e ∈ E_0 where length(e) > D, we add ⌊length(e)/D⌋ evenly spaced vertices between the endpoints of e, and replace e with edges between those vertices. This densification preprocessing produces a base map G′_0 where the distance between adjacent vertices is at most D.
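A sketch of the densification step; length, interpolate, and add_vertex_at are hypothetical helpers, and G_0 is assumed to be a networkx graph.

```python
def densify(G0, D):
    """Insert evenly spaced vertices so adjacent vertices are at most D apart."""
    G = G0.copy()
    for u, v in list(G0.edges()):
        L = length(u, v)
        n = int(L // D)              # floor(length(e) / D) intermediate vertices
        if n == 0:
            continue                 # edge already short enough
        G.remove_edge(u, v)
        prev = u
        for i in range(1, n + 1):
            w = add_vertex_at(G, interpolate(u, v, i / (n + 1)))
            G.add_edge(prev, w)
            prev = w
        G.add_edge(prev, v)          # resulting spacing is L / (n + 1) < D
    return G
```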
To initialize the search, we set G = G′_0, and add the vertices of G′_0 to S. We then run the search process to termination. The search produces a merged road network graph G that contains both segments in the existing map and inferred segments. We extract the inferred road network graph by removing the edges of G′_0 from this output graph G.
EVALUATION
To evaluate MAiD, we perform two user studies. In Section 5.1, we consider a region of Indonesia where OpenStreetMap has poor coverage to evaluate our pruning approach. In Section 5.2, we turn to a region of Washington where major roads are already mapped to evaluate the teleport functionality.
In Section 5.3, we compare our map inference scheme against prior work in map inference from aerial imagery on the RoadTracer dataset [2]. We show qualitative results when using MAiD with our map inference approach in Section 5.4.
Indonesia Region: Low Coverage
We first conduct a user study to evaluate mapping productivity when adding roads in a small area of Indonesia with no coverage in OSM. With MAiD, the interface includes a yellow overlay of automatically inferred roads; to obtain these roads, we generate an inferred graph from aerial imagery using our map inference method, and then apply our pruning algorithm to retain only the major roads. After validating the geometry of a road, the user can click it to incorporate the road into the map. In the baseline unmodified editor, users manually trace roads by performing repeated clicks along the road in the imagery.
Procedure. The task is to map major roads in a region using the imagery, with the goal of maximizing coverage in terms of the percentage of houses within 1000 ft of the road network. Users are also asked to produce a connected road network, and to minimize the distance between road segments and the road position in the imagery. We define two metrics to measure this distance: road geometry error (RGE), the average distance between road segments that the participants add and a ground truth map that we hand label, and max-RGE, the maximum distance.
Ten volunteers, all graduate and postdoctoral students age 20-30, participate in our study. We use a within-subjects design; five participants perform the task first on the baseline editor, and then on MAiD, and five participants proceed in the opposite order.
Participants perform the experiment in a twenty-minute session. We select three regions from the unmapped area: an example region, a training region, and a test region. We first introduce participants to the iD editor, and enumerate the editor features as they add one road. We then describe the task, and show them the example region where the task has already been completed. Participants briefly practice the task on the training region, and then have four minutes to perform the task on a test region. We repeat the training and testing for both editors.
We choose the test region so that it is too large to map within the allotted four minutes. We then evaluate the road network graphs that the participants produce using each editor in terms of coverage (percentage of houses covered), RGE, and max-RGE.
Results. We report the mean and standard error of the percentage of houses covered by the participants with the two editors in Figure 9. We find that MAiD improves the mean percentage covered by 1.7x (from 17% to 29%). While manually tracing a major road may take 15-30 clicks, the road can be captured with one click in MAiD after the geometry of an inferred segment is verified.
RGE and max-RGE are comparable for both editors, although there is more variance between participants with the baseline editor.
Washington Region: High Coverage
Next, we evaluate mapping productivity in a high-coverage region of rural Washington. With MAiD, users can press a Teleport button to immediately pan to a group of unmapped roads. A yellow overlay includes all inferred segments covering those roads; we do not use our pruning approach for this study. With the baseline editor, users need to pan around the imagery to find unmapped roads. After finding an unmapped road, users manually trace it.
Procedure. The task is to add roads that are visible in the aerial imagery but not yet covered by the map. Because major roads in this region are already mapped, rather than measuring house coverage, we ask users to add as much length of unmapped roads as possible. We again ask users to minimize the distance between road segments and the road position in the imagery, and to ensure that new segments are connected to the existing map.
Ten volunteers (consisting of graduate students, postdoctoral students, and professional software engineers all age 20-30) participate in our study. We again use a within-subjects design and counterbalance the order of the baseline editor and MAiD.
Participants perform the experiment in a fifteen-to-twenty minute session. For each editing interface, we first provide instructions on the task and editor functionality (accompanied by a 30-second video where we use the editor), and show images of example unmapped roads. Participants then practice the task on a training region in a warm-up phase, with a suggested duration of two to three minutes. After participants finish the warm-up, they are given three minutes to perform the task on a test region. As before, we repeat training and testing for both interfaces.
We evaluate the road network graphs that the participants produce in terms of total road length, RGE, and max-RGE.
Results. We report the mean and standard error of total road length added by the participants in Figure 10. MAiD improves mapping productivity in terms of road length by 3.5x (from 25 km to 88 km). Most of this improvement can be attributed to the teleport functionality eliminating the need for panning around the imagery to find unmapped roads. Additionally, though, because teleport prioritizes large unmapped components with many connections to the existing road network, validating these components is much faster than manually tracing them.
As before, RGE and max-RGE are comparable for the two editors. Mean and standard error of RGE is 7.0 m ± 0.7 m with the baseline editor, and 5.3 m ± 0.1 m with MAiD. For max-RGE, it is 53 m ± 14 m with the baseline, and 39 m ± 4 m with MAiD.
Automatic Map Inference
Dataset. We evaluate our approach for inferring road topology from aerial imagery on the RoadTracer dataset [2], which contains imagery and ground truth road network graphs from forty cities. The data is split into a training set and a test set; the test set includes data for a 16 sq km region around the city centers of 15 cities, while the training set contains data from 25 other cities. Imagery is from Google Maps, and road network data is from OpenStreetMap.
The test set includes 9 cities in the U.S., 3 in Canada, and 1 in each of France, the Netherlands, and Japan.
Baselines. We compare against the baseline segmentation approach and the IGC implementation from [2]. The segmentation approach applies a 13-layer CNN, and then extracts a road network graph using thresholding, thinning, and refinement. The IGC approach, RoadTracer, trains a CNN using a supervised dynamic labels procedure that resembles reinforcement learning. This approach achieves state-of-the-art performance on the dataset, on which DeepRoadMapper [11] has also been evaluated.
Metrics. We evaluate the road network graphs output by the map inference schemes on the TOPO metric [3], which is commonly used in the automatic road map inference literature [1]. TOPO evaluates both the geometrical accuracy (how closely the inferred segments align with the actual road) and the topological accuracy (correct connectivity) of an inferred map. It simulates an agent traveling on the road network from an origin location, and compares the destinations that can be reached within a fixed radius in the inferred map with those that can be reached in the ground truth map. This comparison is repeated over a large number of randomly selected origins to obtain an average precision and recall. We also evaluate the execution time of the schemes on an AWS p2.xlarge instance with an NVIDIA Tesla K80 GPU.
Results. We show TOPO precision-recall curves obtained by varying parameter choices in Figure 11, and average execution time over the fifteen 16 sq km test regions for parameters that correspond to a 10% error rate in Table 1. We find that our approach exhibits both the high accuracy of IGC and the speed of segmentation methods.
Our map inference approach has comparable TOPO performance to IGC, while outperforming the segmentation approach on error rate by up to 1.6x. This improvement in error rate is crucial for machine-assisted map editing as it reduces the time users spend validating incorrect inferred segments.
On execution time, our approach performs comparably to the segmentation approach, while IGC is almost 8x slower. A low execution time is crucial to MAiD's interactive workflow. Users can explore a new region for two to three minutes while the automatic map inference approach runs; however, a fifteen-minute runtime breaks the workflow.
Qualitative Results
In Figure 12, we show qualitative results from MAiD when using segments inferred by our map inference algorithm.
CONCLUSION
Full automation for building road maps has proven infeasible due to high error rates in automatic map inference methods. We instead propose machine-assisted map editing, where we integrate automatically inferred road segments into the existing map editing process by having humans validate these segments before the segments are incorporated into the map. Our map editor, Machine-Assisted iD (MAiD), improves mapping productivity by as much as 3.5x by focusing on tasks where machine-assistance provides the most benefit. We believe that by improving mapping productivity, MAiD has the potential to substantially improve coverage in road maps.
Figure 12: Qualitative results from MAiD with our map inference algorithm. Segments in the existing map are in white. We show our pruning approach applied on a region of Indonesia in the top image, with pruned roads in purple and retained roads in yellow. The middle and bottom images show connected components of inferred segments that the teleport feature pans the user to, in Washington and Bangkok respectively.
| 5,944 |
1906.07138
|
2900464026
|
Mapping road networks today is labor-intensive. As a result, road maps have poor coverage outside urban centers in many countries. Systems to automatically infer road network graphs from aerial imagery and GPS trajectories have been proposed to improve coverage of road maps. However, because of high error rates, these systems have not been adopted by mapping communities. We propose machine-assisted map editing, where automatic map inference is integrated into existing, human-centric map editing workflows. To realize this, we build Machine-Assisted iD (MAiD), where we extend the web-based OpenStreetMap editor, iD, with machine-assistance functionality. We complement MAiD with a novel approach for inferring road topology from aerial imagery that combines the speed of prior segmentation approaches with the accuracy of prior iterative graph construction methods. We design MAiD to tackle the addition of major, arterial roads in regions where existing maps have poor coverage, and the incremental improvement of coverage in regions where major roads are already mapped. We conduct two user studies and find that, when participants are given a fixed time to map roads, they are able to add as much as 3.5x more roads with MAiD.
|
Rather than segmenting the imagery, RoadTracer @cite_8 and IDL @cite_18 employ an iterative graph construction (IGC) approach that extracts roads via a series of steps in a search process. On each step, a CNN is queried to determine what direction to move in the search, and a road segment is added to a partial road network graph in that direction. Although IGC methods improve accuracy, they are an order of magnitude slower in execution time than segmentation approaches and thus not suitable in interactive settings.
|
{
"abstract": [
"This paper tackles the task of estimating the topology of filamentary networks such as retinal vessels and road networks. Building on top of a global model that performs a dense semantical classification of the pixels of the image, we design a Convolutional Neural Network (CNN) that predicts the local connectivity between the central pixel of an input patch and its border points. By iterating this local connectivity we sweep the whole image and infer the global topology of the filamentary network, inspired by a human delineating a complex network with the tip of their finger. We perform an extensive and comprehensive qualitative and quantitative evaluation on two tasks: retinal veins and arteries topology extraction and road network estimation. In both cases, represented by two publicly available datasets (DRIVE and Massachusetts Roads), we show superior performance to very strong baselines.",
"Mapping road networks is currently both expensive and labor-intensive. High-resolution aerial imagery provides a promising avenue to automatically infer a road network. Prior work uses convolutional neural networks (CNNs) to detect which pixels belong to a road (segmentation), and then uses complex post-processing heuristics to infer graph connectivity. We show that these segmentation methods have high error rates because noisy CNN outputs are difficult to correct. We propose RoadTracer, a new method to automatically construct accurate road network maps from aerial images. RoadTracer uses an iterative search process guided by a CNN-based decision function to derive the road network graph directly from the output of the CNN. We compare our approach with a segmentation method on fifteen cities, and find that at a 5 error rate, RoadTracer correctly captures 45 more junctions across these cities."
],
"cite_N": [
"@cite_18",
"@cite_8"
],
"mid": [
"2774327564",
"2949395449"
]
}
|
Machine-Assisted Map Editing
|
In many countries, road maps have poor coverage outside urban centers. For example, in Indonesia, roads in the OpenStreetMap dataset [9] cover only 55% of the country's road infrastructure; the closest mapped road to a small village may be tens of miles away. Map coverage improves slowly because mapping road networks is very labor-intensive. For example, when adding roads visible in aerial imagery, users need to perform repeated clicks to draw lines corresponding to road segments.
This issue has motivated significant interest in automatic map inference. Several systems have been proposed for automatically constructing road maps from aerial imagery [6,11] and GPS trajectories [4,15]. Yet, despite over a decade of research in this space, these systems have not gained traction in OpenStreetMap and other mapping communities. Indeed, OpenStreetMap contributors continue to add roads solely by tracing them by hand.
Fundamentally, high error rates make full automation impractical. Even state-of-the-art automatic map inference approaches have error rates between 5% and 10% [2,15]. Navigating the road network using road maps with such high frequencies of errors would be virtually impossible.
Thus, we believe that automatic map inference can only be useful when it is integrated with existing, human-centric map editing workflows. In this paper, we propose machine-assisted map editing to do exactly that.
Our primary contribution is the design and development of Machine-Assisted iD (MAiD), where we integrate machine-assistance functionality into iD, a web-based OpenStreetMap editor. At its core, MAiD replaces manual tracing of roads with human validation of automatically inferred road segments. We designed MAiD with a holistic view of the map editing process, focusing on the parts of the workflow that can benefit substantially from machine-assistance. Specifically, MAiD accelerates map editing in two ways.
In regions where the map has low coverage, MAiD focuses the user's effort on validation of major, arterial roads that form the backbone of the road network. Incorporating these roads into the map is very useful since arterial roads are crucial to many routes. At the same time, because major roads span large distances, validating automatically inferred segments covering major roads is significantly faster than tracing the roads manually. However, road networks inferred by map inference methods include both major and minor roads. Thus, we propose a novel shortest-path-based pruning scheme that operates on an inferred road network graph to retain only inferred segments that correspond to major roads.
In regions where the map has high coverage, further improving map coverage requires users to painstakingly scan the aerial imagery and other data sources for unmapped roads. We reduce this scanning time by adding a "teleport" feature that immediately pans the user to an inferred road segment. Because many inferred segments correspond to service roads and residential roads that are not crucial to the road network, we design a segment ranking scheme to prioritize segments that are more useful.
We find that existing schemes to automatically infer roads from aerial imagery are not suitable for the interactive workflow in MAiD. Segmentation-based approaches [6,11,14], which apply a CNN to label pixels in the imagery as "road" or "non-road", have low accuracy because they require an error-prone post-processing stage to extract a road network graph from the pixel labels. Iterative graph construction (IGC) approaches [2,17] improve accuracy by extracting road topology directly from the CNN, but have execution times six times slower than segmentation, which is too slow for interactivity.
To facilitate machine-assisted interactive mapping, we develop a novel method for extracting road topology from aerial imagery that combines the speed of segmentation-based approaches with the high-accuracy of iterative graph construction (IGC) approaches. Our method adapts the IGC process to use a CNN that outputs road directions for all pixels in one shot; this substantially reduces the number of CNN evaluations, thereby reducing inference time for IGC by almost 8x with near-identical accuracy. Furthermore, in contrast to prior work, our approach infers not only unmapped roads, but also their connections to an existing road network graph.
To evaluate MAiD, we conduct two user studies where we compare the mapping productivity of our validation-based editor (coupled with our map inference approach) to an editor that requires manual tracing. In the first study, we ask participants to map roads in an area of Indonesia with no coverage in OpenStreetMap, with the goal of maximizing the percentage of houses covered by the mapped road network. We find that, given a fixed time to map roads, participants are able to produce road network graphs with 1.7x the coverage and comparable error when using MAiD. In the second study, participants add roads in an area of Washington where major roads are already mapped. With MAiD, participants add 3.5x more roads with comparable error.
In summary, the contributions of this paper are:
• We develop MAiD, a machine-assisted map editing tool that enables efficient human validation of automatically inferred roads.
• We propose a novel pruning algorithm and teleport feature that focus validation efforts on tasks where machine-assisted editing offers the greatest improvement in mapping productivity.
• We develop an approach for inferring road topology from aerial imagery that complements MAiD by improving on prior work.
• We conduct user studies to evaluate MAiD in realistic editing scenarios, where we use the current state of OpenStreetMap, and find that MAiD improves mapping productivity by as much as 3.5x.
The remainder of this paper is organized as follows. In Section 2, we discuss related work. Then, in Section 3, we detail the machine-assisted map editing features that we develop to incorporate automatic map inference into the map editing process. In Section 4, we introduce our novel approach for map inference from aerial imagery. Finally, we evaluate MAiD and our map inference algorithm in Section 5, and conclude in Section 6.
UI for Validation
We build MAiD, where we incorporate our machine-assistance features into iD, a web-based OpenStreetMap editor.
A road network graph is a graph where vertices are annotated with spatial coordinates (latitude and longitude) and edges correspond to straight-line road segments. MAiD inputs an existing road network graph G_0 = (V_0, E_0) containing roads already incorporated in the map. To use MAiD, users first select a region of interest for improving map coverage. MAiD runs an automatic map inference approach in this region to obtain an inferred road network graph G = (V, E) containing inferred segments corresponding to unmapped roads. G should satisfy E_0 ∩ E = ∅; however, G and G_0 share vertices at the points where inferred segments connect with the existing map.
To make validation of automatically inferred segments intuitive, MAiD then produces a yellow overlay that highlights inferred segments in G over the aerial imagery. Although the overlay is partially transparent, in some cases it is nevertheless difficult to verify the position of the road in the imagery when the overlay is active; thus, users can press and hold a key to temporarily hide the overlay so that they can consult the imagery.
After verifying that an inferred segment is correct, users can left-click the segment to incorporate it into the map. Existing functionality in the editor can then be used to adjust the geometry or topology of the road. If an inferred segment is erroneous, users can either ignore the segment, or right-click on the segment to hide it. Figure 2 shows the MAiD editing workflow.
Mapping Major Roads
However, we find that this validation-based UI alone does not significantly increase mapping productivity. To address this, we first consider adding roads in regions where the map has low coverage.
In practice, when mapping these regions, users typically focus on tracing major, arterial roads that form the backbone of the road network. More precisely, major roads connect centers of activity within a city, or link towns and villages outside cities; in Open-StreetMap, these roads are labelled "primary", "secondary", or "tertiary". Users skip short, minor roads because they are not useful until these important links are mapped. Because major roads span large distances, though, tracing them is slow. Thus, validation can substantially reduce the mapping time for these roads. Supporting efficient validation of major roads requires the pruning of inferred segments corresponding to minor roads. However, automatically distinguishing major roads is difficult. Often, major roads have the same width and appearance as minor roads in aerial imagery. Similarly, while major roads in general have higher coverage by GPS trajectories, more trips may traverse minor roads in population centers than major roads in rural regions.
Rather than detecting major roads from the data source, we propose a shortest-path-based pruning scheme that operates on an inferred road network graph to retain only inferred segments that correspond to major roads. Intuitively, major roads are related to shortest paths: because major roads offer fast connections between far apart locations, they should appear on shortest paths between such locations.
We initially applied betweenness centrality [8], a measure of edge importance based on shortest paths. The betweenness centrality of an edge is the number of shortest paths between unique origin-destination pairs that pass through the edge. (When computing shortest paths in the road network graph, the length of an edge is simply the distance between its endpoints.) Formally, for a road network graph G = (V, E), the betweenness centrality of an edge e is:
g(e) = Σ_{s,t ∈ V} I[e ∈ shortest-path(s, t)]
We can then filter edges in the graph by thresholding based on the betweenness centrality scores.
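For reference, this initial scheme is straightforward to express with networkx (a sketch; the threshold value is illustrative, and normalized=False matches the path-counting definition above):

```python
import networkx as nx

def prune_by_betweenness(G, threshold=100):
    # Count, for every edge, the distance-weighted shortest paths between
    # origin-destination pairs that traverse it, then threshold.
    g = nx.edge_betweenness_centrality(G, normalized=False, weight="length")
    G.remove_edges_from([e for e, score in g.items() if score < threshold])
    return G
```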
However, we find that segments with high betweenness centrality often do not correspond to important links in the road network. When using a high threshold, the segments produced after thresholding cover major roads connecting dense clusters in the original graph, but miss connections to smaller clusters. When using a low threshold, most major roads are retained, but minor roads in dense clusters are also retained. Figure 3 shows an example of this issue (grey segments are pruned to produce a road network graph containing the blue segments; on the left, a high threshold misses the road to the eastern cluster, while on the right, a low threshold includes small roads in the northern and southern clusters). Additionally, different regions require very different thresholds.
Thus, we propose an adaptation of betweenness centrality for our pruning problem.
Pruning Minor Roads. Fundamentally, betweenness centrality fails to consider the overall spatial distribution of vertices in the road network graph. Dense but compact clusters in the road network should not have an undue influence on the pruning process.
Our pruning approach builds on our earlier intuition, that major roads connect far apart locations. Thus, rather than considering all shortest paths in the graph, we focus on long shortest paths. Additionally, we observe that the path may use minor roads near the source and near the destination, but edges on the middle of a shortest path are more likely to be major roads.
We first cluster the vertices of the road network. Then, we compute shortest paths between cluster centers that are at least a minimum radius R apart. Rather than computing a score and then thresholding on the score, we build a set of edges E_major containing edges corresponding to major roads that we will retain. For each shortest path, we trim a fixed distance from the ends of the path, and add all edges in the remaining middle of the path to E_major. We prune any edge that does not appear in E_major. Figure 4 illustrates our approach. We find that our approach is robust to the choice of the clustering algorithm. Clustering is primarily used to avoid placing cluster centers at vertices that are at the end of a long road that only connects a small number of destinations (and, thus, is not a major road). In our implementation, we use a simple grid-based clustering scheme: we divide the road network into a grid of r × r cells, remove cells that contain fewer than a minimum number of vertices, and then place cluster centers at the mean position of vertices in the remaining cells. We use r = 1 km, R = 5 km.
In practice, we find that for constant R, the runtime of our approach scales linearly with the length of the input road network.
MAiD Implementation. We add a button to toggle between an overlay containing all inferred roads, and an overlay after pruning. Figure 5 shows an example of pruning in Indonesia.
Teleporting to Unmapped Roads
In regions where the map already has high coverage, further improving the map coverage is tedious. Because most roads already appear in the map, users need to slowly scan the aerial imagery to identify unmapped roads in a very time-consuming process.
To address this, we add a teleport capability into the map editor, which pans the editor viewport directly to an area with unmapped roads. Specifically, we identify connected components in the inferred road network G, and pan to a connected component. This functionality enables a user to teleport to an unmapped component, add the roads, and then immediately teleport to another component. By eliminating the time cost of searching for unmapped roads in the imagery, we speed up the mapping process significantly.
However, there may be hundreds of thousands of connected components, and validating all of the components may not be practical. Thus, we propose a prioritization scheme so that longer roads that offer more alternate connections between points on the existing road network are validated first.
Let area(C) be the area of the convex hull containing the edges of a connected component C in G, and let conn(C) be the number of vertices that appear in both C and G_0, i.e., the number of connections between the existing road network and the inferred component C. We rank connected components by score(C) = area(C) + λ·conn(C), for a weighting factor λ.
FAST, ACCURATE MAP INFERENCE
In the map inference problem, given an existing road network graph G_0 = (V_0, E_0), we want to produce an inferred road network graph G = (V, E) where each edge in E corresponds to a road segment visible in the imagery but missing from the existing map.
Prior work in extracting road topology from aerial imagery generally employs a two-stage segmentation-based architecture. First, a convolutional neural network (CNN) is trained to label pixels in the aerial imagery as either "road" or "non-road". To extract a road network graph, the CNN output is passed through a heuristic post-processing pipeline that begins with thresholding, morphological thinning [20], and Douglas-Peucker simplification [7]. However, robustly extracting a graph from the CNN output is challenging, and the post-processing pipeline is error-prone; often, noise in the CNN output is amplified in the final road network graph [2].
Rather than segmenting the imagery, RoadTracer [2] and IDL [17] propose an iterative graph construction (IGC) approach that improves accuracy by deriving the road network graph more directly from the CNN. IGC uses a step-by-step process to construct the graph, where each step contributes a short segment of road to a partial graph. To decide where to place this segment, IGC queries the CNN, which outputs the most likely direction of an unexplored road. Because we query the CNN on each step, though, IGC requires an order of magnitude more inference steps than segmentation-based approaches. We find that IGC is over six times slower than segmentation.
Thus, existing map inference methods are not suitable for the interactive nature of MAiD.
We combine the two-stage architecture of segmentation-based approaches with the road-direction output and iterative search process of IGC to achieve a high-speed, high-accuracy approach. In the first stage, rather than labeling pixels as road or non-road, we apply a CNN on the aerial imagery to annotate each pixel in the imagery with the direction of roads near that pixel. Figure 6 shows an example of these annotations. In the second stage, we iteratively construct a road network graph by following these directions in a search process.
Ground Truth Direction Labels. We first describe how we obtain the per-pixel road-direction information shown in Figure 6 from a ground truth road network G* = (V*, E*). For each pixel (i, j), we compute a set of angles A*_{i,j}. If there are no edges in G* within a matching threshold of (i, j), A*_{i,j} = ∅. Otherwise, suppose e is the closest edge to (i, j), and let p be the closest point on e, computed by projecting (i, j) onto e. Let P_{i,j} be the set of points in G* that are a fixed distance D from p; put another way, P_{i,j} contains each point p′ such that p′ falls on some edge e′ ∈ E*, and the shortest distance from p to p′ in G* is D.
Then, A*_{i,j} = {angle(p′ − (i, j)) | p′ ∈ P_{i,j}}, i.e., A*_{i,j} contains the angle from (i, j) to each point in P_{i,j}. Figure 7 shows an example of computing A*_{i,j}.
Representing Road Directions. We represent A* as a 3-dimensional matrix U* that can be output by a CNN. We discretize the space of angles corresponding to road directions into b = 64 buckets, where the kth bucket covers the range of angles from 2kπ/b to 2(k+1)π/b. We then convert each set of road directions A*_{i,j} to a b-vector u*(i, j), where u*(i, j)_k = 1 if there is some angle in A*_{i,j} falling into the kth angle bucket, and u*(i, j)_k = 0 otherwise. Then, U*_{i,j,k} = u*(i, j)_k.
CNN Architecture. Our CNN model inputs the RGB channels from the w × h aerial imagery, and outputs a w × h × b matrix U.
We apply 16 convolutional layers in a U-Net-like configuration [12], where the first 11 layers downsample to 1/32 the input resolution, and the last 5 layers upsample back up to 1/4 the input resolution. We use 3 × 3 kernels in all layers. We use sigmoid activation in the output layer, and rectified linear activation in all other layers. We use batch normalization in the 14 intermediate layers between the input and output layers.
We train the CNN on random 256 × 256 crops of the imagery with a mean-squared-error loss, Σ_{i,j,k} (U_{i,j,k} − U*_{i,j,k})², and use the ADAM gradient descent optimizer [10].
Search Process. At inference time, after applying the CNN on aerial imagery to obtain U , we perform a search process using the predicted road directions in U to derive a road network graph. We adapt the search process from IGC. Essentially, the search iteratively follows directions in U to construct the graph, adding a fixed-length road segment on each step.
We assume that a set of points V_init known to be on the road network are provided. If there is an existing map G_0, we will show later how to derive V_init from G_0. Otherwise, V_init may be derived from peaks in the two-dimensional matrix m(U)_{i,j} = max_k U_{i,j,k}. We initialize a road network graph G and a vertex stack S, and populate both with vertices at the points in V_init.
Let S_top be the vertex at the head of S, and let u_top = U(S_top) be the vector in U corresponding to the position of S_top. For an angle bucket a, u_top,a is the predicted likelihood that there is a road in the direction corresponding to a from S_top. On each step of the search, we use u_top to decide whether there is a road segment adjacent to S_top that has not yet been mapped in G, and if there is such a segment, what direction that segment extends in.
We first mask out directions in u_top corresponding to roads already incorporated into G to obtain a masked vector mask(u_top); we will discuss the masking procedure later. Masking ensures that we do not add a road segment that duplicates a road that we captured earlier in the search process. Then, mask(u_top)_a is the likelihood that there is an unexplored road in the direction a.
If the maximum likelihood after masking, max_a mask(u_top)_a, exceeds a threshold T, then we decide to add a road segment. Let a_best = argmax_a mask(u_top)_a be the direction with highest likelihood after masking, and let w_{a_best} be a unit vector corresponding to angle bucket a_best. We add a vertex v at S_top + D·w_{a_best}, i.e., at the point D away from S_top in the direction indicated by a_best. We then add an edge (S_top, v), and push v onto S.
Otherwise, if max_a mask(u_top)_a < T, we stop searching from S_top (since there are no unexplored directions with a high enough confidence in U) by popping S_top from S. On the next search step, we will return to the previous vertex in S. Figure 8 illustrates the search process. At the top, we show three search iterations, where we add a segment, stop, and then add another segment. At the bottom, we show the fourth iteration in detail. Likelihoods in u_top peak to the left, top-left, and right. After masking, only the blue bars pointing right remain, since the left and top-left directions correspond to roads that we already mapped. We take the maximum of these remaining likelihoods and compare to the threshold T to decide whether to add a segment from S_top or stop.
When searching, we may need to merge the current search path with other parts of the graph. For example, in the fourth iteration of Figure 8, we approach an intersection on the right where the perpendicular road was already added to G earlier in the search. We handle merging with a simple heuristic that avoids creating spurious loops. Let N_k(S_top) be the set of vertices within k edges of S_top. If S_top is within 2D of another vertex v in G such that v ∉ N_5(S_top), then we add an edge (S_top, v).
Masking Explored Roads. If we do not mask during the search, then we would repeatedly explore the same road in a loop. Masking out directions corresponding to roads that were explored earlier in the search ensures that roads are not duplicated in G.
We first mask out directions that are similar to the angle of edges incident to S_top. For each edge e incident to S_top, if the angle of e falls in bucket a, we set mask(u_top)_{a+k} = 0 for all k, −5 ≤ k ≤ 5.
However, this is not sufficient. In the fourth iteration of Figure 8, there is an explored road to the north of S_top, but that road is connected to a neighbor west of S_top rather than directly to S_top. Thus, we also mask directions that are similar to the angle from S_top to any vertex in N_5(S_top).
Extending an Existing Map. We now show how to apply our map inference approach to improve an existing road network graph G_0. Our key insight is that we can use points on G_0 as starting locations for the search process. Then, when new road segments are inferred, these points inform the connectivity between the new segments and G_0.
We first preprocess G_0 to derive a densified existing map G′_0. Densification is necessary because there may not be a vertex at the point where an unmapped road branches off from a road in the existing map. To densify G_0 = (V_0, E_0), for each e ∈ E_0 where length(e) > D, we add ⌊length(e)/D⌋ evenly spaced vertices between the endpoints of e, and replace e with edges between those vertices. This densification preprocessing produces a base map G′_0 where the distance between adjacent vertices is at most D.
To initialize the search, we set G = G′_0, and add the vertices of G′_0 to S. We then run the search process to termination. The search produces a merged road network graph G that contains both segments in the existing map and inferred segments. We extract the inferred road network graph by removing the edges of G′_0 from this output graph G.
EVALUATION
To evaluate MAiD, we perform two user studies. In Section 5.1, we consider a region of Indonesia where OpenStreetMap has poor coverage to evaluate our pruning approach. In Section 5.2, we turn to a region of Washington where major roads are already mapped to evaluate the teleport functionality.
In Section 5.3, we compare our map inference scheme against prior work in map inference from aerial imagery on the RoadTracer dataset [2]. We show qualitative results when using MAiD with our map inference approach in Section 5.4.
Indonesia Region: Low Coverage
We first conduct a user study to evaluate mapping productivity when adding roads in a small area of Indonesia with no coverage in OSM. With MAiD, the interface includes a yellow overlay of automatically inferred roads; to obtain these roads, we generate an inferred graph from aerial imagery using our map inference method, and then apply our pruning algorithm to retain only the major roads. After validating the geometry of a road, the user can click it to incorporate the road into the map. In the baseline unmodified editor, users manually trace roads by performing repeated clicks along the road in the imagery.
Procedure. The task is to map major roads in a region using the imagery, with the goal of maximizing coverage in terms of the percentage of houses within 1000 ft of the road network. Users are also asked to produce a connected road network, and to minimize the distance between road segments and the road position in the imagery. We define two metrics to measure this distance: road geometry error (RGE), the average distance between road segments that the participants add and a ground truth map that we hand label, and max-RGE, the maximum distance.
Ten volunteers, all graduate and postdoctoral students age 20-30, participate in our study. We use a within-subjects design; five participants perform the task first on the baseline editor, and then on MAiD, and five participants proceed in the opposite order.
Participants perform the experiment in a twenty-minute session. We select three regions from the unmapped area: an example region, a training region, and a test region. We first introduce participants to the iD editor, and enumerate the editor features as they add one road. We then describe the task, and show them the example region where the task has already been completed. Participants briefly practice the task on the training region, and then have four minutes to perform the task on a test region. We repeat the training and testing for both editors.
We choose the test region so that it is too large to map within the allotted four minutes. We then evaluate the road network graphs that the participants produce using each editor in terms of coverage (percentage of houses covered), RGE, and max-RGE.
Results. We report the mean and standard error of the percentage of houses covered by the participants with the two editors in Figure 9. We find that MAiD improves the mean percentage covered by 1.7x (from 17% to 29%). While manually tracing a major road may take 15-30 clicks, the road can be captured with one click in MAiD after the geometry of an inferred segment is verified.
RGE and max-RGE are comparable for both editors, although there is more variance between participants with the baseline editor.
Washington Region: High Coverage
Next, we evaluate mapping productivity in a high-coverage region of rural Washington. With MAiD, users can press a Teleport button to immediately pan to a group of unmapped roads. A yellow overlay includes all inferred segments covering those roads; we do not use our pruning approach for this study. With the baseline editor, users need to pan around the imagery to find unmapped roads. After finding an unmapped road, users manually trace it.
Procedure. The task is to add roads that are visible in the aerial imagery but not yet covered by the map. Because major roads in this region are already mapped, rather than measuring house coverage, we ask users to add as much length of unmapped roads as possible. We again ask users to minimize the distance between road segments and the road position in the imagery, and to ensure that new segments are connected to the existing map.
Ten volunteers (consisting of graduate students, postdoctoral students, and professional software engineers all age 20-30) participate in our study. We again use a within-subjects design and counterbalance the order of the baseline editor and MAiD.
Participants perform the experiment in a fifteen-to-twenty minute session. For each editing interface, we first provide instructions on the task and editor functionality (accompanied by a 30-second video where we use the editor), and show images of example unmapped roads. Participants then practice the task on a training region in a warm-up phase, with a suggested duration of two to three minutes. After participants finish the warm-up, they are given three minutes to perform the task on a test region. As before, we repeat training and testing for both interfaces.
We evaluate the road network graphs that the participants produce in terms of total road length, RGE, and max-RGE.
Results. We report the mean and standard error of total road length added by the participants in Figure 10. MAiD improves mapping productivity in terms of road length by 3.5x (from 25 km to 88 km). Most of this improvement can be attributed to the teleport functionality eliminating the need for panning around the imagery to find unmapped roads. Additionally, though, because teleport prioritizes large unmapped components with many connections to the existing road network, validating these components is much faster than manually tracing them.
As before, RGE and max-RGE are comparable for the two editors. Mean and standard error of RGE is 7.0 m ± 0.7 m with the baseline editor, and 5.3 m ± 0.1 m with MAiD. For max-RGE, it is 53 m ± 14 m with the baseline, and 39 m ± 4 m with MAiD.
Automatic Map Inference
Dataset. We evaluate our approach for inferring road topology from aerial imagery on the RoadTracer dataset [2], which contains imagery and ground truth road network graphs from forty cities. The data is split into a training set and a test set; the test set includes data for a 16 sq km region around the city centers of 15 cities, while the training set contains data from 25 other cities. Imagery is from Google Maps, and road network data is from OpenStreetMap.
The test set includes 9 cities in the U.S., 3 in Canada, and 1 in each of France, the Netherlands, and Japan.
Baselines. We compare against the baseline segmentation approach and the IGC implementation from [2]. The segmentation approach applies a 13-layer CNN, and then extracts a road network graph using thresholding, thinning, and refinement. The IGC approach, RoadTracer, trains a CNN using a supervised dynamic labels procedure that resembles reinforcement learning. This approach achieves state-of-the-art performance on the dataset, on which DeepRoadMapper [11] has also been evaluated.
Metrics. We evaluate the road network graphs output by the map inference schemes on the TOPO metric [3], which is commonly used in the automatic road map inference literature [1]. TOPO evaluates both the geometrical accuracy (how closely the inferred segments align with the actual road) and the topological accuracy (correct connectivity) of an inferred map. It simulates an agent traveling on the road network from an origin location, and compares the destinations that can be reached within a fixed radius in the inferred map with those that can be reached in the ground truth map. This comparison is repeated over a large number of randomly selected origins to obtain an average precision and recall. We also evaluate the execution time of the schemes on an AWS p2.xlarge instance with an NVIDIA Tesla K80 GPU.
Results. We show TOPO precision-recall curves obtained by varying parameter choices in Figure 11, and average execution time over the fifteen 16 sq km test regions for parameters that correspond to a 10% error rate in Table 1. We find that our approach exhibits both the high accuracy of IGC and the speed of segmentation methods.
Our map inference approach has comparable TOPO performance to IGC, while outperforming the segmentation approach on error rate by up to 1.6x. This improvement in error rate is crucial for machine-assisted map editing as it reduces the time users spend validating incorrect inferred segments.
On execution time, our approach performs comparably to the segmentation approach, while IGC is almost 8x slower. A low execution time is crucial to MAiD's interactive workflow. Users can explore a new region for two to three minutes while the automatic map inference approach runs; however, a fifteen-minute runtime breaks the workflow.
Qualitative Results
In Figure 12, we show qualitative results from MAiD when using segments inferred by our map inference algorithm.
CONCLUSION
Full automation for building road maps has proven infeasible due to high error rates in automatic map inference methods. We instead propose machine-assisted map editing, where we integrate automatically inferred road segments into the existing map editing process by having humans validate these segments before the segments are incorporated into the map. Our map editor, Machine-Assisted iD (MAiD), improves mapping productivity by as much as 3.5x by focusing on tasks where machine-assistance provides the most benefit. We believe that by improving mapping productivity, MAiD has the potential to substantially improve coverage in road maps.
Figure 12: Qualitative results from MAiD with our map inference algorithm. Segments in the existing map are in white. We show our pruning approach applied on a region of Indonesia in the top image, with pruned roads in purple and retained roads in yellow. The middle and bottom images show connected components of inferred segments that the teleport feature pans the user to, in Washington and Bangkok respectively.
| 5,944 |
1810.07753
|
2897323328
|
Stringent latency requirements in advanced Internet of Things (IoT) applications as well as an increased load on cloud data centers have prompted a move towards a more decentralized approach, bringing storage and processing of IoT data closer to the end-devices through the deployment of multi-purpose IoT gateways. However, the resource constrained nature and diversity of these gateways pose a challenge in developing applications that can be deployed widely. This challenge can be overcome with containerization, a form of lightweight virtualization, bringing support for a wide range of hardware architectures and operating system agnostic deployment of applications on IoT gateways. This paper discusses the architectural aspects of containerization, and studies the suitability of available containerization tools for multi-container deployment in the context of IoT gateways. We present containerization in the context of AGILE, a multi-container and micro-service based open source framework for IoT gateways, developed as part of a Horizon 2020 project. Our study of containerized services to perform common gateway functions like device discovery, data management and cloud integration among others, reveal the advantages of having a containerized environment for IoT gateways with regard to use of base image hierarchies and image layering for in-container and cross-container performance optimizations. We illustrate these results in a set of benchmark experiments in this paper.
|
The growth in both the number of IoT devices and IoT applications in a wide array of domains has brought about new requirements to the IoT ecosystem, which include location awareness, geo-distribution of processing nodes and low latency in device-cloud communication. This has led to a burden on the traditional resources for IoT, including networking, storage and processing resources. Consequently, a considerable amount of literature has been published on the use of Single Board Computers (SBCs) like Raspberry Pi as an intermediate processing layer @cite_16 @cite_7 and an enabler for various IoT applications @cite_15 @cite_14 . Moreover, the suitability of networking resource virtualization for IoT, including Software Defined Networks (SDNs) and Network Virtualization among others @cite_10 @cite_13 , has also been discussed in existing work. Our study of the existing literature focuses on virtualization of storage and processing resources at the OS level in the form of containerization. We study two aspects of containerization in the IoT context: first, the existing studies on the suitability and performance of containerization on resource-constrained devices, and second, the application of containerization to different use-cases of IoT.
|
{
"abstract": [
"The Internet-of-Things (IoT) envisions a world where billions of everyday objects and mobile devices communicate using a large number of interconnected wired and wireless networks. Maximizing the utilization of this paradigm requires fine-grained QoS support for differentiated application requirements, context-aware semantic information retrieval, and quick and easy deployment of resources, among many other objectives. These objectives can only be achieved if components of the IoT can be dynamically managed end-to-end across heterogeneous objects, transmission technologies, and networking architectures. Software-defined Networking (SDN) is a new paradigm that provides powerful tools for addressing some of these challenges. Using a software-based control plane, SDNs introduce significant flexibility for resource management and adaptation of network functions. In this article, we study some promising solutions for the IoT based on SDN architectures. Particularly, we analyze the application of SDN in managing resources of different types of networks such as Wireless Sensor Networks (WSN) and mobile networks, the utilization of SDN for information-centric networking, and how SDN can leverage Sensing-as-a-Service (SaaS) as a key cloud application in the IoT.",
"Since its introduction, the Internet of Things (IoT) has changed several aspects of our lives, leading to the commercialization of different heterogeneous devices. In order to bridge the gap among these heterogeneous devices, in some of the most common IoT use cases-e.g., smart home, smart buildings, etc.- the presence of a gateway as an enabler of interoperability is required. In this paper, we introduce the concept of a Gateway-as-a-Service (GaaS), a lightweight device that can be shared between different users thanks to the use of virtualization techniques. Performance has been evaluated on real hardware and results demonstrate the lightweight characteristics of the proposal.",
"",
"An image capture system with embedded computing can extract information from images without need for an external processing unit, and interface devices used to make results available to other devices. The choosing of an Embedded platform is very unique and easy to implement. The paper proposed an image capturing technique in an embedded system based on Raspberry Pi board. Considering the requirements of image capturing and recognition algorithm, Raspberry Pi processing module and its peripherals, implementing based on this platform, finally actualized Embedded Image Capturing using Raspberry Pi system (EICSRS). Experimental results show that the designed system is fast enough to run the image capturing, recognition algorithm, and the data stream can flow smoothly between the camera and the Raspberry Pi board.",
"Cloud technology is moving towards multi-cloud environments with the inclusion of various devices. Cloud and IoT integration resulting in so-called edge cloud and fog computing has started. This requires the combination of data centre technologies with much more constrained devices, but still using virtualised solutions to deal with scalability, flexibility and multi-tenancy concerns. Lightweight virtualisation solutions do exist for this architectural setting with smaller, but still virtualised devices to provide application and platform technology as services. Containerisation is a solution component for lightweight virtualisation solution. Containers are furthermore relevant for cloud platform concerns dealt with by Platform-as-a-Service (PaaS) clouds like application packaging and orchestration. We demonstrate an architecture for edge cloud PaaS. For edge clouds, application and service orchestration can help to manage and orchestrate applications through containers. In this way, computation can be brought to the edge of the cloud, rather than data from the Internet-of-Things (IoT) to the cloud. We show that edge cloud requirements such as cost-efficiency, low power consumption, and robustness can be met by implementing container and cluster technology on small single-board devices like Raspberry Pis. This architecture can facilitate applications through distributed multi-cloud platforms built from a range of nodes from data centres to small devices, which we refer to as edge cloud. We illustrate key concepts of an edge cloud PaaS and refer to experimental and conceptual work to make that case.",
"The imminent arrival of the Internet of Things (IoT), which consists of a vast number of devices with heterogeneous characteristics, means that future networks need a new architecture to accommodate the expected increase in data generation. Software defined networking (SDN) and network virtualization (NV) are two technologies that promise to cost-effectively provide the scale and versatility necessary for IoT services. In this paper, we survey the state of the art on the application of SDN and NV to IoT. To the best of our knowledge, we are the first to provide a comprehensive description of every possible IoT implementation aspect for the two technologies. We start by outlining the ways of combining SDN and NV. Subsequently, we present how the two technologies can be used in the mobile and cellular context, with emphasis on forthcoming 5G networks. Afterward, we move to the study of wireless sensor networks, arguably the current foremost example of an IoT network. Finally, we review some general SDN-NV-enabled IoT architectures, along with real-life deployments and use-cases. We conclude by giving directions for future research on this topic."
],
"cite_N": [
"@cite_13",
"@cite_14",
"@cite_7",
"@cite_15",
"@cite_16",
"@cite_10"
],
"mid": [
"2207117320",
"2571221493",
"2043926804",
"2183145433",
"2533866537",
"2518660493"
]
}
|
Towards Multi-container Deployment on IoT Gateways
|
Over recent years, the concept of the Internet of Things has gradually evolved from a paradigm for creating a network of objects connected to the Internet to an interconnected network of data producers and data consumers. Regular day-to-day objects equipped with sensors act as data producers, generating data sensed from the surrounding environment, while software applications as well as end-devices equipped with actuators act as data consumers, performing actions based on the data gathered. This notion of IoT has led to its application in various domains like health-care, autonomous transport and smart cities, among others, which leverage the data generated from end devices to gain meaningful insights. Due to the resource-constrained nature of these IoT end-devices, the scope of storage and data processing on these devices is limited. Thus, data processing and business analytics are performed on cloud platforms and software services running on the cloud. However, with the number of connected devices predicted to grow to up to 75 billion by 2025 [1], the data processing and storage architecture based solely on the cloud is facing a few challenges.
The majority of IoT applications are heavily dependent on cloud storage and processing, which affects the end-to-end latency and results in inefficient utilization of resources. These challenges, together with privacy and security considerations, have prompted a move away from the centralized architecture of storage and processing on the cloud, to bring processing and storage closer to the end-devices with the edge computing model. The edge computing model leverages a set of devices in the architecture between the end devices and the cloud. These devices can be legacy devices present in the network with storage and processing capabilities [2] or dedicated devices deployed to serve this purpose, like IoT gateways and cloudlet devices [3].
The presence of the edge computing layer aids the offloading of data processing and validation to the edge layer. Moreover, it facilitates the implementation of collaborative computing among end-devices as well as the implementation of device and data management policies. However, the edge layer devices are not usually resource-rich; thus, following the cloud-oriented approach of hypervisor-based virtualization can prove to be cumbersome on these devices. Moreover, the edge layer devices are heterogeneous in terms of their hardware specifications, processor architectures and operating systems. Thus, lightweight virtualization in the form of containerization offers a suitable solution to the concerns addressed above. Containerization allows virtualization at the OS level, leveraging the kernel of the operating system to offer isolated user-spaces to each container. Thus, containerization facilitates the implementation of a microservice architecture, where each functionality on the edge device is developed as a service running inside a container. Containerization offers the flexibility to develop different services in different programming languages, with communication among the containers using well-defined APIs.
In this paper, we present AGILE, an open-source framework for IoT gateways offering services including device and protocol management, data storage, security and access control. AGILE is designed based on a microservice architecture, with each of the services above deployed in a separate container. Such containerization provides multiple advantages, but comes with a performance overhead. We use AGILE as a case study to observe the overhead associated with containerization over a conventional approach, and discuss improvements achieved by applying techniques like cross-container optimization and in-container optimization.
The rest of the paper is structured as follows. In section II, we discuss the state of the art in this research area, highlighting the work on performance measurements of containerized approaches as well as work on containerization in the IoT context. In section III, we discuss the services offered by the AGILE framework and the corresponding architecture. Section IV highlights the different approaches for optimizing the performance of containers for edge layer devices. In section V, we present our observations from the performance tests carried out on AGILE, concluding our work in section VI.
A. Suitability of Containerization
Previous studies have focused on the tradeoffs in applying hypervisor-based virtualization and lightweight containerization to edge devices. The authors of [10] illustrate the advantages of containerization over hypervisor-based hardware virtualization in terms of resource footprint, flexibility and portability. The authors state that hypervisor-based virtualization is more suited for Infrastructure-as-a-Service on the cloud than containerization, which offers a portable runtime, easier deployability on multiple servers and interconnectivity among containers. These advantages, on the other hand, make containerization more suitable for the edge layer in a Platform-as-a-Service scenario [11]. Pahl et al. [4] leverage the resources of Raspberry Pi devices to further build a cluster of containers running on multiple devices in the PaaS context. The cluster is designed to perform computationally intensive tasks including cluster and data management, overcoming the resource-constrained nature of each device.
A significant amount of research is aimed at conducting performance tests to analyze the behavior of edge devices under containerization. The authors of [12] study the performance of VM-based virtualization and containerization against a native approach in terms of CPU cycles, disk throughput and network throughput. The results show that containerization outperforms VM-based virtualization for memory I/O, network throughput and CPU cycles. Morabito et al. [13] study the processing resources and power consumed when performing different tasks on a Raspberry Pi over wired and wireless connectivity. These tasks include running a containerized CoAP server for processing data, as well as performing sensing-actuation and video analytics. The author of [14] performs benchmark tests on the Raspberry Pi B+ and Raspberry Pi 2 Model B in terms of disk and network I/O for native and containerized approaches. While the overhead is very high for the Raspberry Pi B+, significant performance improvements are observed for the Raspberry Pi 2.
In the above studies, the benchmarks clearly show that containerization is feasible on System on Chip (SoC) devices like the Raspberry Pi 2 and that it is more efficient than VM-based virtualization. However, there is a lack of studies on how the process of containerization itself can be optimized, and of benchmark tests for the same. Existing literature shows approaches to deploy containers in a cluster of devices; however, there exists a gap in terms of multi-container deployment and optimization on a single device.
B. Containerization in IoT use cases
Several articles in the existing literature present applications of IoT based on containerization in different use cases. The authors of [15] demonstrate a distributed and layered architecture for Industrial IoT based applications, with deployment of Docker containers on end devices, the gateway and the cloud. Chesov et al. [16] present a multi-tier approach to containerize different functionalities in the smart cities context, like data aggregation, business analytics and user interaction with data, and deploy the containers on the cloud. Kovatsch et al. simplify the programmability of IoT applications by exposing scripts and configurations using a RESTful CoAP interface from a deployed container [17].
The existing literature applies containerization to different use-cases and specific areas of IoT. However, there is a lack of a containerization-based framework which can be applicable to multiple use cases and applications of IoT. We try to address this gap with the framework presented in this paper.
III. AGILE FRAMEWORK
The AGILE gateway, which stands for an Adaptive & Modular Gateway for the IoT, was conceived to design and implement a truly modular gateway in terms of hardware and software, and to be adaptive in handling various types of devices, networking interfaces, communication protocols, and use cases.
A. AGILE Microservices
AGILE is aimed at developing an open-source software and hardware framework for IoT development. The hardware framework involves the development of two separate versions of the gateway: a makers' version supporting fast prototyping, based on the Raspberry Pi 3 board, while the industrial version is being developed for use cases requiring rugged, production-ready hardware. In the following subsection, we elaborate on the software framework for AGILE as illustrated in Fig. 1.
1) Device Management: The device management handles addition and removal of new devices to the gateway. The device manager supports a set of devices which offer interfaces like reading and writing to the device as well as execution of methods offered by the device. The devices communicate using one or more protocols supported by the protocol manager.
2) Protocol Management: The protocol manager offers interfaces to add and remove protocols as well as support for implementations of underlying methods for each protocol including device discovery. The protocol manager supports a set of protocols which offer interfaces like read, write, connect and disconnect with the devices implementing the protocol.
3) Local Data Storage:
The data storage component provides time-series based storage for data generated by IoT sensors, with support for retention policies and encryption.
4) Gateway Management UI:
The gateway management UI allows the user to manage and access functionalities on the gateway, to start and stop other services like device discovery as well as access to data and visualization of the stored data.
5) IoT App Developers UI and SDK:
The IoT App Developers UI is aimed at offering a user-friendly graphical interface to create application logic by wiring together AGILE-specific and generic nodes in an application workflow. The applications can, e.g., include data collection and storage, rules applicable on the data to implement sensing-actuation use-cases, as well as analytics on sensor data or on the data stored on the gateway. A separate software development kit (SDK) facilitates development of IoT applications written directly in JavaScript, while apps written in other languages can also directly use the APIs provided by the gateway microservices.
6) Cloud Enablers: The AGILE framework supports multiple cloud providers, including Xively and Google Drive, to process and push the data collected from the device and store them on these cloud platforms. This allows seamless end-to-end connectivity from the peripheral devices up to the cloud platforms.
B. Containerized Architecture for AGILE
The software framework for AGILE is designed using a microservice architecture. The rationale behind following the microservice-oriented approach is the following: (i) the services offered are split into components which can interact with each other over required and provided interfaces; (ii) using this approach, the components are easily scalable and adaptable to changing requirements in the system. The implementation of the software framework is achieved by containerizing the offered microservices. The containerization engine we have used for AGILE is Docker, due to its wide-scale adoption, documentation and support for multiple architectures. The functionalities mentioned in the previous subsection are implemented individually in different Docker containers. The framework is language-agnostic, allowing development in any language and thus a wider choice of open-source code to be reused. Moreover, software dependency conflicts are easy to overcome since each containerized service has its own file system namespace.
The core functionality of the gateway, which includes device management and protocol management, is containerized, exposing interfaces as HTTP REST API endpoints. The protocols, which include ZigBee and BLE, are implemented in individual containers. These core containers communicate with each other over DBus, which is implemented in a containerized form as well.
The Gateway Management UI is implemented using OS.js, a JavaScript-based Web Desktop platform, in a containerized environment. This container depends on agile-core to offer access to the other microservices. The developers' UI leverages the Node-RED tool, deployed in a separate container. The cloud integration modules are provided as nodes inside the Node-RED tool. The number of containers used for implementing incremental versions of the AGILE stack is illustrated in Figure 2, which acts as a premise for our experimental setup.
IV. CONTAINERIZATION AND IOT
Containerization of the micro-services provides several advantages in the design, development, deployment, security, and management of the gateways; however, it also brings non-negligible overhead in other areas, most notably in resource consumption.
From the design perspective, the containerized architecture maps well to the micro-service principle, providing clear boundaries between individual services. Containerization also allows fine-grained control over access to system resources, as well as restricted interactions between containers based on access policies. For example, in the case of AGILE, only protocol adapters dealing with network interfaces are allowed access to system devices, and specifically only to those interfaces they require. Containerization facilitates version management of individual components, while also simplifying the whole micro-service composition. Finally, we should note that, due to the large number of Edge/GW class hardware platforms, support in Linux distributions and package repositories is often lagging behind their server and desktop counterparts. Containerization overcomes this by requiring only an up-to-date kernel and the container framework to be installed on the host. The rest is containerized and available on all CPU-compatible platforms.
Containerization can also simplify the development process if the build process is also containerized. In this case, tools in the host file system are not involved in the build process, ensuring that the service is built from source in an entirely reproducible way.
However, there is considerable overhead involved in containerization, especially if we consider a multi-container installation with dozens of containers deployed on a single gateway class machine. Most prominent is the overhead in the overall size of deployment, which translates into an overhead in cost since higher reliability in the form of Embedded MultiMediaCard (eMMC) memory is more expensive than Secure Digital (SD) cards. Due to the possibility of limited and intermittent connectivity, download or update times affect the performance of the system. Due to the resource constrained nature of the gateways in comparison to the cloud, high resource utilization and build times for containers add to the overhead.
As mentioned earlier, AGILE relies on Docker for containerization. In a Docker-based multi-container environment, each micro-service has its own file system image generated using a "Dockerfile": the shell-script-like "recipe" to build, install, and run the service. To allow sharing content between containers and to speed up the generation of images, Docker uses layered images, where each layer contains a set of files adding, overwriting, or deleting files from the union of the previous layers. The image generation starts from an image referred to as the "base image", followed by the addition of build dependencies, build tools, build artifacts, runtime dependencies, and language runtimes. The overall size of all the images of all micro-services can be significant.
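To make the layering concrete, the following is a minimal, hypothetical Dockerfile for a Java-based gateway service (the base image, paths and build commands are illustrative assumptions, not taken from the AGILE repositories); each instruction produces a new image layer on top of the base image:

  # Hypothetical self-contained recipe: build and runtime share one image.
  FROM debian:stretch-slim
  # Build dependencies and tools become layers of the final image.
  RUN apt-get update && apt-get install -y --no-install-recommends \
      openjdk-8-jdk maven && rm -rf /var/lib/apt/lists/*
  COPY . /opt/service
  # Build artifacts are added as yet another layer.
  RUN cd /opt/service && mvn package
  CMD ["java", "-jar", "/opt/service/target/service.jar"]

Because every FROM, RUN, and COPY instruction creates a layer, images that share the same leading instructions can share the corresponding layers on disk and during download.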
To reduce the overall size of the distribution and to address the aforementioned overhead we define a novel taxonomy to propose the following optimization techniques.
A. In-container image layering optimizations
In-container image layering optimization is aimed at reducing overhead during the generation and deployment of individual micro-services. The techniques, illustrated in Fig. 3, are:
1) self-contained Dockerfile: In this case, which serves as our baseline for the following optimizations, each of the above-mentioned steps in the process of generating the final image is carried out in a single Dockerfile, so build dependencies and tools remain part of the deployed image.
2) multi-stage Dockerfile: In the case of the multi-stage build, first a dockerized "build image" is created, comprising the base image, build dependencies and build steps. Leveraging the build image, a special "deployment image" is created containing only the runtime environment and the actual executables. (While multi-stage builds previously required external tools, support was included in Docker with version 17.05-ce; with this, we have also migrated our containers to use the new framework.) A sketch of this technique is given after the list below.
3) image squashing: The image squashing technique creates a single-layered, or monolithic, image from a multi-layered image. In a multi-layered image, the generation of a new layer makes all previous layers immutable; thus, if a file is modified or deleted, the old content remains as residue, increasing the overall image size. This overhead of larger image sizes is mitigated by the image squashing technique.
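As a hedged illustration of the multi-stage technique, the self-contained recipe sketched earlier can be split into a build stage and a deployment stage (again, image names and paths are assumptions for illustration only):

  # Stage 1: "build image" with compilers and build dependencies.
  FROM maven:3-jdk-8 AS build
  COPY . /src
  RUN cd /src && mvn package
  # Stage 2: "deployment image" with only the runtime and the artifact.
  FROM openjdk:8-jre-slim
  COPY --from=build /src/target/service.jar /opt/service.jar
  CMD ["java", "-jar", "/opt/service.jar"]
  # Squashing collapses all generated layers into one at build time,
  # e.g. via the experimental flag: docker build --squash -t service .

Only the layers of the second stage are shipped, which is consistent with the image size reductions reported in Table I.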
B. Cross-container optimizations
When multiple containers are deployed on the same device, a further optimization is possible, as shown in Fig. 4: containers from the set of images can rely on common layers.
1) baseline image collection:
Our baseline is a multi-container setup where each image is generated by developers choosing their base images independently. In practice, this can result in a lack of common layers among the images, and the overall distribution size becomes the sum of the individual images.
2) base image hierarchy based optimization: To overcome the previous problem, we have introduced images based on a hierarchy of base images, with the objective of maximizing the number of shared base layers. This hierarchy of base images is built as follows: firstly, a series of lean, CPU-architecture-specific images is created. Secondly, based on each of these images, a series of Linux-distribution-specific images is created. The significance of this second layer is to provide broad support for installation of both build and runtime dependencies. In the third layer of the hierarchy, images of the previous layer are leveraged to generate images for every relevant language build environment and runtime. By forcing the selection of images from this hierarchy, we achieve two important goals: (i) the number of layers that are shared between our deployed images is maximized; for example, a Java Runtime Environment (JRE) will only be deployed once, and not in all images running Java code. (ii) Since all base images in the hierarchy exist for all CPU architectures, supporting multiple architectures becomes straightforward by generating CPU-specific versions of our own images. A sketch of such a hierarchy is given below.
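As a rough sketch of how such a hierarchy might look (the image tags below are hypothetical, not the actual AGILE image names), each level is its own small Dockerfile building on the previous level, and a service image then starts from a shared runtime image:

  # Level 1 (hypothetical): lean, CPU-architecture-specific base.
  #   FROM arm32v7/debian:stretch-slim       -> tagged agile/base-armv7
  # Level 2: Linux-distribution-specific image with package repositories.
  #   FROM agile/base-armv7                  -> tagged agile/debian-armv7
  # Level 3: shared language runtime, built once for all Java services.
  #   FROM agile/debian-armv7
  #   RUN apt-get update && apt-get install -y openjdk-8-jre-headless
  #                                          -> tagged agile/java-armv7
  # A service image built on the shared hierarchy:
  FROM agile/java-armv7
  COPY target/agile-service.jar /opt/agile-service.jar
  CMD ["java", "-jar", "/opt/agile-service.jar"]

With this scheme, the JRE layer is downloaded and stored once, then reused by every Java-based service image.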
V. OBSERVATION
In this section, we present our study of the performance of our multi-container microservice based deployment on the AGILE gateway. We study the performance improvements in terms of the container sizes and container download/installation times.
A. Experimental setup
For the experiments, we have used the makers' version of the gateway, which consists of an ARM Cortex-A53 processor clocked at 1.2 GHz, 1 GB of RAM and a 32 GB Samsung EVO Plus Class 10 SD card. The installed operating system on the device was Raspbian Jessie, and it was connected to the Internet with a dedicated 50/10 Mbps FTTC/VDSL2 line. The following containers comprising the AGILE stack were used for the experiment. Updates to the overall stack were pushed and released monthly, while individual containers were updated biweekly on average. 1) agile-core: The agile-core container is built from a ZuluJDK base image and depends on the following containers: (i) agile-dbus, (ii) agile-devicemanager, (iii) agile-protocolmanager and (iv) agile-devicefactory. If the containers required by agile-core are already built and available on the device, they are reused; otherwise, they are built during the build process.
2) agile-ble: The agile-ble container is also built from a ZuluJDK base image and depends on the agile-core Docker image. The agile-ble container registers a DBus object which can be leveraged by the agile-protocolmanager. The DBus object offers interfaces for the methods supported by the BLE protocol, including connection, discovery, read and write methods.
3) agile-nodered: The agile-nodered container is built from a NodeJS base image and does not depend on other containers. This container runs the Node-RED application and exposes an endpoint to access it. The Node-RED instance running inside the container also offers access to multiple custom Node-RED nodes developed for the AGILE gateway, which include recommendation, cloud integration and device interfacing nodes. A hedged sketch of what such an image recipe could look like is given below.
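The following is a hypothetical sketch of a Node-RED service recipe in the spirit of agile-nodered (base image tag, directory layout and port are assumptions; the actual AGILE Dockerfile may differ):

  FROM node:8-slim
  # Install Node-RED globally; --unsafe-perm lets npm build native addons as root.
  RUN npm install -g --unsafe-perm node-red
  # Add hypothetical custom AGILE nodes to the Node-RED user directory.
  COPY nodes/ /root/.node-red/nodes/
  # Node-RED serves its editor and HTTP endpoints on port 1880 by default.
  EXPOSE 1880
  CMD ["node-red"]

Since such an image shares no layers with the ZuluJDK-based images of agile-core and agile-ble, it is exactly the kind of image that benefits from the base image hierarchy described in Section IV.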
B. Tests
The following tests are conducted to address two key issues for gateway devices mentioned in the state of the art. First, we study the sizes of the selected images as a function of in-container image optimization techniques, assessing their effectiveness. Second, we evaluate cross-container optimization by looking at the size, and consequently the download time, of the AGILE stack. Time measurements are repeated ten times and average values are presented. Table I shows the effect of in-container optimization techniques on the images of the three selected components. Note that image sizes include base layers, language runtimes for the different languages (Java or JavaScript) and dependencies, hence the relatively large image sizes. Multi-stage build optimization reduces image sizes to as low as 45% of the baseline, in the case of agile-core. Squashing, if applied on top of the already optimized multi-stage image, improves image size only marginally.
On the contrary, as we show next, squashing can be counterproductive for updates. Fig. 5 shows the download times for images built using in-container optimization. Although in reality we have introduced the various optimizations gradually as the stack and its components evolved over recent years, for this experiment we have regenerated each version of the components in a baseline, a multi-stage, and a squashed version. First, the generated images are pushed to repositories on Docker Hub with incremental tags for changing versions. These images are then iteratively downloaded in the following ways: (i) absolute, where previous versions of an image are removed before download, and (ii) incremental, emulating a software update, where previous image versions are kept so that only the updated layers are downloaded. The first column of Fig. 5 shows the absolute download times of a specific version of agile-core, considering the three aforementioned techniques. Download times are almost proportional to image sizes, although it is interesting to note that the squashed image, even though smaller, downloads slightly slower than the multi-stage one. This is due to the way image download and decompression are parallelized in Docker. Subsequent columns show incremental download performance, where multi-stage, and even the baseline, outperform squashing. In fact, squashing forces the whole image content to be downloaded again, while multi-stage allows small changes to translate into the update of only a few small layers.
Finally, we consider multiple containers on the same gateway, i.e. the deployment of the AGILE stack. To simplify the discussion and avoid distortion from functionalities added during the evolution of the stack, we restrict deployment to the subset of images discussed before (agile-core, agile-ble, agile-nodered). Fig. 6 shows both the download time and the data amount as the stack evolved. In the initial versions, neither were the base images chosen with common layers in mind, nor were in-container optimizations used. Hence, the download amount was similar to the sum of the baseline image sizes in Table I. The difference is due to the way the data was obtained: in this experiment we use the original images forming a given version of the stack, while Table I contains the size of the newest version of each component regenerated using a given optimization technique. Incremental times and size also show large values for v0.1.0, but only because this is the first stack deployed on a clean system. In the subsequent versions, absolute download times are reduced due to both cross-container and in-container optimizations. More specifically, multi-stage was introduced in v0.1.3 for agile-core and agile-ble, but only in v0.4.1 for agile-nodered. The reduction in the overall size can clearly be seen in these cases. The use of the base image hierarchy, instead, was introduced in v0.1.4 and v0.2.0, respectively. Since this involved a change in the base images, incremental downloads increased for the specific releases, but the overall effect on the size of the stack was a net reduction.
VI. CONCLUSION
This paper summarizes the current state of the art on containerization techniques and the suitability of containerization in the context of IoT systems. The existing gap in the literature in terms of multi-container deployments on edge layer devices is addressed by illustrating a microservice-architecture-based container deployment and by defining optimization techniques to improve gateway performance.
The cross-container optimization technique proposed facilitates the reduction in build times for images and removes the redundancy among base images used in the stack. On the other hand, in-container optimization reduces the overall size of the AGILE stack and thus reduces download times for the images as well.
In future work, we will perform further measurements on multiple devices and device architectures to improve our proposed optimization techniques. We will also investigate optimizing the pushing of delta updates to the devices while minimizing build and download times, using the current study as a reference.
| 3,919 |
1810.07753
|
2897323328
|
Stringent latency requirements in advanced Internet of Things (IoT) applications as well as an increased load on cloud data centers have prompted a move towards a more decentralized approach, bringing storage and processing of IoT data closer to the end-devices through the deployment of multi-purpose IoT gateways. However, the resource constrained nature and diversity of these gateways pose a challenge in developing applications that can be deployed widely. This challenge can be overcome with containerization, a form of lightweight virtualization, bringing support for a wide range of hardware architectures and operating system agnostic deployment of applications on IoT gateways. This paper discusses the architectural aspects of containerization, and studies the suitability of available containerization tools for multi-container deployment in the context of IoT gateways. We present containerization in the context of AGILE, a multi-container and micro-service based open source framework for IoT gateways, developed as part of a Horizon 2020 project. Our study of containerized services to perform common gateway functions like device discovery, data management and cloud integration, among others, reveals the advantages of having a containerized environment for IoT gateways with regard to the use of base image hierarchies and image layering for in-container and cross-container performance optimizations. We illustrate these results in a set of benchmark experiments in this paper.
|
Previous studies have focused on the tradeoffs in applying hypervisor-based virtualization and lightweight containerization to edge devices. The authors of @cite_5 illustrate the advantages of containerization over hypervisor-based hardware virtualization in terms of resource footprint, flexibility and portability. The authors state that hypervisor-based virtualization is more suited for Infrastructure-as-a-Service on the cloud than containerization, which offers a portable runtime, easier deployability on multiple servers and interconnectivity among containers. These advantages, on the other hand, make containerization more suitable for the edge layer in a Platform-as-a-Service scenario @cite_0 . Pahl et al. @cite_16 leverage the resources of Raspberry Pi devices to further build a cluster of containers running on multiple devices in the PaaS context. The cluster is designed to perform computationally intensive tasks including cluster and data management, overcoming the resource-constrained nature of each device.
|
{
"abstract": [
"Containerization is widely discussed as a lightweight virtualization solution. Apart from exhibiting benefits over traditional virtual machines in the cloud, containers are especially relevant for platform-as-a-service (PaaS) clouds to manage and orchestrate applications through containers as an application packaging mechanism. This article discusses the requirements that arise from having to facilitate applications through distributed multicloud platforms.",
"Cloud technology is moving towards more distribution across multi-clouds and the inclusion of various devices, as evident through IoT and network integration in the context of edge cloud and fog computing. Generally, lightweight virtualisation solutions are beneficial for this architectural setting with smaller, but still virtualised devices to host application and platform services, and the logistics required to manage this. Containerisation is currently discussed as a lightweight virtualisation solution. In addition to having benefits over traditional virtual machines in the cloud in terms of size and flexibility, containers are specifically relevant for platform concerns typically dealt with Platform-as-a-Service (PaaS) clouds such as application packaging and orchestration. For the edge cloud environment, application and service orchestration can help to manage and orchestrate applications through containers as an application packaging mechanism. We review edge cloud requirements and discuss the suitability container and cluster technology of that arise from having to facilitate applications through distributed multi-cloud platforms build from a range of networked nodes ranging from data centres to small devices, which we refer to here as edge cloud.",
"Cloud technology is moving towards multi-cloud environments with the inclusion of various devices. Cloud and IoT integration resulting in so-called edge cloud and fog computing has started. This requires the combination of data centre technologies with much more constrained devices, but still using virtualised solutions to deal with scalability, flexibility and multi-tenancy concerns. Lightweight virtualisation solutions do exist for this architectural setting with smaller, but still virtualised devices to provide application and platform technology as services. Containerisation is a solution component for lightweight virtualisation solution. Containers are furthermore relevant for cloud platform concerns dealt with by Platform-as-a-Service (PaaS) clouds like application packaging and orchestration. We demonstrate an architecture for edge cloud PaaS. For edge clouds, application and service orchestration can help to manage and orchestrate applications through containers. In this way, computation can be brought to the edge of the cloud, rather than data from the Internet-of-Things (IoT) to the cloud. We show that edge cloud requirements such as cost-efficiency, low power consumption, and robustness can be met by implementing container and cluster technology on small single-board devices like Raspberry Pis. This architecture can facilitate applications through distributed multi-cloud platforms built from a range of nodes from data centres to small devices, which we refer to as edge cloud. We illustrate key concepts of an edge cloud PaaS and refer to experimental and conceptual work to make that case."
],
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_16"
],
"mid": [
"1959671196",
"1924994176",
"2533866537"
]
}
|
Towards Multi-container Deployment on IoT Gateways
|
Over recent years, the concept of the Internet of Things has gradually evolved from a paradigm for creating a network of objects connected to the Internet to an interconnected network of data producers and data consumers. Regular day-to-day objects equipped with sensors act as data producers, generating data sensed from the surrounding environment, while software applications as well as end-devices equipped with actuators act as data consumers, performing actions based on the data gathered. This notion of IoT has led to its application in various domains like health-care, autonomous transport and smart cities, among others, which leverage the data generated from end devices to gain meaningful insights. Due to the resource-constrained nature of these IoT end-devices, the scope of storage and data processing on these devices is limited. Thus, data processing and business analytics are performed on cloud platforms and software services running on the cloud. However, with the number of connected devices predicted to grow to up to 75 billion by 2025 [1], the data processing and storage architecture based solely on the cloud is facing a few challenges.
The majority of IoT applications are heavily dependent on cloud storage and processing, which affects the end-to-end latency and results in inefficient utilization of resources. These challenges, together with privacy and security considerations, have prompted a move away from the centralized architecture of storage and processing on the cloud, to bring processing and storage closer to the end-devices with the edge computing model. The edge computing model leverages a set of devices in the architecture between the end devices and the cloud. These devices can be legacy devices present in the network with storage and processing capabilities [2] or dedicated devices deployed to serve this purpose, like IoT gateways and cloudlet devices [3].
The presence of the edge computing layer aids the offloading of data processing and validation to the edge layer. Moreover, it facilitates the implementation of collaborative computing among end-devices as well as the implementation of device and data management policies. However, the edge layer devices are not usually resource-rich; thus, following the cloud-oriented approach of hypervisor-based virtualization can prove to be cumbersome on these devices. Moreover, the edge layer devices are heterogeneous in terms of their hardware specifications, processor architectures and operating systems. Thus, lightweight virtualization in the form of containerization offers a suitable solution to the concerns addressed above. Containerization allows virtualization at the OS level, leveraging the kernel of the operating system to offer isolated user-spaces to each container. Thus, containerization facilitates the implementation of a microservice architecture, where each functionality on the edge device is developed as a service running inside a container. Containerization offers the flexibility to develop different services in different programming languages, with communication among the containers using well-defined APIs.
In this paper, we present AGILE, an open-source framework for IoT gateways offering services including device and protocol management, data storage, security and access control. AGILE is designed based on a microservice architecture, with each of the services above deployed in a separate container. Such containerization provides multiple advantages, but comes with a performance overhead. We use AGILE as a case study to observe the overhead associated with containerization over a conventional approach, and discuss improvements achieved by applying techniques like cross-container optimization and in-container optimization.
The rest of the paper is structured as follows. In section II, we discuss the state of the art in this research area, highlighting the work on performance measurements of containerized approaches as well as work on containerization in the IoT context. In section III, we discuss the services offered by the AGILE framework and the corresponding architecture. Section IV highlights the different approaches for optimizing the performance of containers for edge layer devices. In section V, we present our observations from the performance tests carried out on AGILE, concluding our work in section VI.
A. Suitability of Containerization
Previous studies have focused on the tradeoffs in applying hypervisor-based virtualization and lightweight containerization to edge devices. The authors of [10] illustrate the advantages of containerization over hypervisor-based hardware virtualization in terms of resource footprint, flexibility and portability. The authors state that hypervisor-based virtualization is more suited for Infrastructure-as-a-Service on the cloud than containerization, which offers a portable runtime, easier deployability on multiple servers and interconnectivity among containers. These advantages, on the other hand, make containerization more suitable for the edge layer in a Platform-as-a-Service scenario [11]. Pahl et al. [4] leverage the resources of Raspberry Pi devices to further build a cluster of containers running on multiple devices in the PaaS context. The cluster is designed to perform computationally intensive tasks including cluster and data management, overcoming the resource-constrained nature of each device.
A significant amount of research is aimed at conducting performance tests to analyze the behavior of edge devices under containerization. The authors of [12] study the performance of VM-based virtualization and containerization against a native approach in terms of CPU cycles, disk throughput and network throughput. The results show that containerization outperforms VM-based virtualization for memory I/O, network throughput and CPU cycles. Morabito et al. [13] study the processing resources and power consumed when performing different tasks on a Raspberry Pi over wired and wireless connectivity. These tasks include running a containerized CoAP server for processing data, as well as performing sensing-actuation and video analytics. The author of [14] performs benchmark tests on the Raspberry Pi B+ and Raspberry Pi 2 Model B in terms of disk and network I/O for native and containerized approaches. While the overhead is very high for the Raspberry Pi B+, significant performance improvements are observed for the Raspberry Pi 2.
In the above studies, the benchmarks clearly show that containerization is feasible on System on Chip (SoC) devices like the Raspberry Pi 2 and that it is more efficient than VM-based virtualization. However, there is a lack of studies on how the process of containerization itself can be optimized, and of benchmark tests for the same. Existing literature shows approaches to deploy containers in a cluster of devices; however, there exists a gap in terms of multi-container deployment and optimization on a single device.
B. Containerization in IoT use cases
Several articles in the existing literature present applications of IoT based on containerization in different use cases. The authors of [15] demonstrate a distributed and layered architecture for Industrial IoT based applications, with deployment of Docker containers on end devices, the gateway and the cloud. Chesov et al. [16] present a multi-tier approach to containerize different functionalities in the smart cities context, like data aggregation, business analytics and user interaction with data, and deploy the containers on the cloud. Kovatsch et al. simplify the programmability of IoT applications by exposing scripts and configurations using a RESTful CoAP interface from a deployed container [17].
The existing literature applies containerization to different use-cases and specific areas of IoT. However, there is a lack of a containerization-based framework which can be applicable to multiple use cases and applications of IoT. We try to address this gap with the framework presented in this paper.
III. AGILE FRAMEWORK
The AGILE gateway, which stands for an Adaptive & Modular Gateway for the IoT, was conceived to design and implement a truly modular gateway in terms of hardware and software, and to be adaptive in handling various types of devices, networking interfaces, communication protocols, and use cases.
A. AGILE Microservices
AGILE is aimed at developing an open-source software and hardware framework for IoT development. The hardware framework involves the development of two separate versions of the gateway: a makers' version supporting fast prototyping, based on the Raspberry Pi 3 board, while the industrial version is being developed for use cases requiring rugged, production-ready hardware. In the following subsection, we elaborate on the software framework for AGILE as illustrated in Fig. 1.
1) Device Management: The device management handles addition and removal of new devices to the gateway. The device manager supports a set of devices which offer interfaces like reading and writing to the device as well as execution of methods offered by the device. The devices communicate using one or more protocols supported by the protocol manager.
2) Protocol Management: The protocol manager offers interfaces to add and remove protocols as well as support for implementations of underlying methods for each protocol including device discovery. The protocol manager supports a set of protocols which offer interfaces like read, write, connect and disconnect with the devices implementing the protocol.
3) Local Data Storage:
The data storage component provides time-series based storage for data generated by IoT sensors, with support for retention policies and encryption.
4) Gateway Management UI:
The gateway management UI allows the user to manage and access functionalities on the gateway, to start and stop other services like device discovery as well as access to data and visualization of the stored data.
5) IoT App Developers UI and SDK:
The IoT App Developers UI is aimed at offering a user-friendly graphical interface to create application logic by wiring together AGILE-specific and generic nodes in an application workflow. The applications can, e.g., include data collection and storage, rules applicable on the data to implement sensing-actuation use-cases, as well as analytics on sensor data or on the data stored on the gateway. A separate software development kit (SDK) facilitates development of IoT applications written directly in JavaScript, while apps written in other languages can also directly use the APIs provided by the gateway microservices.
6) Cloud Enablers: The AGILE framework supports multiple cloud providers, including Xively and Google Drive, to process and push the data collected from the device and store them on these cloud platforms. This allows seamless end-to-end connectivity from the peripheral devices up to the cloud platforms.
B. Containerized Architecture for AGILE
The software framework for AGILE is designed using a microservice architecture. The rationale behind following the microservice-oriented approach is the following: (i) the services offered are split into components which can interact with each other over required and provided interfaces; (ii) using this approach, the components are easily scalable and adaptable to changing requirements in the system. The implementation of the software framework is achieved by containerizing the offered microservices. The containerization engine we have used for AGILE is Docker, due to its wide-scale adoption, documentation and support for multiple architectures. The functionalities mentioned in the previous subsection are implemented individually in different Docker containers. The framework is language-agnostic, allowing development in any language and thus a wider choice of open-source code to be reused. Moreover, software dependency conflicts are easy to overcome since each containerized service has its own file system namespace.
The core functionality of the gateway, which includes device management and protocol management, is containerized, exposing interfaces as HTTP REST API endpoints. The protocols, which include ZigBee and BLE, are implemented in individual containers. These core containers communicate with each other over DBus, which is implemented in a containerized form as well.
The Gateway Management UI is implemented using OS.js, a JavaScript-based Web Desktop platform, in a containerized environment. This container depends on agile-core to offer access to the other microservices. The developers' UI leverages the Node-RED tool, deployed in a separate container. The cloud integration modules are provided as nodes inside the Node-RED tool. The number of containers used for implementing incremental versions of the AGILE stack is illustrated in Figure 2, which acts as a premise for our experimental setup.
IV. CONTAINERIZATION AND IOT
Containerization of the micro-services provides several advantages in the design, development, deployment, security, and management of the gateways; however, it also brings non-negligible overhead in other areas, most notably in resource consumption.
From the design perspective, the containerized architecture maps well to the micro-service principle, providing clear boundaries between individual services. Containerization also allows fine-grained control over access to system resources, as well as restricted interactions between containers based on access policies. For example, in the case of AGILE, only protocol adapters dealing with network interfaces are allowed access to system devices, and specifically only to those interfaces they require. Containerization facilitates version management of individual components, while also simplifying the whole micro-service composition. Finally, we should note that, due to the large number of Edge/GW class hardware platforms, support in Linux distributions and package repositories is often lagging behind their server and desktop counterparts. Containerization overcomes this by requiring only an up-to-date kernel and the container framework to be installed on the host. The rest is containerized and available on all CPU-compatible platforms.
Containerization can also simplify the development process if the build process is also containerized. In this case, tools in the host file system are not involved in the build process, ensuring that the service is built from source in an entirely reproducible way.
However, there is considerable overhead involved in containerization, especially if we consider a multi-container installation with dozens of containers deployed on a single gateway class machine. Most prominent is the overhead in the overall size of deployment, which translates into an overhead in cost since higher reliability in the form of Embedded MultiMediaCard (eMMC) memory is more expensive than Secure Digital (SD) cards. Due to the possibility of limited and intermittent connectivity, download or update times affect the performance of the system. Due to the resource constrained nature of the gateways in comparison to the cloud, high resource utilization and build times for containers add to the overhead.
As mentioned earlier, AGILE relies on Docker for containerization. In a Docker-based multi-container environment, each micro-service has its own file system image generated using a "Dockerfile": the shell-script-like "recipe" to build, install, and run the service. To allow sharing content between containers and to speed up the generation of images, Docker uses layered images, where each layer contains a set of files adding, overwriting, or deleting files from the union of the previous layers. The image generation starts from an image referred to as the "base image", followed by the addition of build dependencies, build tools, build artifacts, runtime dependencies, and language runtimes. The overall size of all the images of all micro-services can be significant.
To reduce the overall size of the distribution and to address the aforementioned overhead we define a novel taxonomy to propose the following optimization techniques.
A. In-container image layering optimizations
In-container image layering optimization is aimed at reducing overhead during the generation and deployment of individual micro-services. The techniques, illustrated in Fig. 3, are:
1) self-contained Dockerfile: In this case, which serves as our baseline for the following optimizations, each of the above-mentioned steps in the process of generating the final image is carried out in a single Dockerfile, so build dependencies and tools remain part of the deployed image.
2) multi-stage Dockerfile: In the case of the multi-stage build, first a dockerized "build image" is created, comprising the base image, build dependencies and build steps. Leveraging the build image, a special "deployment image" is created containing only the runtime environment and the actual executables. (While multi-stage builds previously required external tools, support was included in Docker with version 17.05-ce; with this, we have also migrated our containers to use the new framework.)
3) image squashing: The image squashing technique creates a single-layered, or monolithic, image from a multi-layered image. In a multi-layered image, the generation of a new layer makes all previous layers immutable; thus, if a file is modified or deleted, the old content remains as residue, increasing the overall image size. This overhead of larger image sizes is mitigated by the image squashing technique.
B. Cross-container optimizations
When multiple containers are deployed on the same device, a further optimization is possible, as shown in Fig. 4: containers from the set of images can rely on common layers.
1) baseline image collection:
Our baseline is a multicontainer setup where each image is generated by developers choosing their base images independently. In practice, this can result in a lack of common layers among the images, and the overall distribution size becomes the sum of individual images.
2) base image hierarchy based optimization: To overcome the previous problem, we have introduced images based on a hierarchy of base images with the objective of maximizing the number of shared base layers. This hierarchy 2 of base images is built as follows: firstly, a series of lean CPU architecture specific images are created. Secondly, based on each of these images, a series of Linux distribution specific images are created. The significance of this second layer is to provide broad support for installation of both build and runtime dependencies. In the third layer of the hierarchy, images of the previous layer are leveraged to generate images for every relevant language build environment and runtime. By forcing the selection of images from this hierarchy, we achieve two important goals: (i) the number of layers that are shared between our deployed images is maximized, for example a Java Runtime Environment (JRE) will only be deployed once, and not in all images running Java code. (ii) Since all base images in the hierarchy exist for all CPU architectures, supporting multiple architectures becomes straightforward by generating CPU specific versions of our own images.
V. OBSERVATION
In this section, we present our study of the performance of our multi-container microservice based deployment on the AGILE gateway. We study the performance improvements in terms of the container sizes and container download/installation times.
A. Experimental setup
For the experiments, we have used the makers' version of the gateway which consists of an ARM Cortex A53 processor clocked at 1.2 GHz, 1 GB of RAM and a 32 GB Samsung EVO Plus Class 10 SD card. The installed operating system on the device was Raspbian Jessie, and it was connected to the Internet with a dedicated 50/10 Mbps FTTC/VDSL2 line. The following containers comprising the AGILE stack were used for the experiment. The updates to the overall stack were pushed and released monthly while individual containers were updated biweekly on average. 1) agile-core: : The agile-core container is built from a ZuluJDK base image and is dependent on the following containers, (i) agile-dbus, (ii) agile-devicemanager, (iii) agileprotocolmanager and (iv) agile-devicefactory. If the containers required by agile-core are already built and available on the device, they are reused, or they are built as well during the build process.
2) agile-ble: : The agile-ble container is also built from a ZuluJDK base image and depends on the agile-core docker image. The agile-ble container registers a dbus object which can be leveraged by the agile-protocolmanager. The dbus object offers interfaces for methods supported by the BLE protocol including connection, discovery, read and write methods.
3) agile-nodered: : The agile-nodered container is built from a NodeJS base image and does not depend on other containers. This container runs the Node-Red application inside the container and exposes an endpoint to access the application. The Node-Red running inside the container also offers access to multiple other custom Node-Red nodes developed for the AGILE gateway which includes recommendation, cloud integration and device interfacing nodes.
B. Tests
The following tests are conducted to address two key issues for gateway devices mentioned in the state of the art. First, we study the sizes of the selected images as a function of in-container image optimization techniques, assessing their effectiveness. Second, we evaluate cross-container optimization by looking at the size and consequently download time of the AGILE stack. Time measurements are repeated ten times and average values are presented. Table I shows the effect of in-container optimization techniques on images of the three selected components. Note that image sizes contain base layers, language runtimes for different languages (Java or JavaScript) and dependencies, hence the relatively large image sizes. Multi-stage build optimization reduces image sizes to as low as 45%, in case of agile-core, of the baseline. Squashing, if applied on top of the already optimized multi-stage image, improves image size only marginally.
On the contrary, as we show next, squashing can be counterproductive for updates. Fig. 5 shows the download times for images built using in-container optimization. Although in reality we have introduced various optimizations gradually as the stack and its components evolved in the recent years, for this experiment we have regenerated each version of the components in a baseline, a multi-stage, and a squashed version. First, the generated images are pushed to repositories on Docker Hub with incremental tags for changing versions. These images are then iteratively downloaded in the following manners: (i) absolute, where previous versions of an image are removed before download, and (ii) incremental, emulating software update, where previous image versions are kept to download just the updated layers. The first column of figure 5 shows the absolute download times of a specific version of agile-core, considering the three aforementioned techniques. Download times are almost proportional to image sizes, although it is interesting to note that the squashed image, even if smaller, downloads slightly slower than the multi-stage one. This is due to the way image download and decompression is parallelized in Docker. Subsequent columns show incremental download performance, where multi-stage, even baseline outperforms squashing. In fact, squashing forces the whole image content to be downloaded again, while multi-stage allows small changes to transform into the update of only a few small layers.
Finally, we consider multiple containers on the same gateway, i.e. the deployment of the AGILE stack. To simplify discussion and avoid distortion from functionalities added during the evolution of the stack, we restrict deployment to the subset of images discussed before (agile-core, agile-ble, agile-nodered). Fig. 6 shows both the download time and the data amount as the stack evolved. In the initial versions, neither the base images were chosen considering common layers, nor in-container optimizations were used. Hence, the download amount was similar to the sum of baseline image sizes in Table I. The difference is due to the way data was obtained: in this experiment we use the original images forming a given version of the stack, while Table I contains the size of the newest version of each component regenerated using a given optimization technique. Incremental times and size also shows large values for v0.1.0, but only because this is the first stack deployed on a clean system. In the subsequent versions, absolute download times are reduced due to both cross-container and in-container optimizations. More specifically, multi-stage was introduced in v0.1.3 for agile-core and agile-ble, while only in v0.4.1 for agilenodered. Reduction in the overall size can clearly be seen in these cases. The use of the base image hierarchy, instead, was introduced in v0.1.4 and v0.2.0, respectively. Since this involved a change in the base images, incremental downloads increased for the specific release, but the overall effect to
VI. CONCLUSION
This paper summarizes the current state of the art on containerization techniques and suitability of containerization in the context of IoT systems. The existing gap in the current literature in terms of multi-container deployments on edge layer devices is addressed by illustrating a microservice architecture based container deployment and defining optimization techniques to improve the gateway performance.
The cross-container optimization technique proposed facilitates the reduction in build times for images and removes the redundancy among base images used in the stack. On the other hand, in-container optimization reduces the overall size of the AGILE stack and thus reduces download times for the images as well.
In future work, we will perform further measurements on multiple devices and device architectures to improve our proposed optimization techniques. We would also investigate optimization of pushing delta updates to the devices while minimizing the build and download times using the current study as a reference.
| 3,919 |
1810.07753
|
2897323328
|
Stringent latency requirements in advanced Internet of Things (IoT) applications as well as an increased load on cloud data centers have prompted a move towards a more decentralized approach, bringing storage and processing of IoT data closer to the end-devices through the deployment of multi-purpose IoT gateways. However, the resource-constrained nature and diversity of these gateways pose a challenge in developing applications that can be deployed widely. This challenge can be overcome with containerization, a form of lightweight virtualization, bringing support for a wide range of hardware architectures and operating-system-agnostic deployment of applications on IoT gateways. This paper discusses the architectural aspects of containerization and studies the suitability of available containerization tools for multi-container deployment in the context of IoT gateways. We present containerization in the context of AGILE, a multi-container and micro-service based open source framework for IoT gateways, developed as part of a Horizon 2020 project. Our study of containerized services performing common gateway functions like device discovery, data management and cloud integration, among others, reveals the advantages of having a containerized environment for IoT gateways with regard to the use of base image hierarchies and image layering for in-container and cross-container performance optimizations. We illustrate these results in a set of benchmark experiments in this paper.
|
Several articles in existing literature present applications of IoT based on containerization in different use cases. The authors of @cite_11 demonstrate a distributed and layered architecture for Industrial IoT based applications with deployment of Docker containers on end devices, the gateway and the cloud. Chesov et al. @cite_6 present a multi-tier approach to containerize different functionalities in the smart cities context, like data aggregation, business analytics and user interaction with data, and deploy the containers on the cloud. Kovatsch et al. simplify programmability of IoT applications by exposing scripts and configurations using a RESTful CoAP interface from a deployed container @cite_1.
|
{
"abstract": [
"Programming Internet of Things (IoT) applications is challenging because developers have to be knowledgeable in various technical domains, from low-power networking, over embedded operating systems, to distributed algorithms. Hence, it will be challenging to find enough experts to provide software for the vast number of expected devices, which must also be scalable and particularly safe due to the connection to the physical world. To remedy this situation, we propose an architecture that provides Web-like scripting for low-end devices through Cloud-based application servers and a consistent, RESTful programming model. Our novel runtime container Actinium (Ac) exposes scripts, their configuration, and their lifecycle management through a fully RESTful programming interface using the Constrained Application Protocol (CoAP). We endow the JavaScript language with an API for direct interaction with mote-class IoT devices, the CoapRequest object, and means to export script data as Web resources. With Actinium, applications can be created by simply mashing up resources provided by CoAP servers on devices, other scripts, and classic Web services. We also discuss security considerations and show the suitability of this architecture in terms of performance with our publicly available implementation.",
"This paper describes the containerization of a multi-tier client-server architecture based on LXC isolation technology for Smart Cities application layer. Multi-tier client-server architecture was used as a single node. Implementation classes and containerization management classes have been designed and implemented in C++. The approach proposed in this paper allowed to create a homogeneous environment in which the control nodes are built using the same technology as the controlled nodes to manage devices in Smart Cities",
"Industrial Internet of things (IIoTs) relies on different devices working together, gathering and sharing data using multiple communication protocols. This heterogeneity becomes a hindrance in the development of architectures that can support applications operating independently of the underlying protocols. Therefore in this paper, we proposed a modular and scalable architecture based on lightweight virtualization. The modularity provided by the proposed architecture combined with lightweight virtualization orchestration supplied by Docker simplifies management and enables distributed deployments. Availability and fault-tolerance characteristics are ensured by distributing the application logic across different devices where a single microservice or even device failure can have no effect on system performance. The proposed architecture is instantiated and tested on a simple time-dependent use case. The obtained results validates that the proposed architecture can be used to deploy services on demand at different architecture layers."
],
"cite_N": [
"@cite_1",
"@cite_6",
"@cite_11"
],
"mid": [
"1988282856",
"2617696370",
"2612758540"
]
}
|
Towards Multi-container Deployment on IoT Gateways
|
Over the recent years, the concept of Internet of Things has gradually evolved from a paradigm to create a network of objects connected to the Internet to an interconnected network of data producers and data consumers. Regular day-to-day objects equipped with sensors act as data producers, generating data sensed from the surrounding environment, while software applications as well as end-devices equipped with actuators act as data consumers, performing actions based on the data gathered. This notion of IoT has led to its application in various domains like health-care, autonomous transport and smart cities, among others, which leverage the data generated from end devices to gain meaningful insights from it. Due to the resource-constrained nature of these IoT end-devices, the scope of storage and data processing on these devices is limited. Thus, the data processing and business analytics are performed on cloud platforms and software services running on the cloud. However, with the number of connected devices predicted to grow up to 75 billion by 2025 [1], the data processing and storage architecture based solely on the cloud is facing a few challenges.
The majority of IoT applications are heavily dependent on cloud storage and processing, which affects the end-to-end latency and results in inefficient utilization of resources. These challenges, together with privacy and security considerations, have prompted a move away from the centralized architecture of storage and processing on the cloud, bringing the processing and storage closer to the end-devices with the edge computing model. The edge computing model leverages a set of devices in the architecture between the end devices and the cloud. These devices can be legacy devices present in the network with storage and processing capabilities [2] or dedicated devices deployed to serve this purpose, like IoT gateways and cloudlet devices [3].
The presence of the edge computing layer aids offloading of data processing and validation to the edge layer. Moreover, it facilitates implementation of collaborative computing among end-devices as well as implementation of device and data management policies. However, the edge layer devices are usually not resource-rich; thus, following the cloud-oriented approach of hypervisor-based virtualization can prove to be cumbersome on these devices. Moreover, the edge layer devices are heterogeneous in terms of their hardware specifications, processor architecture and operating systems running on the devices. Thus, lightweight virtualization in the form of containerization offers a suitable solution to the concerns addressed above. Containerization allows virtualization at the OS level, leveraging the kernel of the operating system to offer isolated user-spaces to each container. Thus, containerization facilitates the implementation of a microservice architecture, where each functionality on the edge device is developed as a service running inside its own container. Containerization offers the flexibility to develop different services in different programming languages, with communication among the containers using well-defined APIs.
In this paper, we present AGILE, an open-source framework for IoT gateways offering services including device and protocol management, data storage, security and access control. AGILE is designed based on a microservice architecture, with each of the services above deployed in separate containers. Such containerization provides multiple advantages, but comes with a performance overhead. We use AGILE as a case study to observe the overhead associated with containerization over a conventional approach and discuss improvements achieved by applying techniques like cross-container optimization and in-container optimization.
The rest of the paper is structured as follows. In section II, we discuss the state of the art in this research area, highlighting work on performance measurements of containerized approaches as well as work on containerization in the IoT context. In section III, we discuss the services offered by the AGILE framework and the corresponding architecture. Section IV highlights the different approaches for optimizing the performance of containers for edge layer devices. In section V, we present our observations from the performance tests carried out on AGILE, concluding our work in section VI.
A. Suitability of Containerization
Previous studies have focused on the tradeoffs in applying hypervisor-based virtualization and lightweight containerization to edge devices. The authors of [10] illustrate the advantages of containerization over hypervisor-based hardware virtualization in terms of size of the resources, flexibility and portability. The authors state that hypervisor-based virtualization is more suited for Infrastructure-as-a-Service on the cloud than containerization, which offers a portable runtime, easier deployability on multiple servers and interconnectivity among containers. These advantages, on the other hand, make containerization more suitable for the edge layer in a Platform-as-a-Service scenario [11]. Pahl et al. [4] leverage the resources of Raspberry Pi devices to further build a cluster of containers running on multiple devices in the PaaS context. The cluster is designed to perform computationally intensive tasks including cluster and data management, overcoming the resource-constrained nature of each device.
A significant amount of research is aimed at conducting performance tests to analyze the behavior of edge devices with the implementation of containerization. The authors of [12] study the performance of VM-based virtualization and containerization against a native approach in terms of CPU cycles, disk throughput and network throughput. The results show that containerization outperforms VM-based virtualization for memory I/O, network throughput and CPU cycles. Morabito et al. [13] study the processing resources and power consumption for performing different tasks on a Raspberry Pi for wired and wireless connectivity. These tasks include running a containerized CoAP server for processing data, as well as performing sensing, actuation and video analytics. The author of [14] performs benchmark tests on the Raspberry Pi B+ and the Raspberry Pi 2 model B in terms of disk and network I/O for native and containerized approaches. While the overhead is very high for the Raspberry Pi B+, significant performance improvements are observed for the Raspberry Pi 2.
In the above studies, the benchmarks clearly show that containerization is feasible on System on Chip (SoC) devices like the Raspberry Pi 2 and that it is more efficient than VM-based virtualization. However, there is a lack of studies on how the process of containerization itself can be optimized, and of benchmark tests for the same. Existing literature shows approaches to deploy containers in a cluster of devices; however, there exists a gap in terms of multi-container deployment and optimization on a single device.
B. Containerization in IoT use cases
Several articles in existing literature present applications of IoT based on containerization in different use cases. The authors of [15] demonstrate a distributed and layered architecture for Industrial IoT based applications with deployment of Docker containers on end devices, the gateway and the cloud. Chesov et al. [16] present a multi-tier approach to containerize different functionalities in the smart cities context, like data aggregation, business analytics and user interaction with data, and deploy the containers on the cloud. Kovatsch et al. simplify programmability of IoT applications by exposing scripts and configurations using a RESTful CoAP interface from a deployed container [17].
The existing literature applies containerization to different use cases and specific areas of IoT. However, there is a lack of a containerization-based framework that is applicable to multiple use cases and applications of IoT. We try to fill this gap with the AGILE framework, presented in the following section.
III. AGILE FRAMEWORK
The AGILE gateway, which stands for an Adaptive & Modular Gateway for the IoT, was conceived to design and implement a truly modular gateway in terms of hardware and software, and to be adaptive in handling various types of devices, networking interfaces, communication protocols, and use cases.
A. AGILE Microservices
AGILE is aimed at developing an open-source software and hardware framework for IoT development. The hardware framework involves development of two separate versions of the gateway: a maker's version supporting fast prototyping, based on the Raspberry Pi 3 board, and an industrial version being developed for use cases requiring rugged, production-ready hardware. In the following subsections, we elaborate on the software framework for AGILE as illustrated in Fig. 1.
1) Device Management: The device manager handles addition and removal of devices to and from the gateway. It supports a set of devices which offer interfaces like reading from and writing to the device, as well as execution of methods offered by the device. The devices communicate using one or more protocols supported by the protocol manager.
2) Protocol Management: The protocol manager offers interfaces to add and remove protocols, as well as implementations of the underlying methods for each protocol, including device discovery. The supported protocols offer interfaces like read, write, connect and disconnect towards the devices implementing the protocol; a sketch of such a protocol interface follows the list below.
3) Local Data Storage: The data storage component provides time-series-based storage for data generated by IoT sensors, with support for retention policies and encryption.
4) Gateway Management UI: The gateway management UI allows the user to manage and access functionalities on the gateway, to start and stop services like device discovery, and to access and visualize the stored data.
5) IoT App Developers UI and SDK: The IoT App Developers UI is aimed at offering a user-friendly graphical interface to create application logic by wiring together AGILE-specific and generic nodes in an application workflow. Applications can include, e.g., data collection and storage, rules applied to the data to implement sensing-actuation use cases, and analytics on sensor data or on the data stored on the gateway. A separate software development kit (SDK) facilitates development of IoT applications written directly in JavaScript, while apps written in other languages can also directly use the APIs provided by gateway microservices.
6) Cloud Enablers: The AGILE framework supports multiple cloud providers, including Xively and Google Drive, to process the data collected from the devices and push and store it on these cloud platforms. This allows seamless end-to-end connectivity from the peripheral devices up to the cloud platforms.
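As an illustration of the device and protocol interfaces described above, the following is a minimal Python sketch of a protocol contract; the method names and signatures are illustrative assumptions, not the actual AGILE API.

from abc import ABC, abstractmethod
from typing import List

class Protocol(ABC):
    """Contract a protocol container (e.g. BLE, ZigBee) could expose to the
    protocol manager; all names here are hypothetical."""

    @abstractmethod
    def discover(self) -> List[str]:
        """Scan for reachable devices and return their identifiers."""

    @abstractmethod
    def connect(self, device_id: str) -> None:
        """Open a connection to the given device."""

    @abstractmethod
    def disconnect(self, device_id: str) -> None:
        """Close the connection to the given device."""

    @abstractmethod
    def read(self, device_id: str, component: str) -> bytes:
        """Read a value from a device component (e.g. a BLE characteristic)."""

    @abstractmethod
    def write(self, device_id: str, component: str, payload: bytes) -> None:
        """Write a value to a device component."""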
B. Containerized Architecture for AGILE
The software framework for AGILE is designed using a microservice architecture. The rationale behind following the microservice-oriented approach is the following: (i) the services offered are split into components which can interact with each other over required and provided interfaces; (ii) with this approach, the components are easily scalable and adaptable to changing requirements in the system. The implementation of the software framework is achieved by containerizing the offered microservices. The containerization engine we have used for AGILE is Docker, due to its wide-scale adoption, documentation and support for multiple architectures. The functionalities mentioned in the previous subsection are implemented in separate Docker containers. The framework is language-agnostic, allowing development in any language and thus a wider choice of open-source code to be reused. Moreover, software dependency conflicts are easy to overcome since each containerized service has its own file system namespace.
The core functionality of the gateway, which includes device management and protocol management, is containerized, exposing interfaces as HTTP REST API endpoints. The protocols, which include ZigBee and BLE, are implemented in individual containers. These core containers communicate with each other over DBus, which is implemented in a containerized form as well.
The Gateway Management UI is implemented using OS.js, a JavaScript-based Web Desktop platform, in a containerized environment. This container depends on agile-core to offer access to the other microservices. The developers' UI leverages the Node-RED tool deployed in a separate container. The cloud integration modules are provided as nodes inside the Node-RED tool. The number of containers used for implementing incremental versions of the AGILE stack is illustrated in Figure 2, which acts as a premise for our experimental setup.
IV. CONTAINERIZATION AND IOT
Containerization of the micro-services provides several advantages in the design, development, deployment, security, and management of the gateways; however, it also brings non-negligible overhead in other areas, most notably in resource consumption.
From the design perspective, the containerized architecture maps well to the micro-service principle, providing clear boundaries between individual services. Containerization also allows fine-grained control over access to system resources, as well as restricted interactions between containers based on access policies. For example, in the case of AGILE, only protocol adapters dealing with network interfaces are allowed access to system devices, and specifically only to those interfaces they require. Containerization facilitates version management of individual components, while also simplifying the whole micro-service composition. Finally, we should note that due to the large number of Edge/GW class hardware platforms, support of Linux distributions and package repositories is often lagging behind their server and desktop counterparts. Containerization overcomes this by requiring only an up-to-date kernel and the container framework to be installed on the host. The rest is containerized and available on all CPU-compatible platforms.
Containerization can also simplify the development process if the build process is also containerized. In this case, tools in the host file system are not involved in the build process, ensuring that the service is built from source in an entirely reproducible way.
However, there is considerable overhead involved in containerization, especially if we consider a multi-container installation with dozens of containers deployed on a single gateway-class machine. Most prominent is the overhead in the overall size of the deployment, which translates into an overhead in cost, since higher reliability in the form of Embedded MultiMediaCard (eMMC) memory is more expensive than Secure Digital (SD) cards. Due to the possibility of limited and intermittent connectivity, download and update times affect the performance of the system. Due to the resource-constrained nature of the gateways in comparison to the cloud, high resource utilization and build times for containers add to the overhead.
As mentioned earlier, AGILE relies on Docker for containerization. In a Docker-based multi-container environment, each micro-service has its own file system image generated using a "Dockerfile": the shell-script-like "recipe" to build, install, and run the service. To allow sharing content between containers and to speed up generation of images, Docker uses layered images, where each layer contains a set of files adding, overwriting, or deleting files from the union of previous layers. The image generation starts from an image referred to as the "base image", followed by addition of build dependencies, build tools, build artifacts, runtime dependencies, and language runtimes. The overall size for all the images of all micro-services can be significant.
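To make layer sharing concrete, the following sketch uses the docker-py SDK (pip install docker) to check which layers the locally available images have in common; it assumes a reachable Docker daemon and only inspects metadata.

import docker

client = docker.from_env()
layer_owners = {}  # layer digest -> tags of images containing it

for img in client.images.list():
    tags = img.tags or [img.short_id]
    for digest in img.attrs.get("RootFS", {}).get("Layers", []):
        layer_owners.setdefault(digest, []).extend(tags)

shared = {d: t for d, t in layer_owners.items() if len(t) > 1}
print(f"{len(shared)} of {len(layer_owners)} layers are shared between images")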
To reduce the overall size of the distribution and to address the aforementioned overhead, we define a novel taxonomy and propose the following optimization techniques.
A. In-container image layering optimizations
In-container image layering optimization is aimed at reducing overhead during the generation and deployment of individual micro-services. The techniques, illustrated in Fig. 3, are:
1) self-contained Dockerfile: In this case, which serves as our baseline for the following optimizations, each of the above-mentioned steps in the process of generating the final image is performed in a single Dockerfile, so build dependencies and tools remain part of the deployed image.
2) multi-stage Dockerfile: In the case of a multi-stage build, first a dockerized "build image" is created, comprising the base image, build dependencies and build steps. Leveraging the build image, a special "deployment image" is created, containing only the runtime environment and the actual executables. (While multi-stage builds previously required external tools, support was included in Docker with version 17.05-ce; with this we have also migrated our containers to use the new framework.) A sketch of such a build follows this list.
3) image squashing: The image squashing technique creates a single-layered, monolithic image from a multi-layered image. In a multi-layered image, the generation of a new layer makes all previous layers immutable. Thus, if a file is modified or deleted, the old content remains as residue, increasing the overall image size. This overhead of larger image sizes is mitigated by the image squashing technique.
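A minimal sketch of techniques 2) and 3) follows; the base images, paths and artifact names are illustrative assumptions rather than the actual AGILE build recipes, and the --squash flag requires a Docker daemon with experimental features enabled.

import pathlib
import subprocess

MULTI_STAGE = """\
# stage 1: build image with the full JDK and build tooling
FROM openjdk:8-jdk AS build
COPY . /src
WORKDIR /src
RUN ./scripts/build.sh            # hypothetical build step producing service.jar

# stage 2: deployment image with only the runtime and the artifact
FROM openjdk:8-jre
COPY --from=build /src/out/service.jar /app/service.jar
CMD ["java", "-jar", "/app/service.jar"]
"""

pathlib.Path("Dockerfile.multistage").write_text(MULTI_STAGE)
subprocess.run(["docker", "build", "-f", "Dockerfile.multistage",
                "-t", "example/service:multistage", "."], check=True)

# technique 3): squash all layers into one; note the update-time drawback
# discussed in Section V
subprocess.run(["docker", "build", "--squash", "-f", "Dockerfile.multistage",
                "-t", "example/service:squashed", "."], check=True)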
B. Cross-container optimizations
When multiple containers are deployed on the same device, a further optimization is possible, as shown in Fig. 4: containers from the set of images can rely on common layers.
1) baseline image collection: Our baseline is a multi-container setup where each image is generated by developers choosing their base images independently. In practice, this can result in a lack of common layers among the images, and the overall distribution size becomes the sum of the individual images.
2) base image hierarchy based optimization: To overcome the previous problem, we have introduced images based on a hierarchy of base images, with the objective of maximizing the number of shared base layers. This hierarchy of base images is built as follows: firstly, a series of lean, CPU-architecture-specific images are created. Secondly, based on each of these images, a series of Linux-distribution-specific images are created. The significance of this second layer is to provide broad support for installation of both build and runtime dependencies. In the third layer of the hierarchy, images of the previous layer are leveraged to generate images for every relevant language build environment and runtime. By forcing the selection of images from this hierarchy, we achieve two important goals: (i) the number of layers that are shared between our deployed images is maximized; for example, a Java Runtime Environment (JRE) will only be deployed once, and not in all images running Java code. (ii) Since all base images in the hierarchy exist for all CPU architectures, supporting multiple architectures becomes straightforward by generating CPU-specific versions of our own images.
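The following sketch illustrates such a hierarchy; the repository names, tags and package choices are illustrative assumptions, not the actual AGILE images.

import subprocess

HIERARCHY = {
    # level 1: lean, CPU-architecture-specific base
    "example/armv7-base":   "FROM arm32v7/debian:stretch-slim\n",
    # level 2: Linux-distribution level, ready for dependency installation
    "example/armv7-debian": "FROM example/armv7-base\nRUN apt-get update\n",
    # level 3: language runtimes and build environments
    "example/armv7-jre":    "FROM example/armv7-debian\n"
                            "RUN apt-get install -y --no-install-recommends openjdk-8-jre-headless\n",
    "example/armv7-node":   "FROM example/armv7-debian\n"
                            "RUN apt-get install -y --no-install-recommends nodejs\n",
}

for tag, dockerfile in HIERARCHY.items():
    # services always pick their FROM line from this hierarchy, so e.g. the
    # JRE layer is stored exactly once on the gateway
    subprocess.run(["docker", "build", "-t", tag, "-"],
                   input=dockerfile.encode(), check=True)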
V. OBSERVATION
In this section, we present our study of the performance of our multi-container, microservice-based deployment on the AGILE gateway. We study the performance improvements in terms of container sizes and container download/installation times.
A. Experimental setup
For the experiments, we have used the makers' version of the gateway, which consists of an ARM Cortex A53 processor clocked at 1.2 GHz, 1 GB of RAM and a 32 GB Samsung EVO Plus Class 10 SD card. The installed operating system on the device was Raspbian Jessie, and it was connected to the Internet with a dedicated 50/10 Mbps FTTC/VDSL2 line. The following containers comprising the AGILE stack were used for the experiment. The updates to the overall stack were pushed and released monthly, while individual containers were updated biweekly on average.
1) agile-core: The agile-core container is built from a ZuluJDK base image and is dependent on the following containers: (i) agile-dbus, (ii) agile-devicemanager, (iii) agile-protocolmanager and (iv) agile-devicefactory. If the containers required by agile-core are already built and available on the device, they are reused; otherwise, they are built as well during the build process.
2) agile-ble: The agile-ble container is also built from a ZuluJDK base image and depends on the agile-core Docker image. The agile-ble container registers a DBus object which can be leveraged by the agile-protocolmanager. The DBus object offers interfaces for methods supported by the BLE protocol, including connection, discovery, read and write methods.
3) agile-nodered: The agile-nodered container is built from a NodeJS base image and does not depend on other containers. This container runs the Node-RED application and exposes an endpoint to access it. The Node-RED instance running inside the container also offers access to multiple custom Node-RED nodes developed for the AGILE gateway, including recommendation, cloud integration and device interfacing nodes.
B. Tests
The following tests are conducted to address two key issues for gateway devices mentioned in the state of the art. First, we study the sizes of the selected images as a function of in-container image optimization techniques, assessing their effectiveness. Second, we evaluate cross-container optimization by looking at the size, and consequently the download time, of the AGILE stack. Time measurements are repeated ten times and average values are presented. Table I shows the effect of in-container optimization techniques on images of the three selected components. Note that image sizes contain base layers, language runtimes for different languages (Java or JavaScript) and dependencies, hence the relatively large values. Multi-stage build optimization reduces image sizes to as low as 45% of the baseline, in the case of agile-core. Squashing, if applied on top of the already optimized multi-stage image, improves image size only marginally.
On the contrary, as we show next, squashing can be counterproductive for updates. Fig. 5 shows the download times for images built using in-container optimization. Although in reality we have introduced the various optimizations gradually as the stack and its components evolved over recent years, for this experiment we have regenerated each version of the components in a baseline, a multi-stage, and a squashed version. First, the generated images are pushed to repositories on Docker Hub with incremental tags for changing versions. These images are then iteratively downloaded in the following manners: (i) absolute, where previous versions of an image are removed before download, and (ii) incremental, emulating a software update, where previous image versions are kept, so that only the updated layers are downloaded. The first column of Fig. 5 shows the absolute download times of a specific version of agile-core, considering the three aforementioned techniques. Download times are almost proportional to image sizes, although it is interesting to note that the squashed image, even if smaller, downloads slightly slower than the multi-stage one. This is due to the way image download and decompression is parallelized in Docker. Subsequent columns show incremental download performance, where multi-stage, and even the baseline, outperform squashing. In fact, squashing forces the whole image content to be downloaded again, while multi-stage translates small changes into the update of only a few small layers.
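The measurement procedure can be sketched as follows; the repository and tags are hypothetical stand-ins for the published images.

import subprocess
import time

def pull_time(tag):
    t0 = time.monotonic()
    subprocess.run(["docker", "pull", tag], check=True, capture_output=True)
    return time.monotonic() - t0

OLD, NEW = "example/agile-core:0.1.0", "example/agile-core:0.1.1"

subprocess.run(["docker", "rmi", "-f", OLD, NEW], capture_output=True)
t_absolute = pull_time(NEW)     # cold cache: every layer is downloaded

subprocess.run(["docker", "rmi", "-f", NEW], capture_output=True)
pull_time(OLD)                  # warm the cache with the previous version
t_incremental = pull_time(NEW)  # only the changed layers are downloaded

print(f"absolute: {t_absolute:.1f}s, incremental: {t_incremental:.1f}s")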
Finally, we consider multiple containers on the same gateway, i.e. the deployment of the AGILE stack. To simplify the discussion and avoid distortion from functionalities added during the evolution of the stack, we restrict deployment to the subset of images discussed before (agile-core, agile-ble, agile-nodered). Fig. 6 shows both the download time and the data amount as the stack evolved. In the initial versions, neither were the base images chosen considering common layers, nor were in-container optimizations used. Hence, the download amount was similar to the sum of the baseline image sizes in Table I. The difference is due to the way the data was obtained: in this experiment we use the original images forming a given version of the stack, while Table I contains the size of the newest version of each component regenerated using a given optimization technique. Incremental times and sizes also show large values for v0.1.0, but only because this is the first stack deployed on a clean system. In the subsequent versions, absolute download times are reduced due to both cross-container and in-container optimizations. More specifically, multi-stage was introduced in v0.1.3 for agile-core and agile-ble, but only in v0.4.1 for agile-nodered. The reduction in the overall size can clearly be seen in these cases. The use of the base image hierarchy, instead, was introduced in v0.1.4 and v0.2.0, respectively. Since this involved a change in the base images, incremental downloads increased for the specific release, but the overall effect on the size of the stack is positive.
VI. CONCLUSION
This paper summarizes the current state of the art on containerization techniques and the suitability of containerization in the context of IoT systems. The existing gap in the literature in terms of multi-container deployments on edge-layer devices is addressed by illustrating a microservice-architecture-based container deployment and by defining optimization techniques to improve gateway performance.
The cross-container optimization technique proposed facilitates the reduction in build times for images and removes the redundancy among base images used in the stack. On the other hand, in-container optimization reduces the overall size of the AGILE stack and thus reduces download times for the images as well.
In future work, we will perform further measurements on multiple devices and device architectures to improve our proposed optimization techniques. We will also investigate pushing delta updates to the devices while minimizing build and download times, using the current study as a reference.
| 3,919 |
1906.06626
|
2952898185
|
This paper addresses the problem of very large-scale image retrieval, focusing on improving its accuracy and robustness. We target enhanced robustness of search to factors, such as variations in illumination, object appearance and scale, partial occlusions, and cluttered backgrounds—particularly important when a search is performed across very large datasets with significant variability. We propose a novel CNN-based global descriptor, called REMAP, which learns and aggregates a hierarchy of deep features from multiple CNN layers, and is trained end-to-end with a triplet loss. REMAP explicitly learns discriminative features which are mutually supportive and complementary at various semantic levels of visual abstraction. These dense local features are max-pooled spatially at each layer, within multi-scale overlapping regions, before aggregation into a single image-level descriptor. To identify the semantically useful regions and layers for retrieval, we propose to measure the information gain of each region and layer using KL-divergence. Our system effectively learns during training how useful various regions and layers are and weights them accordingly. We show that such relative entropy-guided aggregation outperforms classical CNN-based aggregation controlled by SGD. The entire framework is trained in an end-to-end fashion, outperforming the latest state-of-the-art results. On image retrieval datasets Holidays, Oxford, and MPEG, the REMAP descriptor achieves mAP of 95.5%, 91.5%, and 80.1%, respectively, outperforming any results published to date. REMAP also formed the core of the winning submission to the Google Landmark Retrieval Challenge on Kaggle.
|
Early approaches typically involve extracting multiple local descriptors (usually hand-crafted) and combining them into a fixed-length image-level representation for fast matching. Local descriptors may be scale-invariant and centered on image feature points, such as for SIFT @cite_6, or extracted on regular, dense grids, possibly at multiple scales, independently of the image content @cite_20. An impressive number of local descriptors have been developed over the years, each claiming superiority, making it difficult to select the best one for the job; an attempt at a comparative study can be found in @cite_18. It should be noted that descriptor dimension (for hand-crafted features) is typically between 32 and 192, which is an order of magnitude or two less than the number of deep features available for each image region.
|
{
"abstract": [
"Local feature descriptors underpin many diverse applications, supporting object recognition, image registration, database search, 3D reconstruction, and more. The recent phenomenal growth in mobile devices and mobile computing in general has created demand for descriptors that are not only discriminative, but also compact in size and fast to extract and match. In response, a large number of binary descriptors have been proposed, each claiming to overcome some limitations of the predecessors. This paper provides a comprehensive evaluation of several promising binary designs. We show that existing evaluation methodologies are not sufficient to fully characterize descriptors’ performance and propose a new evaluation protocol and a challenging dataset. In contrast to the previous reviews, we investigate the effects of the matching criteria, operating points, and compaction methods, showing that they all have a major impact on the systems’ design and performance. Finally, we provide descriptor extraction times for both general-purpose systems and mobile devices, in order to better understand the real complexity of the extraction task. The objective is to provide a comprehensive reference and a guide that will help in selection and design of the future descriptors.",
"This paper focuses on the image retrieval task. We propose the use of dense feature points computed on several color channels to improve the retrieval system. To validate our approach, an evaluation of various SIFT extraction strategies is performed. Detected SIFT are compared with dense SIFT. Dense color descriptors: C-SIFT and T-SIFT are then utilized. A comparison between standard and rotation invariant features is further achieved. Finally, several encoding strategies are studied: Bag of Visual Words (BOW), Fisher vectors, and vector of locally aggregated descriptors (VLAD). The presented approaches are evaluated on several datasets and we show a large improvement over the baseline.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance."
],
"cite_N": [
"@cite_18",
"@cite_20",
"@cite_6"
],
"mid": [
"2528954356",
"2037031790",
"2151103935"
]
}
|
REMAP: Multi-layer entropy-guided pooling of dense CNN features for image retrieval
|
Research in visual search has become one of the most popular directions in the area of pattern analysis and machine intelligence. With dramatic growth in the multimedia industry, the need for an effective and computationally efficient visual search engine has become increasingly important. Given a large corpus of images, the aim is to retrieve individual images depicting instances of a user-specified object, scene or location. Important applications include management of multimedia content, mobile commerce, surveillance, medical imaging, augmented reality, robotics, organization of personal photos and many more. Robust and accurate visual search is challenging due to factors such as changing object appearance, viewpoints and scale, partial occlusions, varying backgrounds and imaging conditions. Furthermore, today's systems must be scalable to billions of images due to the huge volumes of multimedia data available.
In order to overcome these challenges, a compact and discriminative image representation is required. Convolutional Neural Networks (CNNs) have delivered effective solutions to many computer vision tasks, including image classification. However, they have yet to bring the anticipated performance gains to the image retrieval problem, especially at very large scales. The main reason is that two fundamental problems still remain largely open: (1) how to best aggregate deep features extracted by a CNN network into compact and discriminative image-level representations, and (2) how to train the resultant CNN-aggregator architecture for image retrieval tasks.
This paper addresses the aforementioned problems by proposing a novel region-based aggregation approach employing multi-layered deep features, and by developing the associated architecture, which is trainable in an end-to-end fashion. Our descriptor is called REMAP, for Region-Entropy-based Multi-layer Abstraction Pooling, the name reflecting the key innovations. Our key contributions include:
• we propose to aggregate a hierarchy of deep features from different CNN layers, representing various levels of visual abstraction, and, importantly, show how to train such a representation within an end-to-end framework;
• we develop a novel approach to ensembling of multi-resolution region-based features, which explicitly employs the regions' discriminative power, measured by the respective Kullback-Leibler (KL) divergence [1] values, to control the aggregation process;
• we show that this relative-entropy-guided aggregation outperforms conventional CNN-based aggregations: MAC [2], NetVLAD [3], Fisher Vector [4], GEM [5] and RMAC [6];
• we compare the performance of three state-of-the-art base CNN architectures, VGG16 [7], ResNet101 [8] and ResNeXt101 [9], when integrated with our novel REMAP representation, and also against existing state-of-the-art models.
The overall architecture consists of a baseline CNN (e.g. VGG or ResNet) followed by the REMAP network. The CNN component produces dense, deep convolutional features that are aggregated by our REMAP method. The CNN filter weights and REMAP parameters (for multiple local regions) are trained simultaneously, adapting to the evolving distributions of deep descriptors and optimizing the multi-region aggregation parameters throughout the course of training. The proposed contributions are fully complementary and result in a system that not only outperforms the latest state-of-the-art in global descriptors, but also competes with systems employing re-ranking based on local features. The significant performance improvements are demonstrated in a detailed experimental evaluation, which uses classical datasets (Holidays [10], Oxford [11]) extended by the MPEG dataset [12], with up to 1M distractors. This paper is organized as follows. Related work is discussed in Section II. The novel REMAP components and the compact REMAP signature are presented in Section III. Our extensive experimentation is described in Section IV. Comparison with the state-of-the-art is presented in Section V and finally conclusions are drawn in Section VI.
A. Methods based on hand-crafted descriptors
Early approaches typically involve extracting multiple local descriptors (usually hand-crafted) and combining them into a fixed-length image-level representation for fast matching. Local descriptors may be scale-invariant and centered on image feature points, such as for SIFT [13], or extracted on regular, dense grids, possibly at multiple scales, independently of the image content [14]. An impressive number of local descriptors have been developed over the years, each claiming superiority, making it difficult to select the best one for the job; an attempt at a comparative study can be found in [15]. It should be noted that descriptor dimension (for hand-crafted features) is typically between 32 and 192, which is an order of magnitude or two less than the number of deep features available for each image region.
Virtually all aggregation schemes rely on clustering in feature space, with varying degrees of sophistication: Bag-of-Words (BOW) [16], Vector of Locally Aggregated Descriptors (VLAD) [17], Fisher Vector (FV) [4], and Robust Visual Descriptor (RVD) [18]. BOW is effectively a fixed-length histogram with descriptors assigned to the closest visual word; VLAD additionally encodes the positions of local descriptors within each Voronoi region by computing their residuals; the Fisher Vector (FV) aggregates local descriptors using the Fisher Kernel framework (second-order statistics); and RVD combines rank-based multi-assignment with robust accumulation to reduce the impact of outliers.
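As a concrete illustration of the residual encoding behind VLAD, the following is a compact numpy sketch; it uses hard assignment and a single global L2 normalization, whereas practical pipelines add PCA and further normalizations.

import numpy as np

def vlad(X, C):
    """X: n local descriptors [n, d]; C: codebook of k visual words [k, d]."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)  # [n, k] distances
    nearest = d2.argmin(axis=1)                               # hard assignment
    V = np.zeros_like(C)
    for i, k in enumerate(nearest):
        V[k] += X[i] - C[k]                                   # residual per word
    V = V.ravel()                                             # k*d-dim signature
    return V / (np.linalg.norm(V) + 1e-12)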
B. Methods based on CNN descriptors
More recent approaches to image retrieval replace the low-level hand-crafted features with deep convolutional descriptors obtained from convolutional neural networks (CNNs), typically pre-trained on large-scale datasets such as ImageNet. Azizpour et al. [19] compute an image-level representation by max-pooling aggregation of the last convolutional layer of VGGNet [7] and ALEXNET [20]. Babenko and Lempitsky [21] aggregated deep convolutional descriptors to form image signatures using Fisher Vectors (FV), Triangulation Embedding (TEMB) and Sum-pooling of convolutional features (SPoC). Kalantidis et al. [22] extended this work by introducing cross-dimensional weighting in the aggregation of CNN features. The retrieval performance is further improved when the RVD-W method is used for aggregation of CNN-based deep descriptors [18]. Tolias et al. [2] proposed to extract a Maximum Activations of Convolutions (MAC) descriptor from several multi-scale overlapping regions of the last convolutional layer feature map. The region-based descriptors are L2-normalized, Principal Component Analysis (PCA)+whitened [23], L2-normalized again and finally sum-pooled to form a global signature called Regional Maximum Activations of Convolutions (RMAC). The RMAC dimensionality is equal to the number of filters of the last convolutional layer and is independent of the image resolution and the number of regions. In [24], Seddati et al. provided an in-depth study of several RMAC-based architectures and proposed a modified RMAC signature that combines multi-scale and two-layer feature extraction with feature selection. A detailed survey of content-based image retrieval (CBIR) methods based on hand-crafted and deep features is presented in [25].
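For reference, MAC pooling itself reduces to a spatial max per channel; the numpy sketch below shows the operation that RMAC then applies within each region before whitening and sum-pooling.

import numpy as np

def mac(fmap):
    """fmap: conv activations [d, h, w] -> d-dim MAC descriptor."""
    v = fmap.reshape(fmap.shape[0], -1).max(axis=1)  # spatial max per channel
    return v / (np.linalg.norm(v) + 1e-12)           # L2 normalization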
C. Methods based on fine-tuned CNN descriptors
All of the aforementioned approaches use fixed pre-trained CNNs. However, these CNNs were trained for the purpose of image classification (e.g. the 1000 classes of ImageNet), in a fashion blind to the aggregation method, and hence are likely to perform sub-optimally in the task of image retrieval. To tackle this, Radenovic et al. [26] proposed to fine-tune the MAC representation using the Flickr Landmarks dataset [27]. More precisely, the MAC layer is added to the last convolutional layer of VGG or ResNet. The resultant network is then trained with a siamese architecture [26], minimizing the contrastive loss. In [5], the MAC layer is replaced by a trainable Generalized-Mean (GEM) pooling layer, which significantly boosts retrieval accuracy. In [6], Gordo et al. trained a siamese architecture with a ranking loss to enhance the RMAC representation. The recent NetVLAD [3] consists of a standard CNN followed by a Vector of Locally Aggregated Descriptors (VLAD) layer that aggregates the last convolutional features into a fixed-dimensional signature, and its parameters are trainable via back-propagation. Ong et al. [28] proposed SIAM-FV: an end-to-end architecture which aggregates deep descriptors using Fisher Vector pooling.
III. REMAP REPRESENTATION
The design of our REMAP descriptor addresses two issues fundamental to solving content-based image retrieval: (i) a novel aggregation mechanism for multi-layer deep convolutional features extracted by a CNN network, and (ii) an advanced assembling of multi-region and multi-layer representations with end-to-end training.
The first novelty of our approach is to aggregate a hierarchy of deep features from different CNN layers, which are explicitly trained to represent multiple and complementary levels of visual feature abstraction, significantly enhancing recognition. Importantly, our multi-layer architecture is trained fully end-to-end and specifically for recognition. This is in contrast to [24], where no end-to-end training of the CNN is performed: fixed weights of the pre-trained CNN are used as a feature extractor. The important and novel component of our REMAP architecture is multi-layer end-to-end fine-tuning, where the CNN filter weights, relative entropy weights and PCA+Whitening weights are optimized simultaneously using Stochastic Gradient Descent (SGD) with the triplet loss function [6]. The end-to-end training of the CNN is critical, as it explicitly enforces intra-layer feature complementarity, significantly boosting performance. Without such joint multi-layer learning, the features from the additional layers, while coincidentally useful, are not trained to be either discriminative or complementary. The REMAP multi-layer processing can be seen in Figure 1, where multiple parallel processing strands originate from the convolutional CNN layers, each including ROI-pooling [2], L2-normalization, relative entropy weighting and sum-pooling, before being concatenated into a single descriptor. The region entropy weighting is another important innovation proposed in our approach. The idea is to estimate how discriminatory individual features are in each local region, and to use this knowledge to optimally control the subsequent sum-pooling operation. The region entropy is defined as the relative entropy between the distributions of distances for matching and non-matching image descriptor pairs, measured using the KL-divergence function [1]. The regions which provide high separability (high KL-divergence) between matching and non-matching distributions are more informative for recognition and are therefore assigned higher weights. Thanks to our entropy-controlled pooling we can combine a denser set of region-based features, without the risk of less informative regions overwhelming the best contributors. Practically, the KL-divergence Weighting (KLW) block in the REMAP architecture is implemented using a convolutional layer with weights initialized by the KL-divergence values and optimized using Stochastic Gradient Descent (SGD) on the triplet loss function.
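A sketch of the KL-divergence computation used to initialize the region weights is shown below; the bin count is an arbitrary choice here, and the histograms stand in for the matching and non-matching distance distributions.

import numpy as np

def kl_weight(match_d, nonmatch_d, bins=64, eps=1e-10):
    """D_KL(matching || non-matching) of pairwise-distance histograms;
    higher divergence -> more informative region."""
    lo = min(match_d.min(), nonmatch_d.min())
    hi = max(match_d.max(), nonmatch_d.max())
    p, _ = np.histogram(match_d, bins=bins, range=(lo, hi))
    q, _ = np.histogram(nonmatch_d, bins=bins, range=(lo, hi))
    p = p.astype(float) + eps
    q = q.astype(float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))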
Fig. 1: REMAP architecture. The aggregated vectors are concatenated, PCA-whitened and L2-normalized to form a global image descriptor.
All blocks in the REMAP network represent differentiable operations, therefore the entire architecture can be trained end-to-end. We perform training on the Landmarks-retrieval dataset using the triplet loss; please see the Experimental Section for full details of the datasets and the training process. Additionally, the REMAP signatures for the test datasets are encoded using the Product Quantization (PQ) [29] approach to reduce the memory requirement and complexity of the retrieval system.
We will now describe in detail the components of the REMAP architecture, with reference to Figure 1. We can see that it comprises a number of commonly used components, including the max-pool, sum-pool and L2-norm functions. We denote these functions as $\mathrm{Maxp}(x)$, $\mathrm{Sump}(x)$ and $L2(x)$ respectively, where $x$ represents an input tensor. We also employ the Region Of Interest (ROI) function [2], $\zeta : \mathbb{R}^{w \times h \times d} \rightarrow \mathbb{R}^{r \times d}$. The ROI function $\zeta$ splits an input tensor of size $w \times h \times d$ into $r$ overlapping spatial blocks using a rigid grid and performs spatial max-pooling within regions, producing a single $d$-dimensional vector for each region. More precisely, the ROI block extracts square regions from the CNN response map at $S$ different scales [2]. For each scale, the regions are extracted uniformly such that the overlap between consecutive regions is as close as possible to 40%. The number of regions $r$ extracted by the ROI block depends on the image size ($1024 \times 768 \times 3$) and the scale factor $S$. We performed experiments to determine the optimum number of regions for our REMAP network. It can be observed from Table I that the best retrieval accuracy is obtained using $r = 40$. This is consistent across all the experiments.
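To make the rigid-grid ROI pooling concrete, the following is a minimal NumPy sketch in the spirit of the grid of [2]; the region side length and step computation are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def roi_regions(w, h, scale):
    """Square regions at one scale: the side shrinks with the scale and
    regions are spaced so consecutive ones overlap by roughly 40%."""
    side = int(2 * min(w, h) / (scale + 1))
    def steps(extent):
        if extent == side:
            return [0]
        n = int(np.ceil((extent - side) / (0.6 * side))) + 1
        return [int(round(i * (extent - side) / (n - 1))) for i in range(n)]
    return [(x, y, side) for x in steps(w) for y in steps(h)]

def roi_max_pool(fmap, scales=(1, 2, 3)):
    """The ROI function zeta: (w, h, d) response map -> (r, d) matrix of
    max-pooled regional descriptors, one row per region."""
    w, h, d = fmap.shape
    regions = [reg for s in scales for reg in roi_regions(w, h, s)]
    return np.stack([fmap[x:x + t, y:y + t, :].reshape(-1, d).max(axis=0)
                     for (x, y, t) in regions])
```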
A. CNN Layer Access Function
The base of the proposed REMAP architecture is formed by any of the existing CNNs commonly used for retrieval, for example VGG16 [7], ResNet101 [8] and ResNeXt101 [9]. All these CNNs are essentially a sequential composition of $L$ "convolutional layers". The exact nature of each of these blocks will differ between the CNNs. However, we can view each of these blocks as some function $l_i : \mathbb{R}^{w_i \times h_i \times d_i} \rightarrow \mathbb{R}^{w'_i \times h'_i \times d'_i}$, $1 \leq i \leq L$, that transforms its respective input tensor into some output tensor, where $w$, $h$ and $d$ denote the width, height and depth of the input tensor to a certain block, and $w'$, $h'$ and $d'$ denote the width, height and depth of the output tensor from that block.
The CNN can then be represented as the function composition $f(x) = l_L(l_{L-1}(\ldots(l_1(x))))$, where $x$ is the input image of size $w_0 \times h_0$ with $d_0$ channels. For our purpose, we would like to access the output of some intermediate convolutional layer. Therefore, we will create a "layer access" function:
$$f_l(x) = l_l(l_{l-1}(\ldots(l_1(x)))) \quad (1)$$
where $1 \leq l \leq L$; $f_l$ will output the convolutional output of layer $l$.
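In practice, such a layer access function can be realized without modifying the network, e.g. with forward hooks. The sketch below is ours, using torchvision's ResNet101 as a stand-in backbone (the paper's implementation uses MatConvNet) and illustrative layer names.

```python
import torch
import torchvision

cnn = torchvision.models.resnet101(weights="IMAGENET1K_V1").eval()

def layer_access(model, image, layer_names=("layer3", "layer4")):
    """Capture the convolutional outputs f_l(x) of the named blocks."""
    outputs, hooks = {}, []
    modules = dict(model.named_modules())
    for name in layer_names:
        hooks.append(modules[name].register_forward_hook(
            lambda m, inp, out, name=name: outputs.__setitem__(name, out)))
    with torch.no_grad():
        model(image)
    for h in hooks:
        h.remove()
    return outputs

feats = layer_access(cnn, torch.randn(1, 3, 768, 1024))
# e.g. layer3 -> (1, 1024, 48, 64), layer4 -> (1, 2048, 24, 32)
```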
B. Parallel Divergence-Guided ROI Streams
The proposed REMAP architecture performs separate and distinct transformations on different CNN layer outputs via parallel divergence-guided ROI streams. Each stream takes as input the convolutional output of some CNN layer and performs ROI pooling on it. The output vectors of the ROI pooling are L2-normalized, weighted (based on their informativeness), and linearly combined to form a single aggregated representation.
Specifically, suppose we would like to use the output tensor of layer $l'$ from the CNN as input for ROI processing. Now, let $o = f_{l'}(x)$, $o \in \mathbb{R}^{w \times h \times d}$, be the output tensor from the CNN's $l'$-th convolutional layer given an input image $x$. This is then given to the ROI block followed by the L2 block, with the result denoted as $r = L2(\zeta(o))$. The linear combination of the region vectors is then carried out by a weighted sum:
$$W(r) = \sum_{i=1}^{r} \alpha_i \, r(i)$$
where $r(i)$ denotes the $i$-th column of the matrix $r$.
In summary, the ROI stream can be defined by the following function composition:
$$P(x; l', \alpha) = W(L2(\zeta(f_{l'}(x))); \alpha)$$
where the set of linear combination weights is denoted as $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_r\}$. In this work, the linear combination weights can be initialized differently, fixed as constants, or made learnable in the SGD process. These in turn give rise to different existing CNN methods. In the RMAC [6] architecture, the weights are fixed to 1 and not optimized during the end-to-end training stage, i.e. the weight vector is $\alpha = \{1, 1, \ldots, 1\}$.
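Putting the pieces together, one ROI stream can be sketched as below, reusing roi_max_pool from the earlier listing; setting alpha to all ones recovers RMAC-style aggregation.

```python
import numpy as np

def roi_stream(fmap, alpha, scales=(1, 2, 3)):
    """P(x; l', alpha): ROI max-pooling, per-region L2 normalization,
    then a weighted sum over the region vectors."""
    R = roi_max_pool(fmap, scales)                    # (r, d), one row per region
    R = R / (np.linalg.norm(R, axis=1, keepdims=True) + 1e-12)
    return alpha @ R                                  # sum_i alpha_i * r(i)
```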
A drawback of the ROI-pooling method employed in RMAC is that it gives equal importance to all regional representations regardless of information content. We propose to measure the information gain of regions using the class-separability between the probability distributions of matching and non-matching descriptor pairs for each region. Our algorithm to determine the relative entropy weights includes the following steps: (1) images of size $1024 \times 768 \times 3$ are passed through the offline ResNeXt101 CNN; (2) the features from the final convolutional layers are then passed to the ROI block, which splits an input tensor of size $32 \times 24 \times 2048$ into 40 spatial blocks and performs spatial max-pooling within regions, producing a single 2048-dimensional vector per region/layer; (3) for each region and each layer, we compute $Pr(y|m)$ and $Pr(y|n)$ as the probability density functions of observing a Euclidean distance $y$ for a matching and a non-matching descriptor pair respectively. The KL-divergence measure is employed to compute the separability between the matching and non-matching pdfs. It can be observed from Figure 2 (a-e) that the KL-divergence values for different regions vary significantly. For example, regions 13, 26 and 30 provide better separability (higher KL-divergence) than regions 24 and 37.
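The per-region KL-divergence can be estimated as sketched below, assuming the matching and non-matching distances for a region have already been collected; the binning and smoothing constants are our choices.

```python
import numpy as np
from scipy.stats import entropy

def region_kl_weight(match_dists, nonmatch_dists, bins=100):
    """KL divergence between the distance distributions of matching and
    non-matching pairs for one region; higher means more informative."""
    lo = min(match_dists.min(), nonmatch_dists.min())
    hi = max(match_dists.max(), nonmatch_dists.max())
    p, edges = np.histogram(match_dists, bins=bins, range=(lo, hi))
    q, _ = np.histogram(nonmatch_dists, bins=edges)
    p = p + 1e-8   # smooth empty bins so the divergence stays finite
    q = q + 1e-8
    return entropy(p, q)   # scipy normalizes p and q, returns KL(p || q)
```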
We propose to assign learnable weights to regional descriptors before aggregation into REMAP, to enhance the ability to focus on important regions in the image. Thus we view our CNN as an information-flow network, where we control the impact of various channels based on the observed information gain. More precisely, the KL-divergence values for each region (Figure 2(f)) are used to initialize the ROI weight vector $\alpha$. We enforce non-negativity on the weight vector $\alpha$ during the training process.
Practically, the KL-divergence weighting layer (KLW) is implemented using a convolutional operation with weights that can be learned using stochastic gradient descent on the triplet loss function.
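A possible PyTorch rendering of the KLW block as a learnable weight vector is given below; the paper realizes it as a convolutional layer, and clamping is just one way to keep the weights non-negative (the mechanism is not specified in the text).

```python
import torch
import torch.nn as nn

class KLWeighting(nn.Module):
    """Per-region weights initialized from the KL-divergence values
    and optimized jointly with the rest of the network."""
    def __init__(self, kl_values):
        super().__init__()
        self.alpha = nn.Parameter(torch.as_tensor(kl_values, dtype=torch.float32))

    def forward(self, regions):
        # regions: (batch, r, d) L2-normalized regional descriptors
        alpha = self.alpha.clamp(min=0.0)        # enforce non-negativity
        return (alpha.view(1, -1, 1) * regions).sum(dim=1)   # (batch, d)
```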
C. Final REMAP Architecture
We can now describe the proposed multi-stream REMAP. At the base is an existing Convolutional Neural Network (VGG or ResNet). The CNNs are essentially a sequential composition of $L$ "convolutional layers", $N$ of which are used in aggregation ($N \leq L$). The output tensor of convolutional layer $l$ can be accessed using $f_l$ (Eq. 1). We denote the $N$ CNN layers that will be used in aggregation as $\{l'_1, l'_2, \ldots, l'_N\}$, where $l'_i \in \{l_1, l_2, \ldots, l_L\}$ for each $i = 1, 2, \ldots, N$.
Associated with each of the above CNN layers $l' \in \{l'_1, l'_2, \ldots, l'_N\}$ is a set of ROI linear combination coefficients $\alpha_{l'_i} = \{\alpha_{l'_i,1}, \ldots, \alpha_{l'_i,r}\}$. As a result, we have $N$ parallel ROI streams, each with output $P(x; l'_i, \alpha_{l'_i})$. The outputs of the $N$ ROI streams are concatenated together into a high-dimensional vector: $p = [P(x; l'_1, \alpha_{l'_1}), \ldots, P(x; l'_N, \alpha_{l'_N})]^T$. We then pass $p$ to a fully connected layer with weights initialized by PCA+Whitening coefficients [6].
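The PCA+Whitening initialization of that fully connected layer can be sketched as follows; the descriptor matrix and output dimension are placeholders, and the layer remains trainable afterwards.

```python
import numpy as np
import torch
import torch.nn as nn

def pca_whitening_fc(descriptors, out_dim):
    """Build an FC layer reproducing PCA+whitening fitted on a
    (n_samples, D) matrix of training descriptors."""
    mean = descriptors.mean(axis=0)
    cov = np.cov(descriptors - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:out_dim]                  # top components
    W = (eigvecs[:, order] / np.sqrt(eigvals[order] + 1e-9)).T   # (out_dim, D)
    fc = nn.Linear(descriptors.shape[1], out_dim, bias=True)
    with torch.no_grad():
        fc.weight.copy_(torch.from_numpy(W).float())
        fc.bias.copy_(torch.from_numpy(-W @ mean).float())       # y = W(x - mean)
    return fc
```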
In Table II, we perform experiments on Holidays, Oxford and MPEG to demonstrate how different convolutional layers of the off-the-shelf ResNeXt101 perform when combined within the REMAP architecture. It is interesting to note that, individually, the best retrieval accuracy on the Holidays and MPEG datasets is provided by layer 2, and not by the bottleneck layer 1. Layer 1 (the last convolutional layer) delivers the best performance only on the Oxford dataset. The performance of layer 3 is lowest, since it is too sensitive to local deformation. However, the philosophy of our design is to combine different convolutional layers, so we investigate the performance of such combinations (shown in the lower half of the table). It can be observed from Table II that multi-layer REMAP significantly outperforms any single-layer representation. In the final REMAP representation we use the combination of the last two convolutional layers (layers 1+2), trained jointly, as this provides the best balance between retrieval accuracy and the computational complexity of the training process. In Figure 3, we visualize the maximum activation responses of the last two convolutional layers of the off-the-shelf ResNeXt101. It can be seen that the two layers focus on different but important features of the object, justifying our multi-layer aggregation (MLA) approach.

D. End-to-End Siamese learning for image retrieval

An important feature of the REMAP architecture is that all its blocks are designed to represent differentiable operations. The fixed-grid ROI pooling is differentiable [30]. Our novel component, KL-divergence weighting (KLW), can be implemented using a 1D convolutional layer with weights that can be optimized. The sum-pooling of regional descriptors, L2-normalization and concatenation of multi-layer descriptors are also differentiable. The PCA+Whitening transformation can be implemented using a Fully-Connected (FC) layer with bias. Therefore, we can learn the CNN filter weights and REMAP parameters (KLW weights and FC layer weights) simultaneously using SGD on the triplet loss function, adapting to the evolving distributions of deep features and optimizing the multi-region aggregation parameters over the course of training.
We proceed by removing the last pooling layer, the prediction layer and the loss layer from ResNeXt101 (trained on ImageNet) and adding the REMAP pipeline to the last two convolutional layers. We then adopt a three-stream siamese architecture to fine-tune the REMAP network using the triplet loss [6]. More precisely, we are given a training dataset of $T$ triplets of images; each triplet consists of a query image, a matching image and a closest non-matching image (the non-matching image whose descriptor is most similar to the query image descriptor). Let $p_q$ be a REMAP descriptor extracted from the query image, $p_m$ be a descriptor from the matching image, and $p_n$ be a descriptor from a non-matching image. The triplet loss can be computed as:
$$L = 0.5 \max(0,\; th + \|p_q - p_m\|^2 - \|p_q - p_n\|^2), \quad (2)$$
where the parameter $th$ controls the margin of the classifier, i.e. the distance threshold defining when the gap between matching and non-matching pairs is large enough for the triplet not to contribute to the loss. The gradients with respect to the loss $L$ are back-propagated through the three streams of the REMAP network, and the parameters of the convolutional layers, the KLW layer and the PCA+Whitening layer are updated.
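Eq. (2) translates directly into a few lines; the sketch below assumes batched descriptors and the margin th = 0.1 used in the training setup described later.

```python
import torch

def remap_triplet_loss(p_q, p_m, p_n, th=0.1):
    """Eq. (2): squared-distance triplet loss, averaged over the batch."""
    d_pos = (p_q - p_m).pow(2).sum(dim=1)
    d_neg = (p_q - p_n).pow(2).sum(dim=1)
    return 0.5 * torch.clamp(th + d_pos - d_neg, min=0).mean()
```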
E. Compact REMAP signature
Encoding a high-dimensional image representation as a compact signature provides benefits in storage, extraction and matching speeds, especially for large-scale image retrieval tasks. This section focuses on deriving a small-footprint image descriptor from the core REMAP representation. In the first approach, we pass an image through the REMAP network to obtain a $D$-dimensional descriptor and select the top $d$ dimensions out of $D$.
The second approach is based on the Product Quantization (PQ) algorithm [17], in which the $D$-dimensional REMAP descriptor is first split into $m$ sub-parts of equal length $D/m$. Each sub-part is quantized using a separate K-means quantizer with $k = 256$ cluster centres and encoded using $n = \log_2(k)$ bits. The size of the PQ-embedded signature is $B = m \times n$ bits. At test time, the similarity between the query vector and database vectors is computed using Asymmetric Distance Computation [17].
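A minimal sketch of this PQ pipeline, built on scikit-learn's k-means, is shown below; m = 16 is an illustrative choice, and D must be divisible by m.

```python
import numpy as np
from sklearn.cluster import KMeans

def pq_train(X, m=16, k=256):
    """One k-means codebook per sub-vector; X is (n, D) with D % m == 0."""
    return [KMeans(n_clusters=k, n_init=4).fit(s) for s in np.split(X, m, axis=1)]

def pq_encode(codebooks, x):
    """Encode one D-dim descriptor as m byte-sized codeword indices."""
    parts = np.split(x[None, :], len(codebooks), axis=1)
    return np.array([cb.predict(p)[0] for cb, p in zip(codebooks, parts)],
                    dtype=np.uint8)

def pq_adc(codebooks, query, codes):
    """Asymmetric Distance Computation: the query stays uncompressed;
    per-sub-part squared distances to all centroids are precomputed
    once, then looked up for every database code."""
    q_parts = np.split(query, len(codebooks))
    tables = [((cb.cluster_centers_ - qp) ** 2).sum(axis=1)
              for cb, qp in zip(codebooks, q_parts)]
    return sum(tables[j][codes[:, j]] for j in range(len(codebooks)))
```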
IV. EXPERIMENTAL EVALUATION
The purpose of this section is to evaluate the proposed REMAP architecture and compare it against the latest state-of-the-art CNN architectures. We first present the experimental setup, which includes the datasets and evaluation protocols. We then analyze the impact of the novel components that constitute our method, namely KL-divergence based weighting of region descriptors and multi-layer aggregation. Furthermore, we compare the retrieval performance of different feature aggregation methods, including MAC, RMAC, Fisher Vectors and REMAP, on four varied datasets with up to 1 million distractors. A comparison with different CNN representations is presented at the end of this section.
A. Training datasets
We train on a subset of the Landmarks dataset used in the work of Babenko et al. [31], which contains approximately 100k images depicting 650 famous landmarks. It was collected through textual queries in the Yandex image search engine, and therefore contains a significant proportion of images unrelated to the landmarks, which we filter out and remove. Furthermore, to guarantee unbiased test results we exclude all images that overlap with the MPEG, Holidays and Oxford5k datasets used in testing. We call this subset the Landmarks-retrieval dataset.
The process to remove images unrelated to the landmarks, and to generate a list of matching image pairs for triplet generation, is semi-automatic, and relies on local SIFT features detected with a Hessian-affine detector and aggregated with the RVDW descriptor [18]. For each of the 650 landmark classes we manually select a query image depicting a particular landmark, and compute its similarity (based on the RVDW global descriptors) to all remaining images in the same class. We then remove the images whose distance from the query is greater than a certain threshold (outliers), forming the Landmarks-retrieval subset of 25k images.
To generate matching image pairs, we randomly select fifty image pairs from each class in the Landmarks-retrieval dataset. The RANSAC algorithm is applied to the matching SIFT descriptors in order to filter out the pairs that are difficult to match (fewer than 5 inliers: extremely hard examples) or very easy to match (more than 30 inliers: extremely easy examples). This way, about 15k matching image pairs are selected for the fine-tuning based on the triplet loss function.
B. Training configurations
We use the MATLAB toolbox MatConvNet [32] to perform training and evaluation. The state-of-the-art networks VGG16, ResNet101 and ResNeXt101 (all pre-trained on ImageNet) are downloaded in MATLAB format, and batch-normalization layers are merged into the preceding convolutional layers for fine-tuning.
Fine-tuning with triplet loss
Each aforementioned CNN is integrated with the REMAP network and the entire architecture is fine-tuned on the Landmarks-retrieval dataset with the triplet loss. The images are resized to $1024 \times 768$ pixels before passing through the network. Optimization is performed by the Stochastic Gradient Descent (SGD) algorithm with momentum 0.9, a learning rate of $10^{-3}$ and a weight decay of $5 \times 10^{-5}$. The triplet loss margin is set to 0.1.
An important consideration during the training process is the generation of triplets, as generating them randomly will yield triplets that incur no loss. To address this issue, we divide the 15k matching image pairs from the Landmarks-retrieval dataset into 5 groups. The REMAP descriptors are extracted from the 25k images using the current model. For each matching pair, the closest non-matching (hard negative) example is then chosen, forming a triplet consisting of a query example, a matching example and a non-matching example. The hard negatives are re-mined once per group, i.e. after every 3000 triplets.
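The hard-negative selection can be sketched as follows, assuming per-image landmark labels and descriptors extracted with the current model; the re-mining schedule is as described above.

```python
import numpy as np

def mine_triplets(descriptors, pairs, labels):
    """For each matching pair (q, m), pick the most similar image with a
    different landmark label as the hard negative."""
    triplets = []
    for q, m in pairs:
        d = ((descriptors - descriptors[q]) ** 2).sum(axis=1)
        d[labels == labels[q]] = np.inf     # exclude same-landmark images
        triplets.append((q, m, int(np.argmin(d))))
    return triplets
```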
Another consideration is the memory requirement during training, as the network trains with an image size of $1024 \times 768$ pixels and with three streams at the same time. Fine-tuning the deep architectures VGG16, ResNet101 and ResNeXt101 is memory-consuming, and we could only fit one triplet at a time on a TITAN X GPU with 12 GB of memory. To make the training process effective, we update the model parameters after every 64 triplets. The training process takes approximately 3 days to complete.
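This update schedule amounts to gradient accumulation; a schematic PyTorch sketch (reusing remap_triplet_loss from the earlier listing, with data loading omitted) follows.

```python
def train_epoch(model, triplets, optimizer, accum=64):
    """Backward after every triplet, but step the optimizer only every
    `accum` triplets, emulating a batch of 64 when only one triplet
    fits in GPU memory at a time."""
    optimizer.zero_grad()
    for i, (q, m, n) in enumerate(triplets, start=1):
        loss = remap_triplet_loss(model(q), model(m), model(n)) / accum
        loss.backward()              # gradients accumulate across triplets
        if i % accum == 0:
            optimizer.step()
            optimizer.zero_grad()
```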
C. Test datasets
The INRIA Holidays dataset [10] contains 1491 holiday photos with a subset of 500 used as queries. Retrieval accuracy is measured by mean Average Precision (mAP), as defined in [11]. To evaluate model retrieval accuracy in a more challenging scenario, the Holidays dataset is combined with 1 million distractor images obtained from Flickr, forming Holidays1M [18].
The University of Kentucky Benchmark (UKB) [33] dataset comprises 10200 images of 2550 objects. Here the performance measure is the average number of same-object images returned in the first 4 positions (4 × Recall@4).
The Oxford5k dataset [11] contains 5063 images of Oxford landmarks. The performance is evaluated using mAP over 55 queries. To test large-scale retrieval, this dataset is augmented with 100k and 1 million Flickr images [34], forming the Oxford105k [11] and Oxford1M [18] datasets respectively. We follow the state-of-the-art protocol for the Oxford dataset and compute the image signature of query images using the cropped activations method [3], [26].
The Moving Picture Experts Group (MPEG) has developed a heterogeneous and challenging MPEG CDVS dataset for evaluating the retrieval performance of image signatures [12]. The dataset contains 33590 images from five image categories: (1) graphics, including book and DVD covers, documents and business cards; (2) photographs of paintings; (3) video frames; (4) landmarks; and (5) common objects. A total of 8313 queries are used to evaluate the retrieval performance in terms of mAP.
D. Evaluation of the REMAP components

The size of input images to the CNN is limited to $1024 \times 768$ pixels. In order to illustrate clearly and fairly the benefits of the novel elements proposed in our framework, we selected the best state-of-the-art RMAC representation and integrated it with the latest ResNeXt101 architecture. We then performed fine-tuning, using the procedures outlined before. Please note that our fully optimized and fine-tuned RMAC representation outperforms results reported earlier on the Oxford5k, MPEG and Oxford105k datasets, and we use our improved result as the reference performance shown in Table III and Table IV. This represents the best state-of-the-art. We then introduce the REMAP innovations, KL-divergence based weighting (KLW) and multi-layer aggregation (MLA), and show the relative performance gain. Finally, we combine all novel elements to show the overall improvement compared to the baseline RMAC.
KL-divergence based weighting (KLW): We performed experiments to show that the initialization of the KLW block with relative entropy weights, followed by further optimization of the weights using SGD, is crucial to achieve optimum retrieval performance. We trained the following networks for comparison:
• The baseline is the RMAC representation, in which the ROI weights are fixed ($\alpha = \{1, 1, \ldots, 1\}$) and not optimized during the training process.
• In the second network, RMAC+SGD, the weights are initialized with 1 ($\alpha = \{1, 1, \ldots, 1\}$) and optimized using SGD on the triplet loss function.
• In the final network, KLW, the relative entropy weights are initialized with the KL-divergence values and further optimized using SGD on the triplet loss.

It can be observed from Table III that initialization of the KLW block with relative entropy weights is indeed significantly important for network convergence and achieves the best retrieval accuracy on all datasets. Furthermore, RMAC+SGD is not able to learn the optimum regional weights, resulting in only a marginal improvement over RMAC. This is a very interesting result, which shows that optimization of the loss function alone may not always lead to optimal results, and initialization (or optimization) of the network using the information gain may lead to improved performance.
Multi-layer aggregation (MLA): Next, we perform experiments to show the advantage of multi-layer aggregation of deep features (MLA). It can be observed from Table IV that MLA brings improvements of +1.8%, +2.6% and +2.4% on the Oxford5k, MPEG and Oxford105k datasets respectively, compared to the single-layer aggregation employed in RMAC.
Finally, we combine the KLW and MLA blocks to form our novel REMAP signature and compare its retrieval accuracy with the RMAC reference signature, as a function of descriptor dimensionality. Figure 4 clearly demonstrates that REMAP significantly outperforms RMAC on all state-of-the-art datasets.
E. Multi-Scale REMAP (MS-REMAP)
In this section, we evaluate the retrieval performance of the MS-REMAP representation, computed at test time without any further training. In MS-REMAP, the descriptors are extracted from images resized to two different scales and then aggregated into a single signature [6]. More precisely, let $X_1$ and $X_2$ be REMAP descriptors extracted from two images of sizes $1024 \times 768$ and $1280 \times 960$ pixels respectively. The MS-REMAP descriptor $X_m$ is computed by a weighted aggregation of $X_1$ and $X_2$:
$$X_m = 2 X_1 + 1.4 X_2 \quad (3)$$
It can be observed from Figure 5 that the multi-scale representation brings average gains of 1%, 1.8% and 1.5% on the Holidays, Oxford5k and MPEG datasets compared to the single-scale representation.
F. Comparison of MAC, Fisher Vector, RMAC and REMAP networks
In this section we compare our best network, REMAP, with the state-of-the-art representations MAC [2], RMAC [6] and FV [4]. All the networks are trained end-to-end on the Landmarks-retrieval dataset using the triplet loss. We use the multi-scale version for all representations. In the MAC pipeline, the MAX-pooling block is added to the last convolutional layer of ResNeXt101. The MAX-pooling block is followed by PCA+Whitening and L2-Normalization blocks. The dimensionality of the output descriptor is 2048-D.
For the Fisher Vector method, the last convolutional layer is followed by a Fisher Vector aggregation block, a PCA+Whitening block and an L2-Normalization block. 16 cluster centers are used for the Fisher Vector GMM model, with their parameters initialized using the EM algorithm. This results in an FV of dimensionality 32k. The parameters of the CNN and the Fisher Vectors are trained using stochastic gradient descent on the triplet loss function.
In the RMAC pipeline, the last convolutional layer's features are passed through a rigid-grid ROI-pooling block. The region-based descriptors are L2-normalized, whitened with PCA and L2-normalized again. Finally, the normalized descriptors are aggregated using a sum-pooling block, resulting in a 2048-dimensional signature.
The following conclusion can be drawn from Figure 6: trained under identical conditions, REMAP consistently outperforms the MAC, Fisher Vector and RMAC representations.
G. Convolutional Neural Network architectures
In this section, we evaluate the performance of three state-of-the-art CNN architectures, VGG16, ResNet101 and ResNeXt101, when combined with our REMAP network. All the networks are trained end-to-end on the Landmarks-retrieval dataset. We use the multi-scale representation of REMAP to compare the CNNs. From the results shown in Figure 7 we can observe that all three CNNs perform well on the Holidays and Oxford datasets. The lower performance on MPEG can be attributed to the fact that MPEG is a very diverse dataset (graphics, paintings, videos, landmarks and objects) and our networks are fine-tuned only on landmarks. ResNeXt101 outperforms ResNet101 and VGG16 on all three datasets.

H. Large scale experiments

Figure 8 demonstrates the performance of our method on the large-scale datasets Holidays1M, Oxford1M and MPEG1M. The retrieval performance (mAP) is presented as a function of database size. We show the results for four methods:
• the REMAP descriptor truncated to D = 128;
• the RMAC descriptor truncated to D = 128;
• the REMAP descriptor compressed to 16.

V. COMPARISON WITH THE STATE-OF-THE-ART

It can be observed that the proposed REMAP outperforms all prior state-of-the-art methods. Compared to RMAC [6], REMAP provides a significant improvement of +5%, +1% and +7.8% in mAP on the Oxford, Holidays and MPEG datasets. Furthermore, the REMAP signature achieves gains of +3.7%, +1.6% and +6% on the Oxford, Holidays and MPEG datasets over the recently published GEM signature [5]. The difference in retrieval accuracy between REMAP and GEM is even more significant on the large-scale datasets: Holidays100k (+3%) and Oxford105k (+5%). We also compare REMAP with our implementation ResNeXt+RMAC, and the results show that the REMAP representation is more robust and discriminative.
REMAP also formed the core of the winning submission to the Google Landmark Retrieval Challenge on Kaggle [35]. This gave us an opportunity to experimentally compare its performance on the Google landmark dataset [36]. This new dataset is the largest worldwide dataset for image retrieval research, comprising more than a million images of 15K unique landmarks. More importantly, the evaluation was performed by Kaggle on a private unseen subset, preventing (unintentional) over-training. We evaluated REMAP, RMAC, MAC and SPoC aggregation applied to the ResNeXt network, without any additional modules (no query expansion or database augmentation); the results are shown in Table V. The REMAP architecture achieves an mAP of 42.8% and offers an over 8% gain over the closest competitor, R-MAC. The classical SIFT-based system with geometric verification achieved only 12%, clearly illustrating the gain brought by the CNN-based architectures. Our winning solution, which combined multiple signatures with query expansion (QE) and database augmentation (DA), achieved an mAP of 62.7%. It has recently become a standard technique to use Query Expansion (QE) [6], [5] to improve retrieval accuracy. We applied QE to the REMAP representation, and it can be observed from Table VI that REMAP+QE outperforms the state-of-the-art results RMAC+QE [6] and GEM+QE [5].
For visualization purposes, Figure 9 shows 5 queries from the Holidays100K and MPEG datasets for which the difference in recall between REMAP and RMAC is largest. We show the query and the top-ranked results obtained by the REMAP and RMAC representations for these queries, with correct matches indicated by a green frame.
Compact image representation
This section focuses on a comparison of compact image signatures, which are practicable in large-scale retrieval.
VI. CONCLUSION
In this paper we propose a novel CNN-based architecture, called REMAP, which learns a hierarchy of deep features representing different and complementary levels of visual abstraction. We aggregate a dense set of such multi-level CNN features, pooled within multiple spatial regions, and combine them with weights reflecting their discriminative power. The weights are initialized by the KL-divergence values for each spatial region and optimized end-to-end using SGD, jointly with the CNN features. The entire framework is trained in an end-to-end fashion using the triplet loss, and extensive tests demonstrate that REMAP significantly outperforms the latest state-of-the-art.
| 6,066 |
1906.06626
|
2952898185
|
This paper addresses the problem of very large-scale image retrieval, focusing on improving its accuracy and robustness. We target enhanced robustness of search to factors such as variations in illumination, object appearance and scale, partial occlusions, and cluttered backgrounds, which are particularly important when a search is performed across very large datasets with significant variability. We propose a novel CNN-based global descriptor, called REMAP, which learns and aggregates a hierarchy of deep features from multiple CNN layers, and is trained end-to-end with a triplet loss. REMAP explicitly learns discriminative features which are mutually supportive and complementary at various semantic levels of visual abstraction. These dense local features are max-pooled spatially at each layer, within multi-scale overlapping regions, before aggregation into a single image-level descriptor. To identify the semantically useful regions and layers for retrieval, we propose to measure the information gain of each region and layer using KL-divergence. Our system effectively learns during training how useful various regions and layers are and weights them accordingly. We show that such relative entropy-guided aggregation outperforms classical CNN-based aggregation controlled by SGD. The entire framework is trained in an end-to-end fashion, outperforming the latest state-of-the-art results. On the image retrieval datasets Holidays, Oxford, and MPEG, the REMAP descriptor achieves mAP of 95.5%, 91.5%, and 80.1%, respectively, outperforming any results published to date. REMAP also formed the core of the winning submission to the Google Landmark Retrieval Challenge on Kaggle.
|
Virtually all aggregation schemes rely on clustering in feature space, with varying degrees of sophistication: Bag-of-Words (BOW) @cite_14 , Vector of Locally Aggregated Descriptors (VLAD) @cite_28 , Fisher Vector (FV) @cite_21 , and Robust Visual Descriptor (RVD) @cite_7 . BOW is effectively a fixed-length histogram with descriptors assigned to the closest visual word; VLAD additionally encodes the positions of local descriptors within each Voronoi region by computing their residuals; the Fisher Vector (FV) aggregates local descriptors using the Fisher Kernel framework (second-order statistics); and RVD combines rank-based multi-assignment with robust accumulation to reduce the impact of outliers.
|
{
"abstract": [
"This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core.",
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieved is immediate, returning a ranked list of key frames shots in the manner of Google. The method is illustrated for matching in two full length feature films.",
"The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed-up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach.",
"Visual search and image retrieval underpin numerous applications, however the task is still challenging predominantly due to the variability of object appearance and ever increasing size of the databases, often exceeding billions of images. Prior art methods rely on aggregation of local scale-invariant descriptors, such as SIFT, via mechanisms including Bag of Visual Words (BoW), Vector of Locally Aggregated Descriptors (VLAD) and Fisher Vectors (FV). However, their performance is still short of what is required. This paper presents a novel method for deriving a compact and distinctive representation of image content called Robust Visual Descriptor with Whitening (RVD-W). It significantly advances the state of the art and delivers world-class performance. In our approach local descriptors are rank-assigned to multiple clusters. Residual vectors are then computed in each cluster, normalized using a direction-preserving normalization function and aggregated based on the neighborhood rank. Importantly, the residual vectors are de-correlated and whitened in each cluster before aggregation, leading to a balanced energy distribution in each dimension and significantly improved performance. We also propose a new post-PCA normalization approach which improves separability between the matching and non-matching global descriptors. This new normalization benefits not only our RVD-W descriptor but also improves existing approaches based on FV and VLAD aggregation. Furthermore, we show that the aggregation framework developed using hand-crafted SIFT features also performs exceptionally well with Convolutional Neural Network (CNN) based features. The RVD-W pipeline outperforms state-of-the-art global descriptors on both the Holidays and Oxford datasets. On the large scale datasets, Holidays1M and Oxford1M, SIFT-based RVD-W representation obtains a mAP of 45.1 and 35.1 percent, while CNN-based RVD-W achieve a mAP of 63.5 and 44.8 percent, all yielding superior performance to the state-of-the-art."
],
"cite_N": [
"@cite_28",
"@cite_14",
"@cite_21",
"@cite_7"
],
"mid": [
"1984309565",
"2131846894",
"2071027807",
"2526145926"
]
}
|
REMAP: Multi-layer entropy-guided pooling of dense CNN features for image retrieval
|
Research in visual search has become one of the most popular directions in the area of pattern analysis and machine intelligence. With dramatic growth in the multimedia industry, the need for an effective and computationally efficient visual search engine has become increasingly important. Given a large corpus of images, the aim is to retrieve individual images depicting instances of a user-specified object, scene or location. Important applications include management of multimedia content, mobile commerce, surveillance, medical imaging, augmented reality, robotics, organization of personal photos and many more. Robust and accurate visual search is challenging due to factors such as changing object appearance, viewpoints and scale, partial occlusions, varying backgrounds and imaging conditions. Furthermore, today's systems must be scalable to billions of images due to the huge volumes of multimedia data available.
In order to overcome these challenges, a compact and discriminative image representation is required. Convolutional S. Husain Neural Networks (CNNs) delivered effective solutions to many computer vision tasks, including image classification. However, they have yet to bring anticipated performance gains to the image retrieval problem, especially on very large scales. The main reason is that two fundamental problems still remain largely open: (1) how to best aggregate deep features extracted by a CNN network into compact and discriminative imagelevel representations, and (2) how to train the resultant CNNaggregator architecture for image retrieval tasks.
This paper addresses the aforementioned problems by proposing a novel region-based aggregation approach employing multi-layered deep features, and developing the associated architecture, which is trainable in an end-to-end fashion. Our descriptor is called REMAP, for Region-Entropy based Multi-layer Abstraction Pooling; the name reflects the key innovations. Our key contributions include:
• we propose to aggregate a hierarchy of deep features from different CNN layers, representing various levels of visual abstraction, and, importantly, show how to train such a representation within an end-to-end framework;
• we develop a novel approach to the ensembling of multi-resolution region-based features, which explicitly employs the regions' discriminative power, measured by the respective Kullback-Leibler (KL) divergence [1] values, to control the aggregation process;
• we show that this relative entropy-guided aggregation outperforms conventional CNN-based aggregations: MAC [2], NetVLAD [3], Fisher Vector [4], GEM [5] and RMAC [6];
• we compare the performance of three state-of-the-art base CNN architectures, VGG16 [7], ResNet101 [8] and ResNeXt101 [9], when integrated with our novel REMAP representation, and also against existing state-of-the-art models.
The overall architecture consists of a baseline CNN (e.g. VGG or ResNet) followed by the REMAP network. The CNN component produces dense, deep convolutional features that are aggregated by our REMAP method. The CNN filter weights and REMAP parameters (for multiple local regions) are trained simultaneously, adapting to the evolving distributions of deep descriptors and optimizing the multi-region aggregation parameters throughout the course of training. The proposed contributions are fully complementary and result in a system that not only outperforms the latest state-of-the-art in global descriptors, but can also compete with systems employing re-ranking based on local features. The significant performance improvements are demonstrated in a detailed experimental evaluation, which uses the classical datasets (Holidays [10], Oxford [11]) extended by the MPEG dataset [12], with up to 1M distractors. This paper is organized as follows. Related work is discussed in Section II. The REMAP novel components and the compact REMAP signature are presented in Section III. Our extensive experimentation is described in Section IV. Comparison with the state-of-the-art is presented in Section V, and finally conclusions are drawn in Section VI.

II. RELATED WORK
A. Methods based on hand-crafted descriptors
Early approaches typically involve extracting multiple local descriptors (usually hand-crafted) and combining them into a fixed-length image-level representation for fast matching. Local descriptors may be scale-invariant and centered on image feature points, as for SIFT [13], or extracted on regular, dense grids, possibly at multiple scales, independently of the image content [14]. An impressive number of local descriptors have been developed over the years, each claiming superiority, making it difficult to select the best one for the job; an attempt at a comparative study can be found in [15]. It should be noted that the descriptor dimension (for hand-crafted features) is typically between 32 and 192, which is an order of magnitude or two smaller than the number of deep features available for each image region.
Virtually all aggregation schemes rely on clustering in feature space, with varying degrees of sophistication: Bag-of-Words (BOW) [16], Vector of Locally Aggregated Descriptors (VLAD) [17], Fisher Vector (FV) [4], and Robust Visual Descriptor (RVD) [18]. BOW is effectively a fixed-length histogram with descriptors assigned to the closest visual word; VLAD additionally encodes the positions of local descriptors within each Voronoi region by computing their residuals; the Fisher Vector (FV) aggregates local descriptors using the Fisher Kernel framework (second-order statistics); and RVD combines rank-based multi-assignment with robust accumulation to reduce the impact of outliers.
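As an illustration of the simplest of these residual-based schemes, a plain VLAD aggregation (without the refinements of FV or RVD) can be sketched in a few lines; the codebook of visual words is assumed to be pre-trained.

```python
import numpy as np

def vlad(local_descs, centroids):
    """Assign each local descriptor to its nearest visual word and
    accumulate residuals per word; the flattened, L2-normalized result
    is the image-level signature."""
    k, d = centroids.shape
    assign = ((local_descs[:, None, :] - centroids[None, :, :]) ** 2).sum(-1).argmin(1)
    v = np.zeros((k, d))
    for i, c in enumerate(assign):
        v[c] += local_descs[i] - centroids[c]
    v = v.flatten()
    return v / (np.linalg.norm(v) + 1e-12)
```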
B. Methods based on CNN descriptors
More recent approaches to image retrieval replace the low-level hand-crafted features with deep convolutional descriptors obtained from convolutional neural networks (CNNs), typically pre-trained on large-scale datasets such as ImageNet. Azizpour et al. [19] compute an image-level representation by max-pooling aggregation of the last convolutional layer of VGGNet [7] and ALEXNET [20]. Babenko and Lempitsky [21] aggregated deep convolutional descriptors to form image signatures using Fisher Vectors (FV), Triangulation Embedding (TEMB) and Sum-pooling of convolutional features (SPoC). Kalantidis et al. [22] extended this work by introducing cross-dimensional weighting in the aggregation of CNN features. The retrieval performance is further improved when the RVD-W method is used for the aggregation of CNN-based deep descriptors [18]. Tolias et al. [2] proposed to extract a Maximum Activations of Convolutions (MAC) descriptor from several multi-scale overlapping regions of the last convolutional layer feature map. The region-based descriptors are L2-normalized, Principal Component Analysis (PCA)+whitened [23], L2-normalized again and finally sum-pooled to form a global signature called Regional Maximum Activations of Convolutions (RMAC). The RMAC dimensionality is equal to the number of filters of the last convolutional layer and is independent of the image resolution and the number of regions. In [24], Seddati et al. provided an in-depth study of several RMAC-based architectures and proposed a modified RMAC signature that combines multi-scale and two-layer feature extraction with feature selection. A detailed survey of content-based image retrieval (CBIR) methods based on hand-crafted and deep features is presented in [25].
C. Methods based on fine-tuned CNN descriptors
All of the aforementioned approaches use fixed pre-trained CNNs. However, these CNNs were trained for the purpose of image classification (e.g. the 1000 classes of ImageNet), in a fashion blind to the aggregation method, and hence are likely to perform sub-optimally in the task of image retrieval. To tackle this, Radenovic et al. [26] proposed to fine-tune the MAC representation using the Flickr Landmarks dataset [27]. More precisely, the MAC layer is added to the last convolutional layer of VGG or ResNet. The resultant network is then trained with a siamese architecture [26], minimizing the contrastive loss. In [5], the MAC layer is replaced by a trainable Generalized-Mean (GEM) pooling layer, which significantly boosts retrieval accuracy. In [6], Gordo et al. trained a siamese architecture with a ranking loss to enhance the RMAC representation. The recent NetVLAD [3] consists of a standard CNN followed by a Vector of Locally Aggregated Descriptors (VLAD) layer that aggregates the last convolutional features into a fixed-dimensional signature; its parameters are trainable via back-propagation. Ong et al. [28] proposed SIAM-FV: an end-to-end architecture which aggregates deep descriptors using Fisher Vector Pooling.
III. REMAP REPRESENTATION
The design of our REMAP descriptor addresses two issues fundamental to solving content-based image retrieval: (i) a novel aggregation mechanism for the multi-layer deep convolutional features extracted by a CNN network, and (ii) an advanced assembly of multi-region and multi-layer representations with end-to-end training.
The first novelty of our approach is to aggregate a hierarchy of deep features from different CNN layers, which are explicitly trained to represent multiple and complementary levels of visual feature abstraction, significantly enhancing recognition. Importantly, our multi-layer architecture is trained fully end-to-end and specifically for recognition. This distinguishes our approach from [24], where no end-to-end training of the CNN is performed: fixed weights of the pre-trained CNN are used as a feature extractor. The important and novel component of our REMAP architecture is multi-layer end-to-end fine-tuning, where the CNN filter weights, relative entropy weights and PCA+Whitening weights are optimized simultaneously using Stochastic Gradient Descent (SGD) with the triplet loss function [6]. The end-to-end training of the CNN is critical, as it explicitly enforces intra-layer feature complementarity, significantly boosting performance. Without such joint multi-layer learning, the features from the additional layers, while coincidentally useful, are not trained to be either discriminative or complementary. The REMAP multi-layer processing can be seen in Figure 1, where multiple parallel processing strands originate from the convolutional CNN layers, each including ROI-pooling [2], L2-normalization, relative entropy weighting and sum-pooling, before being concatenated into a single descriptor. The region entropy weighting is another important innovation proposed in our approach. The idea is to estimate how discriminatory individual features are in each local region, and to use this knowledge to optimally control the subsequent sum-pooling operation. The region entropy is defined as the relative entropy between the distributions of distances for matching and non-matching image descriptor pairs, measured using the KL-divergence function [1]. The regions which provide high separability (high KL-divergence) between matching and non-matching distributions are more informative in recognition and are therefore assigned higher weights. Thanks to our entropy-controlled pooling we can combine a denser set of region-based features, without the risk of less informative regions overwhelming the best contributors. Practically, the KL-divergence Weighting (KLW) block in the REMAP architecture is implemented using a convolutional layer with weights initialized by the KL-divergence values and optimized using Stochastic Gradient Descent (SGD) on the triplet loss function.
Fig. 1: REMAP architecture. The aggregated vectors are concatenated, PCA-whitened and L2-normalized to form a global image descriptor.
All blocks in the REMAP network represent differentiable operations, therefore the entire architecture can be trained end-to-end. We perform training on the Landmarks-retrieval dataset using the triplet loss; please see the Experimental Section for full details of the datasets and the training process. Additionally, the REMAP signatures for the test datasets are encoded using the Product Quantization (PQ) [29] approach to reduce the memory requirement and complexity of the retrieval system.
We will now describe in detail the components of the REMAP architecture, with reference to Figure 1. We can see that it comprises a number of commonly used components, including the max-pool, sum-pool and L2-norm functions. We denote these functions as $\mathrm{Maxp}(x)$, $\mathrm{Sump}(x)$ and $L2(x)$ respectively, where $x$ represents an input tensor. We also employ the Region Of Interest (ROI) function [2], $\zeta : \mathbb{R}^{w \times h \times d} \rightarrow \mathbb{R}^{r \times d}$. The ROI function $\zeta$ splits an input tensor of size $w \times h \times d$ into $r$ overlapping spatial blocks using a rigid grid and performs spatial max-pooling within regions, producing a single $d$-dimensional vector for each region. More precisely, the ROI block extracts square regions from the CNN response map at $S$ different scales [2]. For each scale, the regions are extracted uniformly such that the overlap between consecutive regions is as close as possible to 40%. The number of regions $r$ extracted by the ROI block depends on the image size ($1024 \times 768 \times 3$) and the scale factor $S$. We performed experiments to determine the optimum number of regions for our REMAP network. It can be observed from Table I that the best retrieval accuracy is obtained using $r = 40$. This is consistent across all the experiments.
A. CNN Layer Access Function
The base of the proposed REMAP architecture is formed by any of the existing CNNs commonly used for retrieval, for example VGG16 [7], ResNet101 [8] and ResNeXt101 [9]. All these CNNs are essentially a sequential composition of $L$ "convolutional layers". The exact nature of each of these blocks will differ between the CNNs. However, we can view each of these blocks as some function $l_i : \mathbb{R}^{w_i \times h_i \times d_i} \rightarrow \mathbb{R}^{w'_i \times h'_i \times d'_i}$, $1 \leq i \leq L$, that transforms its respective input tensor into some output tensor, where $w$, $h$ and $d$ denote the width, height and depth of the input tensor to a certain block, and $w'$, $h'$ and $d'$ denote the width, height and depth of the output tensor from that block.
The CNN can then be represented as the function composition $f(x) = l_L(l_{L-1}(\ldots(l_1(x))))$, where $x$ is the input image of size $w_0 \times h_0$ with $d_0$ channels. For our purpose, we would like to access the output of some intermediate convolutional layer. Therefore, we will create a "layer access" function:
$$f_l(x) = l_l(l_{l-1}(\ldots(l_1(x)))) \quad (1)$$
where $1 \leq l \leq L$; $f_l$ will output the convolutional output of layer $l$.
B. Parallel Divergence-Guided ROI Streams
The proposed REMAP architecture performs separate and distinct transformations on different CNN layer outputs via parallel divergence-guided ROI streams. Each stream takes as input the convolutional output of some CNN layer and performs ROI pooling on it. The output vectors of the ROI pooling are L2-normalized, weighted (based on their informativeness), and linearly combined to form a single aggregated representation.
Specifically, suppose we would like to use the output tensor of layer $l'$ from the CNN as input for ROI processing. Now, let $o = f_{l'}(x)$, $o \in \mathbb{R}^{w \times h \times d}$, be the output tensor from the CNN's $l'$-th convolutional layer given an input image $x$. This is then given to the ROI block followed by the L2 block, with the result denoted as $r = L2(\zeta(o))$. The linear combination of the region vectors is then carried out by a weighted sum:
$$W(r) = \sum_{i=1}^{r} \alpha_i \, r(i)$$
where $r(i)$ denotes the $i$-th column of the matrix $r$.
In summary, the ROI stream can be defined by the following function composition:
$$P(x; l', \alpha) = W(L2(\zeta(f_{l'}(x))); \alpha)$$
where the set of linear combination weights is denoted as $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_r\}$. In this work, the linear combination weights can be initialized differently, fixed as constants, or made learnable in the SGD process. These in turn give rise to different existing CNN methods. In the RMAC [6] architecture, the weights are fixed to 1 and not optimized during the end-to-end training stage, i.e. the weight vector is $\alpha = \{1, 1, \ldots, 1\}$.
A drawback of the ROI-pooling method employed in RMAC is that it gives equal importance to all regional representations regardless of information content. We propose to measure the information gain of regions using the class-separability between the probability distributions of matching and non-matching descriptor pairs for each region. Our algorithm to determine the relative entropy weights includes the following steps: (1) images of size $1024 \times 768 \times 3$ are passed through the offline ResNeXt101 CNN; (2) the features from the final convolutional layers are then passed to the ROI block, which splits an input tensor of size $32 \times 24 \times 2048$ into 40 spatial blocks and performs spatial max-pooling within regions, producing a single 2048-dimensional vector per region/layer; (3) for each region and each layer, we compute $Pr(y|m)$ and $Pr(y|n)$ as the probability density functions of observing a Euclidean distance $y$ for a matching and a non-matching descriptor pair respectively. The KL-divergence measure is employed to compute the separability between the matching and non-matching pdfs. It can be observed from Figure 2 (a-e) that the KL-divergence values for different regions vary significantly. For example, regions 13, 26 and 30 provide better separability (higher KL-divergence) than regions 24 and 37.
We propose to assign learnable weights to regional descriptors before aggregation into REMAP, to enhance the ability to focus on important regions in the image. Thus we view our CNN as an information-flow network, where we control the impact of various channels based on the observed information gain. More precisely, the KL-divergence values for each region (Figure 2(f)) are used to initialize the ROI weight vector $\alpha$. We enforce non-negativity on the weight vector $\alpha$ during the training process.
Practically, the KL-divergence weighting layer (KLW) is implemented using a convolutional operation with weights that can be learned using stochastic gradient descent on the triplet loss function.
C. Final REMAP Architecture
We can now describe the proposed multi-stream REMAP. At the base is an existing Convolutional Neural Network (VGG or ResNet). The CNNs are essentially a sequential composition of $L$ "convolutional layers", $N$ of which are used in aggregation ($N \leq L$). The output tensor of convolutional layer $l$ can be accessed using $f_l$ (Eq. 1). We denote the $N$ CNN layers that will be used in aggregation as $\{l'_1, l'_2, \ldots, l'_N\}$, where $l'_i \in \{l_1, l_2, \ldots, l_L\}$ for each $i = 1, 2, \ldots, N$.
Associated with each of the above CNN layers $l' \in \{l'_1, l'_2, \ldots, l'_N\}$ is a set of ROI linear combination coefficients $\alpha_{l'_i} = \{\alpha_{l'_i,1}, \ldots, \alpha_{l'_i,r}\}$. As a result, we have $N$ parallel ROI streams, each with output $P(x; l'_i, \alpha_{l'_i})$. The outputs of the $N$ ROI streams are concatenated together into a high-dimensional vector: $p = [P(x; l'_1, \alpha_{l'_1}), \ldots, P(x; l'_N, \alpha_{l'_N})]^T$. We then pass $p$ to a fully connected layer with weights initialized by PCA+Whitening coefficients [6].
In Table II, we perform experiments on Holidays, Oxford and MPEG to demonstrate how different convolutional layers of the off-the-shelf ResNeXt101 perform when combined within the REMAP architecture. It is interesting to note that, individually, the best retrieval accuracy on the Holidays and MPEG datasets is provided by layer 2, and not by the bottleneck layer 1. Layer 1 (the last convolutional layer) delivers the best performance only on the Oxford dataset. The performance of layer 3 is lowest, since it is too sensitive to local deformation. However, the philosophy of our design is to combine different convolutional layers, so we investigate the performance of such combinations (shown in the lower half of the table). It can be observed from Table II that multi-layer REMAP significantly outperforms any single-layer representation. In the final REMAP representation we use the combination of the last two convolutional layers (layers 1+2), trained jointly, as this provides the best balance between retrieval accuracy and the computational complexity of the training process. In Figure 3, we visualize the maximum activation responses of the last two convolutional layers of the off-the-shelf ResNeXt101. It can be seen that the two layers focus on different but important features of the object, justifying our multi-layer aggregation (MLA) approach.

D. End-to-End Siamese learning for image retrieval

An important feature of the REMAP architecture is that all its blocks are designed to represent differentiable operations. The fixed-grid ROI pooling is differentiable [30]. Our novel component, KL-divergence weighting (KLW), can be implemented using a 1D convolutional layer with weights that can be optimized. The sum-pooling of regional descriptors, L2-normalization and concatenation of multi-layer descriptors are also differentiable. The PCA+Whitening transformation can be implemented using a Fully-Connected (FC) layer with bias. Therefore, we can learn the CNN filter weights and REMAP parameters (KLW weights and FC layer weights) simultaneously using SGD on the triplet loss function, adapting to the evolving distributions of deep features and optimizing the multi-region aggregation parameters over the course of training.
We proceed by removing the last pooling layer, the prediction layer and the loss layer from ResNeXt101 (trained on ImageNet) and adding the REMAP pipeline to the last two convolutional layers. We then adopt a three-stream siamese architecture to fine-tune the REMAP network using the triplet loss [6]. More precisely, we are given a training dataset of $T$ triplets of images; each triplet consists of a query image, a matching image and a closest non-matching image (the non-matching image whose descriptor is most similar to the query image descriptor). Let $p_q$ be a REMAP descriptor extracted from the query image, $p_m$ be a descriptor from the matching image, and $p_n$ be a descriptor from a non-matching image. The triplet loss can be computed as:
L = 0.5 · max(0, th + ||p_q − p_m||² − ||p_q − p_n||²),    (2)
where the parameter th controls the margin of the classifier: it is the distance threshold defining when the gap between matching and non-matching pairs is large enough for the triplet to incur no loss. The gradients of the loss L are back-propagated through the three streams of the REMAP network, and the parameters of the convolutional layers, the KLW layer and the PCA+whitening layer are updated.
E. Compact REMAP signature
Encoding a high-dimensional image representation as a compact signature benefits storage, extraction and matching speed, especially in large-scale image retrieval. This section focuses on deriving a small-footprint image descriptor from the core REMAP representation. In the first approach, we pass an image through the REMAP network to obtain a D-dimensional descriptor and select the top d dimensions out of D.
The second approach is based on the Product Quantization (PQ) algorithm [17], in which the D-dimensional REMAP descriptor is first split into m sub-parts of equal length D/m. Each sub-part is quantized using a separate K-means quantizer with k = 256 cluster centres and encoded using n = log2(k) bits. The size of the PQ-embedded signature is B = m × n bits. At test time, the similarity between the query vector and database vectors is computed using Asymmetric Distance Computation [17].
IV. EXPERIMENTAL EVALUATION
The purpose of this section is to evaluate the proposed REMAP architecture and compare it against the latest state-of-the-art CNN architectures. We first present the experimental setup, including the datasets and evaluation protocols. We then analyze the impact of the novel components that constitute our method, namely KL-divergence based weighting of region descriptors and multi-layer aggregation. Furthermore, we compare the retrieval performance of different feature aggregation methods, including MAC, RMAC, Fisher Vectors and REMAP, on four varied datasets with up to 1 million distractors. A comparison with different CNN representations is presented at the end of this section.
A. Training datasets
We train on a subset of the Landmarks dataset used in the work of Babenko et al. [31], which contains approximately 100k images depicting 650 famous landmarks. It was collected through textual queries in the Yandex image search engine, and therefore contains a significant proportion of images unrelated to the landmarks, which we filter out and remove. Furthermore, to guarantee unbiased test results we exclude all images that overlap with the MPEG, Holidays and Oxford5k datasets used in testing. We call this subset the Landmarks-retrieval dataset.
The process of removing images unrelated to the landmarks and generating a list of matching image pairs for triplet generation is semi-automatic, and relies on local SIFT features detected with a Hessian-affine detector and aggregated with the RVDW descriptor [18]. For each of the 650 landmark classes we manually select a query image depicting the particular landmark and compute its similarity (based on the RVDW global descriptors) to all remaining images in the same class. We then remove the images whose distance from the query is greater than a certain threshold (outliers), forming the Landmarks-retrieval subset of 25k images.
To generate matching image pairs we randomly select fifty image pairs from each class in the Landmarks-retrieval dataset. The RANSAC algorithm is applied to matching SIFT descriptors in order to filter out pairs that are too difficult to match (fewer than 5 inliers: extreme hard examples) or very easy to match (more than 30 inliers: extreme easy examples). In this way, about 15k matching image pairs are selected for fine-tuning with the triplet loss function.
B. Training configurations
We use the MATLAB toolbox MatConvNet [32] to perform training and evaluation. The state-of-the-art networks VGG16, ResNet101 and ResNeXt101 (all pre-trained on ImageNet) are downloaded in MATLAB format, and their batch-normalization layers are merged into the preceding convolutional layers for fine-tuning.
Finetuning with triplet loss
Each aforementioned CNN is integrated with the REMAP network and the entire architecture is fine-tuned on the Landmarks-retrieval dataset with triplet loss. The images are resized to 1024 × 768 pixels before passing through the network. Optimization is performed by the Stochastic Gradient Descent (SGD) algorithm with momentum 0.9, learning rate 10^-3 and weight decay 5 × 10^-5. The triplet loss margin is set to 0.1.
An important consideration during the training process is the generation of triplets, as generating them randomly will mostly yield triplets that incur no loss. To address this, we divide the 15k matching image pairs from the Landmarks-retrieval dataset into 5 groups. REMAP descriptors are extracted from the 25k images using the current model. For each matching pair, the closest non-matching (hard negative) example is then chosen, forming a triplet: query example, matching example, non-matching example. The hard negatives are re-mined once per group, i.e. after every 3000 triplets.
Another consideration is the memory requirement during training, as the network trains with an image size of 1024×768 pixels and three streams at the same time. Fine-tuning the deep architectures VGG16, ResNet101 and ResNeXt101 is memory-consuming, and we could fit only one triplet at a time on a TITAN X GPU with 12 GB of memory. To make the training process effective, we update the model parameters after every 64 triplets. The training process takes approximately 3 days to complete.
C. Test datasets
The INRIA Holidays dataset [10] contains 1491 holiday photos with a subset of 500 used as queries. Retrieval accuracy is measured by mean Average Precision (mAP), as defined in [11]. To evaluate model retrieval accuracy in a more challenging scenario, the Holidays dataset is combined with 1 million distractor images obtained from Flickr, forming Holidays1M [18].
The University of Kentucky Benchmark (UKB) [33] dataset comprises 10200 images of 2550 objects. Here the performance measure is the average number of relevant images returned in the first 4 positions (4 × Recall@4).
The Oxford5k dataset [11] contains 5063 images of Oxford landmarks. Performance is evaluated using mAP over 55 queries. To test large-scale retrieval, this dataset is augmented with 100k and 1 million Flickr images [34], forming the Oxford105k [11] and Oxford1M [18] datasets respectively. We follow the standard protocol for the Oxford dataset and compute the image signatures of query images using the cropped-activations method [3], [26].
The Motion Picture Experts Group (MPEG) has developed a heterogeneous and challenging MPEG CDVS dataset for evaluating the retrieval performance of image signatures [12]. The dataset contains 33590 images from five categories: (1) graphics, including book and DVD covers, documents and business cards; (2) photographs of paintings; (3) video frames; (4) landmarks; and (5) common objects. A total of 8313 queries are used to evaluate the retrieval performance in terms of mAP.
The dimensionality of input images to the CNN is limited to 1024×768 pixels. In order to illustrate clearly and fairly the benefits of the novel elements proposed in our framework, we selected the best state-of-the-art RMAC representation and integrated it with the latest ResNeXt101 architecture. We then performed fine-tuning, using the procedures outlined before. Please note that our fully optimized and fine-tuned RMAC representation outperforms results reported earlier on the Oxford5k, MPEG and Oxford105k datasets, and we use this improved result as the reference performance shown in Table III and Table IV; it represents the best state-of-the-art. We then introduce the REMAP innovations, KL-divergence based weighting (KLW) and multi-layer aggregation (MLA), and show the relative performance gains. Finally we combine all novel elements to show the overall improvement over the baseline RMAC.
KL-divergence based weighting (KLW): We performed experiments to show that initializing the KLW block with relative entropy weights and then further optimizing the weights using SGD is crucial for achieving optimum retrieval performance. We trained the following networks for comparison:

• The baseline is the RMAC representation, in which the ROI weights are fixed (a = {1, 1, ..., 1}) and not optimized during the training process.
• In the second network, RMAC+SGD, the weights are initialized with 1 (a = {1, 1, ..., 1}) and optimized using SGD on the triplet loss function.
• In the final network, KLW, the weights are initialized with the KL-divergence values and further optimized using SGD on the triplet loss.

It can be observed from Table III that initialization of the KLW block with relative entropy weights is indeed crucial for network convergence and achieves the best retrieval accuracy on all datasets. Furthermore, RMAC+SGD is not able to learn the optimum regional weights, resulting in only a marginal improvement over RMAC. This is a very interesting result, which shows that optimizing the loss function alone may not always lead to optimal results, and that initializing the network using the information gain may lead to improved performance.
Multi-layer aggregation (MLA): Next, we perform experiments to show the advantage of multi-layer aggregation of deep features (MLA). It can be observed from Table IV that MLA brings improvements of +1.8%, +2.6% and +2.4% on the Oxford5k, MPEG and Oxford105k datasets respectively, compared to the single-layer aggregation employed in RMAC.
Finally, we combine the KLW and MLA blocks to form our novel REMAP signature and compare its retrieval accuracy with the RMAC reference signature, as a function of descriptor dimensionality. Figure 4 clearly demonstrates that REMAP significantly outperforms RMAC on all state-of-the-art datasets.
E. Multi-Scale REMAP (MS-REMAP)
In this section, we evaluate the retrieval performance of the MS-REMAP representation, computed at test time without any further training. In MS-REMAP, descriptors are extracted from images resized to two different scales and then aggregated into a single signature [6]. More precisely, let X_1 and X_2 be REMAP descriptors extracted from two images of sizes 1024×768 and 1280×960 pixels respectively. The MS-REMAP descriptor X_m is computed by the weighted aggregation of X_1 and X_2:
X_m = 2 · X_1 + 1.4 · X_2    (3)
It can be observed from Figure 5 that the multi-scale representation brings average gains of 1%, 1.8% and 1.5% on the Holidays, Oxford5k and MPEG datasets compared to the single-scale representation.
F. Comparison of MAC, Fisher Vector, RMAC and REMAP networks
In this section we compare our best network, REMAP, with the state-of-the-art representations MAC [2], RMAC [6] and FV [4]. All networks are trained end-to-end on the Landmarks-retrieval dataset using triplet loss, and we use the multi-scale version of all representations. In the MAC pipeline, a MAX-pooling block is added to the last convolutional layer of ResNeXt101, followed by PCA+whitening and L2-normalization blocks. The dimensionality of the output descriptor is 2048-D.
For the Fisher Vector method, the last convolutional layer is followed by a Fisher Vector aggregation block, a PCA+whitening block and an L2-normalization block. 16 cluster centers are used for the Fisher Vector GMM, with parameters initialized using the EM algorithm; this results in an FV of dimensionality 32k. The parameters of the CNN and the Fisher Vector are trained using stochastic gradient descent on the triplet loss function.
In the RMAC pipeline, the features of the last convolutional layer are passed through a rigid-grid ROI-pooling block. The region-based descriptors are L2-normalized, whitened with PCA and L2-normalized again. Finally, the normalized descriptors are aggregated by a sum-pooling block, resulting in a 2048-dimensional signature.
The following conclusions can be drawn from Figure 6:
•
G. Convolutional Neural Network architectures
In this section, we evaluate the performance of three state-of-the-art CNN architectures, VGG16, ResNet101 and ResNeXt101, when combined with our REMAP network. All networks are trained end-to-end on the Landmarks-retrieval dataset. We use the multi-scale representation of REMAP to compare the CNNs. From the results shown in Figure 7 we can observe that all three CNNs perform well on the Holidays and Oxford datasets. The lower performance on MPEG can be attributed to the fact that MPEG is a very diverse dataset (graphics, paintings, videos, landmarks and objects) and our networks are fine-tuned only on landmarks. ResNeXt101 outperforms ResNet101 and VGG16 on all three datasets.

H. Large scale experiments

Figure 8 demonstrates the performance of our method on the large-scale datasets Holidays1M, Oxford1M and MPEG1M. The retrieval performance (mAP) is presented as a function of database size. We show the results for four methods:

• the REMAP descriptor truncated to D = 128;
• the RMAC descriptor truncated to D = 128;
• the REMAP descriptor compressed to 16

It can be observed that the proposed REMAP outperforms all prior state-of-the-art methods. Compared to RMAC [6], REMAP provides significant improvements of +5%, +1% and +7.8% in mAP on the Oxford, Holidays and MPEG datasets. Furthermore, the REMAP signature achieves gains of +3.7%, +1.6% and +6% on the Oxford, Holidays and MPEG datasets over the recently published GEM signature [5]. The difference in retrieval accuracy between REMAP and GEM is even more significant on the large-scale datasets: Holidays100k (+3%) and Oxford105k (+5%). We also compare REMAP with our implementation ResNeXt+RMAC, and the results show that the REMAP representation is more robust and discriminative.
REMAP also formed the core of the winning submission to the Google Landmark Retrieval Challenge on Kaggle [35]. This gave us an opportunity to experimentally compare its performance on the Google Landmarks dataset [36]. This new dataset is the largest worldwide dataset for image retrieval research, comprising more than a million images of 15K unique landmarks. More importantly, the evaluation was performed by Kaggle on a private unseen subset, preventing (unintentional) over-training. We evaluated REMAP, RMAC, MAC and SPoC aggregation applied to the ResNeXt network, without any additional modules (no query expansion, no database augmentation); the results are shown in Table V. The REMAP architecture achieves an mAP of 42.8% and offers over 8% gain over the closest competitor, R-MAC. A classical SIFT-based system with geometric verification achieved only 12%, clearly illustrating the gain brought by the CNN-based architectures. Our winning solution, which combined multiple signatures with query expansion (QE) and database augmentation (DA), achieved an mAP of 62.7%. It has recently become standard to use query expansion (QE) [6], [5] to improve retrieval accuracy. We applied QE to the REMAP representation, and it can be observed from Table VI that REMAP+QE outperforms the state-of-the-art results RMAC+QE [6] and GEM+QE [5].
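The paper does not specify which QE variant is used; a common choice, shown here as a hedged sketch, is average query expansion (AQE): average the query descriptor with its top-k retrieved neighbours, re-normalize, and query again.

```python
# Hedged sketch of average query expansion (AQE), one common QE variant;
# the exact variant used in our experiments may differ.
import numpy as np

def average_qe(q, db, k=10):
    top = np.argsort(-(db @ q))[:k]        # initial ranking, top-k neighbours
    q_exp = q + db[top].sum(axis=0)        # expand the query
    return q_exp / np.linalg.norm(q_exp)   # re-normalize

db = np.random.randn(1000, 256)
db /= np.linalg.norm(db, axis=1, keepdims=True)
q = db[0] + 0.05 * np.random.randn(256)
q /= np.linalg.norm(q)
print(np.argsort(-(db @ average_qe(q, db)))[:5])   # re-ranked shortlist
```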
For visualization purposes, Figure 9 shows 5 queries from the Holidays100K and MPEG datasets for which the difference in recall between REMAP and RMAC is largest. We show the query and the top-ranked results obtained with the REMAP and RMAC representations, where correct matches are marked with a green frame.
Compact image representation
This section focuses on a comparison of compact image signatures which are practicable in large-scale retrieval containing
VI. CONCLUSION
In this paper we propose a novel CNN-based architecture, called REMAP, which learns a hierarchy of deep features representing different and complementary levels of visual abstraction. We aggregate a dense set of such multi-level CNN features, pooled within multiple spatial regions, and combine them with weights reflecting their discriminative power. The weights are initialized with the KL-divergence values of the spatial regions and optimized end-to-end using SGD, jointly with the CNN features. The entire framework is trained in an end-to-end fashion using triplet loss, and extensive tests demonstrate that REMAP significantly outperforms the latest state-of-the-art.

Miroslaw Bober is a Professor of Video Processing at the University of Surrey, U.K. In 2011 he co-founded Visual Atoms Ltd, a company specializing in visual analysis and search technologies. Between 1997 and 2011 he headed the Mitsubishi Electric Corporate R&D Center Europe (MERCE-UK). He received a BSc degree from the AGH University of Science and Technology, and MSc and PhD degrees from the University of Surrey. His research interests include computer vision, machine learning and AI, with a focus on the analysis and understanding of visual and multimodal data, and the efficient representation of its semantic content. Miroslaw led the development of ISO MPEG standards for over 20 years, chairing the MPEG-7, CDVS and CVDA groups. He is an inventor of over 80 patents, many deployed in products. His publication record includes over 100 refereed publications, including three books and book chapters, and his visual search technologies recently won the Google Landmark Retrieval Challenge on Kaggle.
| 6,066 |
1906.06626
|
2952898185
|
This paper addresses the problem of very large-scale image retrieval, focusing on improving its accuracy and robustness. We target enhanced robustness of search to factors such as variations in illumination, object appearance and scale, partial occlusions, and cluttered backgrounds, which are particularly important when a search is performed across very large datasets with significant variability. We propose a novel CNN-based global descriptor, called REMAP, which learns and aggregates a hierarchy of deep features from multiple CNN layers, and is trained end-to-end with a triplet loss. REMAP explicitly learns discriminative features which are mutually supportive and complementary at various semantic levels of visual abstraction. These dense local features are max-pooled spatially at each layer, within multi-scale overlapping regions, before aggregation into a single image-level descriptor. To identify the semantically useful regions and layers for retrieval, we propose to measure the information gain of each region and layer using KL-divergence. Our system effectively learns during training how useful various regions and layers are and weights them accordingly. We show that such relative entropy-guided aggregation outperforms classical CNN-based aggregation controlled by SGD. The entire framework is trained in an end-to-end fashion, outperforming the latest state-of-the-art results. On the image retrieval datasets Holidays, Oxford, and MPEG, the REMAP descriptor achieves mAP of 95.5%, 91.5%, and 80.1%, respectively, outperforming any results published to date. REMAP also formed the core of the winning submission to the Google Landmark Retrieval Challenge on Kaggle.
|
All of the aforementioned approaches use fixed pre-trained CNNs. However, these CNNs were trained for the purpose of image classification (e.g. the 1000 classes of ImageNet), in a fashion blind to the aggregation method, and hence are likely to perform sub-optimally in the task of image retrieval. To tackle this, Radenovic et al. @cite_23 proposed to fine-tune the MAC representation using the Flickr Landmarks dataset @cite_15 . More precisely, the MAC layer is added to the last convolutional layer of VGG or ResNet. The resultant network is then trained with a siamese architecture @cite_23 , minimizing the contrastive loss. In @cite_17 , the MAC layer is replaced by a trainable Generalized-Mean (GEM) pooling layer, which significantly boosts retrieval accuracy. In @cite_27 , Gordo et al. trained a siamese architecture with a ranking loss to enhance the RMAC representation. The recent NetVLAD @cite_13 consists of a standard CNN followed by a Vector of Locally Aggregated Descriptors (VLAD) layer that aggregates the last convolutional features into a fixed-dimensional signature, with parameters trainable via back-propagation. In @cite_11 , Ong et al. proposed SIAM-FV: an end-to-end architecture which aggregates deep descriptors using Fisher Vector pooling.
|
{
"abstract": [
"",
"While deep learning has become a key ingredient in the top performing methods for many computer vision tasks, it has failed so far to bring similar improvements to instance-level image retrieval. In this article, we argue that reasons for the underwhelming results of deep methods on image retrieval are threefold: (1) noisy training data, (2) inappropriate deep architecture, and (3) suboptimal training procedure. We address all three issues. First, we leverage a large-scale but noisy landmark dataset and develop an automatic cleaning method that produces a suitable training set for deep retrieval. Second, we build on the recent R-MAC descriptor, show that it can be interpreted as a deep and differentiable architecture, and present improvements to enhance it. Last, we train this network with a siamese architecture that combines three streams with a triplet loss. At the end of the training process, the proposed architecture produces a global image representation in a single forward pass that is well suited for image retrieval. Extensive experiments show that our approach significantly outperforms previous retrieval approaches, including state-of-the-art methods based on costly local descriptor indexing and spatial verification. On Oxford 5k, Paris 6k and Holidays, we respectively report 94.7, 96.6, and 94.8 mean average precision. Our representations can also be heavily compressed using product quantization with little loss in accuracy.",
"Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes.",
"Structure-from-Motion for unordered image collections has significantly advanced in scale over the last decade. This impressive progress can be in part attributed to the introduction of efficient retrieval methods for those systems. While this boosts scalability, it also limits the amount of detail that the large-scale reconstruction systems are able to produce. In this paper, we propose a joint reconstruction and retrieval system that maintains the scalability of large-scale Structure-from-Motion systems while also recovering the often lost ability of reconstructing fine details of the scene. We demonstrate our proposed method on a large-scale dataset of 7.4 million images downloaded from the Internet.",
"We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks.",
"Image descriptors based on activations of Convolutional Neural Networks (CNNs) have become dominant in image retrieval due to their discriminative power, compactness of representation, and search efficiency. Training of CNNs, either from scratch or fine-tuning, requires a large amount of annotated data, where a high quality of annotation is often crucial. In this work, we propose to fine-tune CNNs for image retrieval on a large collection of unordered images in a fully automated manner. Reconstructed 3D models obtained by the state-of-the-art retrieval and structure-from-motion methods guide the selection of the training data. We show that both hard-positive and hard-negative examples, selected by exploiting the geometry and the camera positions available from the 3D models, enhance the performance of particular-object retrieval. CNN descriptor whitening discriminatively learned from the same training data outperforms commonly used PCA whitening. We propose a novel trainable Generalized-Mean (GeM) pooling layer that generalizes max and average pooling and show that it boosts retrieval performance. Applying the proposed method to the VGG network achieves state-of-the-art performance on the standard benchmarks: Oxford Buildings, Paris, and Holidays datasets."
],
"cite_N": [
"@cite_11",
"@cite_27",
"@cite_23",
"@cite_15",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"2544587078",
"2963125676",
"1908016767",
"2620629206",
"2963588253"
]
}
|
REMAP: Multi-layer entropy-guided pooling of dense CNN features for image retrieval
|
Research in visual search has become one of the most popular directions in the area of pattern analysis and machine intelligence. With dramatic growth in the multimedia industry, the need for an effective and computationally efficient visual search engine has become increasingly important. Given a large corpus of images, the aim is to retrieve individual images depicting instances of a user-specified object, scene or location. Important applications include management of multimedia content, mobile commerce, surveillance, medical imaging, augmented reality, robotics, organization of personal photos and many more. Robust and accurate visual search is challenging due to factors such as changing object appearance, viewpoints and scale, partial occlusions, varying backgrounds and imaging conditions. Furthermore, today's systems must be scalable to billions of images due to the huge volumes of multimedia data available.
In order to overcome these challenges, a compact and discriminative image representation is required. Convolutional Neural Networks (CNNs) have delivered effective solutions to many computer vision tasks, including image classification. However, they have yet to bring the anticipated performance gains to the image retrieval problem, especially at very large scale. The main reason is that two fundamental problems still remain largely open: (1) how to best aggregate the deep features extracted by a CNN into a compact and discriminative image-level representation, and (2) how to train the resultant CNN-aggregator architecture for image retrieval tasks.
This paper addresses the aforementioned problems by proposing a novel region-based aggregation approach employing multi-layered deep features, and by developing the associated architecture, which is trainable in an end-to-end fashion. Our descriptor is called REMAP, for Region-Entropy based Multi-layer Abstraction Pooling, the name reflecting the key innovations. Our key contributions include:
• we propose to aggregate a hierarchy of deep features from different CNN layers, representing various levels of visual abstraction, and, importantly, show how to train such a representation within an end-to-end framework;
• we develop a novel approach to the ensembling of multi-resolution region-based features, which explicitly employs the regions' discriminative power, measured by the respective Kullback-Leibler (KL) divergence [1] values, to control the aggregation process;
• we show that this relative entropy-guided aggregation outperforms conventional CNN-based aggregations: MAC [2], NetVLAD [3], Fisher Vector [4], GEM [5] and RMAC [6];
• we compare the performance of three state-of-the-art base CNN architectures, VGG16 [7], ResNet101 [8] and ResNeXt101 [9], when integrated with our novel REMAP representation, and also against existing state-of-the-art models.
The overall architecture consists of a baseline CNN (e.g. VGG or ResNet) followed by the REMAP network. The CNN component produces dense, deep convolutional features that are aggregated by our REMAP method. The CNN filter weights and REMAP parameters (for multiple local regions) are trained simultaneously, adapting to the evolving distributions of deep descriptors and optimizing the multi-region aggregation parameters throughout the course of training. The proposed contributions are fully complementary and result in a system that not only outperforms the latest state-of-the-art in global descriptors, but can also compete with systems employing re-ranking based on local features. The significant performance improvements are demonstrated in a detailed experimental evaluation, which uses the classical datasets (Holidays [10], Oxford [11]) extended by the MPEG dataset [12], with up to 1M distractors.

This paper is organized as follows. Related work is discussed in Section II. The REMAP novel components and the compact REMAP signature are presented in Section III. Our extensive experimentation is described in Section IV. Comparison with the state-of-the-art is presented in Section V, and finally conclusions are drawn in Section VI.
A. Methods based on hand-crafted descriptors
Early approaches typically involve extracting multiple local descriptors (usually hand-crafted) and combining them into a fixed-length image-level representation for fast matching. Local descriptors may be scale-invariant and centered on image feature points, as for SIFT [13], or extracted on regular, dense grids, possibly at multiple scales, independently of the image content [14]. An impressive number of local descriptors have been developed over the years, each claiming superiority, making it difficult to select the best one for the job; an attempt at a comparative study can be found in [15]. It should be noted that the descriptor dimension (for hand-crafted features) is typically between 32 and 192, which is an order of magnitude or two smaller than the number of deep features available for each image region.
Virtually all aggregation schemes rely on clustering in feature space, with varying degree of sophistication: Bag-of-Words (BOW) [16], Vector of Locally Aggregated Descriptors (VLAD) [17], Fisher Vector (FV) [4], and Robust Visual Descriptor (RVD) [18]. BOW is effectively a fixed length histogram with descriptors assigned to the closest visual word; VLAD additionally encodes the positions of local descriptors within each voronoi region by computing their residuals; the Fisher Vector (FV) aggregates local descriptors using the Fisher Kernel framework (second order statistics), and RVD combines rank-based multi-assignment with robust accumulation to reduce the impact of outliers.
B. Methods based on CNN descriptors
More recent approaches to image retrieval replace the low-level hand-crafted features with deep convolutional descriptors obtained from convolutional neural networks (CNNs), typically pre-trained on large-scale datasets such as ImageNet. Azizpour et al. [19] compute an image-level representation by max-pooling aggregation of the last convolutional layer of VGGNet [7] and ALEXNET [20]. Babenko and Lempitsky [21] aggregated deep convolutional descriptors to form image signatures using Fisher Vectors (FV), Triangulation Embedding (TEMB) and Sum-pooling of convolutional features (SPoC). Kalantidis et al. [22] extended this work by introducing cross-dimensional weighting in the aggregation of CNN features. The retrieval performance is further improved when the RVD-W method is used for the aggregation of CNN-based deep descriptors [18]. Tolias et al. [2] proposed to extract Maximum Activations of Convolutions (MAC) descriptors from several multi-scale overlapping regions of the last convolutional layer feature map. The region-based descriptors are L2-normalized, whitened with Principal Component Analysis (PCA) [23], L2-normalized again and finally sum-pooled to form a global signature called Regional Maximum Activations of Convolutions (RMAC). The RMAC dimensionality is equal to the number of filters of the last convolutional layer and is independent of the image resolution and the number of regions. In [24], Seddati et al. provided an in-depth study of several RMAC-based architectures and proposed a modified RMAC signature that combines multi-scale and two-layer feature extraction with feature selection. A detailed survey of content-based image retrieval (CBIR) methods based on hand-crafted and deep features is presented in [25].
C. Methods based on fine-tuned CNN descriptors
All of the aforementioned approaches use fixed pre-trained CNNs. However, these CNNs were trained for the purpose of image classification (e.g. the 1000 classes of ImageNet), in a fashion blind to the aggregation method, and hence are likely to perform sub-optimally in the task of image retrieval. To tackle this, Radenovic et al. [26] proposed to fine-tune the MAC representation using the Flickr Landmarks dataset [27]. More precisely, the MAC layer is added to the last convolutional layer of VGG or ResNet. The resultant network is then trained with a siamese architecture [26], minimizing the contrastive loss. In [5], the MAC layer is replaced by a trainable Generalized-Mean (GEM) pooling layer, which significantly boosts retrieval accuracy. In [6], Gordo et al. trained a siamese architecture with a ranking loss to enhance the RMAC representation. The recent NetVLAD [3] consists of a standard CNN followed by a Vector of Locally Aggregated Descriptors (VLAD) layer that aggregates the last convolutional features into a fixed-dimensional signature, with parameters trainable via back-propagation. Ong et al. [28] proposed SIAM-FV: an end-to-end architecture which aggregates deep descriptors using Fisher Vector pooling.
III. REMAP REPRESENTATION
The design of our REMAP descriptor addresses two issues fundamental to solving content-based image retrieval: (i) a novel aggregation mechanism for multi-layer deep convolutional features extracted by a CNN network, and (ii) an advanced assembling of multi-region and multi-layer representations with end-to-end training.
The first novelty of our approach is to aggregate a hierarchy of deep features from different CNN layers, which are explicitly trained to represent multiple and complementary levels of visual feature abstraction, significantly enhancing recognition. Importantly, our multi-layer architecture is trained fully end-to-end and specifically for recognition. This is in contrast to [24], where no end-to-end training of the CNN is performed: fixed weights of the pre-trained CNN are used as a feature extractor. The important and novel component of our REMAP architecture is multi-layer end-to-end fine-tuning, where the CNN filter weights, relative entropy weights and PCA+whitening weights are optimized simultaneously using Stochastic Gradient Descent (SGD) with the triplet loss function [6]. The end-to-end training of the CNN is critical, as it explicitly enforces intra-layer feature complementarity, significantly boosting performance. Without such joint multi-layer learning, the features from the additional layers, while coincidentally useful, are not trained to be either discriminative or complementary. The REMAP multi-layer processing can be seen in Figure 1, where multiple parallel processing strands originate from the convolutional CNN layers, each including ROI-pooling [2], L2-normalization, relative entropy weighting and sum-pooling, before being concatenated into a single descriptor.

The region entropy weighting is another important innovation proposed in our approach. The idea is to estimate how discriminative individual features are in each local region, and to use this knowledge to optimally control the subsequent sum-pooling operation. The region entropy is defined as the relative entropy between the distributions of distances for matching and non-matching image descriptor pairs, measured using the KL-divergence function [1]. The regions which provide high separability (high KL-divergence) between matching and non-matching distributions are more informative for recognition and are therefore assigned higher weights. Thanks to our entropy-controlled pooling we can combine a denser set of region-based features, without the risk of less informative regions overwhelming the best contributors. Practically, the KL-divergence weighting (KLW) block in the REMAP architecture is implemented as a convolutional layer with weights initialized by the KL-divergence values and optimized using Stochastic Gradient Descent (SGD) on the triplet loss function.
Fig. 1. REMAP architecture: the aggregated vectors are concatenated, PCA-whitened and L2-normalized to form a global image descriptor.
All blocks in the REMAP network represent differentiable operations, therefore the entire architecture can be trained end-to-end. We perform training on the Landmarks-retrieval dataset using triplet loss; please see the experimental section for full details of the datasets and the training process. Additionally, the REMAP signatures for the test datasets are encoded using the Product Quantization (PQ) [29] approach, to reduce the memory requirement and complexity of the retrieval system.
We will now describe in detail the components of the REMAP architecture, with reference to Figure 1. The architecture comprises a number of commonly used components, including the max-pool, sum-pool and L2-norm functions, denoted Maxp(x), Sump(x) and L2(x) respectively, where x represents an input tensor. We also employ the Region Of Interest (ROI) function [2], ζ : R^(w×h×d) → R^(r×d). The ROI function ζ splits an input tensor of size w × h × d into r overlapping spatial blocks using a rigid grid and performs spatial max-pooling within regions, producing a single d-dimensional vector for each region. More precisely, the ROI block extracts square regions from the CNN response map at S different scales [2]. For each scale, the regions are extracted uniformly such that the overlap between consecutive regions is as close as possible to 40%. The number of regions r extracted by the ROI block depends on the image size (1024 × 768 × 3) and the scale factor S. We performed experiments to determine the optimum number of regions for our REMAP network; it can be observed from Table I that the best retrieval accuracy is obtained with r = 40, consistently across all experiments.
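To make the rigid-grid ROI pooling concrete, the following is an illustrative Python/NumPy sketch (not the authors' code; the exact grid construction is an assumption modelled on the R-MAC scheme): square regions at S scales with roughly 40% overlap between consecutive regions, each max-pooled to a single d-dimensional vector.

```python
# Illustrative sketch of rigid-grid ROI max-pooling: square regions at S
# scales, consecutive regions overlapping by ~40%, each max-pooled over
# space to one d-dimensional region descriptor.
import numpy as np

def rigid_grid(H, W, scales=3):
    """Return (y0, y1, x0, x1) boxes of square regions at each scale."""
    boxes = []
    for s in range(1, scales + 1):
        side = int(round(2 * min(H, W) / (s + 1)))      # region side length
        if side < 1:
            continue
        ny = int(np.ceil(max(H - side, 0) / (0.6 * side))) + 1
        nx = int(np.ceil(max(W - side, 0) / (0.6 * side))) + 1
        for y in np.linspace(0, H - side, ny).astype(int):
            for x in np.linspace(0, W - side, nx).astype(int):
                boxes.append((y, y + side, x, x + side))
    return boxes

def roi_max_pool(fmap, scales=3):
    """fmap: (d, H, W) response map -> (r, d) matrix of region descriptors."""
    d, H, W = fmap.shape
    return np.stack([fmap[:, y0:y1, x0:x1].reshape(d, -1).max(axis=1)
                     for (y0, y1, x0, x1) in rigid_grid(H, W, scales)])

# Example: a 24x32 response map with 2048 channels
regions = roi_max_pool(np.random.rand(2048, 24, 32).astype(np.float32))
print(regions.shape)   # (r, 2048); r is ~40 for the settings in the paper
```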
A. CNN Layer Access Function
The base of the proposed REMAP architecture is formed by any of the existing CNNs commonly used for retrieval, for example VGG16 [7], ResNet101 [8] or ResNeXt101 [9]. All these CNNs are essentially a sequential composition of L "convolutional layers". The exact nature of each block differs between the CNNs; however, we can view each block as a function l_i : R^(w_i×h_i×d_i) → R^(w'_i×h'_i×d'_i), 1 ≤ i ≤ L, that transforms its input tensor into an output tensor, where w_i, h_i and d_i denote the width, height and depth of the input tensor of a block and w'_i, h'_i and d'_i denote the width, height and depth of its output tensor.
The CNN can then be represented as the function composition f(x) = l_L(l_{L−1}(...(l_1(x)))), where x is the input image of size w_0 × h_0 with d_0 channels. For our purposes, we would like to access the output of an intermediate convolutional layer, so we define a "layer access" function:
f_l(x) = l_l(l_{l−1}(...(l_1(x))))    (1)
where 1 ≤ l ≤ L; f_l outputs the convolutional response of layer l.
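As an illustration of the layer access function f_l in a modern framework, the following hedged sketch (assuming PyTorch/torchvision, with resnext101_32x8d standing in for the paper's ResNeXt101; our implementation uses MatConvNet) captures intermediate stage outputs with forward hooks.

```python
# Hedged sketch of the layer access function f_l of Eq. (1): forward hooks
# capture the output tensors of chosen intermediate convolutional stages.
import torch
import torchvision

model = torchvision.models.resnext101_32x8d(weights=None).eval()
captured = {}

def make_hook(name):
    def hook(module, inputs, output):
        captured[name] = output        # tensor of shape (B, d', h', w')
    return hook

# layer4 is the last convolutional stage (l_1 in our notation), layer3 the
# stage before it (l_2); both are used in the multi-layer aggregation.
model.layer4.register_forward_hook(make_hook("l1"))
model.layer3.register_forward_hook(make_hook("l2"))

with torch.no_grad():
    model(torch.randn(1, 3, 768, 1024))   # one forward pass fills `captured`

print(captured["l1"].shape, captured["l2"].shape)
```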
B. Parallel Divergence-Guided ROI Streams
The proposed REMAP architecture performs separate and distinct transformations on different CNN layer outputs via parallel divergence-guided ROI streams. Each stream takes as input the convolutional output of some CNN layer and performs ROI pooling on it. The output vectors of the ROI pooling are L2-normalized, weighted (based on their informativeness), and linearly combined to form a single aggregated representation.
Specifically, suppose we would like to use the output tensor of layer l from the CNN as input for ROI processing. Let o = f_l(x), o ∈ R^(w×h×d), be the output tensor of the CNN's l-th convolutional layer given an input image x. This is passed to the ROI block followed by the L2 block, with the result denoted r = L2(ζ(o)). The linear combination of the region vectors is then carried out by the weighted sum:
W(r) = Σ_{i=1}^{r} α_i r(i)

where r(i) denotes the i-th column of the matrix r.
In summary, the ROI stream can be defined by the following function composition:
P(x; l, α) = W(L2(ζ(f_l(x))); α)
where the set of linear combination weights is denoted α = {α_1, α_2, ..., α_r}. In this work, the linear combination weights can be initialized differently, fixed as constants, or learned in the SGD process; these choices in turn give rise to different existing CNN methods. In the RMAC [6] architecture, the weights are fixed to 1 and not optimized during the end-to-end training stage, i.e. the weight vector is α = {1, 1, ..., 1}.
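A minimal NumPy sketch of one ROI stream under the notation above (region matrix r of shape r × d, weight vector α) might look as follows; the RMAC baseline corresponds to all-ones weights.

```python
# Minimal sketch of one ROI stream P(x; l, alpha): L2-normalize the r
# region vectors, then sum them weighted by alpha.
import numpy as np

def roi_stream(regions, alpha):
    """regions: (r, d) region vectors; alpha: (r,) weights -> (d,) output."""
    r = regions / (np.linalg.norm(regions, axis=1, keepdims=True) + 1e-12)
    return (alpha[:, None] * r).sum(axis=0)   # weighted sum-pooling W(r)

regions = np.random.rand(40, 2048)
out = roi_stream(regions, np.ones(40))        # RMAC-style fixed weights
print(out.shape)                              # (2048,)
```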
A drawback of the ROI-pooling method employed in RMAC is that it gives equal importance to all regional representations regardless of their information content. We propose to measure the information gain of regions using the class-separability between the probability distributions of matching and non-matching descriptor pairs for each region. Our algorithm to determine the relative entropy weights consists of the following steps: (1) images of dimensionality 1024 × 768 × 3 are passed through the offline ResNeXt101 CNN; (2) the features from the last convolutional layers are passed to the ROI block, which splits an input tensor of size 32 × 24 × 2048 into 40 spatial blocks and performs spatial max-pooling within regions, producing a single 2048-dimensional vector per region/layer; (3) for each region and each layer, we compute Pr(y|m) and Pr(y|n), the probability density functions of observing a Euclidean distance y for a matching and a non-matching descriptor pair respectively. The KL-divergence measure is employed to compute the separability between the matching and non-matching pdfs. It can be observed from Figure 2 (a-e) that the KL-divergence values of different regions vary significantly; for example, regions 13, 26 and 30 provide better separability (higher KL-divergence) than regions 24 and 37.
We propose to assign learnable weights to the regional descriptors before aggregation into REMAP, to enhance the ability to focus on important regions of the image. We thus view our CNN as an information-flow network, in which we control the impact of the various channels based on the observed information gain. More precisely, the KL-divergence values for each region (Figure 2(f)) are used to initialize the ROI weight vector a. We enforce non-negativity on the weight vector a during the training process.
Practically, the KL-divergence weighting layer (KLW) is implemented using a convolutional operation with weights that can be learned using stochastic gradient descent on the triplet loss function.
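The following hedged sketch illustrates how the KLW initialization could be computed for one region: histogram the matching and non-matching pair distances and take the KL divergence between the two estimated distributions (the binning and the direction D_KL(match || non-match) are assumptions here).

```python
# Hedged sketch of the KLW initialization: estimate the distributions of
# Euclidean distances for matching and non-matching pairs in one region and
# take their KL divergence as the initial region weight.
import numpy as np

def kl_weight(d_match, d_nonmatch, bins=64):
    lo = min(d_match.min(), d_nonmatch.min())
    hi = max(d_match.max(), d_nonmatch.max())
    pm, _ = np.histogram(d_match, bins=bins, range=(lo, hi))
    pn, _ = np.histogram(d_nonmatch, bins=bins, range=(lo, hi))
    pm = (pm + 1e-10) / (pm.sum() + bins * 1e-10)   # smoothed pmfs
    pn = (pn + 1e-10) / (pn.sum() + bins * 1e-10)
    return float(np.sum(pm * np.log(pm / pn)))

# A well-separated region earns a larger initial weight than a weak one
w_strong = kl_weight(np.random.normal(0.6, 0.1, 5000),
                     np.random.normal(1.2, 0.1, 5000))
w_weak = kl_weight(np.random.normal(0.9, 0.2, 5000),
                   np.random.normal(1.0, 0.2, 5000))
print(w_strong > w_weak)   # True
```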
C. Final REMAP Architecture
We can now describe the proposed multi-stream REMAP. At the base is an existing Convolutional Neural Network (VGG or ResNet). The CNNs are essentially a sequential composition of L "convolutional layers", N of which are used in aggregation (N ≤ L). The output tensor of convolutional layer l can be accessed using f_l (Eq. 1). We denote the N CNN layers used in aggregation as {l_1, l_2, ..., l_N}, where each l_i is one of the network's L convolutional layers.
Associated with each of the above CNN layers l_i ∈ {l_1, l_2, ..., l_N} is a set of ROI linear-combination coefficients α_{l_i} = {α_{l_i,1}, ..., α_{l_i,r}}. As a result, we have N parallel ROI streams, each with output P(x; l_i, α_{l_i}). The outputs of the N ROI streams are concatenated into a single high-dimensional vector p = [P(x; l_1, α_{l_1}), ..., P(x; l_N, α_{l_N})]^T. We then pass p to a fully connected layer whose weights are initialized with the PCA+whitening coefficients [6].
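A minimal sketch of the concatenation and PCA+whitening-initialized FC layer follows (PyTorch assumed, toy sizes; in the paper each stream is 2048-D and our implementation uses MatConvNet).

```python
# Minimal sketch: concatenate N ROI-stream outputs and pass them through a
# fully connected layer initialized from PCA+whitening of training data.
import torch
import torch.nn as nn

d, N, out_dim = 256, 2, 256
fc = nn.Linear(N * d, out_dim, bias=True)

def init_fc_pca_whitening(fc, X):
    """X: (n, N*d) training descriptors with n >= fc.out_features."""
    mu = X.mean(dim=0)
    U, S, Vt = torch.linalg.svd(X - mu, full_matrices=False)
    scale = ((len(X) - 1) ** 0.5) / S[:fc.out_features]  # 1/sqrt(eigenvalue)
    W = scale[:, None] * Vt[:fc.out_features]
    with torch.no_grad():
        fc.weight.copy_(W)
        fc.bias.copy_(-W @ mu)          # so that fc(x) = W (x - mu)

init_fc_pca_whitening(fc, torch.randn(1024, N * d))
p = torch.cat([torch.randn(1, d), torch.randn(1, d)], dim=1)  # [P(x;l1), P(x;l2)]
desc = nn.functional.normalize(fc(p))   # whitened, L2-normalized descriptor
print(desc.shape)                        # torch.Size([1, 256])
```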
In Table II, we perform experiments on Holidays, Oxford and MPEG to demonstrate how different convolutional layers of the off-the-shelf ResNeXt101 perform when combined within the REMAP architecture. It is interesting to note that, individually, the best retrieval accuracy on the Holidays and MPEG datasets is provided by layer 2, and not by the bottleneck layer 1. Layer 1 (the last convolutional layer) delivers the best performance only on the Oxford dataset. The performance of layer 3 is lowest, since it is too sensitive to local deformations. However, the philosophy of our design is to combine different convolutional layers, so we investigate the performance of such combinations (shown in the lower half of the table). It can be observed from Table II that multi-layer REMAP significantly outperforms any single-layer representation. In the final REMAP representation we use the combination of the last two convolutional layers (layers 1+2), trained jointly, as this provides the best balance between retrieval accuracy and the computational complexity of the training process. In Figure 3, we visualize the maximum activation responses of the last two convolutional layers of the off-the-shelf ResNeXt101. It can be seen that the two layers focus on different but important features of the object, justifying our multi-layer aggregation (MLA) approach.

D. End-to-End Siamese learning for image retrieval

An important feature of the REMAP architecture is that all its blocks are designed to represent differentiable operations. The fixed-grid ROI pooling is differentiable [30]. Our novel KL-divergence weighting (KLW) component can be implemented as a 1D convolutional layer with weights that can be optimized. The sum-pooling of regional descriptors, L2-normalization and concatenation of multi-layer descriptors are also differentiable. The PCA+whitening transformation can be implemented using a fully connected (FC) layer with bias. Therefore, we can learn the CNN filter weights and the REMAP parameters (KLW weights and FC layer weights) simultaneously using SGD on the triplet loss function, adapting to the evolving distributions of deep features and optimizing the multi-region aggregation parameters over the course of training.
We proceed by removing the last pooling layer, prediction layer and loss layer from ResNeXt101 (trained on ImageNet) and adding the REMAP pipeline to the last two convolutional layers. We then adopt a three-stream siamese architecture to fine-tune the REMAP network using triplet loss [6]. We are given a training dataset of T triplets of images; each triplet consists of a query image, a matching image and the closest non-matching image (the non-matching image whose descriptor is most similar to the query descriptor). Let p_q be the REMAP descriptor extracted from the query image, p_m the descriptor of the matching image, and p_n the descriptor of the non-matching image. The triplet loss is computed as:
L = 0.5 · max(0, th + ||p_q − p_m||² − ||p_q − p_n||²),    (2)
where the parameter th controls the margin of the classifier: it is the distance threshold defining when the gap between matching and non-matching pairs is large enough for the triplet to incur no loss. The gradients of the loss L are back-propagated through the three streams of the REMAP network, and the parameters of the convolutional layers, the KLW layer and the PCA+whitening layer are updated.
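Equation (2) translates directly into a few lines of code (PyTorch assumed here; th = 0.1 is the margin used in our training).

```python
# Direct transcription of the triplet loss of Eq. (2).
import torch

def triplet_loss(p_q, p_m, p_n, th=0.1):
    d_pos = ((p_q - p_m) ** 2).sum(dim=1)    # ||p_q - p_m||^2
    d_neg = ((p_q - p_n) ** 2).sum(dim=1)    # ||p_q - p_n||^2
    return 0.5 * torch.clamp(th + d_pos - d_neg, min=0).mean()

q = torch.nn.functional.normalize(torch.randn(4, 2048), dim=1)
m = torch.nn.functional.normalize(q + 0.1 * torch.randn(4, 2048), dim=1)
n = torch.nn.functional.normalize(torch.randn(4, 2048), dim=1)
print(triplet_loss(q, m, n))    # close to zero for easy negatives
```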
E. Compact REMAP signature
Encoding a high-dimensional image representation as a compact signature benefits storage, extraction and matching speed, especially in large-scale image retrieval. This section focuses on deriving a small-footprint image descriptor from the core REMAP representation. In the first approach, we pass an image through the REMAP network to obtain a D-dimensional descriptor and select the top d dimensions out of D.
The second approach is based on the Product Quantization (PQ) algorithm [17], in which the D-dimensional REMAP descriptor is first split into m sub-parts of equal length D/m. Each sub-part is quantized using a separate K-means quantizer with k = 256 cluster centres and encoded using n = log2(k) bits. The size of the PQ-embedded signature is B = m × n bits. At test time, the similarity between the query vector and database vectors is computed using Asymmetric Distance Computation [17].
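The following is a compact, hedged sketch of PQ encoding and ADC search (NumPy/SciPy; toy sizes, with k reduced from the paper's 256 for speed).

```python
# Hedged sketch of Product Quantization (PQ) and Asymmetric Distance
# Computation (ADC): split each descriptor into m sub-vectors, quantize
# each with its own k-means codebook, and search with per-subspace lookups.
import numpy as np
from scipy.cluster.vq import kmeans2

def pq_train(X, m, k=64):
    d = X.shape[1] // m
    return [kmeans2(X[:, i*d:(i+1)*d], k, minit='points')[0] for i in range(m)]

def pq_encode(X, codebooks):
    d = X.shape[1] // len(codebooks)
    cols = [np.argmin(((X[:, i*d:(i+1)*d, None] -
                        C.T[None]) ** 2).sum(axis=1), axis=1)
            for i, C in enumerate(codebooks)]
    return np.stack(cols, axis=1).astype(np.uint8)   # m bytes per image

def adc(q, codes, codebooks):
    d = len(q) // len(codebooks)
    # one lookup table per subspace: query sub-vector vs. every centroid
    tables = [((q[i*d:(i+1)*d] - C) ** 2).sum(axis=1)
              for i, C in enumerate(codebooks)]
    return sum(t[codes[:, i]] for i, t in enumerate(tables))

X = np.random.rand(500, 512).astype(np.float32)   # toy descriptors
cb = pq_train(X, m=8)                              # 8 sub-quantizers
codes = pq_encode(X, cb)
print(np.argmin(adc(X[0], codes, cb)))             # 0: the query finds itself
```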
IV. EXPERIMENTAL EVALUATION
The purpose of this section is to evaluate the proposed REMAP architecture and compare it against the latest state-of-the-art CNN architectures. We first present the experimental setup, including the datasets and evaluation protocols. We then analyze the impact of the novel components that constitute our method, namely KL-divergence based weighting of region descriptors and multi-layer aggregation. Furthermore, we compare the retrieval performance of different feature aggregation methods, including MAC, RMAC, Fisher Vectors and REMAP, on four varied datasets with up to 1 million distractors. A comparison with different CNN representations is presented at the end of this section.
A. Training datasets
We train on a subset of the Landmarks dataset used in the work of Babenko et al. [31], which contains approximately 100k images depicting 650 famous landmarks. It was collected through textual queries in the Yandex image search engine, and therefore contains a significant proportion of images unrelated to the landmarks, which we filter out and remove. Furthermore, to guarantee unbiased test results we exclude all images that overlap with the MPEG, Holidays and Oxford5k datasets used in testing. We call this subset the Landmarks-retrieval dataset.
The process of removing images unrelated to the landmarks and generating a list of matching image pairs for triplet generation is semi-automatic, and relies on local SIFT features detected with a Hessian-affine detector and aggregated with the RVDW descriptor [18]. For each of the 650 landmark classes we manually select a query image depicting the particular landmark and compute its similarity (based on the RVDW global descriptors) to all remaining images in the same class. We then remove the images whose distance from the query is greater than a certain threshold (outliers), forming the Landmarks-retrieval subset of 25k images.
To generate matching image pairs we randomly select fifty image pairs from each class in the Landmarks-retrieval dataset. The RANSAC algorithm is applied to matching SIFT descriptors in order to filter out pairs that are too difficult to match (fewer than 5 inliers: extreme hard examples) or very easy to match (more than 30 inliers: extreme easy examples). In this way, about 15k matching image pairs are selected for fine-tuning with the triplet loss function.
B. Training configurations
We use the MATLAB toolbox MatConvNet [32] to perform training and evaluation. The state-of-the-art networks VGG16, ResNet101 and ResNeXt101 (all pre-trained on ImageNet) are downloaded in MATLAB format, and their batch-normalization layers are merged into the preceding convolutional layers for fine-tuning.
Finetuning with triplet loss
Each aforementioned CNN is integrated with the REMAP network and the entire architecture is fine-tuned on the Landmarks-retrieval dataset with triplet loss. The images are resized to 1024 × 768 pixels before passing through the network. Optimization is performed by the Stochastic Gradient Descent (SGD) algorithm with momentum 0.9, learning rate 10^-3 and weight decay 5 × 10^-5. The triplet loss margin is set to 0.1.
An important consideration during the training process is the generation of triplets, as generating them randomly will mostly yield triplets that incur no loss. To address this, we divide the 15k matching image pairs from the Landmarks-retrieval dataset into 5 groups. REMAP descriptors are extracted from the 25k images using the current model. For each matching pair, the closest non-matching (hard negative) example is then chosen, forming a triplet: query example, matching example, non-matching example. The hard negatives are re-mined once per group, i.e. after every 3000 triplets.
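A hedged sketch of this hard-negative mining step follows (NumPy; the pool size and the exclusion rule are simplified here).

```python
# Hard-negative mining sketch: for each matching pair (q, m), the closest
# image of a different landmark class becomes the negative of the triplet.
import numpy as np

def mine_hard_negatives(desc, pairs, labels):
    """desc: (n, d) L2-normalized descriptors, labels: (n,) landmark ids."""
    sims = desc @ desc.T                       # cosine similarities
    triplets = []
    for q, m in pairs:
        cand = sims[q].copy()
        cand[labels == labels[q]] = -np.inf    # exclude all matching images
        triplets.append((q, m, int(cand.argmax())))
    return triplets

n, d = 1000, 128
desc = np.random.randn(n, d)
desc /= np.linalg.norm(desc, axis=1, keepdims=True)
labels = np.random.randint(0, 50, n)
labels[[1, 3, 5]] = labels[[0, 2, 4]]          # make the toy pairs match
print(mine_hard_negatives(desc, [(0, 1), (2, 3), (4, 5)], labels))
```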
Another consideration is the memory requirement during training, as the network trains with an image size of 1024×768 pixels and three streams at the same time. Fine-tuning the deep architectures VGG16, ResNet101 and ResNeXt101 is memory-consuming, and we could fit only one triplet at a time on a TITAN X GPU with 12 GB of memory. To make the training process effective, we update the model parameters after every 64 triplets. The training process takes approximately 3 days to complete.
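In a framework such as PyTorch (an assumption; our implementation uses MatConvNet), this corresponds to standard gradient accumulation, as the following toy sketch shows.

```python
# Gradient accumulation sketch: one triplet fits in memory at a time, and
# the optimizer steps once per 64 accumulated triplets.
import torch

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 16))
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9,
                      weight_decay=5e-5)
accum = 64
opt.zero_grad()
for step in range(128):
    q, m, n = (torch.randn(1, 3, 8, 8) for _ in range(3))   # one toy triplet
    pq, pm, pn = model(q), model(m), model(n)
    loss = 0.5 * torch.clamp(0.1 + ((pq - pm) ** 2).sum()
                             - ((pq - pn) ** 2).sum(), min=0)
    (loss / accum).backward()          # accumulate scaled gradients
    if (step + 1) % accum == 0:
        opt.step()                     # effective batch of 64 triplets
        opt.zero_grad()
```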
C. Test datasets
The INRIA Holidays dataset [10] contains 1491 holiday photos with a subset of 500 used as queries. Retrieval accuracy is measured by mean Average Precision (mAP), as defined in [11]. To evaluate model retrieval accuracy in a more challenging scenario, the Holidays dataset is combined with 1 million distractor images obtained from Flickr, forming Holidays1M [18].
The University of Kentucky Benchmark (UKB) [33] dataset comprises 10200 images of 2550 objects. Here the performance measure is the average number of relevant images returned in the first 4 positions (4 × Recall@4).
The Oxford5k dataset [11] contains 5063 images of Oxford landmarks. Performance is evaluated using mAP over 55 queries. To test large-scale retrieval, this dataset is augmented with 100k and 1 million Flickr images [34], forming the Oxford105k [11] and Oxford1M [18] datasets respectively. We follow the standard protocol for the Oxford dataset and compute the image signatures of query images using the cropped-activations method [3], [26].
The Motion Picture Experts Group (MPEG) has developed a heterogeneous and challenging MPEG CDVS dataset for evaluating the retrieval performance of image signatures [12]. The dataset contains 33590 images from five categories: (1) graphics, including book and DVD covers, documents and business cards; (2) photographs of paintings; (3) video frames; (4) landmarks; and (5) common objects. A total of 8313 queries are used to evaluate the retrieval performance in terms of mAP.
The dimensionality of input images to the CNN is limited to 1024×768 pixels. In order to illustrate clearly and fairly the benefits of the novel elements proposed in our framework, we selected the best state-of-the-art RMAC representation and integrated it with the latest ResNeXt101 architecture. We then performed fine-tuning, using the procedures outlined before. Please note that our fully optimized and fine-tuned RMAC representation outperforms results reported earlier on the Oxford5k, MPEG and Oxford105k datasets, and we use this improved result as the reference performance shown in Table III and Table IV; it represents the best state-of-the-art. We then introduce the REMAP innovations, KL-divergence based weighting (KLW) and multi-layer aggregation (MLA), and show the relative performance gains. Finally we combine all novel elements to show the overall improvement over the baseline RMAC.
KL-divergence based weighting (KLW): We performed experiments to show that initializing the KLW block with relative entropy weights and then further optimizing the weights using SGD is crucial for achieving optimum retrieval performance. We trained the following networks for comparison:

• The baseline is the RMAC representation, in which the ROI weights are fixed (a = {1, 1, ..., 1}) and not optimized during the training process.
• In the second network, RMAC+SGD, the weights are initialized with 1 (a = {1, 1, ..., 1}) and optimized using SGD on the triplet loss function.
• In the final network, KLW, the weights are initialized with the KL-divergence values and further optimized using SGD on the triplet loss.

It can be observed from Table III that initialization of the KLW block with relative entropy weights is indeed crucial for network convergence and achieves the best retrieval accuracy on all datasets. Furthermore, RMAC+SGD is not able to learn the optimum regional weights, resulting in only a marginal improvement over RMAC. This is a very interesting result, which shows that optimizing the loss function alone may not always lead to optimal results, and that initializing the network using the information gain may lead to improved performance.
Multi-layer aggregation (MLA): Next, we perform experiments to show the advantage of multi-layer aggregation of deep features (MLA). It can be observed from Table IV that MLA brings improvements of +1.8%, +2.6% and +2.4% on the Oxford5k, MPEG and Oxford105k datasets, compared to the single-layer aggregation employed in RMAC.
Finally, we combine the KLW and MLA blocks to form our novel REMAP signature and compare its retrieval accuracy with the RMAC reference signature, as a function of descriptor dimensionality. Figure 4 clearly demonstrates that REMAP significantly outperforms RMAC on all benchmark datasets.
E. Multi-Scale REMAP (MS-REMAP)
In this section, we evaluate the retrieval performance of the MS-REMAP representation, computed at test time without any further training. In MS-REMAP, the descriptors are extracted from images re-sized to two different scales and then aggregated into a single signature [6]. More precisely, let X_1 and X_2 be REMAP descriptors extracted from two images of sizes 1024×768 and 1280×960 pixels respectively. The MS-REMAP descriptor X_m is computed by weighted aggregation of X_1 and X_2:
$$X_m = (2 \times X_1) + (1.4 \times X_2). \qquad (3)$$
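For illustration, a minimal NumPy sketch of Eq. (3) could look as follows; the `remap_signature` helper and the final re-normalization are assumptions made for the sake of the example, not details taken from the original pipeline:

```python
import numpy as np

def multi_scale_remap(image, remap_signature):
    """Sketch of Eq. (3): weighted aggregation of two single-scale REMAP
    signatures. remap_signature(image, size) is a hypothetical helper that
    resizes the image and returns its REMAP descriptor."""
    x1 = remap_signature(image, size=(1024, 768))
    x2 = remap_signature(image, size=(1280, 960))
    xm = 2.0 * x1 + 1.4 * x2          # weighted aggregation, Eq. (3)
    return xm / np.linalg.norm(xm)    # re-normalization (our assumption)
```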
It can be observed from Figure 5 that multi-scale representation brings an average gain of 1%, 1.8% and 1.5% on Holidays, Oxford5k and MPEG datasets compared to single scale representation.
F. Comparison of MAC, Fisher Vector, RMAC and REMAP networks
In this section we compare the best network, REMAP, with state-of-the-art representations: MAC [2], RMAC [6] and FV [4]. All the networks are trained end-to-end on the Landmarks-retrieval dataset using the triplet loss. We use the Multi-Scale version of all representations. In the MAC pipeline, a MAX-pooling block is added to the last convolutional layer of ResNeXt101. The MAX-pooling block is followed by PCA+Whitening and L2-Normalization blocks. The dimensionality of the output descriptor is 2048-D.
For the Fisher Vector method, the last convolutional layer is followed by a Fisher Vector aggregation block, a PCA+Whitening block and an L2-Normalization block. 16 cluster centers are used for the Fisher Vector GMM model, with their parameters initialized using the EM algorithm. This results in an FV of dimensionality 32k. The parameters of the CNN and the Fisher Vectors are trained using stochastic gradient descent on the triplet loss function.
In the RMAC pipeline, the features of the last convolutional layer are passed through a rigid-grid ROI-pooling block. The region-based descriptors are L2-normalized, whitened with PCA and L2-normalized again. Finally, the normalized descriptors are aggregated by a sum-pooling block, resulting in a 2048-dimensional signature.
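A rough sketch of this aggregation chain is given below; the grid regions and the whitening parameters are passed in as hypothetical inputs (in the paper they are part of the end-to-end trained network):

```python
import numpy as np

def rmac(feature_map, regions, pca_mean, pca_matrix):
    """Sketch of the RMAC aggregation described above.
    feature_map: (H, W, D) activations of the last convolutional layer;
    regions: list of (y0, y1, x0, x1) rigid-grid ROIs;
    pca_mean, pca_matrix: PCA+whitening parameters (hypothetical inputs)."""
    pooled = []
    for y0, y1, x0, x1 in regions:
        r = feature_map[y0:y1, x0:x1, :].max(axis=(0, 1))  # ROI max-pooling
        r = r / (np.linalg.norm(r) + 1e-12)                # L2-normalize
        r = pca_matrix @ (r - pca_mean)                    # PCA + whitening
        r = r / (np.linalg.norm(r) + 1e-12)                # L2-normalize again
        pooled.append(r)
    return np.sum(pooled, axis=0)                          # sum-pooling -> 2048-D
```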
Several conclusions can be drawn from Figure 6.
G. Convolutional Neural Network architectures
In this section, we evaluate the performance of three state-of-the-art CNN architectures, VGG16, ResNet101 and ResNeXt101, when combined with our REMAP network. All the networks are trained end-to-end on the Landmarks-retrieval dataset. We use the Multi-Scale representation of REMAP to compare the CNNs. From the results shown in Figure 7 we can observe that all three CNNs perform well on the Holidays and Oxford datasets. The lower performance on MPEG can be attributed to the fact that MPEG is a very diverse dataset (graphics, paintings, videos, landmarks and objects), while our networks are fine-tuned only on landmarks. ResNeXt101 outperforms ResNet101 and VGG16 on all three datasets.
H. Large scale experiments
Figure 8 demonstrates the performance of our method on the large-scale datasets Holidays1M, Oxford1M and MPEG1M. The retrieval performance (mAP) is presented as a function of database size. We show the results for four methods:
• the REMAP descriptor truncated to D = 128;
• the RMAC descriptor truncated to D = 128;
• the REMAP descriptor compressed to 16
It can be observed that the proposed REMAP outperforms all prior state-of-the-art methods. Compared to RMAC [6], REMAP provides a significant improvement of +5%, +1% and +7.8% in mAP on the Oxford, Holidays and MPEG datasets. Furthermore, the REMAP signature achieves gains of +3.7%, +1.6% and +6% on the Oxford, Holidays and MPEG datasets over the recently published GEM signature [5]. The difference in retrieval accuracy between REMAP and GEM is even more significant on the large-scale datasets: Holidays100k (+3%) and Oxford105k (+5%). We also compare REMAP with our implementation ResNeXt+RMAC, and the results show that the REMAP representation is more robust and discriminative.
REMAP also formed the core of the winning submission to the Google Landmark Retrieval Challenge on Kaggle [35]. This gave us an opportunity to experimentally compare its performance on the Google landmark dataset [36]. This new dataset is the largest worldwide dataset for image retrieval research, comprising more than a million images of 15K unique landmarks. More importantly, the evaluation was performed by Kaggle on a private, unseen subset, preventing (unintentional) over-training. We evaluated REMAP, RMAC, MAC and SPoC aggregation applied to the ResNeXt network, without any additional modules (no query expansion or database augmentation); the results are shown in Table V. The REMAP architecture achieves a mAP of 42.8% and offers an over 8% gain over the closest competitor, RMAC. The classical SIFT-based system with geometric verification achieved only 12%, clearly illustrating the gain brought by the CNN-based architectures. Our winning solution, which combined multiple signatures with query expansion (QE) and database augmentation (DA), achieved a mAP of 62.7%. It has recently become standard to use Query Expansion (QE) [6], [5] to improve retrieval accuracy. We applied QE to the REMAP representation, and it can be observed from Table VI that REMAP+QE outperforms the state-of-the-art results RMAC+QE [6] and GEM+QE [5].
For visualization purposes, Figure 9 shows 5 queries from the Holidays100K and MPEG datasets where the difference in recall between REMAP and RMAC is the largest. We show the query and the top-ranked results obtained by the REMAP and RMAC representations for these queries; correct matches are framed in green.
Compact image representation
This section focuses on a comparison of compact image signatures which are practicable in large-scale retrieval containing millions of images.
VI. CONCLUSION
In this paper we propose a novel CNN-based architecture, called REMAP, which learns a hierarchy of deep features representing different and complementary levels of visual abstraction. We aggregate a dense set of such multi-level CNN features, pooled within multiple spatial regions, and combine them with weights reflecting their discriminative power. The weights are initialized with KL-divergence values for each spatial region and optimized end-to-end using SGD, jointly with the CNN features. The entire framework is trained in an end-to-end fashion using the triplet loss, and extensive tests demonstrate that REMAP significantly outperforms the latest state-of-the-art.
| 6,066 |
1810.06325
|
2897282582
|
Artificial sound event detection (SED) aims to mimic the human ability to perceive and understand what is happening in the surroundings. Nowadays, deep learning offers valuable techniques for this goal, such as convolutional neural networks (CNNs). The capsule neural network (CapsNet) architecture has been recently introduced in the image processing field with the intent to overcome some of the known limitations of CNNs, specifically regarding the scarce robustness to affine transformations (i.e., perspective, size, and orientation) and the detection of overlapped images. This motivated the authors to employ CapsNets to deal with the polyphonic SED task, in which multiple sound events occur simultaneously. Specifically, we propose to exploit the capsule units to represent a set of distinctive properties for each individual sound event. Capsule units are connected through a so-called dynamic routing that encourages learning part-whole relationships and improves the detection performance in a polyphonic context. This paper reports extensive evaluations carried out on three publicly available datasets, showing how the CapsNet-based algorithm not only outperforms standard CNNs but also achieves the best results with respect to the state-of-the-art algorithms.
|
The use of deep learning models has been motivated by the increased availability of datasets and computational resources, and has resulted in significant performance improvements. Methods based on CNNs and RNNs have established the new state-of-the-art performance on the SED task, thanks to their capability to learn the non-linear relationship between time-frequency features of the audio signal and a target vector representing the sound events. In @cite_3, the authors show how "local" patterns can be learned by a CNN and exploited to improve the performance of detection and classification of non-speech acoustic events occurring in conversation scenes, in particular compared to an FNN-based system which processes multiple-resolution spectrograms in parallel.
|
{
"abstract": [
"In recent years, deep learning has not only permeated the computer vision and speech recognition research fields but also fields such as acoustic event detection (AED). One of the aims of AED is to detect and classify non-speech acoustic events occurring in conversation scenes including those produced by both humans and the objects that surround us. In AED, deep learning has enabled modeling of detail-rich features, and among these, high resolution spectrograms have shown a significant advantage over existing predefined features (e.g., Mel-filter bank) that compress and reduce detail. In this paper, we further asses the importance of feature extraction for deep learning-based acoustic event detection. AED, based on spectrogram-input deep neural networks, exploits the fact that sounds have “global” spectral patterns, but sounds also have “local” properties such as being more transient or smoother in the time-frequency domain. These can be exposed by adjusting the time-frequency resolution used to compute the spectrogram, or by using a model that exploits locality leading us to explore two different feature extraction strategies in the context of deep learning: (1) using multiple resolution spectrograms simultaneously and analyzing the overall and event-wise influence to combine the results, and (2) introducing the use of convolutional neural networks (CNN), a state of the art 2D feature extraction model that exploits local structures, with log power spectrogram input for AED. An experimental evaluation shows that the approaches we describe outperform our state-of-the-art deep learning baseline with a noticeable gain in the CNN case and provides insights regarding CNN-based spectrogram characterization for AED."
],
"cite_N": [
"@cite_3"
],
"mid": [
"1846473900"
]
}
|
Polyphonic Sound Event Detection by using Capsule Neural Networks
|
Human cognition relies on the ability to sense, process, and understand the surrounding environment and its sounds. Although the skill of listening and understanding the origin of sounds is natural for living beings, it remains a very challenging task for computers.
Sound event detection (SED), or acoustic event detection, aims to mimic this cognitive feature by means of artificial systems. Basically, a SED algorithm is designed to detect the onset and offset times of a variety of sound events captured in an audio recording and to associate a textual descriptor, i.e., a label, with each of these events.
In recent years, SED has received interest from the computational auditory scene analysis community [1], due to its potential in several engineering applications. Indeed, the automatic recognition of sound events and scenes can have a considerable impact in a wide range of applications where sound sensing is advantageous with respect to other modalities. This is the case for acoustic surveillance [2], healthcare monitoring [3], [4] and urban sound analysis [5], where the short duration of certain events (e.g., a human fall, a gunshot or breaking glass) or personal privacy motivates the exploitation of audio information rather than, e.g., image processing. In addition, audio processing is often less computationally demanding than other multimedia domains, so embedded devices can easily be equipped with microphones and sufficient computational capacity to locally process the captured signal. These could be smart home devices for home automation purposes or sensors for wildlife and biodiversity monitoring (e.g., bird call detection [6]).
SED algorithms face many challenges in real-life scenarios. These include simultaneous events, environmental noise and events of the same class produced by different sources [7]. Since multiple events are very likely to overlap, a polyphonic SED algorithm, i.e., an algorithm able to detect multiple simultaneous events, needs to be designed. The effects of noise and intra-class variability represent further challenges for SED in real-life situations.
Traditionally, polyphonic acoustic event analysis has been approached with statistical modelling methods, including Hidden Markov Models (HMMs) [8], Gaussian Mixture Models (GMMs) [9], Non-negative Matrix Factorization (NMF) [10] and Support Vector Machines (SVMs) [11]. In the recent era of deep learning, different neural network architectures have been successfully used for sound event detection and classification tasks, including feed-forward neural networks (FNNs) [12], deep belief networks [13], convolutional neural networks (CNNs) [14] and recurrent neural networks (RNNs) [15]. In addition, these architectures laid the foundation for end-to-end systems [16], [17], in which the feature representation of the audio input is automatically learnt from the raw audio waveforms.
B. Contribution
The proposed system is a fully data-driven approach based on the CapsNet deep neural architecture presented by Sabour et al. [23]. This architecture has shown promising results on the classification of highly overlapped digit images. In the audio field, a similar condition can be found in the detection of multiple concomitant sound events from acoustic spectral representations; we therefore propose to employ the CapsNet for polyphonic SED in real-life recordings. The novel computational structure based on capsules, combined with the routing mechanism, allows the network to be invariant to intra-class affine transformations and to identify part-whole relationships between data features. In the SED case study, it is hypothesized that this characteristic confers on the CapsNet the ability to effectively select the most representative spectral features of each individual sound event and to separate them from the overlapped descriptions of the other sounds in the mixture. This hypothesis is supported by the previously mentioned related works. Specifically, in [21], the CapsNet is exploited to predict the presence of heterogeneous polyphonic sounds (i.e., bird calls) in unseen audio files recorded under various conditions. In [22], dynamic routing yields promising results for SED with a weakly labeled training dataset, i.e., with ground truths for the onset and offset times of the sound events unavailable. The algorithm has to detect sound events without supervision, and in this context the routing can be considered an attention mechanism.
In this paper, we present an extensive analysis of SED conducted on real-life audio datasets and compare the results with state-of-the-art methods. In addition, we propose a variant of the dynamic routing procedure which takes into account the temporal dependence of adjacent frames. The proposed method outperforms previous SED approaches in terms of detection error rate in the case of polyphonic SED, while it has comparable performance with respect to CNNs in the case of monophonic SED.
The whole system is composed of a feature extraction stage and a detection stage. The feature extraction stage transforms the audio signal into acoustic spectral features, while the second stage processes these features to detect the onset and offset times of specific sound events. In this latter stage we include the capsule units. The network parameters are obtained by supervised learning using annotations of sound events activity as target vectors. We have evaluated the proposed method against three datasets of real-life recordings and we have compared its performance both with the results of experiments with a traditional CNN architecture, and with the performance of well-established algorithms which have been assessed on the same datasets.
The rest of the paper is organized as follows. In Section II the task of polyphonic SED is formally described and the stages of the proposed approach are detailed, including a presentation of the characteristics of the CapsNet architecture. In Section III, we present the evaluation set-up used to assess the performance of the proposed algorithm and the comparative methods. In Section IV the results of the experiments are discussed and compared with the baseline methods. Section V finally presents our conclusions.
II. PROPOSED METHOD
The aim of polyphonic SED is to find and classify any sound event present in an audio signal. The algorithm we propose is composed of two main stages: sound representation and polyphonic detection. In the sound representation stage, the audio signal is transformed into a two-dimensional time-frequency representation to obtain, for each frame t of the audio signal, a feature vector x_t ∈ R^F, where F represents the number of frequency bands.
Sound events possess temporal characteristics that can be exploited for SED; certain events can be efficiently distinguished by their time evolution. Impulsive sounds are extremely compact in time (e.g., gunshot, object impact), while other sound events have indefinite length (e.g., wind blowing, people walking). Other events can be distinguished by their spectral evolution (e.g., bird singing, car passing by). Long-term time-domain information is very beneficial for SED and motivates the use of a temporal context, allowing the algorithm to extract information from a chronological sequence of input features. Consequently, the inputs are presented as a context-window matrix X_{t:t+T−1} ∈ R^{T×F×C}, where T ∈ N is the number of frames that defines the sequence length of the temporal context, F ∈ N is the number of frequency bands and C is the number of audio channels. The target output matrix is defined as Y_{t:t+T−1} ∈ N^{T×K}, where K is the number of sound event classes.
In the SED stage, the task is to estimate the probabilities p(Y_{t:t+T−1} | X_{t:t+T−1}, θ) ∈ R^{T×K}, where θ denotes the parameters of the neural network. The network outputs, i.e., the event activity probabilities, are then compared with a threshold in order to obtain event activity predictions Ŷ_{t:t+T−1} ∈ N^{T×K}. The parameters θ are trained by supervised learning, using the frame-based annotations of the sound event classes as target output: if class k is active during frame t, Y(t, k) is equal to 1, and is set to 0 otherwise. The polyphonic SED case implies that this target output matrix can have multiple non-zero elements in the same frame t, since several classes can be simultaneously present.
Indeed, polyphonic SED can be formulated as a multi-label classification problem in which the sound event classes are detected through multi-label annotations over consecutive time frames. The onset and offset times for each sound event are obtained by combining the classification results over consecutive time frames. The trained model is then used to predict the activity of the sound event classes in an audio stream without any further post-processing operations or prior knowledge of the event locations.
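A minimal sketch of how such a frame-wise multi-label target matrix could be built is given below; the annotation format and the helper itself are hypothetical:

```python
import numpy as np

def frame_targets(annotations, n_frames, n_classes, hop=0.02):
    """Hypothetical helper: build the frame-wise multi-label target matrix Y.
    annotations: list of (onset_sec, offset_sec, class_index) tuples;
    hop: frame step in seconds (20 ms, as in Section II-A)."""
    Y = np.zeros((n_frames, n_classes), dtype=np.int8)
    for onset, offset, k in annotations:
        t0 = int(onset / hop)
        t1 = int(np.ceil(offset / hop))
        Y[t0:min(t1, n_frames), k] = 1   # several classes may be 1 in one frame
    return Y
```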
A. Feature Extraction
For our purpose, we exploit two acoustic spectral representations, the magnitude of the Short-Time Fourier Transform (STFT) and the LogMel coefficients, computed from all the audio channels and extensively used in other SED algorithms. Except where stated otherwise, we study the performance of binaural audio features and compare it with that of features extracted from a single-channel audio signal. In all cases, we operate on audio signals sampled at 16 kHz and compute the STFT with a frame size of 40 ms and a frame step of 20 ms. Furthermore, the audio signals are normalized to the range [−1, 1] in order to have the same dynamic range for all the recordings.
The STFT is computed on 1024 points for each frame, while LogMel coefficients are obtained by filtering the STFT magnitude spectrum with a filter-bank composed of 40 filters evenly spaced in the mel frequency scale. In both cases, the logarithm of the energy of each frequency band is computed. The input matrix X t:t+T −1 concatenates T = 256 consequent STFT or LogMel vectors for each channel C = {1, 2}, thus the resulting feature tensor is X t:t+T −1 ∈ R 256×F ×C , where F is equal to 513 for the STFT and equal to 40 for the LogMels. The range of feature values is then normalized according to the mean and the standard deviation computed on the training sets of the neural networks.
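The feature extraction described above could be sketched with librosa roughly as follows; the non-overlapping slicing into context windows and the use of band energies are our reading of the text:

```python
import numpy as np
import librosa

def extract_features(path, use_logmel=True):
    """Sketch of the feature extraction described above (parameter names
    follow librosa's API)."""
    y, sr = librosa.load(path, sr=16000, mono=True)
    y = y / (np.abs(y).max() + 1e-12)             # normalize to [-1, 1]
    P = np.abs(librosa.stft(y, n_fft=1024,
                            win_length=640,        # 40 ms frames
                            hop_length=320)) ** 2  # 20 ms step -> 513 bands
    if use_logmel:
        mel = librosa.filters.mel(sr=sr, n_fft=1024, n_mels=40)
        P = mel @ P                                # 40 mel band energies
    return np.log(P + 1e-10).T                     # log-energies, (frames, F)

def context_windows(feats, T=256):
    """Slice the sequence into context windows of T = 256 frames."""
    n = feats.shape[0] // T
    return feats[:n * T].reshape(n, T, feats.shape[1])
```

Mean and standard deviation normalization of these features, computed on the training set, would then be applied before feeding the network.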
B. CapsNet Architecture
The CapsNet architecture relies on the CNN architecture and includes its computational units in the first layers of the network as invariant feature extractors from the input, hereafter referred to as X, omitting the subscript t:t+T−1 for simplicity of notation.
The essential unit of a CNN model is named the kernel, and it is composed of multiple neurons which process the whole input matrix by computing the linear convolution between the input and the kernel itself. The outputs of a CNN layer are called feature maps, and they represent translated replicas of high-level features. The feature maps are obtained by applying a non-linear function to the sum of a bias term and the linearly filtered version of the input data. Denoting with W_m ∈ R^{K^m_1 × K^m_2} the m-th kernel and with b_m ∈ R^{T×F} the bias vector of a generic convolutional layer, the m-th feature map H_m ∈ R^{T×F} is given by:
$$H_m = \varphi\left(W_m * X + b_m\right), \qquad (1)$$
where * represents the convolution operation and ϕ(·) a differentiable non-linear activation function. The coefficients of W_m and b_m are learned during model training. The dimension of the m-th feature map H_m depends on the zero padding of the input tensor: here, padding is performed in order to preserve the dimension of the input. Moreover, Eq. (1) is typically followed by a max-pooling layer, which in this case operates only on the frequency axis.
Following Hinton's preliminary works [24], in the CapsNet presented in [23] two layers are divided into many small groups of neurons called capsules. In those layers, the scalar-output feature detectors of CNNs are replaced with vector-output capsules, and the dynamic routing (or routing-by-agreement) algorithm is used in place of max-pooling, in order to replicate learned knowledge across space. Formally, we can rewrite Eq. (1) as
$$H_m = \begin{bmatrix} \alpha_{11} W_{11} X_1 + \dots + \alpha_{M1} W_{1M} X_M \\ \vdots \\ \alpha_{1K} W_{K1} X_1 + \dots + \alpha_{MK} W_{KM} X_M \end{bmatrix}. \qquad (2)$$
In Eq. (2), (W * X) has been partitioned into K groups, or capsules, so that each row of the column vector corresponds to an output capsule (the bias term b has been omitted for simplicity). Similarly, X has been partitioned into M capsules, where X_i denotes input capsule i, and W has been partitioned into submatrices W_ij called transformation matrices. Conceptually, a capsule incorporates a set of properties of a particular entity present in the input data. With this purpose, the coefficients α_ij have been introduced. They are called coupling coefficients, and if we set all α_ij = 1, Eq. (1) is obtained again. The coefficients α_ij affect the learning dynamics with the aim of representing the amount of agreement between an input capsule and an output capsule. In particular, they measure how likely capsule i is to activate capsule j, so the value of α_ij should be relatively accentuated if the properties of capsule i coincide with the properties of capsule j in the layer above. The coupling coefficients are calculated by the iterative process of dynamic routing to fulfill the idea of assigning parts to wholes. Capsules in the higher layers should comprehend capsules in the layer below in terms of the entity they identify, while dynamic routing iteratively attempts to find these associations and supports capsules in learning features that ensure these connections.
In this work, we employ two layers composed of capsule units, and we denote them as Primary Capsules for the lower layer and Detection Capsules for the output layer throughout the rest of the paper.
1) Dynamic Routing: Having given a qualitative description of routing, we now describe the method used in [23] to compute the coupling coefficients. The activation of a capsule unit is a vector which holds the properties of the entity it represents in its direction. The vector's magnitude instead indicates the probability that the entity represented by the capsule is present in the current input. To interpret the magnitude as a probability, a squashing non-linear function is used, which is given by:
$$v_j = \frac{\|s_j\|^2}{1 + \|s_j\|^2} \, \frac{s_j}{\|s_j\|}, \qquad (3)$$
where v_j is the vector output of capsule j and s_j is its total input; s_j is a weighted sum over all the predictions û_{j|i}, obtained by multiplying the outputs u_i of the capsules in the Primary Capsule layer by the transformation matrices W_ij:
$$s_j = \sum_i \alpha_{ij} \, \hat{u}_{j|i}, \qquad \hat{u}_{j|i} = W_{ij} u_i. \qquad (4)$$
The routing procedure works as follows. The coefficient β_ij measures the coupling between the i-th capsule of the Primary Capsule layer and the j-th capsule of the Detection Capsule layer. The β_ij are initialized to zero, then iteratively updated by measuring the agreement between the current output v_j of each capsule j and the prediction û_{j|i} produced by capsule i in the layer below. The agreement is computed as the scalar product
$$c_{ij} = v_j \cdot \hat{u}_{j|i}, \qquad (5)$$
between the aforementioned capsule outputs. It measures how similar the directions (i.e., the properties of the entities they represent) of capsules i and j are. The β_ij coefficients are treated as if they were log likelihoods, thus the agreement value is added to the value from the previous routing step:
$$\beta_{ij}(r+1) = \beta_{ij}(r) + c_{ij}(r) = \beta_{ij}(r) + v_j \cdot \hat{u}_{j|i}(r), \qquad (6)$$
where r denotes the routing iteration. In this way the new values of all the coupling coefficients linking capsule i to the higher-level capsules are computed. The β_ij are treated as log prior probabilities; to obtain the coupling coefficients α_ij, the softmax function is applied to the β_ij at the start of each routing iteration. Formally:
$$\alpha_{ij} = \frac{\exp(\beta_{ij})}{\sum_k \exp(\beta_{ik})}, \qquad (7)$$
so that Σ_j α_ij = 1. Thus, α_ij can be seen as the probability that the entity represented by Primary Capsule i is a part of the entity represented by Detection Capsule j, as opposed to any other capsule in the layer above.
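A compact NumPy sketch of Eqs. (3)-(7) makes the iteration order explicit; the `beta_init` argument anticipates the routing variant discussed in Section IV-D:

```python
import numpy as np

def squash(s, eps=1e-12):
    """Eq. (3): squash each capsule vector's norm into [0, 1)."""
    n2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, n_iter=3, beta_init=None):
    """Routing-by-agreement over predictions u_hat of shape (M, K, G):
    input capsule i's prediction for each of the K output capsules.
    beta_init allows the logits to be carried over between inputs
    (see the variant in Section IV-D)."""
    M, K, _ = u_hat.shape
    beta = np.zeros((M, K)) if beta_init is None else beta_init.copy()
    for _ in range(n_iter):
        alpha = np.exp(beta)
        alpha = alpha / alpha.sum(axis=1, keepdims=True)  # Eq. (7)
        s = np.einsum("ij,ijg->jg", alpha, u_hat)         # Eq. (4)
        v = squash(s)                                     # Eq. (3)
        beta = beta + np.einsum("jg,ijg->ij", v, u_hat)   # Eqs. (5)-(6)
    return v, beta
```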
2) Margin loss function: The length of the vector v_j is used to represent the probability that the entity represented by capsule j exists. The CapsNet has to be trained to produce a long instantiation vector at the k-th Detection Capsule if the event that it represents is present in the input audio sequence. A separate margin loss is defined for each target class k as:
$$L_k = T_k \max(0, m^+ - \|v_k\|)^2 + \lambda (1 - T_k) \max(0, \|v_k\| - m^-)^2, \qquad (8)$$
where T_k = 1 if an event of class k is present, and λ is a down-weighting factor of the loss for absent sound event classes. m^+, m^− and λ are set to 0.9, 0.1 and 0.5 respectively, as suggested in [23]. The total loss is simply the sum of the losses of all the Detection Capsules.
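Eq. (8) translates directly into code; a minimal sketch:

```python
import numpy as np

def margin_loss(v_norm, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Eq. (8) summed over the K Detection Capsules. v_norm: (K,) Euclidean
    norms of the capsule outputs; targets: (K,) binary ground truth."""
    pos = targets * np.maximum(0.0, m_pos - v_norm) ** 2
    neg = lam * (1.0 - targets) * np.maximum(0.0, v_norm - m_neg) ** 2
    return float(np.sum(pos + neg))
```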
C. CapsNet for Polyphonic Sound Event Detection
The architecture of the neural network is shown in Fig. II-C. The first stages of the model are traditional CNN blocks which act as feature extractors on the input X. After each block, max-pooling is used to halve the dimensions on the frequency axis only. The feature maps obtained by the CNN layers are then fed to the Primary Capsule layer, which represents the lowest level of multi-dimensional entities. Basically, it is a convolutional layer with J · M filters, i.e., it contains M convolutional capsules with J kernels each. Its output is then reshaped and squashed using Eq. (3). The final layer, or Detection Capsule layer, is a time-distributed capsule layer (i.e., it applies the same weights and biases to each frame element) composed of K densely connected capsule units of dimension G. Since the previous layer is also a capsule layer, the dynamic routing algorithm is used to compute the output. The background class is included in the set of K target events, in order to represent its instances with a dedicated capsule unit and to train the system to recognize the absence of events. In the evaluation, however, we consider only the outputs relative to the target sound events. The model predictions are obtained by computing the Euclidean norm of the output of each Detection Capsule. These values represent the probabilities that one of the target events is active in frame t of the input feature matrix X, and we thus take them as the network's output predictions.
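A shape-level Keras sketch of this front-end may help fix ideas; the number of blocks, the filter counts and the capsule sizes below are placeholders rather than the tuned values of Table II, and the capsule layers themselves are not built-in Keras layers:

```python
from tensorflow import keras
from tensorflow.keras import layers

T, F, C = 256, 40, 1   # context frames, LogMel bands, channels
J, M = 8, 4            # primary capsule dimension J, M capsule types
K, G = 7, 16           # K Detection Capsules of dimension G (custom layer)

inp = keras.Input(shape=(T, F, C))
x = inp
for n_filters in (64, 64):                        # CNN feature-extraction blocks
    x = layers.Conv2D(n_filters, (3, 3), padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D(pool_size=(1, 2))(x)  # pool on the frequency axis only
x = layers.Conv2D(J * M, (3, 3), padding="same")(x)   # Primary Capsule layer
primary = layers.Reshape((T, -1, J))(x)           # (T, capsules, J) vectors
# The squash of Eq. (3) and the routed, time-distributed Detection Capsule
# layer (K capsules of dimension G) would be implemented as custom layers
# around the dynamic_routing sketch given in Section II-B.
```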
In [23], the authors propose a series of densely connected neuron layers stacked at the end of the CapsNet, with the aim of regularizing the weight training by reconstructing the input image X ∈ N^{28×28}. Here, this technique entails an excessive complexity of the model to train, due to the higher number of units needed to reconstruct X ∈ R^{T×F×C}, and it yielded poor performance in our preliminary experiments. We thus decided to use dropout [25] and L2 weight regularization [26] as regularization techniques, as done in [22].
III. EXPERIMENTAL SET-UP
In order to test the proposed method, we performed a series of experiments on three datasets provided to the participants of different editions of the DCASE challenge [27], [28]. We evaluated the results by comparing the system based on the Capsule architecture with traditional CNNs. The hyperparameters of each network have been optimized with a random search strategy [29]. Furthermore, we report the baselines and the best state-of-the-art performance provided by the challenge organizers.
A. Dataset
We assessed the proposed method on three datasets: two containing stereo recordings from real-life environments, and one containing artificially generated monophonic mixtures of isolated sound events and real background audio.
In order to evaluate the proposed method in polyphonic real-life conditions, we used the TUT Sound Events 2016 & 2017 datasets, which were included in the corresponding editions of the DCASE Challenge. For the monophonic SED case study, we used the TUT Rare Sound Events 2017 dataset, which constitutes task 2 of the DCASE 2017 Challenge.
1) TUT Sound Events 2016: The TUT Sound Events 2016 (TUT-SED 2016) dataset (http://www.cs.tut.fi/sgn/arg/dcase2016/) consists of recordings from two acoustic scenes, "Home" (indoor) and "Residential area" (outdoor), which we considered as two separate subsets. These acoustic scenes were selected by the challenge organizers to represent common environments of interest in applications for safety and surveillance (outside home) and human activity monitoring or home surveillance [28]. A total of around 54 and 59 minutes of audio are provided for "Home" and "Residential area" respectively. Sound events present in each recording were manually annotated without any further cross-verification, due to the high level of subjectivity inherent to the problem. For the "Home" scenario a total of 11 classes were defined, while for the "Residential area" scenario 7 classes were annotated.
Each scenario of the TUT-SED 2016 has been divided into two subsets: development dataset and evaluation dataset. The split was done based on the number of examples available for each sound event class. In addition, a cross-validation setup is provided for the development dataset in order to easily compare the results of different approaches. The setup consists of 4 folds, so that each recording is used exactly once as test data. In detail, the "Residential area" data consists of 5 recordings in the evaluation set and 12 recordings in the development set, while the "Home" data consists of 5 recordings in the evaluation set and 10 recordings in the development set, in turn divided into 4 folds as training and validation subsets.
2) TUT-SED 2017: This dataset is a subset of the TUT Acoustic Scenes 2016 dataset [28], from which the TUT-SED 2016 dataset was also taken. Thus, the recording setup, the annotation procedure, the dataset splitting and the cross-validation setup are the same as described above. The 6 target sound event classes were selected to represent common sounds related to human presence and traffic: brakes squeaking, car, children, large vehicle, people speaking and people walking. The evaluation set of the TUT-SED 2017 consists of 29 minutes of audio, whereas the development set is composed of 92 minutes of audio which are employed in the cross-validation procedure.
3) TUT Rare Sound Events 2017: The TUT Rare Sound Events 2017 (TUT-Rare 2017) [27] consists of isolated sounds of three target event classes (baby crying, glass breaking and gunshot) and 30-second long recordings of everyday acoustic scenes to serve as background, such as park, home, street, cafe and train [28]. In this case we consider monophonic SED, since the sound events are artificially mixed with the background sequences without overlap. In addition, the event potentially present in each test file is known a priori, thus it is possible to train different models, each specialized for one sound event. In the development set, we used 750, 750 and 1250 sequences for training the baby cry, glass break and gunshot models respectively, while we used 100 sequences as the validation set and 500 sequences as the test set for all of them. In the evaluation set, the training and test sequences of the development set are combined into a single training set, while the validation set is the same as in the development set. The system is evaluated on an "unseen" set of 1500 samples (500 per target class) with a sound event presence probability of 0.5 for each class.
B. Evaluation Metrics
In this work we used the Error Rate (ER) as the primary evaluation metric to ensure comparability with the reference systems. In particular, for the evaluations on the TUT-SED 2016 and 2017 datasets we consider a segment-based ER with a one-second segment length, while for the TUT-Rare 2017 the evaluation metric is the event-based ER calculated using the onset-only condition with a collar of 500 ms. In the segment-based ER the ground truth and system output are compared on a fixed time grid, thus sound events are marked as active or inactive in each segment. For the event-based ER the ground truth and system output are compared at the event instance level.
The ER score is calculated in one-second segments from intermediate statistics, i.e., the number of substitutions (S(t_1)), insertions (I(t_1)), deletions (D(t_1)) and active sound events in the annotations (N(t_1)) for a segment t_1. Specifically:
1) Substitutions S(t_1) are the number of ground truth events for which there is a false positive and a false negative in the same segment;
2) Insertions I(t_1) are events in the system output that are not present in the ground truth, i.e., the false positives which cannot be counted as substitutions;
3) Deletions D(t_1) are events in the ground truth that are not correctly detected by the system, i.e., the false negatives which cannot be counted as substitutions.
These intermediate statistics are accumulated over the segments of the whole test set to compute the evaluation metric ER. Thus, the total error rate is calculated as:
$$ER = \frac{\sum_{t_1=1}^{T} S(t_1) + \sum_{t_1=1}^{T} I(t_1) + \sum_{t_1=1}^{T} D(t_1)}{\sum_{t_1=1}^{T} N(t_1)}, \qquad (9)$$
where T is the total number of segments t 1 .
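Following these definitions, the segment-based ER can be computed from binary activity matrices with a few lines of code; a minimal sketch:

```python
import numpy as np

def segment_error_rate(ref, pred):
    """Sketch of Eq. (9). ref, pred: binary (n_segments, K) event-activity
    matrices on the one-second evaluation grid."""
    tp = np.minimum(ref, pred)
    fn = (ref - tp).sum(axis=1)      # missed events per segment
    fp = (pred - tp).sum(axis=1)     # spurious events per segment
    S = np.minimum(fn, fp)           # substitutions: paired FN/FP
    D = fn - S                       # deletions
    I = fp - S                       # insertions
    return (S.sum() + D.sum() + I.sum()) / max(ref.sum(), 1)
```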
If there are multiple scenes in the dataset, as in the TUT-SED 2016, the evaluation metrics are calculated for each scene separately and the results are then presented as the average across scenes. A detailed and visualized explanation of the segment-based ER score in a multi-label setting can be found in [30].
C. Comparative Algorithms
Since the datasets we used were employed to develop and evaluate the algorithms proposed by the participants of the DCASE challenges, we can compare our results with the most recent approaches in the state-of-the-art. In addition, each challenge task came with a baseline method consisting of a basic approach to SED. It served as a reference for the challenge participants while they were developing their systems.
1) TUT-SED 2016: The baseline system is based on mel frequency cepstral coefficient (MFCC) acoustic features and multiple GMM-based classifiers. In detail, for each event class, a binary classifier is trained using the audio segments annotated as belonging to the model representing the event class, and the rest of the audio for the model representing the negative class. The decision is based on the likelihood ratio between the positive and negative models for each individual class, with a sliding window of one second. To the best of our knowledge, the best performing method for this dataset is an algorithm we proposed in 2017 [31], based on binaural MFCC features and a Multi-Layer Perceptron (MLP) neural network used as the classifier. The detection task is performed by an adaptive-energy Voice Activity Detector (VAD) which precedes the MLP and determines the starting and ending points of each event-active audio sequence.
2) TUT-SED 2017: In this case the baseline method relies on an MLP architecture using 40 LogMels as the audio representation [27]. The network is fed with a feature vector comprising a 5-frame temporal context. The neural network is composed of two dense layers of 50 hidden units per layer with 20% dropout, while the output layer contains K sigmoid units (where K is the number of classes) that can be active at the same time and represent the network's prediction of event activity for each context window. The state-of-the-art algorithm is based on the CRNN architecture [32]. The authors compared monaural and binaural acoustic features, observing that binaural features generally perform similarly to single-channel features on the development dataset, although the best result on the evaluation dataset is obtained using monaural LogMels as network inputs. According to the authors, this may suggest that the dataset was not large enough to train the CRNN fed with binaural features.
3) TUT-Rare 2017: The baseline [28] and the state-of-the-art methods of the DCASE 2017 challenge (Rare-SED) were based on architectures very similar to those employed for the TUT-SED 2016 and described above. For the baseline method, the only difference lies in the output layer, which in this case is composed of a single sigmoid unit. The first-classified algorithm [33] takes 128 LogMels as input and processes them frame-wise by means of a CRNN with 1D filters in the first stage.
D. Neural Network configuration
We performed a hyperparameter search by running a series of experiments over predetermined ranges. We selected the configuration that led, for each network architecture, to the best results in the cross-validation procedure on the development dataset of each task, and used this architecture to compute the results on the corresponding evaluation dataset.
The number and shape of convolutional layers, the non-linear activation function and the regularizers, in addition to the capsule dimensions and the maximum number of routing iterations, have been varied, for a total of 100 configurations. Details of the searched hyperparameters and their ranges are reported in Table I. The neural network training was accomplished by the AdaDelta stochastic gradient-based optimization algorithm [34] on the margin loss function, for a maximum of 100 epochs and a batch size equal to 20. The optimizer hyperparameters were set according to [34] (i.e., initial learning rate lr = 1.0, ρ = 0.95, ε = 10⁻⁶). The trainable weights were initialized according to the glorot-uniform scheme [35], and an early stopping strategy was employed during training in order to avoid overfitting; dropout [25] was also used for regularization. The algorithm has been implemented in the Python language using Keras [36] and Tensorflow [37] as deep learning libraries, while Librosa [38] has been used for feature extraction.
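In Keras, this training configuration could be set up roughly as follows; the early-stopping patience is an assumption, and older Keras versions use `lr` instead of `learning_rate`:

```python
from tensorflow import keras

# Optimizer settings as described above; batch size 20, up to 100 epochs.
optimizer = keras.optimizers.Adadelta(learning_rate=1.0, rho=0.95, epsilon=1e-6)
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                           restore_best_weights=True)
# model.compile(optimizer=optimizer, loss=margin_loss)
# model.fit(X_train, Y_train, batch_size=20, epochs=100,
#           validation_data=(X_val, Y_val), callbacks=[early_stop])
```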
For the CNN models, we performed a similar random hyperparameter search for each dataset, considering only the first two blocks of Table I and replacing the capsule layers with feedforward layers with sigmoid activation functions.
On the TUT-SED 2016 and 2017 datasets, the event activity probabilities are simply thresholded at a fixed value of 0.5, in order to obtain the binary activity matrix used to compute the reference metric. On the TUT-Rare 2017 the network output signal is post-processed as proposed in [39]: it is convolved with an exponential decay window, smoothed with a sliding median filter over a local window, and finally a threshold is applied.
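A sketch of this post-processing chain is given below; the decay rate, window size and threshold are placeholder values, not the ones used in the reference work:

```python
import numpy as np
from scipy.ndimage import median_filter

def postprocess(probs, decay=0.995, win=21, thr=0.5):
    """Sketch of the post-processing of [39] applied to the frame-wise
    event activity probabilities of one class."""
    kernel = decay ** np.arange(win)                  # exponential decay window
    smooth = np.convolve(probs, kernel / kernel.sum(), mode="same")
    smooth = median_filter(smooth, size=win)          # sliding median filter
    return (smooth > thr).astype(int)                 # binary event activity
```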
IV. RESULTS
In this section, we present the results for all the datasets and experiments described in Section III. The evaluation of the Capsule- and CNN-based methods has been conducted on the development sets of each examined dataset using random combinations of the hyperparameters given in Table I.
A. TUT-SED 2016
Results on the TUT-SED 2016 dataset are shown in Table III, while Table II reports the configurations which yielded the best performance on the evaluation dataset. All the selected models have ReLU as the non-linear activation function and use dropout for weight regularization, while batch-normalization applied after each convolutional layer seems to be effective only for the CapsNet. In Table III, results are reported for each combination of architecture and features we evaluated. The best performing setups are highlighted in bold face. The use of the STFT as the acoustic representation is beneficial for both architectures with respect to the LogMels. In particular, the CapsNet obtains the lowest ER in the cross-validation performed on the development dataset when fed with the binaural version of these features. On the two scenarios of the evaluation dataset, a model based on the CapsNet and binaural STFT obtains an averaged ER equal to 0.69, which is largely below both the challenge baseline [28] (−0.19) and the best score reported in the literature [31] (−0.10). The comparative method based on CNNs does not fit at all when LogMels are used as input, while its performance is aligned with the GMM-based challenge baseline when the models are fed with the monaural STFT. This discrepancy can be explained by the enhanced ability of the CapsNet to exploit small training datasets, in particular due to the effect of the routing mechanism on the weight training. In fact, the TUT-SED 2016 is composed of a small amount of audio in which the sound events occur sparsely (i.e., only 49 minutes of the total audio contain at least one active event); thus, the overall results of the comparative methods (CNNs, baseline and SoA) on this dataset are quite low compared to the other datasets.
Another CapsNet property worth highlighting is the lower number of free parameters compared to the evaluated CNNs. As shown in Table II, the considered architectures have 267K and 252K free parameters for the "Home" and "Residential area" scenarios respectively. This is a relatively low number of parameters to be trained (e.g., a popular deep architecture for image classification such as AlexNet [40] is composed of 60M parameters), and the best performing CapsNets of each scenario have even fewer parameters than the CNNs (−22% and −64% for the "Home" and "Residential area" scenarios respectively). Thus, the high performance of the CapsNet can be explained by an architectural advantage rather than by model complexity. In addition, there can be a significant performance shift for networks of the same type with the same number of parameters, which means that a suitable hyperparameter search (e.g., over the number of filters in the convolutional layers and the dimension of the capsule units) is crucial for finding the best performing network structure.
1) Closer Look at Network Outputs: A comparative study of the neural network outputs, regarded as event activity probabilities, is presented in Fig. 2. The monaural STFT of a 40-second sequence from the "Residential area" dataset is shown along with the event annotations and the outputs of the best performing CapsNet and CNN models. For this example, we chose the monaural STFT as the input feature because it generally yields the best results over all the considered datasets. Fig. 2 shows bird singing lasting for the whole sequence and correctly detected by both architectures. When the car passing by event overlaps the bird singing, the CapsNet detects its presence more clearly. The people speaking event is only weakly detected by both models, while object banging activates its capsule exactly in correspondence with the event annotation. It must be noted that the dataset is composed of unverified, manually labelled real-life recordings, which may present a degree of subjectivity affecting the training. Nevertheless, the CapsNet exhibits remarkable detection capability, especially in the presence of overlapping events, while the CNN outputs are considerably more "blurred" and the people walking event is wrongly detected in this sequence.
B. TUT-SED 2017
The bottom of Table III reports the results obtained on the TUT-SED 2017. As for the TUT-SED 2016, the best performing models on the development dataset are those fed with the binaural STFT of the input signal. In this case we also observe interesting performance from the CNNs, which on the evaluation dataset obtain a lower ER (0.65) than the state-of-the-art algorithm [32] based on CRNNs. The CapsNet confirms its effectiveness and obtains the lowest ER, equal to 0.58, with LogMel features, although with a slight margin with respect to the other inputs (−0.03 compared to the STFT features, −0.06 compared to the binaural versions of both the LogMels and the STFT spectrograms).
It is interesting to notice that in the development cross-validation, the CapsNet models yielded significantly better performance than the other reported approaches, while the CNNs performed decidedly worse. On the evaluation dataset it was not possible to use the early-stopping strategy, thus the ER scores of the CapsNets suffer from the model's sensitivity to the number of training iterations. Notwithstanding this weakness, the absolute performance obtained with both monaural and binaural spectral features is consistent and improves on the state-of-the-art result, with a reduction of the ER of up to 0.21 in the best case. This is particularly evident in Fig. 3, which shows the output of the two best performing systems for a sequence of approximately 20 seconds containing highly overlapping sounds. The event classes "people walking" and "large vehicle" overlap for almost the whole sequence and are well detected by the CapsNet, although they are of different natures: the "large vehicle" sound has a characteristic timbre and is almost stationary, while the "people walking" class comprises impulsive and intermittent sounds. The CNN seems unable to distinguish between the "large vehicle" and "car" classes, confidently detecting only the latter, while its activation for the "people walking" class is modest. The presence of the "brakes squeaking" class, which has a specific spectral profile mostly located in the highest frequency bands (as shown in the spectrogram), is detected only by the CapsNet. We can take this as a concrete experimental validation of the effectiveness of the routing. The number of free parameters amounts to 223K for the best configuration shown in Table II, similar to those found for the TUT-SED 2016, which in this case corresponds to a reduction of 35% with respect to the best CNN layout.
C. TUT-Rare SED 2017
The advantage given by the routing procedure to the CapsNet is particularly effective in the case of polyphonic SED. This is confirmed by the results obtained on the TUT-Rare SED 2017, shown in Table V. In this case the metric is no longer segment-based, but is the event-based ER calculated using the onset-only condition. We performed a separate random search for each of the three sound event classes, both for the CapsNets and the CNNs, and we report the score averaged over the three classes. The setups that obtained the best performance on the evaluation dataset are shown in Table IV. This is the largest dataset we evaluated, and its main characteristic is the strong imbalance between the amount of background sound and the target sound events.
Table IV: best CapsNet / CNN configurations for the babycry, glass break and gunshot models respectively:
Kernel shape: 3×3 / -, 3×3 / -, 3×3 / -
Primary Capsules dimension J: 8 / -, 8 / -, 8 / -
Detection Capsules dimension G: 14 / -, 14 / -, 6 / -
Routing iterations: 5 / -, 5 / -, 1 / -
# Params: 131K / 84K, 131K / 84K, 30K / 211K
From the analysis of the partial results on the evaluation set (not included for the sake of conciseness) we notice that both architectures achieve their best performance on the glass break sound (ER of 0.25 and 0.24 for the CNN and the CapsNet respectively, with LogMel features), due to its clear spectral fingerprint compared to the background sound. The worst performing class is the gunshot (ER equal to 0.58 for the CapsNet), although the noise produced by different instances of this class involves similar spectral components. The low performance is probably due to the fast decay of this sound, which means that in this case the routing procedure is not sufficient to avoid confusing the gunshot with other background noises, especially in the presence of dataset imbalance and a low event-to-background ratio. A solution to this issue may be found in the combination of the CapsNet with RNN units, as proposed in [19] for CNNs, where the CRNN yields an efficient modelling of the gunshot and improves the detection abilities even in polyphonic conditions. The babycry, which consists of short, harmonic sounds, is detected almost equally well by the two architectures, due to the frequency-shift invariance given by the convolutional kernel processing. Finally, the CNN shows better generalization than the CapsNet, although its ER score is far from the state-of-the-art, which involves the use of the aforementioned CRNNs [33] or a hierarchical framework [39]. In addition, in this case it is the CNN models that have a reduced number of weights to train (−36%) with respect to the CapsNets, except in the "gunshot" case, which, as mentioned, is also the configuration that obtains the worst results.
D. Alternative Dynamic Routing for SED
We observed that the original routing procedure implies the initialization of the coefficients β_ij to zero each time the procedure is restarted, i.e., after each input sample has been processed. This is reasonable in the case of image classification, for which the CapsNet was originally proposed. In the case of an audio task, we clearly expect a higher correlation between samples belonging to adjacent temporal frames X. We thus investigated initializing the coefficients β_ij to zero only at the very first iteration, while for each subsequent input X assigning them the last values they had at the end of the previous iterative procedure. We experimented with this variant on the best performing models of the analyzed polyphonic SED scenarios, taking into account only the systems fed with the monaural STFT. As shown in Table VI, the modification we propose to the routing procedure is effective, in particular on the evaluation datasets, conferring improved generalization properties on the models we tested, even without a dedicated hyperparameter optimization.
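Reusing the `dynamic_routing` sketch of Section II-B, the variant amounts to threading the logits through consecutive inputs; the iterator and forward-pass helper below are hypothetical:

```python
# Routing variant: carry the coupling logits beta across consecutive
# context windows instead of re-zeroing them for every input.
beta = None
for X in window_stream:                   # hypothetical input iterator
    u_hat = primary_predictions(X)        # hypothetical forward pass, (M, K, G)
    v, beta = dynamic_routing(u_hat, n_iter=5, beta_init=beta)
```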
V. CONCLUSION
In this work, we proposed to apply a novel neural network architecture, the CapsNet, to the polyphonic SED task. The architecture is based on both convolutional and capsule layers. The convolutional layers extract high-level time-frequency feature maps from input matrices which provide an acoustic spectral representation with a long temporal context. The obtained feature maps are then used to feed the Primary Capsule layer, which is connected to the Detection Capsule layer that finally extracts the event activity probabilities. These last two layers are involved in the iterative routing-by-agreement procedure, which computes the outputs based on a measure of likelihood between a capsule and its parent capsules. This architecture thus combines the ability of convolutional layers to learn local translation-invariant filters with the ability of capsules to learn part-whole relations via the routing procedure.
Part of the novelty of this work resides in the adaptation of the CapsNet architecture to the audio event detection task, with special care given to the input data, the layer interconnections and the regularization techniques. The routing procedure is also modified to give it a more appropriate acoustic rationale, with a further average performance improvement of 6% across the polyphonic SED tasks.
An extensive evaluation of the algorithm is proposed, with comparisons to recent state-of-the-art methods on three different datasets. The experimental results demonstrate that the use of the dynamic routing procedure during training is effective and provides a significant performance improvement in the case of overlapping sound events compared to traditional CNNs and other established methods for polyphonic SED. Interestingly, the CNN-based method obtained the best performance in the monophonic SED case study, thus emphasizing the suitability of the CapsNet architecture for dealing with overlapping sounds. We showed that this model is particularly effective with small-sized datasets, such as the TUT-SED 2016, which contains a total of 78 minutes of audio for the development of the models, of which one third is background noise.
Furthermore, the number of trainable parameters is reduced with respect to other deep learning architectures, confirming the architectural advantage given by the introduced features also in the task of polyphonic SED.
Despite the improvement in performance, we identified a limitation of the proposed method. As presented in Section IV, the performance of the CapsNet is more sensitive to the number of training iterations. This affects the generalization capability of the algorithm, yielding a greater relative deterioration of performance on the evaluation datasets with respect to the comparative methods.
The results we observed in this work are consistent with many other classification tasks in various domains [41]-[43], and prove that the CapsNet is an effective approach which enhances the well-established representation capabilities of CNNs also in the audio field. As future work, regularization methods can be investigated to overcome the lack of generalization which seems to affect CapsNets. Furthermore, regarding the SED task, the addition of recurrent units may be explored to enhance the detection of particular (i.e., impulsive) sound events in real-life audio, and the recently proposed variant of routing based on the Expectation-Maximization (EM) algorithm [44] can be investigated in this context.
| 7,491 |
1810.06325
|
2897282582
|
Artificial sound event detection (SED) aims to mimic the human ability to perceive and understand what is happening in the surroundings. Nowadays, deep learning offers valuable techniques for this goal, such as convolutional neural networks (CNNs). The capsule neural network (CapsNet) architecture has been recently introduced in the image processing field with the intent to overcome some of the known limitations of CNNs, specifically regarding the scarce robustness to affine transformations (i.e., perspective, size, and orientation) and the detection of overlapped images. This motivated the authors to employ CapsNets to deal with the polyphonic SED task, in which multiple sound events occur simultaneously. Specifically, we propose to exploit the capsule units to represent a set of distinctive properties for each individual sound event. Capsule units are connected through a so-called dynamic routing that encourages learning part-whole relationships and improves the detection performance in a polyphonic context. This paper reports extensive evaluations carried out on three publicly available datasets, showing how the CapsNet-based algorithm not only outperforms standard CNNs but also achieves the best results with respect to the state-of-the-art algorithms.
|
The combination of the CNN structure with recurrent units has increased the detection performance by taking advantage of the characteristics of each architecture. This is the case of convolutional recurrent neural networks (CRNNs) @cite_27 , which provided state-of-the-art performance especially in the case of polyphonic SED. CRNNs consolidate the CNN property of local shift invariance with the capability to model short- and long-term temporal dependencies provided by the RNN layers. This architecture has also been employed in almost all of the best performing algorithms proposed in the recent editions of research challenges such as the IEEE Audio and Acoustic Signal Processing (AASP) Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE) @cite_31 . On the other hand, if the datasets are not sufficiently large, problems such as overfitting can be encountered with these models, which are typically composed of a considerable number of free parameters (often more than 1M).
|
{
"abstract": [
"Sound events often occur in unstructured environments where they exhibit wide variations in their frequency content and temporal structure. Convolutional neural networks CNNs are able to extract higher level features that are invariant to local spectral and temporal variations. Recurrent neural networks RNNs are powerful in learning the longer term temporal context in the audio signals. CNNs and RNNs as classifiers have recently shown improved performances over established methods in various sound recognition tasks. We combine these two approaches in a convolutional recurrent neural network CRNN and apply it on a polyphonic sound event detection task. We compare the performance of the proposed CRNN method with CNN, RNN, and other established methods, and observe a considerable improvement for four different datasets consisting of everyday sound events.",
"The authors propose an audio events detection system tailored to surveillance applications.The method has been tested on a huge and challenging data set, made publicly available.The performance analysis has been done for low SNR values and under various conditions.A comparative analysis with other methods from the literature has been performed. In this paper we propose a novel method for the detection of audio events for surveillance applications. The method is based on the bag of words approach, adapted to deal with the specific issues of audio surveillance: the need to recognize both short and long sounds, the presence of a significant noise level and of superimposed background sounds of intensity comparable to the audio events to be detected. In order to test the proposed method in complex, realistic scenarios, we have built a large, publicly available dataset of audio events. The dataset has allowed us to evaluate the robustness of our method with respect to varying levels of the Signal-to-Noise Ratio; the experimentation has confirmed its applicability under real world conditions, and has shown a significant performance improvement with respect to other methods from the literature."
],
"cite_N": [
"@cite_27",
"@cite_31"
],
"mid": [
"2591013610",
"821549425"
]
}
|
Polyphonic Sound Event Detection by using Capsule Neural Networks
|
HUMAN cognition relies on the ability to sense, process, and understand the surrounding environment and its sounds. Although the skill of listening to sounds and understanding their origin is so natural for living beings, it remains a very challenging task for computers.
Sound event detection (SED), or acoustic event detection, aims to mimic this cognitive feature by means of artificial systems. Basically, a SED algorithm is designed to detect the onset and offset times for a variety of sound events captured in an audio recording and to associate a textual descriptor, i.e., a label, with each of these events.
In recent years, SED has received interest from the computational auditory scene analysis community [1], due to its potential in several engineering applications. Indeed, the automatic recognition of sound events and scenes can have a considerable impact in a wide range of applications where sound or sound sensing is advantageous with respect to other modalities. This is the case of acoustic surveillance [2], healthcare monitoring [3], [4] or urban sound analysis [5], where the short duration of certain events (e.g., a human fall, a gunshot or glass breaking) or personal privacy concerns motivate the exploitation of audio information rather than, e.g., image processing. In addition, audio processing is often less computationally demanding compared to other multimedia domains, thus embedded devices can easily be equipped with microphones and sufficient computational capacity to locally process the captured signal. These could be smart home devices for home automation purposes or sensors for wildlife and biodiversity monitoring (e.g., bird call detection [6]).
SED algorithms in a real-life scenario face many challenges. These include simultaneous events, environmental noise and events of the same class produced by different sources [7]. Since multiple events are very likely to overlap, a polyphonic SED algorithm, i.e., an algorithm able to detect multiple simultaneous events, needs to be designed. Finally, the effects of noise and intra-class variability represent further challenges for SED in real-life situations.
Traditionally, the polyphonic acoustic event analysis has been approached with statistical modelling methods, including Hidden Markov Models (HMM) [8], Gaussian Mixture Models (GMM) [9], Non-negative Matrix Factorization (NMF) [10] and support vector machines (SVM) [11]. In the recent era of the "Deep Learning", different neural network architectures have been successfully used for sound event detection and classification tasks, including feed-forward neural networks (FNN) [12], deep belief networks [13], convolutional neural networks (CNNs) [14] and Recurrent Neural Networks (RNNs) [15]. In addition, these architectures laid the foundation for end-to-end systems [16], [17], in which the feature representation of the audio input is automatically learnt from the raw audio signal waveforms.
B. Contribution
The proposed system is a fully data-driven approach based on the CapsNet deep neural architecture presented by Sabour et al. [23]. This architecture has shown promising results on the classification of highly overlapping handwritten digits. In the audio field, a similar condition can be found in the detection of multiple concomitant sound events from acoustic spectral representations; therefore, we propose to employ the CapsNet for polyphonic SED in real-life recordings. The novel computational structure based on capsules, combined with the routing mechanism, allows the network to be invariant to intra-class affine transformations and to identify part-whole relationships between data features. In the SED case study, it is hypothesized that this characteristic confers to the CapsNet the ability to effectively select the most representative spectral features of each individual sound event and separate them from the overlapped descriptions of the other sounds in the mixture. This hypothesis is supported by the previously mentioned related works. Specifically, in [21], the CapsNet is exploited in order to obtain the prediction of the presence of heterogeneous polyphonic sounds (i.e., bird calls) on unseen audio files recorded in various conditions. In [22] the dynamic routing yields promising results for SED with a weakly labeled training dataset, thus with unavailable ground truths for the onset and offset times of the sound events. The algorithm has to detect sound events without supervision, and in this context the routing can be considered as an attention mechanism.
In this paper, we present an extensive analysis of SED conducted on real-life audio datasets and compare the results with state-of-the-art methods. In addition, we propose a variant of the dynamic routing procedure which takes into account the temporal dependence of adjacent frames. The proposed method outperforms previous SED approaches in terms of detection error rate in the case of polyphonic SED, while it has comparable performance with respect to CNNs in the case of monophonic SED.
The whole system is composed of a feature extraction stage and a detection stage. The feature extraction stage transforms the audio signal into acoustic spectral features, while the second stage processes these features to detect the onset and offset times of specific sound events. In this latter stage we include the capsule units. The network parameters are obtained by supervised learning using annotations of sound events activity as target vectors. We have evaluated the proposed method against three datasets of real-life recordings and we have compared its performance both with the results of experiments with a traditional CNN architecture, and with the performance of well-established algorithms which have been assessed on the same datasets.
The rest of the paper is organized as follows. In Section II the task of polyphonic SED is formally described and the stages of the approach we propose are detailed, including a presentation of the characteristics of the CapsNet architecture. In Section III, we present the evaluation set-up used to assess the performance of the algorithm we propose and of the comparative methods. In Section IV the results of the experiments are discussed and compared with baseline methods. Section V finally presents our conclusions for this work.
II. PROPOSED METHOD
The aim of polyphonic SED is to find and classify any sound event present in an audio signal. The algorithm we propose is composed of two main stages: sound representation and polyphonic detection. In the sound representation stage, the audio signal is transformed into a two-dimensional time-frequency representation to obtain, for each frame t of the audio signal, a feature vector x t ∈ R F , where F represents the number of frequency bands.
Sound events possess temporal characteristics that can be exploited for SED, thus certain events can be efficiently distinguished by their time evolution. Impulsive sounds are extremely compact in time (e.g., gunshot, object impact), while other sound events have indefinite length (e.g., wind blowing, people walking). Other events can be distinguished by their spectral evolution (e.g., bird singing, car passing by). Long-term time domain information is very beneficial for SED and motivates the use of a temporal context allowing the algorithm to extract information from a chronological sequence of input features. Consequently, these are presented as a context window matrix X t:t+T −1 ∈ R T ×F ×C , where T ∈ N is the number of frames that defines the sequence length of the temporal context, F ∈ N is the number of frequency bands and C is the number of audio channels. Correspondingly, the target output matrix is defined as Y t:t+T −1 ∈ N T ×K , where K is the number of sound event classes.
In the SED stage, the task is to estimate the probabilities p(Y t:t+T −1 |X t:t+T −1 , θ) ∈ R T ×K , where θ denotes the parameters of the neural network. The network outputs, i.e., the event activity probabilities, are then compared with a threshold in order to obtain event activity predictions Ŷ t:t+T −1 ∈ N T ×K . The parameters θ are trained by supervised learning, using the frame-based annotation of the sound event classes as target output; thus, if class k is active during frame t, Y (t, k) is equal to 1, and is set to 0 otherwise. The case of polyphonic SED implies that this target output matrix can have multiple non-zero elements among the K classes in the same frame t, since several classes can be simultaneously present.
Indeed, polyphonic SED can be formulated as a multi-label classification problem in which the sound event classes are detected by multi-label annotations over consecutive time frames. The onset and offset time for each sound event are obtained by combining the classification results over consequent time frames. The trained model will then be used to predict the activity of the sound event classes in an audio stream without any further post-processing operations and prior knowledge on the events locations.
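To make this formulation concrete, the following minimal sketch (in Python, the language of the implementation described in Section III) shows how a frame-level multi-label target matrix Y and thresholded predictions Ŷ could be built; the annotation format, hop size, class list and helper names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def encode_targets(annotations, n_frames, classes, hop_s=0.02):
    """Build the binary target matrix Y from a list of
    (onset_s, offset_s, label) annotations (hypothetical format)."""
    Y = np.zeros((n_frames, len(classes)), dtype=int)
    for onset, offset, label in annotations:
        k = classes.index(label)
        Y[int(onset / hop_s):int(np.ceil(offset / hop_s)), k] = 1
    return Y

# Two overlapping events in a T = 256 frame context window.
classes = ["car", "bird singing"]
Y = encode_targets([(0.5, 2.0, "car"), (1.0, 4.0, "bird singing")],
                   n_frames=256, classes=classes)
probs = np.random.rand(256, len(classes))  # stand-in for network outputs
Y_hat = (probs > 0.5).astype(int)          # thresholded activity predictions
```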
A. Feature Extraction
For our purpose, we exploit two acoustic spectral representations, the magnitude of the Short Time Fourier Transform (STFT) and the LogMel coefficients, obtained from all the audio channels and extensively used in other SED algorithms. Except where differently stated, we study the performance of binaural audio features and compare it with those extracted from a single-channel audio signal. In all cases, we operate with audio signals sampled at 16 kHz and we calculate the STFT with a frame size equal to 40 ms and a frame step equal to 20 ms. Furthermore, the audio signals are normalized to the range [−1, 1] in order to have the same dynamic range for all the recordings.
The STFT is computed on 1024 points for each frame, while LogMel coefficients are obtained by filtering the STFT magnitude spectrum with a filter-bank composed of 40 filters evenly spaced on the mel frequency scale. In both cases, the logarithm of the energy of each frequency band is computed. The input matrix X t:t+T −1 concatenates T = 256 consecutive STFT or LogMel vectors for each channel C = {1, 2}, thus the resulting feature tensor is X t:t+T −1 ∈ R 256×F ×C , where F is equal to 513 for the STFT and equal to 40 for the LogMels. The range of feature values is then normalized according to the mean and the standard deviation computed on the training sets of the neural networks.
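A minimal sketch of this feature extraction stage, assuming the Librosa library mentioned in Section III, is given below; the frame sizes, sampling rate and filter-bank size follow the values stated above, while the flooring constant and function layout are our own assumptions.

```python
import numpy as np
import librosa

def extract_features(y, sr=16000, kind="logmel"):
    y = y / np.max(np.abs(y))                       # normalize to [-1, 1]
    win, hop, n_fft = int(0.040 * sr), int(0.020 * sr), 1024
    stft = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop, win_length=win))
    if kind == "stft":
        feats = np.log(stft ** 2 + 1e-10).T         # (frames, 513)
    else:
        mel_fb = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=40)
        feats = np.log(mel_fb @ stft ** 2 + 1e-10).T  # (frames, 40)
    # Mean/std normalization would use statistics from the training set.
    return feats

def context_windows(feats, T=256):
    """Stack T consecutive frames into non-overlapping input matrices X."""
    n = feats.shape[0] // T
    return feats[:n * T].reshape(n, T, feats.shape[1])
```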
B. CapsNet Architecture
The CapsNet architecture relies on the CNN architecture and includes its computational units in the first layers of the network as an invariant feature extractor applied to the input, hereafter referred to as X, omitting the subscript t:t+T −1 for simplicity of notation.
The essential unit of a CNN model is named kernel and it is composed of multiple neurons which process the whole input matrix by computing the linear convolution between the input and the kernel itself. The outputs of a CNN layer are called feature maps, and they represent translated replicas of high-level features. The feature maps are obtained by applying a non-linear function to the sum of a bias term and the linearly filtered version of the input data. Denoting with W m ∈ R K m 1 ×K m 2 the m-th kernel and with b m ∈ R T ×F the bias vector of a generic convolutional layer, the m-th feature map H m ∈ R T ×F is given by:
$\mathbf{H}_m = \varphi\left(\mathbf{W}_m * \mathbf{X} + \mathbf{b}_m\right), \quad (1)$
where * represents the convolution operation and ϕ(·) the differentiable non-linear activation function. The coefficients of W m and b m are learned during the model training. The dimension of the m-th feature map H m depends on the zero padding of the input tensor: here, padding is performed in order to preserve the dimension of the input. Moreover, Eq. (1) is typically followed by a max-pooling layer, which in this case operates only on the frequency axis.
Following Hinton's preliminary works [24], in the CapsNet presented in [23] two layers are divided into many small groups of neurons called capsules. In those layers, the scalar-output feature detectors of CNNs are replaced with vector-output capsules, and the dynamic routing, or routing-by-agreement, algorithm is used in place of max-pooling in order to replicate learned knowledge across space. Formally, we can rewrite Eq. (1) as
$\mathbf{H}_m = \begin{bmatrix} \alpha_{11}\mathbf{W}_{11}\mathbf{X}_1 + \dots + \alpha_{M1}\mathbf{W}_{1M}\mathbf{X}_M \\ \vdots \\ \alpha_{1K}\mathbf{W}_{K1}\mathbf{X}_1 + \dots + \alpha_{MK}\mathbf{W}_{KM}\mathbf{X}_M \end{bmatrix}. \quad (2)$
In Eq. (2), (W * X) has been partitioned into K groups, or capsules, so that each row in the column vector corresponds to an output capsule (the bias term b has been omitted for simplicity). Similarly, X has been partitioned into M capsules, where X i denotes input capsule i, and W has been partitioned into submatrices W ij called transformation matrices. Conceptually, a capsule incorporates a set of properties of a particular entity that is present in the input data. With this purpose, the coefficients α ij have been introduced. They are called coupling coefficients, and if we set all the α ij = 1, Eq. (1) is obtained again. The coefficients α ij affect the learning dynamics with the aim of representing the amount of agreement between an input capsule and an output capsule. In particular, they measure how likely capsule i is to activate capsule j, so the value of α ij should be relatively accentuated if the properties of capsule i coincide with the properties of capsule j in the layer above. The coupling coefficients are calculated by the iterative process of dynamic routing to fulfill the idea of assigning parts to wholes. Capsules in the higher layers should comprehend capsules in the layer below in terms of the entity they identify, and dynamic routing iteratively attempts to find these associations, supporting the capsules in learning features that ensure these connections.
In this work, we employ two layers composed of capsule units, and we denote them as Primary Capsules for the lower layer and Detection Capsules for the output layer throughout the rest of the paper.
1) Dynamic Routing: After giving a qualitative description of routing, we describe the method used in [23] to compute the coupling coefficients. The activation of a capsule unit is a vector which holds the properties of the entity it represents in its direction. The vector's magnitude indicates instead the probability that the entity represented by the capsule is present in the current input. To interpret the magnitude as a probability, a squashing non-linear function is used, which is given by:
$\mathbf{v}_j = \dfrac{\|\mathbf{s}_j\|^2}{1 + \|\mathbf{s}_j\|^2}\,\dfrac{\mathbf{s}_j}{\|\mathbf{s}_j\|}, \quad (3)$
where v j is the vector output of capsule j and s j is its total input. s j is a weighted sum over all the outputs u i of the capsules in the Primary Capsule layer, each multiplied by the corresponding transformation matrix W ij :
$\mathbf{s}_j = \sum_i \alpha_{ij}\,\hat{\mathbf{u}}_{j|i}, \qquad \hat{\mathbf{u}}_{j|i} = \mathbf{W}_{ij}\mathbf{u}_i. \quad (4)$
The routing procedure works as follows. The coefficient β ij measures the coupling between the i-th capsule of the Primary Capsule layer and the j-th capsule of the Detection Capsule layer. The β ij are initialized to zero, then they are iteratively updated by measuring the agreement between the current output v j of each capsule in layer j and the prediction û j|i produced by capsule i in the layer below. The agreement is computed as the scalar product
$c_{ij} = \mathbf{v}_j \cdot \hat{\mathbf{u}}_{j|i} \quad (5)$
between the aforementioned capsule outputs. It is a measure of how similar the directions (i.e., the properties of the entities they represent) of capsules i and j are. The β ij coefficients are treated as if they were log likelihoods, thus the agreement value is added to the value held at the previous routing step:
$\beta_{ij}(r+1) = \beta_{ij}(r) + c_{ij}(r) = \beta_{ij}(r) + \mathbf{v}_j \cdot \hat{\mathbf{u}}_{j|i}(r), \quad (6)$
where r represents the routing iteration. In this way the new values of all the coupling coefficients linking capsule i to the higher level capsules are computed. To ensure that the coupling coefficients α ij represent probabilities, the softmax function is applied to the β ij (which act as log priors) at the start of each new routing iteration. Formally:
$\alpha_{ij} = \dfrac{\exp(\beta_{ij})}{\sum_k \exp(\beta_{ik})}, \quad (7)$
so that $\sum_j \alpha_{ij} = 1$. Thus, α ij can be seen as the probability that the entity represented by Primary Capsule i is a part of the entity represented by Detection Capsule j, as opposed to any other capsule in the layer above.
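A minimal NumPy sketch of Eqs. (3)-(7) may help clarify the procedure; the capsule counts, dimensions and number of iterations below are illustrative, not the configuration used in the experiments.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # Eq. (3): shrink the vector length into [0, 1), keep its direction.
    n2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def route(u_hat, n_iter=3):
    """u_hat: predictions u_hat[j|i] of shape (M, K, G) --
    M input capsules, K output capsules of dimension G."""
    M, K, _ = u_hat.shape
    beta = np.zeros((M, K))                  # log priors, zero-initialized
    for _ in range(n_iter):
        e = np.exp(beta - beta.max(axis=1, keepdims=True))
        alpha = e / e.sum(axis=1, keepdims=True)         # Eq. (7)
        s = (alpha[..., None] * u_hat).sum(axis=0)       # Eq. (4)
        v = squash(s)                                    # Eq. (3)
        beta = beta + np.einsum('mkg,kg->mk', u_hat, v)  # Eqs. (5)-(6)
    return v, beta

v, _ = route(np.random.randn(8, 4, 16))      # M=8, K=4, G=16 (illustrative)
```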
2) Margin loss function: The length of the vector v j is used to represent the probability that the entity represented by capsule j exists. The CapsNet has to be trained to produce a long instantiation vector at the k-th capsule if the event that it represents is present in the input audio sequence. A separate margin loss is defined for each target class k as:
$L_k = T_k \max(0,\, m^+ - \|\mathbf{v}_k\|)^2 + \lambda\,(1 - T_k)\max(0,\, \|\mathbf{v}_k\| - m^-)^2, \quad (8)$
where T k = 1 if an event of class k is present, while λ is a down-weighting factor of the loss for absent sound event classes. m + , m − and λ are set equal to 0.9, 0.1 and 0.5, respectively, as suggested in [23]. The total loss is simply the sum of the losses of all the Detection Capsules.
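A direct NumPy transcription of Eq. (8) with the constants stated above could look as follows; v_norm stands for the lengths ‖v_k‖ of the Detection Capsule outputs, and the example values are hypothetical.

```python
import numpy as np

def margin_loss(v_norm, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """v_norm: lengths of the K Detection Capsules; targets: T_k in {0, 1}."""
    L = (targets * np.maximum(0.0, m_pos - v_norm) ** 2
         + lam * (1.0 - targets) * np.maximum(0.0, v_norm - m_neg) ** 2)
    return L.sum()  # total loss: sum over all Detection Capsules

# Hypothetical frame with K = 3 classes, of which only class 0 is active.
print(margin_loss(np.array([0.8, 0.3, 0.05]), np.array([1, 0, 0])))
```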
C. CapsNet for Polyphonic Sound Event Detection
The architecture of the neural network is shown in Fig. II-C. The first stages of the model are traditional CNN blocks which act as feature extractors on the input X. After each block, max-pooling is used to halve the dimensions only on the frequency axis. The feature maps obtained by the CNN layers are then fed to the Primary Capsule Layer that represents the lowest level of multi-dimensional entities. Basically, it is a convolutional layer with J · M filters, i.e., it contains M convolutional capsules with J kernels each. Its output is then reshaped and squashed using Eq. (3). The final layer, or Detection Capsule layer, is a time-distributed capsule layer (i.e., it applies the same weights and biases to each frame element) and it is composed of K densely connected capsule units of dimension G. Since the previous layer is also a capsule layer, the dynamic routing algorithm is used to compute the output. The background class was included in the set of K target events, in order to represent its instance with a dedicated capsule unit and train the system to recognize the absence of events. In the evaluation, however, we consider only the outputs relative to the target sound events. The model predictions are obtained by computing the Euclidean norm of the output of each Detection Capsule. These values represent the probabilities that one of the target events is active in a frame t of the input feature matrix X, thus we consider them as the network output predictions.
In [23], the authors propose a series of densely connected neuron layers stacked at the bottom of the CapsNet, with the aim of regularizing the weight training by reconstructing the input image X ∈ N 28×28 . Here, this technique entails an excessive complexity of the model to be trained, due to the higher number of units needed to reconstruct X ∈ R T ×F ×C , and yielded poor performance in our preliminary experiments. We decided, thus, to use dropout [25] and L 2 weight normalization [26] as regularization techniques, as done in [22].
III. EXPERIMENTAL SET-UP
In order to test the proposed method, we performed a series of experiments on three datasets provided to the participants of different editions of the DCASE challenge [27], [28]. We evaluated the results by comparing the system based on the Capsule architecture with the traditional CNN. The hyperparameters of each network have been optimized with a random search strategy [29]. Furthermore, we reported the baselines and the best state-of-the-art performance provided by the challenge organizers.
A. Dataset
We assessed the proposed method on three datasets: two containing stereo recordings from real-life environments, and one containing artificially generated monophonic mixtures of isolated sound events and real background audio.
In order to evaluate the proposed method in polyphonic real-life conditions, we used the TUT Sound Events 2016 & 2017 datasets, which were included in the corresponding editions of the DCASE Challenge. For the monophonic SED case study, we used the TUT Rare Sound Events 2017 dataset, which was used in task 2 of the DCASE 2017 Challenge.
1) TUT Sound Events 2016: The TUT Sound events 2016 (TUT-SED 2016) dataset (http://www.cs.tut.fi/sgn/arg/dcase2016/) consists of recordings from two acoustic scenes, respectively "Home" (indoor) and "Residential area" (outdoor), which we considered as two separate subsets. These acoustic scenes were selected by the challenge organizers to represent common environments of interest in applications for safety and surveillance (outside home) and human activity monitoring or home surveillance [28]. A total amount of around 54 and 59 minutes of audio is provided for the "Home" and "Residential area" scenarios, respectively. The sound events present in each recording were manually annotated without any further cross-verification, due to the high level of subjectivity inherent to the problem. For the "Home" scenario a total of 11 classes were defined, while for the "Residential Area" scenario 7 classes were annotated.
Each scenario of the TUT-SED 2016 has been divided into two subsets: a development dataset and an evaluation dataset. The split was done based on the number of examples available for each sound event class. In addition, for the development dataset a cross-validation setup is provided in order to easily compare the results of different approaches on this dataset. The setup consists of 4 folds, so that each recording is used exactly once as test data. In detail, the "Residential area" sound events data consist of 5 recordings in the evaluation set and 12 recordings in the development set, while the "Home" sound events data consist of 5 recordings in the evaluation set and 10 recordings in the development set, in turn divided into 4 folds as training and validation subsets.
2) TUT Sound Events 2017: This dataset is a subset of the TUT Acoustic scenes 2016 dataset [28], from which the TUT-SED 2016 dataset was also taken. Thus, the recording setup, the annotation procedure, the dataset splitting, and the cross-validation setup are the same as described above. The 6 target sound event classes were selected to represent common sounds related to human presence and traffic, and they include brakes squeaking, car, children, large vehicle, people speaking, and people walking. The evaluation set of the TUT-SED 2017 consists of 29 minutes of audio, whereas the development set is composed of 92 minutes of audio which are employed in the cross-validation procedure.
3) TUT Rare Sound Events 2017: The TUT Rare Sound Events 2017 (TUT-Rare 2017) dataset [27] consists of isolated sounds of three different target event classes (respectively, baby crying, glass breaking and gunshot) and of 30-second long recordings of everyday acoustic scenes serving as background, such as park, home, street, cafe, train, etc. [28]. In this case we consider monophonic SED, since the sound events are artificially mixed with the background sequences without overlap. In addition, the event potentially present in each test file is known a priori, thus it is possible to train different models, each one specialized for one sound event. In the development set, we used 750, 750 and 1250 sequences for training the baby cry, glass break and gunshot models, respectively, while we used 100 sequences as validation set and 500 sequences as test set for all of them. In the evaluation setup, the training and test sequences of the development set are combined into a single training set, while the validation set is the same used in the development set. The system is evaluated against an "unseen" set of 1500 samples (500 for each target class) with a sound event presence probability equal to 0.5 for each class.
B. Evaluation Metrics
In this work we used the Error Rate (ER) as the primary evaluation metric to ensure comparability with the reference systems. In particular, for the evaluations on the TUT-SED 2016 and 2017 datasets we consider a segment-based ER with a one-second segment length, while for the TUT-Rare 2017 the evaluation metric is the event-based error rate calculated using the onset-only condition with a collar of 500 ms. In the segment-based ER the ground truth and system output are compared on a fixed time grid, thus sound events are marked as active or inactive in each segment. For the event-based ER the ground truth and system output are compared at the event instance level.
The ER score is calculated in a single time frame of one second length from intermediate statistics, i.e., the number of substitutions (S(t 1 )), insertions (I(t 1 )), deletions (D(t 1 )) and active sound events from annotations (N (t 1 )) for a segment t 1 . Specifically: 1) substitutions S(t 1 ) are the number of ground truth events for which we have a false positive and a false negative in the same segment; 2) insertions I(t 1 ) are events in the system output that are not present in the ground truth, thus the false positives which cannot be counted as substitutions; 3) deletions D(t 1 ) are events in the ground truth that are not correctly detected by the system, thus the false negatives which cannot be counted as substitutions. These intermediate statistics are accumulated over the segments of the whole test set to compute the evaluation metric ER. Thus, the total error rate is calculated as:
$ER = \dfrac{\sum_{t_1=1}^{T} S(t_1) + \sum_{t_1=1}^{T} I(t_1) + \sum_{t_1=1}^{T} D(t_1)}{\sum_{t_1=1}^{T} N(t_1)}, \quad (9)$
where T is the total number of segments.
If there are multiple scenes in the dataset, as in the TUT-SED 2016, the evaluation metrics are calculated for each scene separately and the results are then presented as the average across the scenes. A detailed and visualized explanation of the segment-based ER score in the multi-label setting can be found in [30].
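As a sketch of how Eq. (9) can be computed from binary reference and prediction matrices (one row per one-second segment), consider the following; it follows the intermediate statistics defined above, while the toy matrices are illustrative.

```python
import numpy as np

def segment_error_rate(ref, pred):
    """ref, pred: binary matrices of shape (segments, classes)."""
    fn = np.logical_and(ref == 1, pred == 0).sum(axis=1)  # missed events
    fp = np.logical_and(ref == 0, pred == 1).sum(axis=1)  # extra events
    S = np.minimum(fn, fp)       # substitutions: one FN paired with one FP
    D = fn - S                   # deletions
    I = fp - S                   # insertions
    N = ref.sum(axis=1)          # active events in the reference
    return (S.sum() + D.sum() + I.sum()) / N.sum()

ref = np.array([[1, 1, 0], [0, 1, 0]])
pred = np.array([[1, 0, 1], [0, 1, 0]])
print(segment_error_rate(ref, pred))  # (1 + 0 + 0) / 3 ≈ 0.33
```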
C. Comparative Algorithms
Since the datasets we used were employed to develop and evaluate the algorithms proposed from the participants of the DCASE challenges, we can compare our results with the most recent approaches in the state-of-the-art. In addition, each challenge task came along with a baseline method that consists in a basic approach for the SED. It represents a reference for the participants of the challenges while they were developing their systems.
1) TUT-SED 2016: The baseline system is based on mel frequency cepstral coefficient (MFCC) acoustic features and multiple GMM-based classifiers. In detail, for each event class, a binary classifier is trained using the audio segments annotated as belonging to the model representing the event class, and the rest of the audio for the model which represents the negative class. The decision is based on the likelihood ratio between the positive and negative models for each individual class, with a sliding window of one second. To the best of our knowledge, the best performing method for this dataset is an algorithm we proposed in 2017 [31], based on binaural MFCC features and a Multi Layer Perceptron (MLP) neural network used as classifier. The detection task is performed by an adaptive energy Voice Activity Detector (VAD) which precedes the MLP and determines the starting and ending points of an event-active audio sequence.
2) TUT-SED 2017: In this case the baseline method relies on an MLP architecture using 40 LogMels as audio representation [27]. The network is fed with a feature vector comprising a 5-frame temporal context. The neural network is composed of two dense layers of 50 hidden units per layer with 20% dropout, while the network output layer contains K sigmoid units (where K is the number of classes) that can be active at the same time and represent the network prediction of event activity for each context window. The state-of-the-art algorithm is based on the CRNN architecture [32]. The authors compared both monaural and binaural acoustic features, observing that binaural features in general have performance similar to single-channel features on the development dataset, although the best result on the evaluation dataset is obtained using monaural LogMels as network inputs. According to the authors, this may suggest that the dataset was not large enough to train the CRNN fed with this kind of features.
3) TUT-Rare 2017: The baseline [28] and the state-of-the-art methods of the DCASE 2017 challenge (Rare-SED) were based on architectures very similar to the one employed for the TUT-SED 2016 and described above. For the baseline method, the only difference lies in the output layer, which in this case is composed of a single sigmoid unit. The first-ranked algorithm [33] takes 128 LogMels as input and processes them frame-wise by means of a CRNN with 1D filters in the first stage.
D. Neural Network configuration
We performed a hyperparameter search by running a series of experiments over predetermined ranges. We selected the configuration that leads, for each network architecture, to the best results from the cross-validation procedure on the development dataset of each task, and used this architecture to compute the results on the corresponding evaluation dataset.
The number and shape of the convolutional layers, the non-linear activation function, the regularizers, in addition to the capsule dimensions and the maximum number of routing iterations, have been varied for a total of 100 configurations. Details of the searched hyperparameters and their ranges are reported in Table I. The neural networks were trained with the AdaDelta stochastic gradient-based optimization algorithm [34] on the margin loss function, for a maximum of 100 epochs with a batch size equal to 20. The optimizer hyperparameters were set according to [34] (i.e., initial learning rate lr = 1.0, ρ = 0.95, ε = 10⁻⁶). The trainable weights were initialized according to the glorot-uniform scheme [35], and an early stopping strategy was employed during the training in order to avoid overfitting [25]. The algorithm has been implemented in the Python language using Keras [36] and Tensorflow [37] as deep learning libraries, while Librosa [38] has been used for feature extraction.
For the CNN models, we performed a similar random hyperparameter search procedure for each dataset, considering only the first two blocks of Table I and replacing the capsule layers with feed-forward layers with sigmoid activation functions.
On the TUT-SED 2016 and 2017 datasets, the event activity probabilities are simply thresholded at a fixed value equal to 0.5, in order to obtain the binary activity matrix used to compute the reference metric. On the TUT-Rare 2017 the network output signal is post-processed as proposed in [39]: it is convolved with an exponential decay window, then processed with a sliding median filter with a local window size, and finally a threshold is applied.
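The two post-processing schemes just described can be sketched as follows; the 0.5 threshold is the one stated above, while the decay constant, window length and median-filter size are illustrative and do not reproduce the exact parameters of [39].

```python
import numpy as np
from scipy.signal import medfilt

def binarize(probs, thr=0.5):
    """TUT-SED 2016/2017: simple fixed threshold per frame and class."""
    return (probs > thr).astype(int)

def rare_postprocess(probs, decay=0.99, length=50, kernel=11, thr=0.5):
    """TUT-Rare 2017: decay-window smoothing, median filter, threshold."""
    win = decay ** np.arange(length)               # exponential decay window
    smoothed = np.convolve(probs, win / win.sum(), mode="same")
    smoothed = medfilt(smoothed, kernel_size=kernel)  # sliding median filter
    return (smoothed > thr).astype(int)
```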
IV. RESULTS
In this section, we present the results for all the datasets and experiments described in Section III. The evaluation of the Capsule- and CNN-based methods has been conducted on the development sets of each examined dataset using random combinations of the hyperparameters given in Table I.
A. TUT-SED 2016
Results on the TUT-SED 2016 dataset are shown in Table III, while Table II reports the configurations which yielded the best performance on the evaluation dataset. All the selected models use ReLU as the non-linear activation function and the dropout technique as weight regularization, while batch-normalization applied after each convolutional layer seems to be effective only for the CapsNet. In Table III, results are reported for each combination of architecture and features we evaluated. The best performing setups are highlighted in bold face. The use of the STFT as acoustic representation proves beneficial for both architectures with respect to the LogMels. In particular, the CapsNet obtains the lowest ER in the cross-validation performed on the development dataset when it is fed with the binaural version of such features. On the two scenarios of the evaluation dataset, a model based on CapsNet and binaural STFT obtains an averaged ER equal to 0.69, which is largely below both the challenge baseline [28] (-0.19) and the best score reported in the literature [31] (-0.10). The comparative method based on CNNs seems not to fit at all when LogMels are used as input, while its performance is aligned with the challenge baseline based on GMM classifiers when the models are fed with the monaural STFT. This discrepancy can be motivated by the enhanced ability of the CapsNet to exploit small training datasets, in particular due to the effect of the routing mechanism on the weight training. In fact, the TUT-SED 2016 is composed of a small amount of audio and the sound events occur sparsely (i.e., only 49 minutes of the total audio contain at least one active event), thus the overall results of the comparative methods (CNNs, Baseline and SoA) on this dataset are quite low compared to the other datasets.
Another CapsNet property worth highlighting is the lower number of free parameters composing the models compared to the evaluated CNNs. As shown in Table II, the considered architectures have 267K and 252K free parameters respectively for the "Home" and the "Residential area" scenarios. This is a relatively low number of parameters to be trained (e.g., a popular deep architecture for image classification such as AlexNet [40] is composed of 60M parameters), and the best performing CapsNets of each considered scenario have even fewer parameters than the CNNs (-22% and -64% respectively for the "Home" and the "Residential area" scenarios). Thus, the high performance of the CapsNet can be explained by the architectural advantage rather than by model complexity. In addition, there can be a significant performance shift for the same type of network with the same number of parameters, which means that a suitable hyperparameter search (e.g., over the number of filters in the convolutional layers or the dimension of the capsule units) is crucial in finding the best performing network structure.
1) Closer Look at Network Outputs: A comparative study of the neural network outputs, which are regarded as event activity probabilities, is presented in Fig. 2. The monaural STFT of a 40-second sequence from the "Residential area" dataset is shown along with the event annotations and the network outputs of the best performing CapsNet and CNN models. For this example, we chose the monaural STFT as input feature because it generally yields the best results over all the considered datasets. Fig. 2 shows bird singing lasting for the whole sequence and correctly detected by both architectures. When the car passing by event overlaps the bird singing, the CapsNet detects its presence more clearly. The people speaking event is only slightly detected by both models, while the object banging activates the corresponding capsule exactly and only in correspondence with the event annotation. It must be noted that the dataset is composed of unverified, manually labelled real-life recordings, which may present a degree of subjectivity, thus affecting the training. Nevertheless, the CapsNet exhibits remarkable detection capability, especially in the condition of overlapping events, while the CNN outputs are decidedly more "blurred" and the event people walking is wrongly detected in this sequence.
B. TUT-SED 2017
The bottom of Table III reports the results obtained on the TUT-SED 2017. As for the TUT-SED 2016, the best performing models on the development dataset are those fed with the binaural STFT of the input signal. In this case we can also observe an interesting performance from the CNNs, which on the evaluation dataset obtain a lower ER (i.e., equal to 0.65) with respect to the state-of-the-art algorithm [32], based on CRNNs. The CapsNet confirms its effectiveness and obtains the lowest ER, equal to 0.58, with LogMel features, although with a slight margin with respect to the other inputs (i.e., -0.03 compared to the STFT features, -0.06 compared to both the binaural versions of the LogMel and STFT spectrograms).
It is interesting to notice that in the development cross-validation, the CapsNet models yielded significantly better performance with respect to the other reported approaches, while the CNNs had decidedly worse performance. On the evaluation dataset, it was not possible to use the early-stopping strategy, thus the ER scores of the CapsNets suffer accordingly. Notwithstanding this weakness, the absolute performance obtained both with monaural and binaural spectral features is consistent and improves the state-of-the-art result, with a reduction of the ER of up to 0.21 in the best case. This is particularly evident in Fig. 3, which shows the output of the two best performing systems for a sequence of approximately 20 seconds length containing highly overlapping sounds. The event classes "people walking" and "large vehicle" overlap for almost the whole duration of the sequence and they are well detected by the CapsNet, although they are of different nature: the "large vehicle" has a typical timbre and is almost stationary, while the class "people walking" comprises impulsive and desultory sounds. The CNN seems not to be able to distinguish between the "large vehicle" and the "car" classes, detecting confidently only the latter, while the activation corresponding to the "people walking" class is modest. The presence of the "brakes squeaking" class, which has a specific spectral profile mostly located in the highest frequency bands (as shown in the spectrogram), is detected only by the CapsNet. We regard this as a concrete experimental validation of the effectiveness of routing. The number of free parameters amounts to 223K for the best configuration shown in Table II, similar to those found for the TUT-SED 2016, which corresponds also in this case to a reduction, equal to 35%, with respect to the best CNN layout.
C. TUT-Rare SED 2017
The advantage given by the routing procedure to the CapsNet is particularly effective in the case of polyphonic SED. This is confirmed by the results obtained on the TUT-Rare SED 2017, which are shown in Table V. In this case the metric is no longer segment-based, but the event-based ER calculated using the onset-only condition. We performed a separate random search for each of the three sound event classes, both for CapsNets and CNNs, and we report the score averaged over the three classes. The setups that obtained the best performance on the evaluation dataset are shown in Table IV. This is the largest dataset we evaluated, and its main characteristic is the strong imbalance between the amount of background sound and that of the target sound events.
From the analysis of the partial results on the evaluation set (unfortunately not included for the sake of conciseness) we notice that both architectures achieve the best performance on the glass break sound (ER equal to 0.25 and 0.24 respectively for the CNNs and the CapsNet with LogMel features), due to its clear spectral fingerprint compared to the background sound. The worst performing class is the gunshot (ER equal to 0.58 for the CapsNet), although the noise produced by different instances of this class involves similar spectral components.
(Table IV residue — best per-class configurations: kernel shape 3 × 3; Primary Capsule dimension J = 8; Detection Capsule dimension G = 14, 14 and 6; routing iterations 5, 5 and 1; number of parameters 131K/84K, 131K/84K and 30K/211K.)
The low performance is probably due to the fast decay of this sound, which means that in this case the routing procedure is not sufficient to avoid confusing the gunshot with other background noises, especially in the case of dataset imbalance and low event-to-background ratio. A solution to this issue can be found in the combination of the CapsNet with RNN units, as proposed in [19] for the CNNs, where the CRNN yields an efficient modelling of the gunshot and improves the detection abilities even in polyphonic conditions. The babycry, which consists of short, harmonic sounds, is detected almost equally well by the two architectures, due to the frequency shift invariance provided by the convolutional kernel processing. Finally, the CNN shows better generalization performance than the CapsNet, although its ER score is far from the state of the art, which involves the use of the aforementioned CRNNs [33] or a hierarchical framework [39]. In addition, in this case it is the CNN models that have a reduced number of weights to train (-36%) with respect to the CapsNets, except in the "gunshot" case, which, as mentioned, is also the configuration that obtains the worst results.
D. Alternative Dynamic Routing for SED
We observed that the original routing procedure implies the initialization of the coefficients β ij to zero each time the procedure is restarted, i.e., after each input sample has been processed. This is reasonable in the case of image classification, for which the CapsNet was originally proposed. In the case of an audio task, we clearly expect a higher correlation between samples belonging to adjacent temporal frames X. We thus investigated the possibility of initializing the coefficients β ij to zero only at the very first iteration, while for each subsequent X assigning them the last values they had at the end of the previous iterative procedure. We experimented with this variant considering the best performing models in the analyzed polyphonic SED scenarios, taking into account only the systems fed with the monaural STFT. As shown in Table VI, the modification we propose to the routing procedure is effective in particular on the evaluation datasets, conferring improved generalization properties on the models we tested even without a dedicated hyperparameter optimization.
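A sketch of this modified initialization, warm-starting β from the previous context window instead of re-zeroing it, could look like the following; shapes and the number of windows are illustrative.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    n2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def route_from(u_hat, beta, n_iter=3):
    """Routing-by-agreement, Eqs. (3)-(7), warm-started from a given beta."""
    for _ in range(n_iter):
        e = np.exp(beta - beta.max(axis=1, keepdims=True))
        alpha = e / e.sum(axis=1, keepdims=True)
        v = squash((alpha[..., None] * u_hat).sum(axis=0))
        beta = beta + np.einsum('mkg,kg->mk', u_hat, v)
    return v, beta

beta = np.zeros((8, 4))                     # zeroed only at the very start
for u_hat in np.random.randn(5, 8, 4, 16):  # 5 consecutive context windows
    v, beta = route_from(u_hat, beta)       # beta carried across windows
```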
V. CONCLUSION
In this work, we proposed to apply a novel neural network architecture, the CapsNet, to the polyphonic SED task. The architecture is based on both convolutional and capsule layers. The convolutional layers extract high-level time-frequency feature maps from input matrices which provide an acoustic spectral representation with long temporal context. The obtained feature maps are then used to feed the Primary Capsule layer, which is connected to the Detection Capsule layer that finally extracts the event activity probabilities. These last two layers are involved in the iterative routing-by-agreement procedure, which computes the outputs based on a measure of likelihood between a capsule and its parent capsules. This architecture thus combines the ability of convolutional layers to learn local translation-invariant filters with the ability of capsules to learn part-whole relations through the routing procedure.
Part of the novelty of this work resides in the adaptation of the CapsNet architecture to the audio event detection task, with special care devoted to the input data, the interconnection of the layers and the regularization techniques. The routing procedure is also modified to follow a more appropriate acoustic rationale, yielding a further average performance improvement of 6% across the polyphonic SED tasks.
An extensive evaluation of the algorithm is presented, with comparisons to recent state-of-the-art methods on three different datasets. The experimental results demonstrate that the use of the dynamic routing procedure during training is effective and provides a significant performance improvement in the case of overlapping sound events compared to traditional CNNs and other established methods in polyphonic SED. Interestingly, the CNN-based method obtained the best performance in the monophonic SED case study, which emphasizes that the strength of the CapsNet architecture lies in dealing with overlapping sounds. We showed that this model is particularly effective with small-sized datasets, such as TUT-SED 2016, which contains a total of 78 minutes of audio for model development, of which one third is background noise.
Furthermore, the number of trainable parameters is reduced with respect to other deep learning architectures, confirming the architectural advantage of the capsule-based design also in the task of polyphonic SED.
Despite the improvement in performance, we identified a limitation of the proposed method. As presented in Section IV, the performance of the CapsNet is more sensitive to the number of training iterations. This affects the generalization capabilities of the algorithm, yielding a greater relative deterioration of the performance on the evaluation datasets with respect to the comparative methods.
The results we observed in this work are consistent with those reported for many other classification tasks in various domains [41]-[43], and prove that the CapsNet is an effective approach which extends the well-established representation capabilities of CNNs also to the audio field. As future work, regularization methods can be investigated to overcome the lack of generalization which seems to affect CapsNets. Furthermore, regarding the SED task, the addition of recurrent units may be explored to enhance the detection of particular (e.g., impulsive) sound events in real-life audio, and the recently proposed variant of routing based on the Expectation Maximization (EM) algorithm [44] can be investigated in this context.
| 7,491 |
1810.06325
|
2897282582
|
Artificial sound event detection (SED) aims to mimic the human ability to perceive and understand what is happening in the surroundings. Nowadays, deep learning offers valuable techniques for this goal, such as convolutional neural networks (CNNs). The capsule neural network (CapsNet) architecture has been recently introduced in the image processing field with the intent to overcome some of the known limitations of CNNs, specifically regarding the scarce robustness to affine transformations (i.e., perspective, size, and orientation) and the detection of overlapped images. This motivated the authors to employ CapsNets to deal with the polyphonic SED task, in which multiple sound events occur simultaneously. Specifically, we propose to exploit the capsule units to represent a set of distinctive properties for each individual sound event. Capsule units are connected through a so-called dynamic routing that encourages learning part-whole relationships and improves the detection performance in a polyphonic context. This paper reports extensive evaluations carried out on three publicly available datasets, showing how the CapsNet-based algorithm not only outperforms standard CNNs but also achieves the best results with respect to the state-of-the-art algorithms.
|
The authors of @cite_21 show that CapsNets outperform state-of-the-art approaches based on CNNs for digit recognition in the MNIST dataset case study. They designed the CapsNet to learn how to assign the suitable partial information to the entities that the neural network has to predict in the final classification. This property should overcome the limitations of solutions such as max-pooling, currently employed in CNNs to provide local translation invariance, but often reported to cause excessive information loss. Theoretically, the introduction of dynamic routing can supply invariance for any property captured by a capsule, also allowing the model to be adequately trained without requiring extensive data augmentation or dedicated domain adaptation procedures.
|
{
"abstract": [
"A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule."
],
"cite_N": [
"@cite_21"
],
"mid": [
"2963703618"
]
}
|
Polyphonic Sound Event Detection by using Capsule Neural Networks
|
HUMAN cognition relies on the ability to sense, process, and understand the surrounding environment and its sounds. Although the skill of listening to sounds and understanding their origin is so natural for living beings, it remains a very challenging task for computers.
Sound event detection (SED), or acoustic event detection, aims to mimic this cognitive feature by means of artificial systems. Basically, a SED algorithm is designed to detect the onset and offset times for a variety of sound events captured in an audio recording and to associate a textual descriptor, i.e., a label, with each of these events.
In recent years, SED has received interest from the computational auditory scene analysis community [1], due to its potential in several engineering applications. Indeed, the automatic recognition of sound events and scenes can have a considerable impact in a wide range of applications where sound or sound sensing is advantageous with respect to other modalities. This is the case of acoustic surveillance [2], healthcare monitoring [3], [4] or urban sound analysis [5], where the short duration of certain events (e.g., a human fall, a gunshot or glass breaking) or personal privacy concerns motivate the exploitation of audio information rather than, e.g., image processing. In addition, audio processing is often less computationally demanding compared to other multimedia domains, thus embedded devices can easily be equipped with microphones and sufficient computational capacity to locally process the captured signal. These could be smart home devices for home automation purposes or sensors for wildlife and biodiversity monitoring (e.g., bird call detection [6]).
SED algorithms in a real-life scenario face many challenges. These include simultaneous events, environmental noise and events of the same class produced by different sources [7]. Since multiple events are very likely to overlap, a polyphonic SED algorithm, i.e., an algorithm able to detect multiple simultaneous events, needs to be designed. Finally, the effects of noise and intra-class variability represent further challenges for SED in real-life situations.
Traditionally, the polyphonic acoustic event analysis has been approached with statistical modelling methods, including Hidden Markov Models (HMM) [8], Gaussian Mixture Models (GMM) [9], Non-negative Matrix Factorization (NMF) [10] and support vector machines (SVM) [11]. In the recent era of the "Deep Learning", different neural network architectures have been successfully used for sound event detection and classification tasks, including feed-forward neural networks (FNN) [12], deep belief networks [13], convolutional neural networks (CNNs) [14] and Recurrent Neural Networks (RNNs) [15]. In addition, these architectures laid the foundation for end-to-end systems [16], [17], in which the feature representation of the audio input is automatically learnt from the raw audio signal waveforms.
B. Contribution
The proposed system is a fully data-driven approach based on the CapsNet deep neural architecture presented by Sabour et al. [23]. This architecture has shown promising results on the classification of highly overlapping handwritten digits. In the audio field, a similar condition can be found in the detection of multiple concomitant sound events from acoustic spectral representations; therefore, we propose to employ the CapsNet for polyphonic SED in real-life recordings. The novel computational structure based on capsules, combined with the routing mechanism, allows the network to be invariant to intra-class affine transformations and to identify part-whole relationships between data features. In the SED case study, it is hypothesized that this characteristic confers to the CapsNet the ability to effectively select the most representative spectral features of each individual sound event and separate them from the overlapped descriptions of the other sounds in the mixture. This hypothesis is supported by the previously mentioned related works. Specifically, in [21], the CapsNet is exploited in order to obtain the prediction of the presence of heterogeneous polyphonic sounds (i.e., bird calls) on unseen audio files recorded in various conditions. In [22] the dynamic routing yields promising results for SED with a weakly labeled training dataset, thus with unavailable ground truths for the onset and offset times of the sound events. The algorithm has to detect sound events without supervision, and in this context the routing can be considered as an attention mechanism.
In this paper, we present an extensive analysis of SED conducted on real-life audio datasets and compare the results with state-of-the-art methods. In addition, we propose a variant of the dynamic routing procedure which takes into account the temporal dependence of adjacent frames. The proposed method outperforms previous SED approaches in terms of detection error rate in the case of polyphonic SED, while it has performance comparable to CNNs in the case of monophonic SED.
The whole system is composed of a feature extraction stage and a detection stage. The feature extraction stage transforms the audio signal into acoustic spectral features, while the second stage processes these features to detect the onset and offset times of specific sound events. In this latter stage we include the capsule units. The network parameters are obtained by supervised learning using annotations of sound events activity as target vectors. We have evaluated the proposed method against three datasets of real-life recordings and we have compared its performance both with the results of experiments with a traditional CNN architecture, and with the performance of well-established algorithms which have been assessed on the same datasets.
The rest of the paper is organized as follows. In Section II the task of polyphonic SED is formally described and the stages of the approach we propose are detailed, including a presentation of the characteristics of the CapsNet architecture. In Section III, we present the evaluation set-up used to assess the performance of the proposed algorithm and of the comparative methods. In Section IV the results of the experiments are discussed and compared with baseline methods. Section V finally presents our conclusions for this work.
II. PROPOSED METHOD
The aim of polyphonic SED is to find and classify any sound event present in an audio signal. The algorithm we propose is composed of two main stages: sound representation and polyphonic detection. In the sound representation stage, the audio signal is transformed into a two-dimensional time-frequency representation to obtain, for each frame t of the audio signal, a feature vector x_t ∈ R^F, where F represents the number of frequency bands.
Sound events possess temporal characteristics that can be exploited for SED; certain events can be efficiently distinguished by their time evolution. Impulsive sounds are extremely compact in time (e.g., gunshot, object impact), while other sound events have indefinite length (e.g., wind blowing, people walking). Other events can be distinguished by their spectral evolution (e.g., bird singing, car passing by). Long-term time domain information is very beneficial for SED and motivates the use of a temporal context which allows the algorithm to extract information from a chronological sequence of input features. Consequently, the features are presented as a context window matrix X_{t:t+T−1} ∈ R^{T×F×C}, where T ∈ N is the number of frames that defines the sequence length of the temporal context, F ∈ N is the number of frequency bands and C is the number of audio channels.
In the SED stage, the task is to estimate the probabilities p(Y_{t:t+T−1} | X_{t:t+T−1}, θ) ∈ R^{T×K}, where θ denotes the parameters of the neural network. The network outputs, i.e., the event activity probabilities, are then compared with a threshold in order to obtain event activity predictions Ŷ_{t:t+T−1} ∈ N^{T×K}. The parameters θ are trained by supervised learning, using the frame-based annotation of the sound event classes as target output: if class k is active during frame t, Y(t, k) is equal to 1, and is set to 0 otherwise. In the polyphonic SED case this target output matrix can have multiple non-zero elements among the K classes in the same frame t, since several classes can be simultaneously present.
Indeed, polyphonic SED can be formulated as a multi-label classification problem in which the sound event classes are detected by multi-label annotations over consecutive time frames. The onset and offset times for each sound event are obtained by combining the classification results over consecutive time frames. The trained model is then used to predict the activity of the sound event classes in an audio stream without any further post-processing operations or prior knowledge of the event locations.
A. Feature Extraction
For our purpose, we exploit two acoustic spectral representations, the magnitude of the Short Time Fourier Transform (STFT) and the LogMel coefficients, obtained from all the audio channels and extensively used in other SED algorithms. Except where stated otherwise, we study the performance of binaural audio features and compare it with that of features extracted from a single-channel audio signal. In all cases, we operate with audio signals sampled at 16 kHz and we calculate the STFT with a frame size equal to 40 ms and a frame step equal to 20 ms. Furthermore, the audio signals are normalized to the range [−1, 1] in order to have the same dynamic range for all the recordings.
The STFT is computed on 1024 points for each frame, while the LogMel coefficients are obtained by filtering the STFT magnitude spectrum with a filter-bank composed of 40 filters evenly spaced on the mel frequency scale. In both cases, the logarithm of the energy of each frequency band is computed. The input matrix X_{t:t+T−1} concatenates T = 256 consecutive STFT or LogMel vectors for each channel C = {1, 2}, thus the resulting feature tensor is X_{t:t+T−1} ∈ R^{256×F×C}, where F is equal to 513 for the STFT and to 40 for the LogMels. The range of feature values is then normalized according to the mean and the standard deviation computed on the training sets of the neural networks.
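As an illustration of this front-end, the following Python sketch (assuming the Librosa library cited in Section III; function and variable names are ours) computes both representations with the parameters given above:

import numpy as np
import librosa

def extract_features(wav_path, feature_type="logmel"):
    # 16 kHz audio, 40 ms frames (640 samples), 20 ms hop (320 samples)
    y, sr = librosa.load(wav_path, sr=16000, mono=True)
    y = y / np.max(np.abs(y))                      # normalize to [-1, 1]
    spec = np.abs(librosa.stft(y, n_fft=1024, hop_length=320, win_length=640))
    if feature_type == "logmel":
        mel = librosa.feature.melspectrogram(S=spec**2, sr=sr, n_mels=40)
        feats = librosa.power_to_db(mel)           # log energy of 40 mel bands
    else:
        feats = np.log(spec + 1e-10)               # log-magnitude STFT, 513 bins
    return feats.T                                 # shape (frames, F)

Context window matrices X_{t:t+T−1} are then obtained by slicing 256 consecutive rows of the returned matrix and stacking the channels.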
B. CapsNet Architecture
The CapsNet architecture relies on the CNN architecture and includes its computational units in the first layers of the network as invariant feature extractors from the input, hereafter referred to as X, omitting the subscript t:t+T−1 for simplicity of notation.
The essential unit of a CNN model is named kernel and is composed of multiple neurons which process the whole input matrix by computing the linear convolution between the input and the kernel itself. The outputs of a CNN layer are called feature maps, and they represent translated replicas of high-level features. The feature maps are obtained by applying a nonlinear function to the sum of a bias term and the linearly filtered version of the input data. Denoting with W_m ∈ R^{K_1^m × K_2^m} the m-th kernel and with b_m ∈ R^{T×F} the bias vector of a generic convolutional layer, the m-th feature map H_m ∈ R^{T×F} is given by:
H_m = φ(W_m ∗ X + b_m),    (1)
where ∗ represents the convolution operation and φ(·) the differentiable nonlinear activation function. The coefficients of W_m and b_m are learned during model training. The dimension of the m-th feature map H_m depends on the zero padding of the input tensor: here, padding is performed in order to preserve the dimension of the input. Moreover, Eq. (1) is typically followed by a max-pooling layer, which in this case operates only on the frequency axis. Following Hinton's preliminary works [24], in the CapsNet presented in [23] two layers are divided into many small groups of neurons called capsules. In those layers, the scalar-output feature detectors of CNNs are replaced with vector-output capsules and the dynamic routing, or routing-by-agreement, algorithm is used in place of max-pooling, in order to replicate learned knowledge across space. Formally, we can rewrite Eq. (1) as
H = [ α_{11} W_{11} X_1 + ⋯ + α_{M1} W_{1M} X_M ;
      ⋮ ;
      α_{1K} W_{K1} X_1 + ⋯ + α_{MK} W_{KM} X_M ].    (2)
In Eq. (2), (W ∗ X) has been partitioned into K groups, or capsules, so that each row in the column vector corresponds to an output capsule (the bias term b has been omitted for simplicity). Similarly, X has been partitioned into M capsules, where X_i denotes input capsule i, and W has been partitioned into submatrices W_{ij} called transformation matrices. Conceptually, a capsule incorporates a set of properties of a particular entity that is present in the input data. With this purpose, the coefficients α_{ij} have been introduced. They are called coupling coefficients, and if we set all α_{ij} = 1, Eq. (1) is obtained again. The coefficients α_{ij} affect the learning dynamics with the aim of representing the amount of agreement between an input capsule and an output capsule. In particular, they measure how likely capsule i is to activate capsule j, so the value of α_{ij} should be relatively accentuated if the properties of capsule i coincide with the properties of capsule j in the layer above. The coupling coefficients are calculated by the iterative process of dynamic routing to fulfill the idea of assigning parts to wholes. Capsules in the higher layers should comprehend capsules in the layer below in terms of the entity they identify, while dynamic routing iteratively attempts to find these associations and encourages capsules to learn features that ensure these connections.
In this work, we employ two layers composed of capsule units, and we denote them as Primary Capsules for the lower layer and Detection Capsules for the output layer throughout the rest of the paper.
1) Dynamic Routing: After giving a qualitative description of routing, we describe the method used in [23] to compute the coupling coefficients. The activation of a capsule unit is a vector which encodes the properties of the entity it represents in its direction. The vector's magnitude instead indicates the probability that the entity represented by the capsule is present in the current input. To interpret the magnitude as a probability, a squashing nonlinear function is used, which is given by:
v_j = (‖s_j‖² / (1 + ‖s_j‖²)) · (s_j / ‖s_j‖),    (3)
where v_j is the vector output of capsule j and s_j is its total input. s_j is a weighted sum over all the predictions û_{j|i}, obtained by multiplying the outputs u_i of the capsules in the Primary Capsule layer by the transformation matrices W_{ij}:
s_j = Σ_i α_{ij} û_{j|i},    û_{j|i} = W_{ij} u_i.    (4)
The routing procedure works as follows. The coefficient β_{ij} measures the coupling between the i-th capsule of the Primary Capsule layer and the j-th capsule of the Detection Capsule layer. The β_{ij} are initialized to zero, then they are iteratively updated by measuring the agreement between the current output v_j of each capsule j and the prediction û_{j|i} produced by capsule i in the layer below. The agreement is computed as the scalar product
c_{ij} = v_j · û_{j|i},    (5)
between the aforementioned capsule outputs. It is a measure of how similar the directions (i.e., the properties of the entity they represent) of capsules i and j are. The β_{ij} coefficients are treated as if they were log likelihoods, thus the agreement value is added to the value held at the previous routing step:
β_{ij}(r + 1) = β_{ij}(r) + c_{ij}(r) = β_{ij}(r) + v_j · û_{j|i}(r),    (6)
where r represents the routing iteration. In this way the new values of all the coupling coefficients linking capsule i to higher level capsules are computed. To turn the log priors β_{ij} into coupling coefficients α_{ij} that behave as probabilities, the softmax of β_{ij} is computed at the start of each new routing iteration. Formally:
α_{ij} = exp(β_{ij}) / Σ_k exp(β_{ik}),    (7)
so Σ_j α_{ij} = 1. Thus, α_{ij} can be seen as the probability that the entity represented by Primary Capsule i is a part of the entity represented by Detection Capsule j, as opposed to any other capsule in the layer above.
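For concreteness, a minimal NumPy sketch of Eqs. (3)-(7) is given below; shapes, variable names and the fixed iteration count are illustrative assumptions:

import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Eq. (3): shrink short vectors towards 0 and long vectors towards length 1
    norm2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, n_iters=3):
    # u_hat: predictions û_{j|i} of shape (M, K, G) for M input capsules and
    # K output capsules of dimension G; returns the output vectors v of shape (K, G)
    M, K, _ = u_hat.shape
    beta = np.zeros((M, K))                                     # log priors
    for _ in range(n_iters):
        alpha = np.exp(beta) / np.exp(beta).sum(axis=1, keepdims=True)  # Eq. (7)
        s = (alpha[..., None] * u_hat).sum(axis=0)              # Eq. (4)
        v = squash(s)                                           # Eq. (3)
        beta = beta + (u_hat * v[None]).sum(axis=-1)            # Eqs. (5)-(6)
    return v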
2) Margin loss function: The length of the vector v_j is used to represent the probability that the entity represented by capsule j exists. The CapsNet has to be trained to produce a long instantiation vector at the k-th Detection Capsule if the event that it represents is present in the input audio sequence. A separate margin loss is defined for each target class k as:
L_k = T_k max(0, m⁺ − ‖v_k‖)² + λ(1 − T_k) max(0, ‖v_k‖ − m⁻)²,    (8)
where T_k = 1 if an event of class k is present, while λ is a down-weighting factor of the loss for absent sound event classes. m⁺, m⁻ and λ are set equal to 0.9, 0.1 and 0.5 respectively, as suggested in [23]. The total loss is simply the sum of the losses of all the Detection Capsules.
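A direct transcription of Eq. (8) in TensorFlow could read as follows (function and argument names are ours):

import tensorflow as tf

def margin_loss(y_true, v_norm, m_pos=0.9, m_neg=0.1, lam=0.5):
    # y_true: (batch, K) binary targets T_k
    # v_norm: (batch, K) Euclidean norms of the Detection Capsule outputs
    pos = y_true * tf.square(tf.maximum(0.0, m_pos - v_norm))
    neg = lam * (1.0 - y_true) * tf.square(tf.maximum(0.0, v_norm - m_neg))
    return tf.reduce_sum(pos + neg, axis=-1)   # sum of L_k over the K capsules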
C. CapsNet for Polyphonic Sound Event Detection
The architecture of the neural network is shown in Fig. II-C. The first stages of the model are traditional CNN blocks which act as feature extractors on the input X. After each block, max-pooling is used to halve the dimensions on the frequency axis only. The feature maps obtained by the CNN layers are then fed to the Primary Capsule layer, which represents the lowest level of multi-dimensional entities. Basically, it is a convolutional layer with J · M filters, i.e., it contains M convolutional capsules with J kernels each. Its output is then reshaped and squashed using Eq. (3). The final layer, or Detection Capsule layer, is a time-distributed capsule layer (i.e., it applies the same weights and biases to each frame element) composed of K densely connected capsule units of dimension G. Since the previous layer is also a capsule layer, the dynamic routing algorithm is used to compute the output. The background class is included in the set of K target events, in order to represent its instances with a dedicated capsule unit and train the system to recognize the absence of events. In the evaluation, however, we consider only the outputs relative to the target sound events. The model predictions are obtained by computing the Euclidean norm of the output of each Detection Capsule. These values represent the probabilities that one of the target events is active in a frame t of the input feature matrix X, thus we consider them as the network output predictions.
In [23], the authors propose a series of densely connected neuron layers stacked at the bottom of the CapsNet, with the aim of regularizing the weight training by reconstructing the input image X ∈ N^{28×28}. Here, this technique entails an excessive complexity of the model to train, due to the higher number of units needed to reconstruct X ∈ R^{T×F×C}, and it yielded poor performance in our preliminary experiments. We decided, thus, to use dropout [25] and L2 weight normalization [26] as regularization techniques, as done in [22].
III. EXPERIMENTAL SET-UP
In order to test the proposed method, we performed a series of experiments on three datasets provided to the participants of different editions of the DCASE challenge [27], [28]. We evaluated the results by comparing the system based on the capsule architecture with a traditional CNN. The hyperparameters of each network have been optimized with a random search strategy [29]. Furthermore, we report the baselines and the best state-of-the-art performance provided by the challenge organizers.
A. Dataset
We assessed the proposed method on three datasets: two containing stereo recordings from real-life environments and one containing artificially generated monophonic mixtures of isolated sound events and real background audio.
In order to evaluate the proposed method in polyphonic real-life conditions, we used the TUT Sound Events 2016 & 2017 datasets, which were included in the corresponding editions of the DCASE Challenge. For the monophonic SED case study, we used the TUT Rare Sound Events 2017 dataset, which was used in Task 2 of the DCASE 2017 Challenge.
1) TUT Sound Events 2016: The TUT Sound Events 2016 (TUT-SED 2016) dataset (http://www.cs.tut.fi/sgn/arg/dcase2016/) consists of recordings from two acoustic scenes, respectively "Home" (indoor) and "Residential area" (outdoor), which we considered as two separate subsets. These acoustic scenes were selected by the challenge organizers to represent common environments of interest in applications for safety and surveillance (outside home) and human activity monitoring or home surveillance [28]. A total of around 54 and 59 minutes of audio is provided for the "Home" and "Residential area" scenarios, respectively. The sound events present in each recording were manually annotated without any further cross-verification, due to the high level of subjectivity inherent to the problem. For the "Home" scenario a total of 11 classes were defined, while for the "Residential area" scenario 7 classes were annotated.
Each scenario of the TUT-SED 2016 has been divided into two subsets: development dataset and evaluation dataset. The split was done based on the number of examples available for each sound event class. In addition, for the development dataset a cross-validation setup is provided in order to easily compare the results of different approaches on this dataset. The setup consists of 4 folds, so that each recording is used exactly once as test data. In detail, the "Residential area" sound events data consists of 5 recordings in the evaluation set and 12 recordings in the development set, while the "Home" sound events data consists of 5 recordings in the evaluation set and 10 recordings in the development set, in turn divided into 4 folds as training and validation subsets.
2) TUT Sound Events 2017: The TUT Sound Events 2017 (TUT-SED 2017) dataset is a subset of the TUT Acoustic Scenes 2016 dataset [28], from which the TUT-SED 2016 dataset was also taken. Thus, the recording setup, the annotation procedure, the dataset splitting, and the cross-validation setup are the same described above. The 6 target sound event classes were selected to represent common sounds related to human presence and traffic, and they include brakes squeaking, car, children, large vehicle, people speaking, people walking. The evaluation set of the TUT-SED 2017 consists of 29 minutes of audio, whereas the development set is composed of 92 minutes of audio which are employed in the cross-validation procedure.
3) TUT Rare Sound Events 2017: The TUT Rare Sound Events 2017 (TUT-Rare 2017) dataset [27] consists of isolated sounds of three different target event classes (respectively baby crying, glass breaking and gunshot) and 30-second long recordings of everyday acoustic scenes which serve as background, such as park, home, street, cafe, train, etc. [28]. In this case we consider monophonic SED, since the sound events are artificially mixed with the background sequences without overlap. In addition, the event potentially present in each test file is known a priori, thus it is possible to train different models, each one specialized for a sound event. In the development set, we used 750, 750 and 1250 sequences for training the baby cry, glass-break and gunshot models respectively, while we used 100 sequences as validation set and 500 sequences as test set for all of them. In the evaluation setup, the training and test sequences of the development set are combined into a single training set, while the validation set is the same used in the development dataset. The system is evaluated against an "unseen" set of 1500 samples (500 for each target class) with a sound event presence probability equal to 0.5 for each class.
B. Evaluation Metrics
In this work we used the Error Rate (ER) as the primary evaluation metric to ensure comparability with the reference systems. In particular, for the evaluations on the TUT-SED 2016 and 2017 datasets we consider a segment-based ER with a one-second segment length, while for the TUT-Rare 2017 the evaluation metric is the event-based error rate calculated using the onset-only condition with a collar of 500 ms. In the segment-based ER the ground truth and system output are compared on a fixed time grid, thus sound events are marked as active or inactive in each segment. For the event-based ER the ground truth and system output are compared at the event instance level.
The ER score is calculated in a single time frame of one second length from intermediate statistics, i.e., the number of substitutions (S(t_1)), insertions (I(t_1)), deletions (D(t_1)) and active sound events from annotations (N(t_1)) for a segment t_1. Specifically:
1) Substitutions S(t_1) are the number of ground truth events for which we have a false positive and a false negative in the same segment;
2) Insertions I(t_1) are events in the system output that are not present in the ground truth, thus the false positives which cannot be counted as substitutions;
3) Deletions D(t_1) are events in the ground truth that are not correctly detected by the system, thus the false negatives which cannot be counted as substitutions.
These intermediate statistics are accumulated over the segments of the whole test set to compute the evaluation metric ER. Thus, the total error rate is calculated as:
ER = ( Σ_{t_1=1}^{T} S(t_1) + Σ_{t_1=1}^{T} I(t_1) + Σ_{t_1=1}^{T} D(t_1) ) / Σ_{t_1=1}^{T} N(t_1),    (9)
where T is the total number of segments t 1 .
If there are multiple scenes in the dataset, as in the TUT-SED 2016, the evaluation metrics are calculated for each scene separately and the results are then presented as the average across the scenes. A detailed and visualized explanation of the segment-based ER score in the multi-label setting can be found in [30].
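As a reference, the following sketch computes the segment-based ER of Eq. (9) from binary activity matrices; it assumes one-second segments have already been formed and follows the pairing of false positives and false negatives described above:

import numpy as np

def segment_based_error_rate(y_true, y_pred):
    # y_true, y_pred: (n_segments, K) binary activity matrices on a 1 s grid
    S = I = D = N = 0
    for t in range(y_true.shape[0]):
        fn = np.sum((y_true[t] == 1) & (y_pred[t] == 0))  # missed events
        fp = np.sum((y_true[t] == 0) & (y_pred[t] == 1))  # extra events
        S += min(fn, fp)          # substitutions pair one FP with one FN
        D += max(0, fn - fp)      # deletions: FNs left unpaired
        I += max(0, fp - fn)      # insertions: FPs left unpaired
        N += np.sum(y_true[t])
    return (S + I + D) / max(N, 1)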
C. Comparative Algorithms
Since the datasets we used were employed to develop and evaluate the algorithms proposed by the participants of the DCASE challenges, we can compare our results with the most recent approaches in the state of the art. In addition, each challenge task came along with a baseline method that consists of a basic approach for SED. It represents a reference for the participants of the challenges while they were developing their systems.
1) TUT-SED 2016: The baseline system is based on mel frequency cepstral coefficient (MFCC) acoustic features and multiple GMM-based classifiers. In detail, for each event class a binary classifier is trained, using the audio segments annotated as belonging to the model representing the event class and the rest of the audio for the model which represents the negative class. The decision is based on the likelihood ratio between the positive and negative models for each individual class, with a sliding window of one second. To the best of our knowledge, the best performing method for this dataset is an algorithm we proposed in 2017 [31], based on binaural MFCC features and a Multi Layer Perceptron (MLP) neural network used as classifier. The detection task is performed by an adaptive energy Voice Activity Detector (VAD) which precedes the MLP and determines the starting and ending points of event-active audio sequences.
2) TUT-SED 2017: In this case the baseline method relies on an MLP architecture using 40 LogMels as audio representation [27]. The network is fed with a feature vector comprising a 5-frame temporal context. The neural network is composed of two dense layers of 50 hidden units per layer with 20% dropout, while the network output layer contains K sigmoid units (where K is the number of classes) that can be active at the same time and represent the network prediction of event activity for each context window. The state-of-the-art algorithm is based on the CRNN architecture [32]. The authors compared both monaural and binaural acoustic features, observing that binaural features in general have performance similar to single-channel features on the development dataset, although the best result on the evaluation dataset is obtained using monaural LogMels as network inputs. According to the authors, this may suggest that the dataset was not large enough to train the CRNN fed with binaural features.
3) TUT-Rare 2017: The baseline [28] and state-of-the-art methods of the DCASE 2017 challenge (Rare-SED) were based on architectures very similar to those employed for the TUT-SED 2016 and described above. For the baseline method, the only difference lies in the output layer, which in this case is composed of a single sigmoid unit. The first-ranked algorithm [33] takes 128 LogMels as input and processes them frame-wise by means of a CRNN with 1D filters in the first stage.
D. Neural Network configuration
We performed a hyperparameter search by running a series of experiments over predetermined ranges. For each network architecture, we selected the configuration that leads to the best results in the cross-validation procedure on the development dataset of each task, and used this architecture to compute the results on the corresponding evaluation dataset.
The number and shape of the convolutional layers, the nonlinear activation function and the regularizers, in addition to the capsule dimensions and the maximum number of routing iterations, have been varied for a total of 100 configurations. Details of the searched hyperparameters and their ranges are reported in Table I. The neural network training was performed with the AdaDelta stochastic gradient-based optimization algorithm [34] for a maximum of 100 epochs on the margin loss function, with batch size equal to 20. The optimizer hyperparameters were set according to [34] (i.e., initial learning rate lr = 1.0, ρ = 0.95, ε = 10^{-6}). The trainable weights were initialized according to the glorot-uniform scheme [35], and an early stopping strategy was employed during the training in order to reduce overfitting, in combination with dropout [25]. The algorithm has been implemented in the Python language using Keras [36] and Tensorflow [37] as deep learning libraries, while Librosa [38] has been used for feature extraction.
For the CNN models, we performed a similar random hyperparameter search procedure for each dataset, considering only the first two blocks of Table I and replacing the capsule layers with feedforward layers with sigmoid activation functions.
On the TUT-SED 2016 and 2017 datasets, the event activity probabilities are simply thresholded at a fixed value equal to 0.5, in order to obtain the binary activity matrix used to compute the reference metric. On the TUT-Rare 2017 the network output signal is processed as proposed in [39]: it is convolved with an exponential decay window, then it is processed with a sliding median filter with a local window size, and finally a threshold is applied.
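A possible implementation of this post-processing chain is sketched below; the window lengths and the threshold are illustrative placeholders, not the values of [39]:

import numpy as np
from scipy.signal import medfilt

def postprocess_rare(probs, decay_len=16, tau=4.0, med_win=27, thr=0.5):
    # probs: per-frame event activity probabilities for one target class
    decay = np.exp(-np.arange(decay_len) / tau)        # exponential decay window
    smoothed = np.convolve(probs, decay / decay.sum(), mode="same")
    smoothed = medfilt(smoothed, kernel_size=med_win)  # sliding median filter
    return (smoothed > thr).astype(int)                # binary event activity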
IV. RESULTS
In this section, we present the results for all the datasets and experiments described in Section III. The evaluation of the capsule- and CNN-based methods has been conducted on the development sets of each examined dataset using random combinations of the hyperparameters given in Table I.
A. TUT-SED 2016
Results on the TUT-SED 2016 dataset are shown in Table III, while Table II reports the configurations which yielded the best performance on the evaluation dataset. All the selected models use ReLU as nonlinear activation function and the dropout technique as weight regularization, while batch normalization applied after each convolutional layer seems to be effective only for the CapsNet. In Table III results are reported for each combination of architecture and features we evaluated. The best performing setups are highlighted in bold face. The use of the STFT as acoustic representation proves beneficial for both architectures with respect to the LogMels. In particular, the CapsNet obtains the lowest ER in the cross-validation performed on the development dataset when it is fed with the binaural version of such features. On the two scenarios of the evaluation dataset, a model based on CapsNet and binaural STFT obtains an averaged ER equal to 0.69, which is largely below both the challenge baseline [28] (-0.19) and the best score reported in the literature [31] (-0.10). The comparative method based on CNNs seems unable to fit the data when LogMels are used as input, while its performance is aligned with the challenge baseline based on GMM classifiers when the models are fed with the monaural STFT. This discrepancy can be explained by the enhanced ability of CapsNet to exploit small training datasets, in particular due to the effect of the routing mechanism on the weight training. In fact, the TUT-SED 2016 is composed of a small amount of audio and the sound events occur sparsely (i.e., only 49 minutes of the total audio contain at least one active event), thus the overall results of the comparative methods (CNNs, Baseline and SoA) on this dataset are quite low compared to the other datasets.
Another CapsNet property worth highlighting is the lower number of free parameters composing the models compared to the evaluated CNNs. As shown in Table II, the considered architectures have 267K and 252K free parameters for the "Home" and the "Residential area" scenarios, respectively. This is a relatively low number of parameters to be trained (e.g., a popular deep architecture for image classification such as AlexNet [40] is composed of 60M parameters), and the best performing CapsNets of each considered scenario have even fewer parameters than the CNNs (-22% and -64% for the "Home" and the "Residential area" scenarios, respectively). Thus, the high performance of CapsNet can be explained by an architectural advantage rather than by model complexity. In addition, there can be a significant performance shift between networks of the same type with the same number of parameters, which means that a suitable hyperparameter search (e.g., over the number of filters in the convolutional layers, or the dimension of the capsule units) is crucial for finding the best performing network structure.
1) Closer Look at Network Outputs: A comparative study of the neural network outputs, which are regarded as event activity probabilities, is presented in Fig. 2. The monaural STFT of a 40-second sequence from the "Residential area" dataset is shown along with the event annotations and the network outputs of the best performing CapsNet and CNN models. For this example, we chose the monaural STFT as input feature because it generally yields the best results over all the considered datasets. Fig. 2 shows bird singing lasting for the whole sequence and correctly detected by both architectures. When the car passing by event overlaps the bird singing, the CapsNet detects its presence more clearly. The people speaking event is only slightly detected by both models, while the object banging activates the corresponding capsule exactly in correspondence with the event annotation. It must be noted that the dataset is composed of unverified manually labelled real-life recordings, which may present a degree of subjectivity, thus affecting the training. Nevertheless, the CapsNet exhibits remarkable detection capability especially in the condition of overlapping events, while the CNN outputs are definitely more "blurred" and the event people walking is wrongly detected in this sequence.
B. TUT-SED 2017
The bottom of Table III reports the results obtained on the TUT-SED 2017. As in the TUT-SED 2016, the best performing models on the development dataset are those fed with the binaural STFT of the input signal. In this case we can also observe the interesting performance obtained by the CNNs, which on the evaluation dataset obtain a lower ER (i.e., equal to 0.65) than the state-of-the-art algorithm [32] based on CRNNs. The CapsNet confirms its effectiveness and obtains the lowest ER, equal to 0.58, with LogMel features, although with a slight margin with respect to the other inputs (i.e., -0.03 compared to the STFT features, -0.06 compared to the binaural versions of both the LogMel and STFT spectrograms).
It is interesting to notice that in the development cross-validation the CapsNet models yielded significantly better performance than the other reported approaches, while the CNNs performed decidedly worse. On the evaluation dataset, it was not possible to use the early stopping strategy, thus the ER scores of the CapsNets suffer a performance drop. Notwithstanding this weakness, the absolute performance obtained both with monaural and binaural spectral features is consistent and improves the state-of-the-art result, with a reduction of the ER of up to 0.21 in the best case. This is particularly evident in Fig. 3, which shows the output of the two best performing systems for a sequence of approximately 20 seconds which contains highly overlapping sounds. The event classes "people walking" and "large vehicle" overlap for almost the whole sequence duration and are both well detected by the CapsNet, although they are of different nature: the "large vehicle" has a typical timbre and is almost stationary, while the "people walking" class comprises impulsive and intermittent sounds. The CNN seems unable to distinguish between the "large vehicle" and the "car" classes, detecting confidently only the latter, while the activation corresponding to the "people walking" class is modest. The presence of the "brakes squeaking" class, which has a specific spectral profile mostly located in the highest frequency bands (as shown in the spectrogram), is detected only by the CapsNet. We regard this as concrete experimental validation of the effectiveness of routing. The number of free parameters amounts to 223K for the best configuration shown in Table II, similar to those found for the TUT-SED 2016, which in this case corresponds to a reduction of 35% with respect to the best CNN layout.
C. TUT-Rare SED 2017
The advantage given by the routing procedure to the CapsNet is particularly effective in the case of polyphonic SED. This is confirmed by the results obtained on the TUT-Rare SED 2017, which are shown in Table V. In this case the metric is no longer segment-based, but the event-based ER calculated using the onset-only condition. We performed a separate random search for each of the three sound event classes, both for CapsNets and CNNs, and we report the score averaged over the three classes. The setups that obtained the best performance on the evaluation dataset are shown in Table IV. This is the largest dataset we evaluated, and its main characteristic is the high imbalance between the amount of background sounds and the target sound events.
TABLE IV (fragment): best configurations on the TUT-Rare 2017 evaluation set (CapsNet / CNN for the babycry, glass-break and gunshot models, respectively)
Kernel shape: 3×3 / -, 3×3 / -, 3×3 / -
Primary Capsules dimension J: 8 / -, 8 / -, 8 / -
Detection Capsules dimension G: 14 / -, 14 / -, 6 / -
Routing iterations: 5 / -, 5 / -, 1 / -
# Params: 131K / 84K, 131K / 84K, 30K / 211K

From the analysis of the partial results on the evaluation set (not included for the sake of conciseness) we notice that both architectures achieve their best performance on the glass break sound (0.25 and 0.24 for CNNs and CapsNet with LogMel features, respectively), due to its clear spectral fingerprint compared to the background sound. The worst performing class is the gunshot (ER equal to 0.58 for the CapsNet), although the noise produced by different instances of this class involves similar spectral components. The low performance is probably due to the fast decay of this sound, which means that in this case the routing procedure is not sufficient to avoid confusing the gunshot with other background noises, especially in the case of dataset imbalance and low event-to-background ratio. A solution to this issue can be found in the combination of CapsNet with RNN units, as proposed in [19] for the CNNs, which yields an efficient modelling of the gunshot by the CRNN and improves the detection abilities even in polyphonic conditions. The babycry, consisting of short harmonic sounds, is detected almost equally well by the two architectures, due to the frequency shift invariance given by the convolutional kernel processing. Finally, the CNN shows better generalization performance than the CapsNet, although the ER score is far from the state of the art, which involves the use of the aforementioned CRNNs [33] or a hierarchical framework [39]. In addition, in this case it is the CNN models that have a reduced number of weights to train (-36%) with respect to the CapsNets, except for the "gunshot" case, which however, as mentioned, is also the configuration that obtains the worst results.
D. Alternative Dynamic Routing for SED
We observed that the original routing procedure implies the initialization of the coefficients β_{ij} to zero each time the procedure is restarted, i.e., after each input sample has been processed. This is reasonable in the case of image classification, for which the CapsNet was originally proposed. In the case of audio tasks, we clearly expect a higher correlation between samples belonging to adjacent temporal frames X. We thus investigated the possibility of initializing the coefficients β_{ij} to zero only at the very first iteration, and for the subsequent inputs X assigning them the last values they had at the end of the previous iterative procedure. We evaluated this variant considering the best performing models of the analyzed polyphonic SED scenarios, taking into account only the systems fed with the monaural STFT. As shown in Table VI, the modification we propose to the routing procedure is effective in particular on the evaluation datasets, conferring improved generalization properties on the models we tested even without a specific hyperparameter optimization.
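The variant can be sketched as follows; the only difference with respect to the original procedure is that the log priors are kept as state between consecutive calls (shapes and the iteration count are illustrative):

import numpy as np

def squash(s, eps=1e-8):
    # Eq. (3)
    n2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

class PersistentRouting:
    # Routing variant: beta is zeroed only at construction and then
    # carried over across consecutive input windows X
    def __init__(self, M, K):
        self.beta = np.zeros((M, K))          # zeroed only once, at the start

    def route(self, u_hat, n_iters=3):
        # u_hat: predictions û_{j|i}, shape (M, K, G)
        for _ in range(n_iters):
            alpha = np.exp(self.beta)
            alpha /= alpha.sum(axis=1, keepdims=True)    # Eq. (7)
            s = (alpha[..., None] * u_hat).sum(axis=0)   # Eq. (4)
            v = squash(s)                                # Eq. (3)
            self.beta += (u_hat * v[None]).sum(axis=-1)  # Eqs. (5)-(6)
        return v    # beta is intentionally NOT reset before the next call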
V. CONCLUSION
In this work, we proposed to apply a novel neural network architecture, the CapsNet, to the polyphonic SED task. The architecture is based on both convolutional and capsule layers. The convolutional layers extract high-level time-frequency feature maps from input matrices which provide an acoustic spectral representation with long temporal context. The obtained feature maps are then used to feed the Primary Capsule layer which is connected to the Detection Capsule layer, finally extracting the event activity probabilities. These last two layers are involved in the iterative routing-by-agreement procedure, which computes the outputs based on a measure of likelihood between a capsule and its parent capsules. This architecture combines, thus, the ability of convolutional layers to learn local translation invariant filters with the ability of capsules to learn part-whole relations by using the routing procedure.
Part of the novelty of this work resides in the adaptation of the CapsNet architecture to the audio event detection task, with special care given to the input data, the layer interconnections and the regularization techniques. The routing procedure is also modified to give it a more appropriate acoustic rationale, with a further average performance improvement of 6% across the polyphonic SED tasks.
An extensive evaluation of the algorithm is proposed, with comparisons to recent state-of-the-art methods on three different datasets. The experimental results demonstrate that the use of the dynamic routing procedure during training is effective and provides a significant performance improvement in the case of overlapping sound events compared to traditional CNNs and other established methods in polyphonic SED. Interestingly, the CNN-based method obtained the best performance in the monophonic SED case study, thus emphasizing the suitability of the CapsNet architecture for dealing with overlapping sounds. We showed that this model is particularly effective with small-sized datasets, such as the TUT-SED 2016, which contains a total of 78 minutes of audio for the development of the models, of which one third is background noise.
Furthermore, the number of trainable parameters is reduced with respect to other deep learning architectures, confirming the architectural advantage given by the introduced features also in the task of polyphonic SED.
Despite the improvement in performance, we identified a limitation of the proposed method. As presented in Section IV, the performance of the CapsNet is more sensitive to the number of training iterations. This affects the generalization capabilities of the algorithm, yielding a greater relative deterioration of the performance on the evaluation datasets with respect to the comparative methods.
The results we observed in this work are consistent with those obtained in many other classification tasks in various domains [41]-[43], and prove that the CapsNet is an effective approach which enhances the well-established representation capabilities of CNNs also in the audio field. As future work, regularization methods can be investigated to overcome the lack of generalization which seems to affect CapsNets. Furthermore, regarding the SED task, the addition of recurrent units may be explored to enhance the detection of particular (i.e., impulsive) sound events in real-life audio, and the recently proposed variant of routing based on the Expectation Maximization (EM) algorithm [44] can be investigated in this context.
| 7,491 |
1906.06281
|
2949660381
|
Barcodes are used in many commercial applications, thus fast and robust reading is important. There are many different types of barcodes; some of them look similar while others are completely different. In this paper we introduce a new fast and robust deep learning detector based on a semantic segmentation approach. It is capable of detecting barcodes of any type simultaneously, both in document scans and in the wild, by means of a single model. The detector achieves state-of-the-art results on the ArTe-Lab 1D Medium Barcode Dataset with a detection rate of 0.995. Moreover, the developed detector can deal with more complicated object shapes, like very long but narrow or very small barcodes. The proposed approach can also identify the types of detected barcodes and performs at real-time speed in a CPU environment, being much faster than previous state-of-the-art approaches.
|
The early work in the domain of barcode detection from 2D images was motivated by the widespread adoption of mobile phones with cameras. @cite_4 proposes a method for finding 2D barcodes via corner detection and 1D barcodes through spiral scanning. @cite_5 introduces another method for 1D barcode detection based on decoding. Both approaches, however, require certain guidance from the user.
|
{
"abstract": [
"In this paper we present an algorithm for the recognition of 1D barcodes using camera phones, which is highly robust regarding the the typical image distortions. We have created a database of barcode images, which covers typical distortions, such as inhomogeneous illumination, reflections, or blurriness due to camera movement. We present results from experiments with over 1,000 images from this database using a Matlab implementation of our algorithm, as well as experiments on the go, where a Symbian C++ implementation running on a camera phone is used to recognize barcodes in daily life situations. The proposed algorithm shows a close to 100 accuracy in real life situations and yields a very good resolution dependent performance on our database, ranging from 90.5 (640 × 480) up to 99.2 (2592 × 1944). The database is freely available for other researchers.",
"This paper shows new algorithms and the implementations of image reorganization for EAN QR barcodes in mobile phones. The mobile phone system used here consists of a camera, mobile application processor, digital signal processor (DSP), and display device, and the source image is captured by the embedded camera device. The introduced algorithm is based on the code area found by four corners detection for 2D barcode and spiral scanning for 1D barcode using the embedded DSP. This algorithm is robust for practical situations and the DSP has good enough performance for the real-time recognition of the codes. The performance of our image processing is 66.7 frames sec for EAN code and 14.1 frames sec for QR code image processing, and this is sufficient performance for practical use. The released mobile phone had performance of 5-10 frames sec including OS and subsystem overheads."
],
"cite_N": [
"@cite_5",
"@cite_4"
],
"mid": [
"2136590411",
"2139923216"
]
}
|
Universal Barcode Detector via Semantic Segmentation
|
Starting from the 1960s, people have invented many barcode types which serve for machine-readable data representation and have many applications in various fields. The most frequently used are probably UPC and EAN barcodes for consumer product labeling; EAN128 serves for transferring information about cargo between enterprises, QR codes are widely used to provide links, and PDF417 has a variety of applications in transport, identification cards and inventory management. Barcodes have become ubiquitous in the modern world: they are used as electronic tickets, in official documents, in advertisement, in healthcare, and for tracking objects and people. Examples of popular barcode types are shown in Fig. 1.
There are two main approaches for decoding barcodes: the former uses a laser, the latter just a simple camera. Through years of development, laser scanners have become very reliable and fast for the case of reading exactly one 1D barcode, but they are completely unable to deal with 2D barcodes or to read several barcodes at the same time. Another drawback is that they cannot read barcodes from screens efficiently, as they strongly rely on reflected light.
A popular camera-based reader is a simple smartphone application, which is capable of scanning almost any type of barcode. However, most applications require some user guidance, like pointing at the barcode to decode. Most applications also decode only one barcode at a time, although it is possible to decode several simultaneously. In this work, we introduce a segmentation based barcode detector which is capable of locating all barcodes simultaneously, no matter how many of them are present in the image or which types they are, so the system does not need any user guidance. The developed detector also provides information about the most probable types of detected barcodes, thus decreasing the time needed for the reading process.
III. DETECTION VIA SEGMENTATION
Our approach is inspired by the idea of PixelLink [12], where the authors solve text detection via instance segmentation.
We believe that for barcodes the situation when two of them are close to each other is unusual, so we do not really need to solve the instance segmentation problem; dealing with the semantic segmentation challenge should be enough.
PixelLink shows good results in capturing long but narrow lines of text, which can also be the case for some barcode types, so we believe such object shape invariance is an additional advantage.
To solve the detection task we first run semantic segmentation network and then postprocess its results.
A. Semantic segmentation network
Barcodes normally cannot be too small, so predicting results at a resolution 4 times lower than the original image should be enough for reasonably good results. Thus we compute the segmentation map for superpixels, which are 4x4 pixel blocks.
Detection is the primary task we focus on in this work, treating type classification as a less important side-task. Most barcodes share a common structure, so it is natural to classify pixels as being part of a barcode (class 1) or background (class 0); thus the segmentation network solves a binary (super)pixel classification task.
Barcodes are relatively simple objects and thus may be detected by a relatively simple architecture. To achieve real-time CPU speed we have developed a quite simple architecture based on dilated and separable convolutions (see Table I). It can be logically divided into 3 blocks:
1) Downscale Module, aimed at reducing the spatial feature dimensions. Since these initial convolutions are applied to large feature maps, they account for a significant amount of the overall network time, so to speed up inference these convolutions are made separable.
2) Context Module. This block is inspired by [13]. However, in our architecture it serves a slightly different purpose, just improving features and exponentially increasing the receptive field with each layer.
3) Final classification layer: a 1x1 convolution with the number of filters equal to 1+n_classes, where n_classes is the number of different barcode types we want to differentiate between.
We used ReLU nonlinearity after each convolution except for the final one, where we apply a sigmoid to the first channel and a softmax to all of the remaining channels.
We have chosen the number of channels C = 24 for all convolutional layers. Our experiments show that with more filters the model has comparable performance, but with fewer filters performance drops rapidly. As we have only a few channels in each layer, the final model is very compact, with only 32962 weights.
As the maximal image resolution we are working with is 512x512, the receptive field for prediction covers at least half of the image, which should be more than enough contextual information for detecting barcodes.
TABLE I. Network architecture (one column per layer):
Kernel:          3x3, 3x3, 3x3,   3x3,   3x3,   3x3,   3x3,     3x3,     3x3,     1x1
Output channels: C,   C,   C,     C,     C,     C,     C,       C,       C,       1+n
Receptive field: 3x3, 7x7, 11x11, 19x19, 35x35, 67x67, 131x131, 259x259, 267x267, 267x267
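A minimal Keras sketch in the spirit of Table I follows; the strides and dilation rates are illustrative choices consistent with the 4x downscale and the growing receptive field described above (they are not listed explicitly), so the parameter count will not match the 32962 weights exactly:

import tensorflow as tf
from tensorflow.keras import layers

def build_detector(n_types, C=24):
    inp = layers.Input(shape=(None, None, 3))
    # Downscale module: separable convolutions, 4x resolution reduction in total
    x = layers.SeparableConv2D(C, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.SeparableConv2D(C, 3, strides=2, padding="same", activation="relu")(x)
    # Context module: dilated convolutions exponentially growing the receptive field
    for d in (1, 2, 4, 8, 16, 1):
        x = layers.Conv2D(C, 3, dilation_rate=d, padding="same", activation="relu")(x)
    # Final 1x1 classifier: channel 0 = barcode/background, channels 1..n = type
    out = layers.Conv2D(1 + n_types, 1, padding="same")(x)
    return tf.keras.Model(inp, out)

# Sigmoid/softmax are applied to the respective output channels at loss time.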
B. Detecting barcodes based on segmentation
After the network pass we get a segmentation map for superpixels with 1 + n_classes channels. For detection we use only the first channel, which can be interpreted as the probability of a superpixel being part of a barcode.
We apply a probability threshold to get binary detection labels (barcode/background). In all our experiments we set this threshold value to 0.5. We then find connected components in the resulting superpixel binary mask and calculate the bounding rectangle of minimal area for each component. To do the latter, we apply the minAreaRect method from the OpenCV library (accessed Dec 2018). We then treat the found rectangles as detected barcodes. To get the detection rectangle at the original image resolution, we multiply all of its vertex coordinates by the network scale factor of 4.
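This postprocessing (including the small-area filter described in the next subsection) can be sketched with OpenCV as follows; names are illustrative:

import cv2
import numpy as np

def extract_boxes(prob_map, thr=0.5, t_area=20, scale=4):
    # prob_map: first output channel, per-superpixel barcode probability
    mask = (prob_map > thr).astype(np.uint8)
    n, labels = cv2.connectedComponents(mask)
    boxes = []
    for i in range(1, n):                       # label 0 is background
        pts = np.column_stack(np.where(labels == i)[::-1])  # (x, y) points
        if len(pts) < t_area:                   # drop accidental small blobs
            continue
        rect = cv2.minAreaRect(pts.astype(np.float32))
        corners = cv2.boxPoints(rect) * scale   # back to full image resolution
        boxes.append(corners)
    return boxes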
C. Filtering results
To avoid situations when a small group of pixels is accidentally predicted as a barcode, we filter out all superpixel connected components with area less than a threshold T_area. The threshold value should be chosen to be slightly less than the minimal area of dataset objects on the segmentation map. In all of our experiments we used the value T_area = 20.
D. Classification of detected objects
To determine the barcode type of the detected objects, we use all of the remaining n_classes channels of the segmentation network output. After softmax, we treat them as class probabilities.
Once the rectangle is found, we compute the average probability vector inside this rectangle, then naturally choose the class with the highest probability.
IV. OPTIMIZATION SCHEME
A. Loss function
The training loss is a weighted sum of detection and classification losses
L = L_detection + α L_classification    (1)
The detection loss L_detection is itself a weighted sum of three components: the mean binary cross-entropy loss on positive pixels L_p, the mean binary cross-entropy loss on negative pixels L_n, and the mean binary cross-entropy loss on the k worst predicted negative pixels L_h, where k is equal to the number of positive pixels in the image.
L_detection = w_p L_p + w_n L_n + w_h L_h    (2)
The classification loss is the mean (categorical) cross-entropy computed over all channels except the first (detection) one. The classification loss is calculated only on superpixels which are part of ground truth objects.
As our primary goal is high recall in detection, we have chosen w_p = 15, w_n = 1, w_h = 5, α = 1. We also tried several different configurations, but this combination was the best among them. However, we did not spend too much time on hyperparameter search.
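A TensorFlow sketch of Eq. (2) with online hard negative mining is given below; the flattening of the maps and the handling of images without positives are our own simplifications:

import tensorflow as tf

def detection_loss(y_true, y_pred, w_p=15.0, w_n=1.0, w_h=5.0):
    # y_true, y_pred: flattened barcode/background maps, y_pred as probabilities
    eps = 1e-7
    p = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    bce = -(y_true * tf.math.log(p) + (1.0 - y_true) * tf.math.log(1.0 - p))
    pos = tf.cast(y_true > 0.5, tf.float32)
    neg = 1.0 - pos
    l_p = tf.reduce_sum(bce * pos) / tf.maximum(tf.reduce_sum(pos), 1.0)
    l_n = tf.reduce_sum(bce * neg) / tf.maximum(tf.reduce_sum(neg), 1.0)
    k = tf.maximum(tf.cast(tf.reduce_sum(pos), tf.int32), 1)   # k = number of positives
    l_h = tf.reduce_mean(tf.nn.top_k(tf.reshape(bce * neg, [-1]), k=k).values)
    return w_p * l_p + w_n * l_n + w_h * l_h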
B. Data augmentation
For augmentation we do the following:
1) with p=0.1, return the original non-augmented image;
2) with p=0.5, rotate the image by a random angle in [-45, 45] degrees;
3) with p=0.5, rotate the image by one of 90, 180, 270 degrees;
4) with p=0.5, do a random crop. We limit the crop to retain all barcodes in the image entirely and ensure that the aspect ratio changes no more than 70% compared to the original image;
5) with p=0.7, do additional image augmentation. For this purpose we used the "less important" augmenters from the heavy augmentation example of the imgaug library [14].
A sketch of this pipeline is given below.
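The sketch referenced above could look as follows with imgaug; the exact augmenter mix of the heavy example is not reproduced, only a representative subset, and the box-aware random crop of step 4 is omitted:

import numpy as np
import imgaug.augmenters as iaa

augmenter = iaa.Sometimes(0.9, iaa.Sequential([     # p=0.1: keep the original image
    iaa.Sometimes(0.5, iaa.Affine(rotate=(-45, 45))),
    iaa.Sometimes(0.5, iaa.Rot90((1, 3))),          # 90, 180 or 270 degrees
    iaa.Sometimes(0.7, iaa.SomeOf((1, 3), [
        iaa.GaussianBlur(sigma=(0.0, 1.5)),
        iaa.AdditiveGaussianNoise(scale=(0, 0.05 * 255)),
        iaa.Multiply((0.7, 1.3)),                   # brightness change
    ])),
]))

images = np.zeros((2, 256, 256, 3), dtype=np.uint8)  # placeholder batch
images_aug = augmenter(images=images)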
V. EXPERIMENTAL RESULTS
A. Datasets
The network performance was evaluated on two common benchmarks for barcode detection, namely the WWU Muenster Barcode Database (Muenster) and the ArTe-Lab Medium Barcode Dataset (Artelab). The datasets contain 595 and 365 images with ground truth detection masks, respectively; the resolution of all images is 640x480. All images in the Artelab dataset contain exactly one EAN13 barcode, while in Muenster there may be several barcodes in an image.
For training we used our own dataset with both 1D barcodes (Code128, Patch, EAN8, Code93, UCC128, EAN13, Industrial25, Code32, FullASCIICode, UPCE, MATRIX25, Code39, IATA25, UPCA, CODABAR, Interleaved25) and 2D barcodes (QRCode, Aztec, MaxiCode, DataMatrix, PDF417), i.e., 16 different types of 1D and 5 types of 2D barcodes, 21 types in total. The training dataset contains both photos and document scans. Example images from our dataset can be found in Fig. 2. The dataset consists of 17k images in total, 10% of which was used for validation.
B. Training procedure
We trained our model with batch size 8 for 70 epochs with learning rate 0.001, followed by an additional 70 epochs with learning rate 0.0001.
During training we resized all images to have the maximal side at most 1024 pixels, maintaining the aspect ratio, and made both sides divisible by 64. We pick and augment/preprocess 3000 images from the dataset and group them into batches by image size; after that we pick the next 3000 images and do the same until the end of the dataset. Once we reach the end of the dataset, we shuffle the image order and repeat the process until the end of training.
We trained three models: Ours-Detection (all types) (without classification on entire dataset), Ours-Detection+Classification (all types) (with classification on entire dataset), Ours-Detection (EAN13 only) (without classification on EAN13 subset of 1000 images).
C. Evaluation metrics
We follow the common evaluation scheme from [7]. Given binary masks G for the ground truth and F for the found detection results, the Jaccard index between them is defined as
J(G, F) = |G ∩ F| / |G ∪ F|.
Another common name for the Jaccard index is "intersection over union" or IoU, which follows from the definition. The overall detection rate for a given IoU threshold T is defined as the fraction of images in the dataset where the IoU is greater than that threshold:
D_T = Σ_{i∈S} I(J(G_i, F_i) ≥ T) / |S|,
where S is the set of images in the dataset and I is the indicator function. However, one image may contain several barcodes, and if one of them is very big and another is very small, D_T will indicate an error only at a very high threshold, so we find it reasonable to evaluate detection performance with additional metrics which average results not over images but over the ground truth barcode objects in them.
For this purpose we use recall R_T, defined as the number of successfully detected objects divided by the total number of ground truth objects:

R_T = Σ_{i∈S} Σ_{G∈SG_i} I(J(G, F(G)) ≥ T) / Σ_{i∈S} |SG_i|,

where SG_i is the set of ground truth objects on image i and F(G) is the found box with the highest Jaccard index with box G. The paired metric for recall is precision, defined as the number of successfully detected objects divided by the total number of detections:

P_T = Σ_{i∈S} Σ_{G∈SG_i} I(J(G, F(G)) ≥ T) / Σ_{i∈S} |SF_i|,

where SF_i is the set of all detections made on image i.
We find connected components of the ground truth binary masks and treat them as ground truth objects.
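For clarity, the metrics above can be computed as in the following sketch (per-image IoUs and matched object IoUs are assumed to be precomputed):

import numpy as np

def jaccard(g_mask, f_mask):
    # Intersection over union of two binary masks
    inter = np.logical_and(g_mask, f_mask).sum()
    union = np.logical_or(g_mask, f_mask).sum()
    return inter / union if union > 0 else 0.0

def detection_rate(per_image_ious, thr):
    # D_T: fraction of images whose IoU exceeds the threshold
    return float(np.mean(np.asarray(per_image_ious) >= thr))

def object_recall(matched_ious, n_gt_objects, thr):
    # R_T: matched_ious holds J(G, F(G)) for every ground truth object
    return sum(j >= thr for j in matched_ious) / max(n_gt_objects, 1)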
We emphasize that all the metrics above are computed for detected objects regardless of their actual type. To evaluate the classification of the detected objects by type, we use a simple accuracy metric (the number of correctly classified objects divided by the number of correctly detected objects). So if we detect a PDF417 barcode but classify it as a QRCode, precision and recall will not be affected, but the classification accuracy will be.
D. Quantitative results
We compare our results with Cresot2015 [7], Cresot2016 [9], Namane2017 [10] and Yolo2017 [11] on the Artelab and Muenster datasets (Table II).
The proposed method is trained on our own dataset, while all the other works we compare with were trained on different datasets. Since full reproducibility of the other authors' works on our dataset would require following exactly the same training protocol (including initialization and augmentations), to avoid underestimating their results we decided to rely on the numbers reported by the authors. We outperformed all previous works in terms of detection rate on the Artelab dataset with the model trained only on the EAN13 subset of our dataset. According to the tables, the detection rate of the model trained on the entire dataset with all barcode types is slightly worse than that of the model trained on the EAN13 subset. The reason for this is not poor generalization but markup errors, or capturing more barcodes than present in the markup (i.e., non-EAN barcodes), see Fig. 3.
As it can be seen in Fig. 4 our model has a rapid decrease in detection rate for higher Jaccard thresholds. Aside from markup errors, the main reason for that is overestimation of barcode borders in detection, which is caused by prioritizing high recall in training, it makes high impact for higher Jaccard thresholds as Jaccard index is known to be very sensitive to almost exact match (Fig. 5).
On Table III we show comparison of our models by precision and recall. Our models achieve close to an absolute recall, meaning that almost all barcodes are detected. On the other hand precision is also relatively high.
E. Execution time
For our network we measure time at 512x512 resolution, which is sufficient for most applications. We do not include postprocessing time, as it is negligible compared to the forward pass of the network.
The developed network performs at real-time speed and is 3.5 times faster than YOLO with darknet [11] at a higher resolution on the same GTX 1080 GPU. In Table IV we compare the inference time of our model with other approaches. We also provide the CPU inference time of our model (Intel Core i5, 3.20 GHz), showing that it is nearly the same as reported in Cresot2016, where the authors used their approach in a real-time smartphone application. This is important since not all devices have a GPU yet.
F. Classification results
Among correctly detected objects we measured classification accuracy and achieved 60% on the test set.
Moreover, the classification subtask does not hurt detection results. As shown in Table III, the results with classification are even slightly better, meaning that the detection and classification tasks can mutually benefit from each other.
G. Capturing long narrow barcodes
An additional advantage of our detector is that it can find objects of arbitrary shape and does not assume that objects are approximately square, as YOLO does. Some examples are provided in Fig. 2.
VI. CONCLUSION
We have introduced a new barcode detector that achieves comparable or better performance on public benchmarks while being much faster than other methods. Moreover, our model is a universal barcode detector capable of detecting both 1D and 2D barcodes of many different types. With fewer than 33,000 weights, the model is very light and therefore compact enough to be suitable for mobile devices.
Despite being shallow (i.e., very simple; we did not use any state-of-the-art techniques for semantic segmentation), our model shows that semantic segmentation can be used efficiently for object detection. It also provides a natural way to detect objects of arbitrary shape (e.g., very long but narrow).
Future work may include using more advanced semantic segmentation approaches to develop a better network architecture and increase performance.
| 2,571 |
1906.06281
|
2949660381
|
Barcodes are used in many commercial applications, so fast and robust reading is important. There are many different types of barcodes; some of them look similar while others are completely different. In this paper we introduce a new fast and robust deep learning detector based on a semantic segmentation approach. It is capable of detecting barcodes of any type simultaneously, both in document scans and in the wild, by means of a single model. The detector achieves state-of-the-art results on the ArTe-Lab 1D Medium Barcode Dataset with a detection rate of 0.995. Moreover, the developed detector can deal with more complicated object shapes, such as very long but narrow or very small barcodes. The proposed approach can also identify the types of the detected barcodes, and it performs at real-time speed in a CPU environment, being much faster than previous state-of-the-art approaches.
|
In more recent papers, authors pay more attention to developing solutions that work automatically with less user guidance. @cite_8 finds regions with a high difference between x and y derivatives, @cite_13 calculates oriented histograms to find patches with a dominant direction, and @cite_7 relies on morphology operations to detect both 1D and 2D barcodes, reporting high accuracy on their own data. The work of Sörös @cite_1 is notable as they compare their own algorithm with the other works mentioned in this paragraph and demonstrate that their approach is superior on the same dataset, the WWU Muenster Barcode Database (Muenster). Their algorithm is based on the idea that 1D barcodes have many edges, 2D barcodes have many corners, while text areas have both many edges and many corners.
|
{
"abstract": [
"With the proliferation of built-in cameras barcode scanning on smartphones has become widespread in both consumer and enterprise domains. To avoid making the user precisely align the barcode at a dedicated position and angle in the camera image, barcode localization algorithms are necessary that quickly scan the image for possible barcode locations and pass those to the actual barcode decoder. In this paper, we present a barcode localization approach that is orientation, scale, and symbology (1D and 2D) invariant and shows better blur invariance than existing approaches while it operates in real time on a smartphone. Previous approaches focused on selected aspects such as orientation invariance and speed for 1D codes or scale invariance for 2D codes. Our combined method relies on the structure matrix and the saturation from the HSV color system. The comparison with three other real-time barcode localization algorithms shows that our approach outperforms the state of the art with respect to symbology and blur invariance at the expense of a reduced speed.",
"We describe a new smartphone app called BLaDE (Barcode Localization and Decoding Engine), designed to enable a blind or visually impaired user find and read product barcodes. Developed at The Smith-Kettlewell Eye Research Institute, the BLaDE Android app has been released as open source software, which can be used for free or modified for commercial or non-commercial use. Unlike popular commercial smartphone apps, BLaDE provides real-time audio feedback to help visually impaired users locate a barcode, which is a prerequisite to being able to read it. We describe experiments performed with five blind visually impaired volunteer participants demonstrating that BLaDE is usable and that the audio feedback is key to its usability.",
"Barcode technology is essential in automatic identification, and is used in a wide range of real-time applications. Different code types and applications impose special problems, so there is a continuous need for solutions with improved performance. Several methods exist for code localization, that are well characterized by accuracy and speed. Particularly, high-speed processing places need reliable automatic barcode localization, e.g. conveyor belts and automated production, where missed detections cause loss of profit. Our goal is to detect automatically, rapidly and accurately the barcode location with the help of extracted image features. We propose a new algorithm variant, that outperforms in both accuracy and efficiency other detectors found in the literature using similar ideas, and also improves on the detection performance in detecting 2D codes compared to our previous algorithm.",
"Camera cellphones have become ubiquitous, thus opening a plethora of opportunities for mobile vision applications. For instance, they can enable users to access reviews or price comparisons for a product from a picture of its barcode while still in the store. Barcode reading needs to be robust to challenging conditions such as blur, noise, low resolution, or low-quality camera lenses, all of which are extremely common. Surprisingly, even state-of-the-art barcode reading algorithms fail when some of these factors come into play. One reason resides in the early commitment strategy that virtually all existing algorithms adopt: The image is first binarized and then only the binary data are processed. We propose a new approach to barcode decoding that bypasses binarization. Our technique relies on deformable templates and exploits all of the gray-level information of each pixel. Due to our parameterization of these templates, we can efficiently perform maximum likelihood estimation independently on each digit and enforce spatial coherence in a subsequent step. We show by way of experiments on challenging UPC-A barcode images from five different databases that our approach outperforms competing algorithms. Implemented on a Nokia N95 phone, our algorithm can localize and decode a barcode on a VGA image (640 × 480, JPEG compressed) in an average time of 400-500 ms."
],
"cite_N": [
"@cite_1",
"@cite_13",
"@cite_7",
"@cite_8"
],
"mid": [
"2048438392",
"2341236339",
"1582243283",
"2104952406"
]
}
|
Universal Barcode Detector via Semantic Segmentation
|
Since the 1960s, people have invented many barcode types, which serve for machine-readable data representation and have many applications in various fields. The most frequently used are probably the UPC and EAN barcodes for consumer product labeling; EAN128 serves for transferring information about cargo between enterprises; QR codes are widely used to provide links; and PDF417 has a variety of applications in transport, identification cards, and inventory management. Barcodes have become ubiquitous in the modern world: they are used as electronic tickets, in official documents, in advertisement, in healthcare, and for tracking objects and people. Examples of popular barcode types are shown in Fig. 1.
There are two main approaches to decoding barcodes: the former uses a laser, the latter just a simple camera. Through years of development, laser scanners have become very reliable and fast at reading exactly one 1D barcode, but they are completely unable to deal with 2D barcodes or to read several barcodes at the same time. Another drawback is that they cannot read barcodes from screens efficiently, as they rely strongly on reflected light.
A popular camera-based reader is a simple smartphone application capable of scanning almost any type of barcode. However, most applications require some user guidance, such as pointing at the barcode to decode, and most decode only one barcode at a time, although decoding several at once is possible. In this work, we introduce a segmentation-based barcode detector that locates all barcodes simultaneously, no matter how many of them are present in the image or which types they are, so the system does not need any user guidance. The developed detector also provides information about the most probable types of the detected barcodes, thus decreasing the time needed for the reading process.
III. DETECTION VIA SEGMENTATION
Our approach is inspired by the idea of PixelLink [12], where the authors solve text detection via instance segmentation.
We believe that for barcodes the situation where two of them are close to each other is unusual, so we do not really need to solve an instance segmentation problem; dealing with the semantic segmentation task should be enough.
PixelLink shows good results at capturing long but narrow lines of text, which can also be the case for some barcode types, so we believe such object-shape invariance is an additional advantage.
To solve the detection task we first run a semantic segmentation network and then postprocess its results.
A. Semantic segmentation network
Barcodes normally cannot be too small, so predicting results at a resolution 4 times lower than the original image should be enough for reasonably good results. Thus we compute the segmentation map for superpixels, which are 4x4 pixel blocks.
Detection is the primary task we focus on in this work; type classification is treated as a less important side task. Most barcodes share a common structure, so it is natural to classify pixels as being part of a barcode (class 1) or background (class 0); thus the segmentation network solves a binary (super)pixel classification task.
Barcodes are relatively simple objects and thus may be detected by a relatively simple architecture. To achieve real-time CPU speed we developed a fairly simple architecture based on dilated and separable convolutions (see Table I). It can be logically divided into 3 blocks: 1) Downscale Module, aimed at reducing the spatial feature dimensions; since these initial convolutions are applied to large feature maps they account for a significant share of the overall network time, so to speed up inference they are made separable. 2) Context Module, inspired by [13]; in our architecture, however, it serves a slightly different purpose, simply refining the features and exponentially increasing the receptive field with each layer. 3) Final classification layer, a 1x1 convolution with the number of filters equal to 1 + n_classes, where n_classes is the number of different barcode types we want to distinguish. We use a ReLU nonlinearity after each convolution except the final one, where we apply a sigmoid to the first channel and a softmax to all remaining channels.
We chose the number of channels C = 24 for all convolutional layers. Our experiments show that with more filters the model has comparable performance, but with fewer filters performance drops rapidly. As there are only a few channels in each layer, the final model is very compact, with only 32,962 weights.
As the maximal image resolution we work with is 512x512, the receptive field for a prediction covers at least half the image, which should be more than enough contextual information for detecting barcodes.

TABLE I. NETWORK LAYERS
Kernel           3x3  3x3  3x3    3x3    3x3    3x3    3x3      3x3      3x3      1x1
Output channels  C    C    C      C      C      C      C        C        C        1+N
Receptive field  3x3  7x7  11x11  19x19  35x35  67x67  131x131  259x259  267x267  267x267
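For illustration, the following is a hedged PyTorch sketch of an architecture matching this description: two separable stride-2 convolutions for the Downscale Module, dilated 3x3 convolutions for the Context Module, and a 1x1 head. The dilation rates below are our assumption, chosen to grow the receptive field roughly as in Table I; the paper does not state them explicitly.

```python
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution."""
    def __init__(self, cin, cout, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(cin, cin, 3, stride=stride,
                                   padding=1, groups=cin)
        self.pointwise = nn.Conv2d(cin, cout, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class BarcodeNet(nn.Module):
    def __init__(self, n_classes, c=24):
        super().__init__()
        # Downscale Module: two separable stride-2 convs -> 1/4 resolution
        self.down = nn.Sequential(
            SeparableConv(3, c, stride=2), nn.ReLU(),
            SeparableConv(c, c, stride=2), nn.ReLU(),
        )
        # Context Module: dilated 3x3 convs (dilation rates are assumed)
        ctx = []
        for d in (1, 2, 4, 8, 16, 32, 1):
            ctx += [nn.Conv2d(c, c, 3, padding=d, dilation=d), nn.ReLU()]
        self.context = nn.Sequential(*ctx)
        # Final 1x1 classifier: 1 detection channel + n_classes type channels
        self.head = nn.Conv2d(c, 1 + n_classes, 1)

    def forward(self, x):
        logits = self.head(self.context(self.down(x)))
        det = torch.sigmoid(logits[:, :1])         # barcode probability
        cls = torch.softmax(logits[:, 1:], dim=1)  # barcode-type probabilities
        return det, cls
```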
B. Detecting barcodes based on segmentation
After the network pass we obtain a segmentation map for superpixels with 1 + n_classes channels. For detection we use only the first channel, which can be interpreted as the probability of a superpixel being part of a barcode.
We apply a threshold to this probability to obtain binary detection labels (barcode/background); in all our experiments the threshold is set to 0.5. We then find connected components in the resulting superpixel binary mask and compute the minimal-area bounding rectangle for each component, using the minAreaRect method of the OpenCV library (accessed Dec 2018). The found rectangles are treated as detected barcodes. To obtain the detection rectangle at the original image resolution, we multiply all of its vertex coordinates by the network scale of 4.
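A minimal OpenCV sketch of this postprocessing step is shown below; connected components are approximated with external contours, and the area filter anticipates the T_area threshold described in the next subsection. The function and parameter names are ours.

```python
import cv2
import numpy as np

def extract_boxes(det_prob, threshold=0.5, t_area=20, scale=4):
    """Turn the superpixel probability map into rotated detection boxes.

    det_prob: HxW float array (first output channel of the network).
    Returns a list of 4x2 arrays with box vertices in original-image pixels.
    """
    mask = (det_prob >= threshold).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for cnt in contours:
        # T_area filter (see the Filtering results subsection); contour
        # area is used here as a proxy for the component pixel count
        if cv2.contourArea(cnt) < t_area:
            continue
        rect = cv2.minAreaRect(cnt)        # minimal-area bounding rectangle
        box = cv2.boxPoints(rect) * scale  # back to original resolution
        boxes.append(box)
    return boxes
```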
C. Filtering results
To avoid situations where a small group of pixels is accidentally predicted as a barcode, we filter out all superpixel connected components with area less than a threshold T_area. The threshold should be chosen slightly below the minimal object area in the dataset at segmentation-map resolution. In all of our experiments we used T_area = 20.
D. Classification of detected objects
To determine the barcode type of a detected object, we use the remaining n_classes channels of the segmentation network output; after the softmax we treat them as class probabilities.
Once a rectangle is found, we compute the average probability vector inside it and choose the class with the highest probability.
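The following short sketch illustrates the averaging step, assuming the softmax output is available as a NumPy array; the helper name `classify_box` is ours.

```python
import cv2
import numpy as np

def classify_box(cls_prob, box, scale=4):
    """Average per-class probabilities inside a detected rectangle.

    cls_prob: (n_classes, H, W) softmax output at superpixel resolution.
    box: 4x2 vertex array in original-image coordinates.
    """
    n, h, w = cls_prob.shape
    mask = np.zeros((h, w), np.uint8)
    poly = np.round(box / scale).astype(np.int32)  # back to superpixels
    cv2.fillPoly(mask, [poly], 1)
    inside = mask.astype(bool)
    mean_prob = cls_prob[:, inside].mean(axis=1)   # average probability vector
    return int(mean_prob.argmax()), mean_prob
```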
IV. OPTIMIZATION SCHEME
A. Loss function
The training loss is a weighted sum of the detection and classification losses:

L = L_detection + α · L_classification    (1)

The detection loss L_detection is itself a weighted sum of three components: the mean binary cross-entropy loss on positive pixels L_p, the mean binary cross-entropy loss on negative pixels L_n, and the mean binary cross-entropy loss on the k worst-predicted negative pixels L_h, where k is equal to the number of positive pixels in the image:

L_detection = w_p · L_p + w_n · L_n + w_h · L_h    (2)

The classification loss is the mean (categorical) cross-entropy computed over all channels except the first (detection) one, and it is calculated only on superpixels that are part of ground truth objects.
As our primary goal is high recall in detection, we chose w_p = 15, w_n = 1, w_h = 5, α = 1. We also tried several other configurations, but this combination was the best among them; we did not, however, spend much time on hyperparameter search.
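A possible PyTorch rendering of Eq. (2), including the hard-negative term L_h, is sketched below; this is our interpretation of the description, not the authors' code.

```python
import torch
import torch.nn.functional as F

def detection_loss(det_prob, target, w_p=15.0, w_n=1.0, w_h=5.0):
    """Weighted BCE with online hard negative mining, as in Eq. (2).

    det_prob: (B, H, W) sigmoid outputs; target: (B, H, W) float 0/1 mask.
    """
    bce = F.binary_cross_entropy(det_prob, target, reduction="none")
    pos, neg = target > 0.5, target <= 0.5
    l_p = bce[pos].mean() if pos.any() else det_prob.sum() * 0
    l_n = bce[neg].mean()
    # L_h: the k hardest negatives, k = number of positive pixels
    k = max(1, min(int(pos.sum().item()), int(neg.sum().item())))
    l_h = torch.topk(bce[neg].flatten(), k)[0].mean()
    return w_p * l_p + w_n * l_n + w_h * l_h
```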
B. Data augmentation
For augmentation we do the following (a sketch of the pipeline is given below): 1) with p=0.1, return the original non-augmented image; 2) with p=0.5, rotate the image by a random angle in [-45, 45] degrees; 3) with p=0.5, rotate the image by one of 90, 180, or 270 degrees; 4) with p=0.5, perform a random crop, limited so that all barcodes remain entirely inside the image and the aspect ratio changes by no more than 70% compared to the original image; 5) with p=0.7, apply additional image augmentation, using the "less important" augmenters from the heavy augmentation example of the imgaug library [14].
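Under the stated probabilities, the pipeline could be sketched with imgaug roughly as follows. The barcode-preserving crop is replaced here by a plain random crop, since the real constraint needs the box annotations, and the "additional augmentation" stage is represented by a few illustrative augmenters rather than the exact set from [14].

```python
import numpy as np
import imgaug.augmenters as iaa

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in input image

# With p=0.1 the whole pipeline is skipped, returning the original image.
augmenter = iaa.Sometimes(0.9, iaa.Sequential([
    iaa.Sometimes(0.5, iaa.Affine(rotate=(-45, 45))),
    iaa.Sometimes(0.5, iaa.Rot90((1, 3))),
    # Stand-in for the barcode-preserving random crop: the real version
    # must keep every barcode fully inside the image and limit the
    # aspect-ratio change to 70%, which requires the box annotations.
    iaa.Sometimes(0.5, iaa.Crop(percent=(0.0, 0.2), keep_size=False)),
    iaa.Sometimes(0.7, iaa.Sequential([
        iaa.GaussianBlur(sigma=(0.0, 1.0)),
        iaa.AdditiveGaussianNoise(scale=(0, 10)),
        iaa.Multiply((0.8, 1.2)),  # brightness change
    ])),
]))

aug_image = augmenter(image=image)
```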
V. EXPERIMENTAL RESULTS
A. Datasets
The network performance was evaluated on two common benchmarks for barcode detection, namely the WWU Muenster Barcode Database (Muenster) and the ArTe-Lab Medium Barcode Dataset (Artelab). The datasets contain 595 and 365 images with ground truth detection masks, respectively, and the resolution of all images is 640x480. All images in the Artelab dataset contain exactly one EAN13 barcode, while in Muenster there may be several barcodes per image.
For training we used our own dataset with both 1D barcodes (Code128, Patch, EAN8, Code93, UCC128, EAN13, Industrial25, Code32, FullASCIICode, UPCE, MATRIX25, Code39, IATA25, UPCA, CODABAR, Interleaved25) and 2D barcodes (QRCode, Aztec, MaxiCode, DataMatrix, PDF417): 16 different 1D types and 5 2D types, 21 types in total. The training dataset contains both photos and document scans; example images from our dataset can be found in Fig. 2. The dataset consists of 17k images in total, 10% of which were used for validation.
B. Training procedure
We trained our model with batch size 8 for 70 epochs with a learning rate of 0.001, followed by an additional 70 epochs with a learning rate of 0.0001.
During training we resized all images so that the longer side was at most 1024 pixels, preserving the aspect ratio and making both sides divisible by 64. We pick and augment/preprocess 3000 images from the dataset and group them into batches by image size; we then pick the next 3000 images and proceed in the same way until the end of the dataset is reached, after which we shuffle the image order and repeat the whole process until the end of training.
We trained three models: Ours-Detection (all types), trained without classification on the entire dataset; Ours-Detection+Classification (all types), trained with classification on the entire dataset; and Ours-Detection (EAN13 only), trained without classification on the EAN13 subset of 1000 images.
C. Evaluation metrics
We follow the common evaluation scheme from [7]. Given a binary ground truth mask G and a binary detection mask F, the Jaccard index between them is defined as

J(G, F) = |G ∩ F| / |G ∪ F|

Another common name for the Jaccard index, which follows directly from its definition, is "intersection over union" (IoU). The overall detection rate for a given IoU threshold T is defined as the fraction of images in the dataset whose IoU exceeds that threshold:

D_T = (1 / |S|) Σ_{i ∈ S} I(J(G_i, F_i) ≥ T)

where S is the set of images in the dataset and I is the indicator function. However, one image may contain several barcodes, and if one of them is very big and another is very small, D_T will indicate an error only at a very high threshold. We therefore find it reasonable to evaluate detection performance with additional metrics that average results not over images but over the ground truth barcode objects on them.
For this purpose we use recall R_T, defined as the number of successfully detected objects divided by the total number of ground truth objects:

R_T = Σ_{i ∈ S} Σ_{G ∈ SG_i} I(J(G, F(G)) ≥ T) / Σ_{i ∈ S} |SG_i|

where SG_i is the set of ground truth objects on image i and F(G) is the detected box with the highest Jaccard index with box G. The paired metric for recall is precision P_T, defined as the number of successfully detected objects divided by the total number of detections:

P_T = Σ_{i ∈ S} Σ_{G ∈ SG_i} I(J(G, F(G)) ≥ T) / Σ_{i ∈ S} |SF_i|

where SF_i is the set of all detections made on image i.
We find connected components in the ground truth binary masks and treat them as ground truth objects.
We emphasize that all the metrics above are computed for each detected object regardless of its actual type. To evaluate the classification of detected objects by type, we use the simple accuracy metric (the number of correctly classified objects divided by the number of correctly detected objects). Thus, if a PDF417 barcode is detected but classified as a QR code, precision and recall are not affected, but classification accuracy is.
D. Quantitative results
We compare our results with Cresot2015 [7], Cresot2016 [9], Namane2017 [10], and Yolo2017 [11] on the Artelab and Muenster datasets (Table II).
The proposed method is trained on our own dataset, whereas all the other works we compare with were trained on different datasets. Fully reproducing those works on our dataset would require following exactly the same training protocol (including initialization and augmentations) to avoid underestimating their results, so we decided to rely on the numbers reported by their authors. We outperform all previous works in terms of detection rate on the Artelab dataset with the model trained only on the EAN13 subset of our dataset. According to the tables, the detection rate of the model trained on the entire dataset with all barcode types is slightly worse than that of the model trained on the EAN13 subset. The reason is not poor generalization but markup errors, or the model capturing more barcodes than are present in the markup (i.e., non-EAN barcodes); see Fig. 3.
As can be seen in Fig. 4, our model shows a rapid decrease in detection rate at higher Jaccard thresholds. Aside from markup errors, the main reason is the overestimation of barcode borders in detection, caused by prioritizing high recall during training; its impact grows at higher Jaccard thresholds, as the Jaccard index is known to be very sensitive near an exact match (Fig. 5).
Table III compares our models by precision and recall. Our models achieve close to perfect recall, meaning that almost all barcodes are detected, while precision also remains relatively high.
E. Execution time
For our network we measure time at 512x512 resolution, which is sufficient for most applications. We do not include postprocessing time, as it is negligible compared to the forward pass of the network.
The developed network performs at real-time speed and is 3.5 times faster than YOLO with darknet [11] at a higher resolution on the same GTX 1080 GPU. In Table IV we compare the inference time of our model with other approaches. We also provide the CPU inference time of our model (Intel Core i5, 3.20 GHz), showing that it is nearly the same as reported in Cresot2016, where the authors used their approach in a real-time smartphone application. This is important since not all devices have a GPU yet.
F. Classification results
Among correctly detected objects we measured classification accuracy and achieved 60% on the test set.
Moreover, the classification subtask does not hurt detection results. As shown in Table III, the results with classification are even slightly better, meaning that the detection and classification tasks can mutually benefit from each other.
G. Capturing long narrow barcodes
An additional advantage of our detector is that it can find objects of arbitrary shape and does not assume that objects are approximately square, as YOLO does. Some examples are provided in Fig. 2.
VI. CONCLUSION
We have introduced a new barcode detector that achieves comparable or better performance on public benchmarks while being much faster than other methods. Moreover, our model is a universal barcode detector capable of detecting both 1D and 2D barcodes of many different types. With fewer than 33,000 weights, the model is very light and therefore compact enough to be suitable for mobile devices.
Despite being shallow (i.e., very simple; we did not use any state-of-the-art techniques for semantic segmentation), our model shows that semantic segmentation can be used efficiently for object detection. It also provides a natural way to detect objects of arbitrary shape (e.g., very long but narrow).
Future work may include using more advanced semantic segmentation approaches to develop a better network architecture and increase performance.
| 2,571 |
1906.06281
|
2949660381
|
Barcodes are used in many commercial applications, so fast and robust reading is important. There are many different types of barcodes; some of them look similar while others are completely different. In this paper we introduce a new fast and robust deep learning detector based on a semantic segmentation approach. It is capable of detecting barcodes of any type simultaneously, both in document scans and in the wild, by means of a single model. The detector achieves state-of-the-art results on the ArTe-Lab 1D Medium Barcode Dataset with a detection rate of 0.995. Moreover, the developed detector can deal with more complicated object shapes, such as very long but narrow or very small barcodes. The proposed approach can also identify the types of the detected barcodes, and it performs at real-time speed in a CPU environment, being much faster than previous state-of-the-art approaches.
|
The work of Cresot, 2015 @cite_0 is a solid baseline for 1D barcode detection. They evaluated their approach on Muenster and on the extended ArTe-Lab 1D Medium Barcode Database (Artelab) provided by Zamberletti @cite_3, outperforming that work on both datasets. The solution in @cite_0 seems to outperform @cite_1, although a direct comparison is hard as they were evaluated on different datasets using slightly different metrics. Cresot's algorithm detects the dark bars of barcodes using Maximal Stable Extremal Regions (MSER), followed by finding an imaginary center line perpendicular to the bars in Hough space. In 2016 Cresot published a new paper @cite_11 improving the previous results by using a new variant of the Line Segment Detector instead of MSER, which they called the Parallel Segment Detector. @cite_9 proposes another bar-detection method for 1D barcode detection, which is reported to be absolutely precise in real-time applications.
|
{
"abstract": [
"",
"With the proliferation of built-in cameras barcode scanning on smartphones has become widespread in both consumer and enterprise domains. To avoid making the user precisely align the barcode at a dedicated position and angle in the camera image, barcode localization algorithms are necessary that quickly scan the image for possible barcode locations and pass those to the actual barcode decoder. In this paper, we present a barcode localization approach that is orientation, scale, and symbology (1D and 2D) invariant and shows better blur invariance than existing approaches while it operates in real time on a smartphone. Previous approaches focused on selected aspects such as orientation invariance and speed for 1D codes or scale invariance for 2D codes. Our combined method relies on the structure matrix and the saturation from the HSV color system. The comparison with three other real-time barcode localization algorithms shows that our approach outperforms the state of the art with respect to symbology and blur invariance at the expense of a reduced speed.",
"Barcode reading mobile applications that identify products from pictures taken using mobile devices are widely used by customers to perform online price comparisons or to access reviews written by others. Most of the currently available barcode reading approaches focus on decoding degraded barcodes and treat the underlying barcode detection task as a side problem that can be addressed using appropriate object detection methods. However, the majority of modern mobile devices do not meet the minimum working requirements of complex general purpose object detection algorithms and most of the efficient specifically designed barcode detection algorithms require user interaction to work properly. In this paper, we present a novel method for barcode detection in camera captured images based on a supervised machine learning algorithm that identifies one-dimensional barcodes in the two-dimensional Hough Transform space. Our model is angle invariant, requires no user interaction and can be executed on a modern mobile device. It achieves excellent results for two standard one-dimensional barcode datasets: WWU Muenster Barcode Database and ArTe-Lab 1D Medium Barcode Dataset. Moreover, we prove that it is possible to enhance the overall performance of a state-of-the-art barcode reading algorithm by combining it with our detection method.",
"The linear 1D barcode is the main tagging system for billions of products sold each day. Barcodes have many advantages but require a laser scanner for fast and robust scanning. Solutions exist to read barcodes from cell phones but they assume a carefully framed image within the field of view. This undermines the true potential of barcodes in a wide range of scenarios. In this paper we present a real time technique to detect barcodes in the wild from video streams. Our technique outperforms the state-of-the-art passive techniques both in accuracy and speed. Potential commercial applications enabled by such passive scanning system are also discussed in this paper.",
"Linear barcodes are the principal labeling system for retail products. Barcode reader apps found on smartphones always assume that the localization and framing of the barcode is performed manually by a sighted human operator. This is problematic for visually-impaired people since they don't know where to position the camera to scan the barcode. To solve this problem we propose a hand-free interface to detect barcode using a wearable camera. The user rotate a query product in front of him her and is informed when and where the barcode is visible. The challenge is to detect small barcodes at arm's length in a video with potentially large motion blur. In this paper we propose a novel technique for barcode detection using very little computation (adapted to wearable systems), presenting very good robustness to blur and size variations, and able to run on HD video streams. The proposed system perform significantly better than the state-of-the-art methods on existing public datasets, while being much faster. A new and challenging egocentric product video dataset is also provided with this paper."
],
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_11"
],
"mid": [
"",
"2048438392",
"1970907315",
"2008162253",
"2513174007"
]
}
|
Universal Barcode Detector via Semantic Segmentation
|
Since the 1960s, people have invented many barcode types, which serve for machine-readable data representation and have many applications in various fields. The most frequently used are probably the UPC and EAN barcodes for consumer product labeling; EAN128 serves for transferring information about cargo between enterprises; QR codes are widely used to provide links; and PDF417 has a variety of applications in transport, identification cards, and inventory management. Barcodes have become ubiquitous in the modern world: they are used as electronic tickets, in official documents, in advertisement, in healthcare, and for tracking objects and people. Examples of popular barcode types are shown in Fig. 1.
There are two main approaches to decoding barcodes: the former uses a laser, the latter just a simple camera. Through years of development, laser scanners have become very reliable and fast at reading exactly one 1D barcode, but they are completely unable to deal with 2D barcodes or to read several barcodes at the same time. Another drawback is that they cannot read barcodes from screens efficiently, as they rely strongly on reflected light.
A popular camera-based reader is a simple smartphone application capable of scanning almost any type of barcode. However, most applications require some user guidance, such as pointing at the barcode to decode, and most decode only one barcode at a time, although decoding several at once is possible. In this work, we introduce a segmentation-based barcode detector that locates all barcodes simultaneously, no matter how many of them are present in the image or which types they are, so the system does not need any user guidance. The developed detector also provides information about the most probable types of the detected barcodes, thus decreasing the time needed for the reading process.
III. DETECTION VIA SEGMENTATION
Our approach is inspired by the idea of PixelLink [12], where the authors solve text detection via instance segmentation.
We believe that for barcodes the situation where two of them are close to each other is unusual, so we do not really need to solve an instance segmentation problem; dealing with the semantic segmentation task should be enough.
PixelLink shows good results at capturing long but narrow lines of text, which can also be the case for some barcode types, so we believe such object-shape invariance is an additional advantage.
To solve the detection task we first run a semantic segmentation network and then postprocess its results.
A. Semantic segmentation network
Barcodes normally cannot be too small, so predicting results at a resolution 4 times lower than the original image should be enough for reasonably good results. Thus we compute the segmentation map for superpixels, which are 4x4 pixel blocks.
Detection is the primary task we focus on in this work; type classification is treated as a less important side task. Most barcodes share a common structure, so it is natural to classify pixels as being part of a barcode (class 1) or background (class 0); thus the segmentation network solves a binary (super)pixel classification task.
Barcodes are relatively simple objects and thus may be detected by a relatively simple architecture. To achieve real-time CPU speed we developed a fairly simple architecture based on dilated and separable convolutions (see Table I). It can be logically divided into 3 blocks: 1) Downscale Module, aimed at reducing the spatial feature dimensions; since these initial convolutions are applied to large feature maps they account for a significant share of the overall network time, so to speed up inference they are made separable. 2) Context Module, inspired by [13]; in our architecture, however, it serves a slightly different purpose, simply refining the features and exponentially increasing the receptive field with each layer. 3) Final classification layer, a 1x1 convolution with the number of filters equal to 1 + n_classes, where n_classes is the number of different barcode types we want to distinguish. We use a ReLU nonlinearity after each convolution except the final one, where we apply a sigmoid to the first channel and a softmax to all remaining channels.
We chose the number of channels C = 24 for all convolutional layers. Our experiments show that with more filters the model has comparable performance, but with fewer filters performance drops rapidly. As there are only a few channels in each layer, the final model is very compact, with only 32,962 weights.
As the maximal image resolution we work with is 512x512, the receptive field for a prediction covers at least half the image, which should be more than enough contextual information for detecting barcodes.

TABLE I. NETWORK LAYERS
Kernel           3x3  3x3  3x3    3x3    3x3    3x3    3x3      3x3      3x3      1x1
Output channels  C    C    C      C      C      C      C        C        C        1+N
Receptive field  3x3  7x7  11x11  19x19  35x35  67x67  131x131  259x259  267x267  267x267
B. Detecting barcodes based on segmentation
After the network pass we obtain a segmentation map for superpixels with 1 + n_classes channels. For detection we use only the first channel, which can be interpreted as the probability of a superpixel being part of a barcode.
We apply a threshold to this probability to obtain binary detection labels (barcode/background); in all our experiments the threshold is set to 0.5. We then find connected components in the resulting superpixel binary mask and compute the minimal-area bounding rectangle for each component, using the minAreaRect method of the OpenCV library (accessed Dec 2018). The found rectangles are treated as detected barcodes. To obtain the detection rectangle at the original image resolution, we multiply all of its vertex coordinates by the network scale of 4.
C. Filtering results
To avoid situations where a small group of pixels is accidentally predicted as a barcode, we filter out all superpixel connected components with area less than a threshold T_area. The threshold should be chosen slightly below the minimal object area in the dataset at segmentation-map resolution. In all of our experiments we used T_area = 20.
D. Classification of detected objects
To determine the barcode type of a detected object, we use the remaining n_classes channels of the segmentation network output; after the softmax we treat them as class probabilities.
Once a rectangle is found, we compute the average probability vector inside it and choose the class with the highest probability.
IV. OPTIMIZATION SCHEME
A. Loss function
The training loss is a weighted sum of the detection and classification losses:

L = L_detection + α · L_classification    (1)

The detection loss L_detection is itself a weighted sum of three components: the mean binary cross-entropy loss on positive pixels L_p, the mean binary cross-entropy loss on negative pixels L_n, and the mean binary cross-entropy loss on the k worst-predicted negative pixels L_h, where k is equal to the number of positive pixels in the image:

L_detection = w_p · L_p + w_n · L_n + w_h · L_h    (2)

The classification loss is the mean (categorical) cross-entropy computed over all channels except the first (detection) one, and it is calculated only on superpixels that are part of ground truth objects.
As our primary goal is high recall in detection, we chose w_p = 15, w_n = 1, w_h = 5, α = 1. We also tried several other configurations, but this combination was the best among them; we did not, however, spend much time on hyperparameter search.
B. Data augmentation
For augmentation we do the following: 1) with p=0.1, return the original non-augmented image; 2) with p=0.5, rotate the image by a random angle in [-45, 45] degrees; 3) with p=0.5, rotate the image by one of 90, 180, or 270 degrees; 4) with p=0.5, perform a random crop, limited so that all barcodes remain entirely inside the image and the aspect ratio changes by no more than 70% compared to the original image; 5) with p=0.7, apply additional image augmentation, using the "less important" augmenters from the heavy augmentation example of the imgaug library [14].
V. EXPERIMENTAL RESULTS
A. Datasets
The network performance was evaluated on two common benchmarks for barcode detection, namely the WWU Muenster Barcode Database (Muenster) and the ArTe-Lab Medium Barcode Dataset (Artelab). The datasets contain 595 and 365 images with ground truth detection masks, respectively, and the resolution of all images is 640x480. All images in the Artelab dataset contain exactly one EAN13 barcode, while in Muenster there may be several barcodes per image.
For training we used our own dataset with both 1D barcodes (Code128, Patch, EAN8, Code93, UCC128, EAN13, Industrial25, Code32, FullASCIICode, UPCE, MATRIX25, Code39, IATA25, UPCA, CODABAR, Interleaved25) and 2D barcodes (QRCode, Aztec, MaxiCode, DataMatrix, PDF417): 16 different 1D types and 5 2D types, 21 types in total. The training dataset contains both photos and document scans; example images from our dataset can be found in Fig. 2. The dataset consists of 17k images in total, 10% of which were used for validation.
B. Training procedure
We trained our model with batch size 8 for 70 epochs with a learning rate of 0.001, followed by an additional 70 epochs with a learning rate of 0.0001.
During training we resized all images so that the longer side was at most 1024 pixels, preserving the aspect ratio and making both sides divisible by 64. We pick and augment/preprocess 3000 images from the dataset and group them into batches by image size; we then pick the next 3000 images and proceed in the same way until the end of the dataset is reached, after which we shuffle the image order and repeat the whole process until the end of training.
We trained three models: Ours-Detection (all types), trained without classification on the entire dataset; Ours-Detection+Classification (all types), trained with classification on the entire dataset; and Ours-Detection (EAN13 only), trained without classification on the EAN13 subset of 1000 images.
C. Evaluation metrics
We follow the common evaluation scheme from [7]. Given a binary ground truth mask G and a binary detection mask F, the Jaccard index between them is defined as

J(G, F) = |G ∩ F| / |G ∪ F|

Another common name for the Jaccard index, which follows directly from its definition, is "intersection over union" (IoU). The overall detection rate for a given IoU threshold T is defined as the fraction of images in the dataset whose IoU exceeds that threshold:

D_T = (1 / |S|) Σ_{i ∈ S} I(J(G_i, F_i) ≥ T)

where S is the set of images in the dataset and I is the indicator function. However, one image may contain several barcodes, and if one of them is very big and another is very small, D_T will indicate an error only at a very high threshold. We therefore find it reasonable to evaluate detection performance with additional metrics that average results not over images but over the ground truth barcode objects on them.
For this purpose we use recall R_T, defined as the number of successfully detected objects divided by the total number of ground truth objects:

R_T = Σ_{i ∈ S} Σ_{G ∈ SG_i} I(J(G, F(G)) ≥ T) / Σ_{i ∈ S} |SG_i|

where SG_i is the set of ground truth objects on image i and F(G) is the detected box with the highest Jaccard index with box G. The paired metric for recall is precision P_T, defined as the number of successfully detected objects divided by the total number of detections:

P_T = Σ_{i ∈ S} Σ_{G ∈ SG_i} I(J(G, F(G)) ≥ T) / Σ_{i ∈ S} |SF_i|

where SF_i is the set of all detections made on image i.
We find connected components in the ground truth binary masks and treat them as ground truth objects.
We emphasize that all the metrics above are computed for each detected object regardless of its actual type. To evaluate the classification of detected objects by type, we use the simple accuracy metric (the number of correctly classified objects divided by the number of correctly detected objects). Thus, if a PDF417 barcode is detected but classified as a QR code, precision and recall are not affected, but classification accuracy is.
D. Quantitative results
We compare our results with Cresot2015 [7], Cresot2016 [9], Namane2017 [10], and Yolo2017 [11] on the Artelab and Muenster datasets (Table II).
The proposed method is trained on our own dataset, whereas all the other works we compare with were trained on different datasets. Fully reproducing those works on our dataset would require following exactly the same training protocol (including initialization and augmentations) to avoid underestimating their results, so we decided to rely on the numbers reported by their authors. We outperform all previous works in terms of detection rate on the Artelab dataset with the model trained only on the EAN13 subset of our dataset. According to the tables, the detection rate of the model trained on the entire dataset with all barcode types is slightly worse than that of the model trained on the EAN13 subset. The reason is not poor generalization but markup errors, or the model capturing more barcodes than are present in the markup (i.e., non-EAN barcodes); see Fig. 3.
As can be seen in Fig. 4, our model shows a rapid decrease in detection rate at higher Jaccard thresholds. Aside from markup errors, the main reason is the overestimation of barcode borders in detection, caused by prioritizing high recall during training; its impact grows at higher Jaccard thresholds, as the Jaccard index is known to be very sensitive near an exact match (Fig. 5).
Table III compares our models by precision and recall. Our models achieve close to perfect recall, meaning that almost all barcodes are detected, while precision also remains relatively high.
E. Execution time
For our network we measure time at 512x512 resolution, which is sufficient for most applications. We do not include postprocessing time, as it is negligible compared to the forward pass of the network.
The developed network performs at real-time speed and is 3.5 times faster than YOLO with darknet [11] at a higher resolution on the same GTX 1080 GPU. In Table IV we compare the inference time of our model with other approaches. We also provide the CPU inference time of our model (Intel Core i5, 3.20 GHz), showing that it is nearly the same as reported in Cresot2016, where the authors used their approach in a real-time smartphone application. This is important since not all devices have a GPU yet.
F. Classification results
Among correctly detected objects we measured classification accuracy and achieved 60% on the test set.
Moreover, the classification subtask does not hurt detection results. As shown in Table III, the results with classification are even slightly better, meaning that the detection and classification tasks can mutually benefit from each other.
G. Capturing long narrow barcodes
An additional advantage of our detector is that it can find objects of arbitrary shape and does not assume that objects are approximately square, as YOLO does. Some examples are provided in Fig. 2.
VI. CONCLUSION
We have introduced a new barcode detector that achieves comparable or better performance on public benchmarks while being much faster than other methods. Moreover, our model is a universal barcode detector capable of detecting both 1D and 2D barcodes of many different types. With fewer than 33,000 weights, the model is very light and therefore compact enough to be suitable for mobile devices.
Despite being shallow (i.e., very simple; we did not use any state-of-the-art techniques for semantic segmentation), our model shows that semantic segmentation can be used efficiently for object detection. It also provides a natural way to detect objects of arbitrary shape (e.g., very long but narrow).
Future work may include using more advanced semantic segmentation approaches to develop a better network architecture and increase performance.
| 2,571 |
1809.08761
|
2952137661
|
We propose a new model for speaker naming in movies that leverages visual, textual, and acoustic modalities in a unified optimization framework. To evaluate the performance of our model, we introduce a new dataset consisting of six episodes of the Big Bang Theory TV show and eighteen full movies covering different genres. Our experiments show that our multimodal model significantly outperforms several competitive baselines on the average weighted F-score metric. To demonstrate the effectiveness of our framework, we design an end-to-end memory network model that leverages our speaker naming model and achieves state-of-the-art results on the subtitles task of the MovieQA 2017 Challenge.
|
The problem of speaker naming in movies has been explored by both the computer vision and the speech communities. In the computer vision community, speaker naming is usually considered a face/person naming problem, in which names are assigned to their corresponding faces on the screen @cite_31 @cite_20 @cite_38 @cite_11 @cite_35 . The speech community, on the other hand, treats it as a speaker identification problem, which focuses on recognizing and clustering speakers rather than naming them @cite_33 @cite_21 . In this work, we aim to solve the problem of speaker naming in movies, in which we label each segment of the subtitles with its corresponding speaker name, whether or not the speaker's face appears in the video.
|
{
"abstract": [
"",
"We address the problem of person identification in TV series. We propose a unified learning framework for multi-class classification which incorporates labeled and unlabeled data, and constraints between pairs of features in the training. We apply the framework to train multinomial logistic regression classifiers for multi-class face recognition. The method is completely automatic, as the labeled data is obtained by tagging speaking faces using subtitles and fan transcripts of the videos. We demonstrate our approach on six episodes each of two diverse TV series and achieve state-of-the-art performance.",
"In this paper we provide a brief overview of the area of speaker recognition, describing applications, underlying techniques and some indications of performance. Following this overview we will discuss some of the strengths and weaknesses of current speaker recognition technologies and outline some potential future trends in research, development and applications.",
"A tutorial on the design and development of automatic speaker-recognition systems is presented. Automatic speaker recognition is the use of a machine to recognize a person from a spoken phrase. These systems can operate in two modes: to identify a particular person or to verify a person's claimed identity. Speech processing and the basic components of automatic speaker-recognition systems are shown and design tradeoffs are discussed. Then, a new automatic speaker-recognition system is given. This recognizer performs with 98.9 correct decalcification. Last, the performances of various systems are compared.",
"We investigate the problem of automatically labelling appearances of characters in TV or film material. This is tremendously challenging due to the huge variation in imaged appearance of each character and the weakness and ambiguity of available annotation. However, we demonstrate that high precision can be achieved by combining multiple sources of information, both visual and textual. The principal novelties that we introduce are: (i) automatic generation of time stamped character annotation by aligning subtitles and transcripts; (ii) strengthening the supervisory information by identifying when characters are speaking; (iii) using complementary cues of face matching and clothing matching to propose common annotations for face tracks. Results are presented on episodes of the TV series “Buffy the Vampire Slayer”.",
"We address the character identification problem in movies and television videos: assigning names to faces on the screen. Most prior work on person recognition in video assumes some supervised data such as screenplay or handlabeled faces. In this paper, our only source of ‘supervision’ are the dialog cues: first, second and third person references (such as “I'm Jack”, “Hey, Jack!” and “Jack left”). While this kind of supervision is sparse and indirect, we exploit multiple modalities and their interactions (appearance, dialog, mouth movement, synchrony, continuity-editing cues) to effectively resolve identities through local temporal grouping followed by global weakly supervised recognition. We propose a novel temporal grouping model that partitions face tracks across multiple shots while respecting appearance, geometric and film-editing cues and constraints. In this model, states represent partitions of the k most recent face tracks, and transitions represent compatibility of consecutive partitions. We present dynamic programming inference and discriminative learning for the model. The individual face tracks are subsequently assigned a name by learning a classifier from partial label constraints. The weakly supervised classifier incorporates multiple-instance constraints from dialog cues as well as soft grouping constraints from our temporal grouping. We evaluate both the temporal grouping and final character naming on several hours of TV and movies.",
"Person identification in TV series has been a popular research topic over the last decade. In this area, most approaches either use manually annotated data or extract character supervision from a combination of subtitles and transcripts. However, both approaches have key drawbacks that hinder application of these methods at a large scale — manual annotation is expensive and transcripts are often hard to obtain. We investigate the topic of automatically labeling all character appearances in TV series using information obtained solely from subtitles. This task is extremely difficult as the dialogs between characters provide very sparse and weakly supervised data. We address these challenges by exploiting recent advances in face descriptors and Multiple Instance Learning methods. We propose methods to create MIL bags and evaluate and discuss several MIL techniques. The best combination achieves an average precision over 80 on three diverse TV series. We demonstrate that only using subtitles provides good results on identifying characters in TV series and wish to encourage the community towards this problem."
],
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_33",
"@cite_21",
"@cite_31",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2093153344",
"2070176749",
"2129244720",
"2168996682",
"2170665590",
"2400416707"
]
}
|
Speaker Naming in Movies
|
Identifying speakers and their names in movies, and videos in general, is a primary task for many video analysis problems, including automatic subtitle labeling (Hu et al., 2015), content-based video indexing and retrieval (Zhang et al., 2009), video summarization (Tapaswi et al., 2014), and video storyline understanding (Tapaswi et al., 2014). It is a very challenging task, as the visual appearance of the characters changes over the course of the movie due to several factors such as scale, clothing, illumination, and so forth (Arandjelovic and Zisserman, 2005;Everingham et al., 2006). The annotation of movie data with speakers' names can be helpful in a number of applications, such as movie question answering , automatic identification of character relationships (Zhang et al., 2009), or automatic movie captioning (Hu et al., 2015).
Most previous studies relied primarily on visual information (Arandjelovic and Zisserman, 2005; Everingham et al., 2006) and aimed for the slightly different task of face track labeling: speakers who did not appear in the video frame were not assigned any names, even though such off-screen speech is common in movies and TV shows. Other available sources of information, such as scripts, were only used to extract cues about the speakers' names in order to associate the faces in the videos with their corresponding character names (Everingham et al., 2006; Tapaswi et al., 2015; Bäuml et al., 2013; Sivic et al., 2009); however, since scripts are not always available, the applicability of these methods is somewhat limited.
Other studies focused on the problem of speaker recognition without naming, using the speech modality as a single source of information. While some of these studies attempted to incorporate the visual modality, their goal was to cluster the speech segments rather than name the speakers (Erzin et al., 2005;Bost and Linares, 2014;Kapsouras et al., 2015;Bredin and Gelly, 2016;Hu et al., 2015;Ren et al., 2016). None of these studies used textual information (e.g., dialogue), which prevented them from identifying speaker names.
In our work, we address the task of speaker naming and propose a new multimodal model that leverages, in a unified framework, the visual, speech, and textual modalities that are naturally available while watching a movie. We do not assume the availability of a movie script or a cast list, which makes our model fully unsupervised and easily applicable to unseen movies.
The paper makes two main contributions. First, we introduce a new unsupervised system for speaker naming in movies and TV shows that depends exclusively on videos and subtitles and relies on a novel unified optimization framework that fuses the visual, textual, and acoustic modalities. Second, we construct and make available a dataset consisting of 24 movies with 31,019 turns manually annotated with character names. Additionally, we also evaluate the role of speaker naming when embedded in an end-to-end memory network model, achieving state-of-the-art performance results on the subtitles task of the MovieQA 2017 Challenge.
Datasets
Our dataset consists of a mix of TV show episodes and full movies. For the TV show, we use six full episodes of season one of The Big Bang Theory (BBT). The number of named characters in the BBT episodes varies between 5 and 8 per episode, and the background noise level is low. Additionally, we acquired a set of eighteen full movies from different genres to evaluate how our model works under different conditions. In this latter dataset, the number of named characters ranges between 6 and 37, and the level of background noise varies.
We manually annotated this dataset with the character name of each subtitle segment. To facilitate the annotation process, we built an interface that parses the movies' subtitle files, collects the cast list from IMDB for each movie, and then shows one subtitle segment at a time along with the cast list so that the annotator can choose the correct character. Using this tool, human annotators watched the movies and assigned a speaker name to each subtitle segment. If a character name was not mentioned in the dialogue, the annotators labeled it as "unknown." To evaluate the quality of the annotations, five movies in our dataset were double annotated. The Cohen's Kappa inter-annotator agreement score for these five movies is 0.91, which indicates a strong level of agreement.
To clean the data, we removed empty segments, as well as subtitle description parts written between brackets such as "[groaning]" and "[sniffing]". We also removed segments with two speakers at the same time. We intentionally avoided using any automatic means to split these segments, to preserve the high-quality of our gold standard. Table 1 shows the statistics of the collected data. Overall, the dataset consists of 24 videos with a total duration of 40.28 hours, a net dialogue duration of 21.99 hours, and a total of 31,019 turns spoken by 463 different speakers. Four of the movies in this dataset are used as a development set to develop supplementary systems and to fine tune our model's parameters; the remaining movies are used for evaluation.
Data Processing and Representations
We process the movies by extracting several textual, acoustic, and visual features.
Textual Features
We use the following representations for the textual content of the subtitles.

SkipThoughts uses a Recurrent Neural Network to capture the underlying semantic and syntactic properties, and maps them to a vector representation (Kiros et al., 2015). We use the pretrained model to compute a 4,800-dimensional sentence representation for each line in the subtitles.

TF-IDF is a traditional weighting scheme in information retrieval. We represent each subtitle as a vector of tf-idf weights, where the length of the vector (i.e., the vocabulary size) and the idf scores are obtained from the movie containing the subtitle.
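For concreteness, a minimal sketch of the per-movie tf-idf representation, assuming the subtitles of one movie are available as a list of strings; scikit-learn's TfidfVectorizer stands in for the exact weighting variant:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical subtitle lines for one movie; in practice these are parsed
# from the movie's .srt file.
subtitles = [
    "I'm Sheldon.",
    "Oh, hi, Penny.",
    "So how did it go with Leslie?",
]

# The vocabulary and the idf scores are fit on the movie's own subtitles,
# as described above (movie-specific vocabulary size and idf values).
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(subtitles)  # one row of tf-idf weights per subtitle
print(tfidf.shape)  # (num_subtitles, vocabulary_size)
```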
Acoustic Features
For each movie in the dataset, we extract the audio from the center channel. The center channel is usually dedicated to the dialogue in movies, while the other audio channels carry the surrounding sounds from the environment and the musical background. Although doing this does not fully eliminate the noise in the audio signal, it still improves the speech-to-noise ratio of the signal. When a movie has stereo sound (left and right channels only), we down-mix both channels of the stereo stream into a mono channel.
In this work, we use the subtitles timestamps as an estimate of the boundaries that correspond to the uttered speech segments. Usually, each subtitle corresponds to a segment being said by a single speaker. We use the subtitle timestamps for segmentation so that we can avoid automatic speaker diarization errors and focus on the speaker naming problem.
To represent the relevant acoustic information from each spoken segment, we use iVectors, the state-of-the-art unsupervised approach in speaker verification (Dehak et al., 2011). While deep learning-based speaker embedding models also exist, we do not have access to enough supervised data to build such models. We train unsupervised iVectors for each movie in the dataset, using the iVector extractor used in (Khorram et al., 2016). We extract iVectors of size 40 using a Gaussian Mixture Model-Universal Background Model (GMM-UBM) with 512 components. Each iVector corresponds to a speech segment uttered by a single speaker. We fine tune the size of the iVectors and the number of GMM-UBM components using the development dataset.
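The iVector extractor itself is an external component, not reproduced here. As an illustration of the unsupervised, per-movie training regime only, the sketch below fits a small diagonal-covariance GMM-UBM on a movie's pooled MFCC frames and represents each subtitle segment by its averaged component posteriors — a deliberately simplified stand-in, not the 40-dimensional iVector model of (Khorram et al., 2016):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical MFCC frames: one (num_frames, num_coeffs) array per subtitle segment.
segments = [rng.normal(size=(int(rng.integers(50, 200)), 20)) for _ in range(30)]

# Train the UBM on all frames of one movie (unsupervised, per movie).
ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
ubm.fit(np.vstack(segments))

def segment_embedding(frames):
    # Average component posteriors over the segment's frames: a fixed-size,
    # crude stand-in for the segment's iVector.
    return ubm.predict_proba(frames).mean(axis=0)

embeddings = np.stack([segment_embedding(s) for s in segments])
print(embeddings.shape)  # (num_segments, 8)
```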
Visual Features
We detect faces in the movies every five frames using the recently proposed MTCNN (Zhang et al., 2016) model, which is pretrained for face detection and facial landmark alignment. Based on the results of face detection, we apply the forward and backward tracker with an implementation of the Dlib library (King, 2009;Danelljan et al., 2014) to extract face tracks from each video clip. We represent a face track using its best face in terms of detection score, and use the activations of the fc7 layer of pretrained VGG-Face (Parkhi et al., 2015) network as visual features.
We calculate the distance between the upper lip center and the lower lip center based on the 68-point facial landmark detection implemented in the Dlib library (King, 2009; Kazemi and Sullivan, 2014). This distance is normalized by the height of the face bounding box and concatenated across frames to represent the amount of mouth opening. A human usually speaks with lips moving within a certain frequency range (3.75 Hz to 7.5 Hz in this work) (Tapaswi et al., 2015). We apply a bandpass filter to amplify the signal of true lip motion in these segments. The overall sum of lip motion is used as the score for the talking face.
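A sketch of the talking-face score under the stated 3.75–7.5 Hz band; the sampling rate of the lip signal and the filter order are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def talking_score(lip_opening, fps=25.0, band=(3.75, 7.5)):
    """Band-pass the normalized lip-opening signal and sum its magnitude.

    lip_opening: 1-D array of per-frame distances between the upper and lower
    lip centers, already normalized by the face bounding-box height.
    """
    nyq = fps / 2.0
    b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    return np.abs(filtfilt(b, a, lip_opening)).sum()

# Toy check: a face track with ~5 Hz lip motion scores higher than a still face.
t = np.arange(0, 2.0, 1.0 / 25.0)
talking = 0.05 * np.sin(2 * np.pi * 5.0 * t)
still = np.zeros_like(t)
print(talking_score(talking) > talking_score(still))  # True
```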
Unified Optimization Framework
We tackle the problem of speaker naming as a transductive learning problem with constraints. In this approach, we want to use the sparse positive labels extracted from the dialogue and the underlying topological structure of the rest of the unlabeled data. We also incorporate multiple cues extracted from both textual and multimedia information. A unified learning framework is proposed to enable joint optimization over the automatically labeled and unlabeled data, along with multiple semantic cues.
Character Identification and Extraction
In this work, we do not consider the set of character names as given, because we want to build a model that can be generalized to unseen movies. This strict setting adds to the problem's complexity. To extract the list of characters from the subtitles, we use the Named Entity Recognizer (NER) in the Stanford CoreNLP toolkit (Manning et al., 2014). The output is a long list of person names that are mentioned in the dialogue. This list is prone to errors including, but not limited to, nouns that are misclassified by the NER as a person's name, such as "Dad" and "Aye"; names that are irrelevant to the movie, such as "Superman" or named animals; and uncaptured character names.
To clean the extracted name list of each movie, we cluster these names based on string minimum edit distance and their gender. From each cluster, we then pick a name to represent it based on its frequency in the dialogue. The result of this step consists of name clusters along with their distribution in the dialogue. The distribution of each cluster is the sum of the counts of all its members. To filter out irrelevant characters, we run a name reference classifier, which classifies each name mention into a first, second, or third person reference. If a name was only mentioned in the third person throughout the whole movie, we discard it from the list of characters. We also remove any name cluster that has a total count of less than three, which filters out names whose reference types were misclassified.
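A minimal sketch of this clustering step, using plain Levenshtein distance with a hypothetical distance threshold; the gender check is omitted for brevity:

```python
from collections import Counter

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def cluster_names(mentions, max_dist=2):
    """Greedily merge mentioned names whose edit distance is small, keeping
    the most frequent surface form as each cluster's representative."""
    counts = Counter(mentions)
    clusters = []  # list of (representative, total_count)
    for name, c in counts.most_common():
        for k, (rep, total) in enumerate(clusters):
            if edit_distance(name.lower(), rep.lower()) <= max_dist:
                clusters[k] = (rep, total + c)
                break
        else:
            clusters.append((name, c))
    # Drop clusters mentioned fewer than three times, as described above.
    return [(rep, total) for rep, total in clusters if total >= 3]

print(cluster_names(["Sheldon", "Shelden", "Penny", "Penny", "Penny",
                     "Sheldon", "Sheldon"]))
```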
Grammatical Cues
We use the subtitles to extract the name mentions in the dialogue. These mentions allow us to obtain cues about the speaker name and about the absence or presence of the mentioned character in the surrounding subtitles. Thus, they affect the probability that the mentioned character is the speaker or not. We follow the same name reference categories used in (Cour et al., 2010; Haurilet et al., 2016). We classify a name mention into: first (e.g., "I'm Sheldon"), second (e.g., "Oh, hi, Penny"), or third person reference (e.g., "So how did it go with Leslie?"). The first person reference represents a positive constraint that allows us to label the corresponding iVector of the speaker, and his or her face if it exists during the segment duration. The second person reference represents a multi-instance constraint that suggests that the mentioned name is one of the characters present in the scene, which increases the probability of this character being one of the speakers of the surrounding segments. On the other hand, the third person reference represents a negative constraint, as it suggests that the mentioned character does not exist in the scene, which lowers the probability of that character being one of the speakers of the next or the previous subtitle segments.
To identify first, second, and third person references, we train a linear support vector classifier. The training data for the first, second, and third person reference classifier are extracted and labeled from our development dataset, and the classifier is fine tuned using 10-fold cross-validation.
Unified Optimization Framework
Given a set of data points that consist of $l$ labeled and $u$ unlabeled instances, we apply an optimization framework to infer the best prediction of speaker names. Suppose we have $l+u$ instances $X = \{x_1, x_2, ..., x_l, x_{l+1}, ..., x_{l+u}\}$ and $K$ possible character names. We also get the dialogue-based positive labels $y_i$ for instances $x_i$, where $y_i$ is a $K$-dimensional one-hot vector and $y_i^j = 1$ if $x_i$ belongs to class $j$, for every $1 \le i \le l$ and $1 \le j \le K$. To name each instance $x_i$, we want to predict another one-hot vector of naming scores $f(x_i)$ for each $x_i$, such that $\arg\max_j f^j(x_i) = z_i$, where $z_i$ is the ground truth class of instance $x_i$.
To combine the positive labels and unlabeled data, we define the objective function for predictions $f$ as follows:

$$L_{initial}(f) = \frac{1}{l} \sum_{i=1}^{l} \|f(x_i) - y_i\|^2 + \frac{1}{l+u} \sum_{i=1}^{l+u} \sum_{j=1}^{l+u} w_{ij} \|f(x_i) - f(x_j)\|^2 \qquad (1)$$

Here $w_{ij}$ is the similarity between $x_i$ and $x_j$, which is calculated as the weighted sum of the textual, acoustic, and visual similarities. The inverse Euclidean distance is used as the similarity function for each modality. The weights for the different modalities are selected as hyperparameters and tuned on the development set. This objective leads to a convex loss function which is easier to optimize over feasible predictions.
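Eqn. 1 translates directly into NumPy, assuming the similarity matrix $w$ has already been assembled from the inverse Euclidean distances of the three modalities:

```python
import numpy as np

def initial_loss(f, y, w, l):
    """Eqn. 1: supervised squared error on the l dialogue-labeled instances,
    plus a smoothness term pulling similar instances toward the same name.

    f: (l+u, K) prediction scores; y: (l, K) one-hot positive labels;
    w: (l+u, l+u) pairwise similarities; l: number of labeled instances.
    """
    supervised = np.sum((f[:l] - y) ** 2) / l
    sq_dists = np.sum((f[:, None, :] - f[None, :, :]) ** 2, axis=-1)
    smoothness = np.sum(w * sq_dists) / f.shape[0]
    return supervised + smoothness

# Toy usage with random inputs.
rng = np.random.default_rng(0)
n, l, k = 10, 3, 4
f = rng.dirichlet(np.ones(k), size=n)
y = np.eye(k)[rng.integers(0, k, size=l)]
w = rng.random((n, n))
print(initial_loss(f, y, w, l))
```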
Besides the positive labels obtained from first person name references, we also introduce other semantic constraints and cues to enhance the power of our proposed approach. We implement the following four types of constraints:
Multiple Instance Constraint. Although the second person references cannot directly provide positive constraints, they imply that the mentioned characters have a high probability of being in the conversation. Following previous work (Cour et al., 2010), we incorporate the second person references as multiple instance constraints in our optimization: if $x_i$ has a second person reference $j$, we encourage $j$ to be assigned to its neighbors, i.e., its adjacent subtitles with similar timestamps. For the implementation, we simply include multiple instance constraints as a variant of positive labels with decreasing weights $s$, where $s = 1/(l - i)$ for each neighbor $x_l$.

Negative Constraint. For the third person references, the mentioned characters may not appear in the conversation, so we treat them as negative constraints: they imply that the mentioned characters should not be assigned to the corresponding instances. This constraint is formulated as follows:
$$L_{neg}(f) = \sum_{(i,j) \in N} [f^j(x_i)]^2 \qquad (2)$$
where $N$ is the set of negative constraints, i.e., pairs $(i, j)$ such that $x_i$ does not belong to class $j$.
Gender Constraint. We train a voice-based gender classifier using the subtitle segments from the four movies in our development dataset (5,543 segments of subtitles). We use the segments for which we know the speaker's name, and manually obtain the ground truth gender label from IMDB. We extract the signal energy and 20 Mel-frequency cepstral coefficients (MFCCs) along with their first and second derivatives, in addition to time- and frequency-based absolute fundamental frequency (f0) statistics, as features to represent each segment in the subtitles. The f0 statistics have been found to improve automatic gender detection performance for short speech segments (Levitan et al., 2016), which fits our case, since the median duration of the dialogue turns in our dataset is 2.6 seconds.
The MFCC features are extracted using a step size of 16 msec over a 64 msec window using the method from (Mathieu et al., 2010), while the f0 statistics are extracted using a step size of 25 msec over a 50 msec window as the default configuration in (Eyben et al., 2013). We then use these features to train a logistic regression classifier using the Scikit-learn library (Pedregosa et al., 2011). The average accuracy of the gender classifier on a 10-fold cross-validation is 0.8867.
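A sketch of this classifier, assuming librosa for feature extraction; the YIN estimator and the reduction of the f0 track to its mean and standard deviation are simplifying assumptions, and the training arrays below are synthetic stand-ins for the annotated development segments:

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def gender_features(y, sr):
    # 20 MFCCs over 64 ms windows with 16 ms steps, plus first and second
    # derivatives, summarized by per-coefficient mean and standard deviation.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20,
                                n_fft=int(0.064 * sr), hop_length=int(0.016 * sr))
    feats = [mfcc, librosa.feature.delta(mfcc), librosa.feature.delta(mfcc, order=2)]
    stats = [np.hstack([m.mean(axis=1), m.std(axis=1)]) for m in feats]
    # f0 statistics over 50 ms windows with 25 ms steps (YIN as a stand-in).
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr,
                     frame_length=int(0.050 * sr), hop_length=int(0.025 * sr))
    energy = np.array([np.log(np.sum(y ** 2) + 1e-8)])
    return np.hstack(stats + [np.array([f0.mean(), f0.std()]), energy])

# Synthetic feature vectors; in practice X comes from gender_features over the
# dev segments and labels from the IMDB-derived gender annotations.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 123))          # 123 = 3 * 40 MFCC stats + 2 f0 + energy
labels = rng.integers(0, 2, size=60)
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, labels, cv=10).mean())
```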
Given the results for the gender classification of audio segments and character names, we define the gender loss to penalize inconsistency between the predicted gender and character names:
$$L_{gender}(f) = \sum_{(i,j) \in Q_1} P_{ga}(x_i)\,(1 - P_{gn}(j))\,f^j(x_i) \;+\; \sum_{(i,j) \in Q_2} (1 - P_{ga}(x_i))\,P_{gn}(j)\,f^j(x_i) \qquad (3)$$
where $P_{ga}(x_i)$ is the probability for instance $x_i$ to be male, $P_{gn}(j)$ is the probability for name $j$ to be male, and

$$Q_1 = \{(i, j) \mid P_{ga}(x_i) < 0.5,\ P_{gn}(j) > 0.5\}, \qquad Q_2 = \{(i, j) \mid P_{ga}(x_i) > 0.5,\ P_{gn}(j) < 0.5\}.$$

Distribution Constraint.
We automatically analyze the dialogue and extract the number of mentions of each character in the subtitles using Stanford CoreNLP and string matching to capture names that are missed by the named entity recognizer. We then filter the resulting counts by removing third person mention references of each name as we assume that this character does not appear in the surrounding frames. We use the results to estimate the distribution of the speaking characters and their importance in the movies. The main goal of this step is to construct a prior probability distribution for the speakers in each movie.
To encourage our predictions to be consistent with the dialogue-based priors, we penalize the squared error between the distribution of the predictions and the name-mention priors in the following equation:

$$L_{dis}(f) = \sum_{j=1}^{K} \Big( \frac{1}{l+u} \sum_{i=1}^{l+u} f^j(x_i) - d_j \Big)^2 \qquad (4)$$

where $d_j$ is the ratio of mentions of name $j$ in all subtitles.
Final Framework. Combining the loss in Eqn. 1 and multiple losses with different constraints, we obtain our unified optimization problem:
$$f^* = \arg\min_f \; \lambda_1 L_{initial}(f) + \lambda_2 L_{MI}(f) + \lambda_3 L_{neg}(f) + \lambda_4 L_{gender}(f) + \lambda_5 L_{dis}(f) \qquad (5)$$
All of the λs are hyper-parameters to be tuned on the development set. We also include the constraint that the predictions for the different character names must sum to 1. We solve this constrained optimization problem with projected gradient descent (PGD). The problem in Eqn. 5 is guaranteed to be convex, and therefore projected gradient descent is guaranteed to converge to a global optimum. PGD usually converges after 800 iterations.
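A minimal sketch of the PGD loop, assuming a function `grad` that returns the gradient of the combined loss in Eqn. 5; the row-wise projection onto the probability simplex is the standard sorting-based algorithm:

```python
import numpy as np

def project_rows_to_simplex(f):
    """Project each row of f onto {p : p >= 0, sum(p) = 1} (sorting-based)."""
    u = np.sort(f, axis=1)[:, ::-1]                 # sort each row descending
    css = np.cumsum(u, axis=1) - 1.0
    idx = np.arange(1, f.shape[1] + 1)
    rho = (u - css / idx > 0).sum(axis=1)           # support size per row
    theta = css[np.arange(f.shape[0]), rho - 1] / rho
    return np.maximum(f - theta[:, None], 0.0)

def pgd(grad, n, k, lr=0.01, iters=800):
    """Minimize the convex combined loss of Eqn. 5 over row-stochastic f."""
    f = np.full((n, k), 1.0 / k)                    # uniform start is feasible
    for _ in range(iters):
        f = project_rows_to_simplex(f - lr * grad(f))
    return f

# Toy usage: a quadratic loss pulling every instance toward class 0.
target = np.eye(3)[np.zeros(5, dtype=int)]
f_star = pgd(lambda f: 2.0 * (f - target), n=5, k=3)
print(f_star.round(3))  # rows converge to (1, 0, 0)
```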
Evaluation
We model our task as a classification problem, and use the unified optimization framework described earlier to assign a character name to each subtitle.
Since our dataset is highly unbalanced, with a few main characters usually dominating the entire dataset, we adopt the weighted F-score as our evaluation metric, instead of using an accuracy metric or a micro-average F-score. This allows us to take into account that most of the characters have only a few spoken subtitle segments, while at the same time placing emphasis on the main characters. Note that this sometimes leads to an average weighted F-score that does not lie between the average precision and recall.
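This corresponds to the "weighted" averaging mode of standard toolkits, as in the minimal illustration below (the labels are hypothetical):

```python
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical gold and predicted speaker names for five subtitle segments.
y_true = ["Sheldon", "Sheldon", "Penny", "Leonard", "Sheldon"]
y_pred = ["Sheldon", "Penny", "Penny", "Sheldon", "Sheldon"]

# 'weighted' averages the per-class scores by class frequency in y_true, so
# main characters dominate while rare speakers still contribute.
p, r, f, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0)
print(p, r, f)
```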
One aspect that is important to note is that characters are often referred to using different names. For example, in the movie "The Devil's Advocate," the character Kevin Lomax is also referred to as Kevin or Kev. In more complicated situations, characters may even have multiple identities, such as the character Saul Bloom in the movie "Ocean's Eleven," who pretends to be another character named Lyman Zerga. Since our goal is to assign names to speakers, and not necessarily to solve this coreference problem, we consider the assignment of a subtitle segment to any of the speaker's aliases to be correct. Thus, during the evaluation, we map all the characters' aliases from our model's output to the names in the ground truth annotations. Our mapping does not include other referent nouns such as "Dad," "Buddy," etc.; if a segment gets assigned to any such terms, it is considered a misprediction.

We compare our model against three baselines:

B1: Most-frequently mentioned character consists of selecting the most frequently mentioned character in the dialogue as the speaker for all the subtitles. Even though it is a simple baseline, it achieves an accuracy of 27.1%, since the leading characters tend to speak the most in the movies.

B2: Distribution-driven random assignment consists of randomly assigning character names according to a distribution that reflects their fraction of mentions in all the subtitles.

B3: Gender-based distribution-driven random assignment consists of selecting the speaker names based on the voice-based gender detection classifier. This baseline randomly selects a character name that matches the speaker's gender, according to the distribution of mentions of the names in the matching gender category.

Table 3: Comparison of the macro-weighted averages of precision, recall, and F-score for the baselines and our model. * means statistically significant (t-test p-value < 0.05) when compared to baseline B3.
The results obtained with our proposed unified optimization framework and the three baselines are shown in Table 3. We also report the performance of the optimization framework using different combinations of the three modalities. The model that uses all three modalities achieves the best results, and outperforms the strongest baseline (B3) by more than 6% absolute in average weighted F-score. It also significantly outperforms the usage of the visual and acoustic features combined, which have been frequently used together in previous work, suggesting the importance of textual features in this setting.
The ineffectiveness of the iVectors might be a result of the background noise and music, which are difficult to remove from the speech signal. Figure 2 shows a t-Distributed Stochastic Neighbor Embedding (t-SNE) (Van Der Maaten, 2014) visualization of the iVectors over the whole BBT show and the movie "Titanic"; t-SNE is a nonlinear dimensionality reduction technique that models the data in such a way that similar vectors are mapped to nearby points and dissimilar ones to distant points. The BBT has almost no musical background or background noise, while Titanic has a musical background in addition to background noise, such as the screams of the drowning people. From the graph, the difference in the quality of the iVector clusters at different noise levels is clear.

Table 4 shows the effect of adding components of our loss function to the initial loss $L_{init}$. The performance of the model using only $L_{init}$ without the other parts is very low, due to the sparsity of first person references and the errors that the person reference classifier introduces.
                                       Precision   Recall   F-score
$L_{initial}$                             0.0631   0.1576    0.0775
$L_{initial} + L_{gender}$                0.1160   0.1845    0.1210
$L_{initial} + L_{negative}$              0.0825   0.0746    0.0361
$L_{initial} + L_{distribution}$          0.1050   0.1570    0.0608
$L_{initial} + L_{MultipleInstance}$      0.3058   0.2941    0.2189

Table 4: Analysis of the effect of adding each component of the loss function to the initial loss.
In order to analyze the effect of the errors that several of the modules (e.g., the gender and name reference classifiers) propagate into the system, we also test our framework by replacing each one of the components with its ground truth information. As seen in Table 5, the results obtained in this setting show a significant improvement with the replacement of each component in our framework, which suggests that additional work on these components will have positive implications on the overall system.

We also evaluate our speaker naming model extrinsically, on the question answering task of the MovieQA 2017 Challenge. Given that for many of the movies in the dataset the videos are not completely available, we develop our initial system so that it relies only on the subtitles; we thus participate in the challenge's subtitles task, which includes the dialogue (without the speaker information) as the only source of information to answer questions.
To demonstrate the effectiveness of our speaker naming approach, we design a model based on an end-to-end memory network (Sukhbaatar et al., 2015), namely the Speaker-based Convolutional Memory Network (SC-MemN2N), which relies on the MovieQA dataset and integrates the speaker naming approach as a component of the network. Specifically, we use our speaker naming framework to infer the name of the speaker for each segment of the subtitles, and prepend the predicted speaker name to each turn in the subtitles. To represent the movie subtitles, we represent each turn in the subtitles as the mean-pooling of a 300-dimensional pretrained word2vec (Mikolov et al., 2013) representation of each word in the sentence. We similarly represent the input questions and their corresponding answers. Given a question, we use the SC-MemN2N memory to find an answer. For questions asking about specific characters, we keep the memory slots that have the characters in question as speakers, or that mention them, and mask out the rest of the memory slots.
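A sketch of this memory representation, with a toy embedding table standing in for the pretrained 300-dimensional word2vec vectors:

```python
import numpy as np

EMB_DIM = 300
rng = np.random.default_rng(0)
# Toy embedding table; in practice this is the pretrained word2vec lookup.
embeddings = {w: rng.normal(size=EMB_DIM)
              for w in "sheldon penny oh hi how did it go".split()}

def sentence_vector(text):
    # Mean-pool the word vectors of the in-vocabulary tokens.
    vecs = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(EMB_DIM)

# Prepend the predicted speaker name to each subtitle turn before encoding.
turns = [("Sheldon", "I'm Sheldon"), ("Penny", "Oh, hi")]
memory = np.stack([sentence_vector(f"{name} {text}") for name, text in turns])
print(memory.shape)  # (num_turns, 300)
```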
Movie: Fargo
Question: What did Mike's wife, as he says, die from?
Answers: A1: She was killed | A2: Breast cancer | A3: Leukemia | A4: Heart disease | A5: Complications due to child birth

Movie: Titanic
Question: What does Rose ask Jack to do in her room?
Answers: A1: Sketch her in her best dress | A2: Sketch her nude | A3: Take a picture of her nude | A4: Paint her nude | A5: Take a picture of her in her best dress

Table 6: Example of questions and answers from the MovieQA benchmark. The answers in bold are the correct answers to their corresponding question.
Figure 3 shows the architecture of our model. Table 7 includes the results of our system on the validation and test sets, along with the best systems introduced in previous work, showing that our SC-MemN2N achieves the best performance. Furthermore, to measure the effectiveness of adding the speaker names and masking, we test our model after removing the names from the network (C-MemN2N). As seen from the results, the gain of SC-MemN2N is statistically significant compared to a version of the system that does not include the speaker names (C-MemN2N). Figure 4 shows the performance of both the C-MemN2N and SC-MemN2N models by question type. The results suggest that our speaker naming helps the model better distinguish between characters, and that prepending the speaker names to the subtitle segments improves the ability of the memory network to correctly identify the supporting facts from the story that answer a given question.
Method                          val    test
SSCB-W2V                        24.8   23.7
SSCB-TF-IDF                     27.6   26.5
SSCB Fusion                     27.7   -
MemN2N                          38.0   36.9
Understanding visual regions    -      37.4
RWMN (Na et al., 2017)          40.4   38.5
C-MemN2N (w/o SN)               40.6   -
SC-MemN2N (Ours)                42.7   39.4

Table 7: Performance comparison for the subtitles task on the MovieQA 2017 Challenge on both the validation and test sets. We compare our models with the best existing models (from the challenge leaderboard).
Conclusion
In this paper, we proposed a unified optimization framework for the task of speaker naming in movies. We addressed this task under a difficult setup, without a cast list, without supervision from a script, and dealing with the complicated conditions of real movies. Our model includes textual, visual, and acoustic modalities, and incorporates several grammatical and acoustic constraints. Empirical experiments on a movie dataset demonstrated the effectiveness of our proposed method with respect to several competitive baselines. We also showed that an SC-MemN2N model that leverages our speaker naming model can achieve state-of-the-art results on the subtitles task of the MovieQA 2017 Challenge.
The dataset annotated with character names introduced in this paper is publicly available from http://lit.eecs.umich.edu/downloads.html.
| 4,588 |
1809.08761
|
2952137661
|
We propose a new model for speaker naming in movies that leverages visual, textual, and acoustic modalities in a unified optimization framework. To evaluate the performance of our model, we introduce a new dataset consisting of six episodes of the Big Bang Theory TV show and eighteen full movies covering different genres. Our experiments show that our multimodal model significantly outperforms several competitive baselines on the average weighted F-score metric. To demonstrate the effectiveness of our framework, we design an end-to-end memory network model that leverages our speaker naming model and achieves state-of-the-art results on the subtitles task of the MovieQA 2017 Challenge.
|
In @cite_20 @cite_11, the authors proposed a weakly supervised model depending on subtitles and a character list. They extracted textual cues from the dialog: first, second, and third person references, such as "I'm Jack", "Hey, Jack!", and "Jack left". Using a character list from IMDB, they mapped these references onto true names using minimum edit distance, and then they ascribed the references to face tracks. Other work removed the dependency on a true character list by determining all names through coreference resolution; however, this work also depended on the availability of scripts @cite_2. In our model, we removed the dependency on both the true cast list and the script, which makes it easier to apply our model to other movies and TV shows.
|
{
"abstract": [
"",
"We address the character identification problem in movies and television videos: assigning names to faces on the screen. Most prior work on person recognition in video assumes some supervised data such as screenplay or handlabeled faces. In this paper, our only source of ‘supervision’ are the dialog cues: first, second and third person references (such as “I'm Jack”, “Hey, Jack!” and “Jack left”). While this kind of supervision is sparse and indirect, we exploit multiple modalities and their interactions (appearance, dialog, mouth movement, synchrony, continuity-editing cues) to effectively resolve identities through local temporal grouping followed by global weakly supervised recognition. We propose a novel temporal grouping model that partitions face tracks across multiple shots while respecting appearance, geometric and film-editing cues and constraints. In this model, states represent partitions of the k most recent face tracks, and transitions represent compatibility of consecutive partitions. We present dynamic programming inference and discriminative learning for the model. The individual face tracks are subsequently assigned a name by learning a classifier from partial label constraints. The weakly supervised classifier incorporates multiple-instance constraints from dialog cues as well as soft grouping constraints from our temporal grouping. We evaluate both the temporal grouping and final character naming on several hours of TV and movies.",
"Person identification in TV series has been a popular research topic over the last decade. In this area, most approaches either use manually annotated data or extract character supervision from a combination of subtitles and transcripts. However, both approaches have key drawbacks that hinder application of these methods at a large scale — manual annotation is expensive and transcripts are often hard to obtain. We investigate the topic of automatically labeling all character appearances in TV series using information obtained solely from subtitles. This task is extremely difficult as the dialogs between characters provide very sparse and weakly supervised data. We address these challenges by exploiting recent advances in face descriptors and Multiple Instance Learning methods. We propose methods to create MIL bags and evaluate and discuss several MIL techniques. The best combination achieves an average precision over 80 on three diverse TV series. We demonstrate that only using subtitles provides good results on identifying characters in TV series and wish to encourage the community towards this problem."
],
"cite_N": [
"@cite_2",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2170665590",
"2400416707"
]
}
|
Speaker Naming in Movies
|
Identifying speakers and their names in movies, and videos in general, is a primary task for many video analysis problems, including automatic subtitle labeling (Hu et al., 2015), content-based video indexing and retrieval (Zhang et al., 2009), video summarization (Tapaswi et al., 2014), and video storyline understanding (Tapaswi et al., 2014). It is a very challenging task, as the visual appearance of the characters changes over the course of the movie due to several factors such as scale, clothing, illumination, and so forth (Arandjelovic and Zisserman, 2005; Everingham et al., 2006). The annotation of movie data with speakers' names can be helpful in a number of applications, such as movie question answering, automatic identification of character relationships (Zhang et al., 2009), or automatic movie captioning (Hu et al., 2015).
Most previous studies relied primarily on visual information (Arandjelovic and Zisserman, 2005; Everingham et al., 2006), and aimed for the slightly different task of face track labeling; speakers who did not appear in the video frame were not assigned any names, which is common in movies and TV shows. Other available sources of information, such as scripts, were only used to extract cues about the speakers' names and associate the faces in the videos with their corresponding character names (Everingham et al., 2006; Tapaswi et al., 2015; Bäuml et al., 2013; Sivic et al., 2009); however, since scripts are not always available, the applicability of these methods is somewhat limited.
Other studies focused on the problem of speaker recognition without naming, using the speech modality as a single source of information. While some of these studies attempted to incorporate the visual modality, their goal was to cluster the speech segments rather than name the speakers (Erzin et al., 2005;Bost and Linares, 2014;Kapsouras et al., 2015;Bredin and Gelly, 2016;Hu et al., 2015;Ren et al., 2016). None of these studies used textual information (e.g., dialogue), which prevented them from identifying speaker names.
In our work, we address the task of speaker naming, and propose a new multimodal model that leverages, in a unified framework, the visual, speech, and textual modalities that are naturally available while watching a movie. We do not assume the availability of a movie script or a cast list, which makes our model fully unsupervised and easily applicable to unseen movies.
The paper makes two main contributions. First, we introduce a new unsupervised system for speaker naming for movies and TV shows that exclusively depends on videos and subtitles, and relies on a novel unified optimization framework that fuses visual, textual, and acoustic modalities for speaker naming. Second, we construct and make available a dataset consisting of 24 movies with 31,019 turns manually annotated with character names. Additionally, we also evaluate the role of speaker naming when embedded in an end-to-end memory network model, achieving state-of-the-art performance results on the subtitles task of the MovieQA 2017 Challenge.
Datasets
Our dataset consists of a mix of TV show episodes and full movies. For the TV show, we use six full episodes of season one of The Big Bang Theory (BBT). The number of named characters in the BBT episodes varies between 5 and 8 characters per episode, and the background noise level is low. Additionally, we also acquired a set of eighteen full movies from different genres, to evaluate how our model works under different conditions. In this latter dataset, the number of named characters ranges between 6 and 37, and it has varied levels of background noise.
We manually annotated this dataset with the character name of each subtitle segment. To facilitate the annotation process, we built an interface that parses the movie subtitle files, collects the cast list from IMDB for each movie, and then shows one subtitle segment at a time along with the cast list so that the annotator can choose the correct character. Using this tool, human annotators watched the movies and assigned a speaker name to each subtitle segment. If a character name was not mentioned in the dialogue, the annotators labeled it as "unknown." To evaluate the quality of the annotations, five movies in our dataset were double annotated. The Cohen's Kappa inter-annotator agreement score for these five movies is 0.91, which shows a strong level of agreement.
To clean the data, we removed empty segments, as well as subtitle description parts written between brackets such as "[groaning]" and "[sniffing]". We also removed segments with two speakers at the same time. We intentionally avoided using any automatic means to split these segments, to preserve the high-quality of our gold standard. Table 1 shows the statistics of the collected data. Overall, the dataset consists of 24 videos with a total duration of 40.28 hours, a net dialogue duration of 21.99 hours, and a total of 31,019 turns spoken by 463 different speakers. Four of the movies in this dataset are used as a development set to develop supplementary systems and to fine tune our model's parameters; the remaining movies are used for evaluation.
Data Processing and Representations
We process the movies by extracting several textual, acoustic, and visual features.
Textual Features
We use the following representations for the textual content of the subtitles.

SkipThoughts uses a Recurrent Neural Network to capture the underlying semantic and syntactic properties, and maps them to a vector representation (Kiros et al., 2015). We use the pretrained model to compute a 4,800-dimensional sentence representation for each line in the subtitles.

TF-IDF is a traditional weighting scheme in information retrieval. We represent each subtitle as a vector of tf-idf weights, where the length of the vector (i.e., the vocabulary size) and the idf scores are obtained from the movie containing the subtitle.
Acoustic Features
For each movie in the dataset, we extract the audio from the center channel. The center channel is usually dedicated to the dialogue in movies, while the other audio channels carry the surrounding sounds from the environment and the musical background. Although doing this does not fully eliminate the noise in the audio signal, it still improves the speech-to-noise ratio of the signal. When a movie has stereo sound (left and right channels only), we down-mix both channels of the stereo stream into a mono channel.
In this work, we use the subtitles timestamps as an estimate of the boundaries that correspond to the uttered speech segments. Usually, each subtitle corresponds to a segment being said by a single speaker. We use the subtitle timestamps for segmentation so that we can avoid automatic speaker diarization errors and focus on the speaker naming problem.
To represent the relevant acoustic information from each spoken segment, we use iVectors, the state-of-the-art unsupervised approach in speaker verification (Dehak et al., 2011). While deep learning-based speaker embedding models also exist, we do not have access to enough supervised data to build such models. We train unsupervised iVectors for each movie in the dataset, using the iVector extractor used in (Khorram et al., 2016). We extract iVectors of size 40 using a Gaussian Mixture Model-Universal Background Model (GMM-UBM) with 512 components. Each iVector corresponds to a speech segment uttered by a single speaker. We fine tune the size of the iVectors and the number of GMM-UBM components using the development dataset.
Visual Features
We detect faces in the movies every five frames using the recently proposed MTCNN (Zhang et al., 2016) model, which is pretrained for face detection and facial landmark alignment. Based on the results of face detection, we apply the forward and backward tracker with an implementation of the Dlib library (King, 2009;Danelljan et al., 2014) to extract face tracks from each video clip. We represent a face track using its best face in terms of detection score, and use the activations of the fc7 layer of pretrained VGG-Face (Parkhi et al., 2015) network as visual features.
We calculate the distance between the upper lip center and the lower lip center based on the 68-point facial landmark detection implemented in the Dlib library (King, 2009; Kazemi and Sullivan, 2014). This distance is normalized by the height of the face bounding box and concatenated across frames to represent the amount of mouth opening. A human usually speaks with lips moving within a certain frequency range (3.75 Hz to 7.5 Hz in this work) (Tapaswi et al., 2015). We apply a bandpass filter to amplify the signal of true lip motion in these segments. The overall sum of lip motion is used as the score for the talking face.
Unified Optimization Framework
We tackle the problem of speaker naming as a transductive learning problem with constraints. In this approach, we want to use the sparse positive labels extracted from the dialogue and the underlying topological structure of the rest of the unlabeled data. We also incorporate multiple cues extracted from both textual and multimedia information. A unified learning framework is proposed to enable joint optimization over the automatically labeled and unlabeled data, along with multiple semantic cues.
Character Identification and Extraction
In this work, we do not consider the set of character names as given, because we want to build a model that can be generalized to unseen movies. This strict setting adds to the problem's complexity. To extract the list of characters from the subtitles, we use the Named Entity Recognizer (NER) in the Stanford CoreNLP toolkit (Manning et al., 2014). The output is a long list of person names that are mentioned in the dialogue. This list is prone to errors including, but not limited to, nouns that are misclassified by the NER as a person's name, such as "Dad" and "Aye"; names that are irrelevant to the movie, such as "Superman" or named animals; and uncaptured character names.
To clean the extracted name list of each movie, we cluster these names based on string minimum edit distance and their gender. From each cluster, we then pick a name to represent it based on its frequency in the dialogue. The result of this step consists of name clusters along with their distribution in the dialogue. The distribution of each cluster is the sum of the counts of all its members. To filter out irrelevant characters, we run a name reference classifier, which classifies each name mention into a first, second, or third person reference. If a name was only mentioned in the third person throughout the whole movie, we discard it from the list of characters. We also remove any name cluster that has a total count of less than three, which filters out names whose reference types were misclassified.
Grammatical Cues
We use the subtitles to extract the name mentions in the dialogue. These mentions allow us to obtain cues about the speaker name and about the absence or presence of the mentioned character in the surrounding subtitles. Thus, they affect the probability that the mentioned character is the speaker or not. We follow the same name reference categories used in (Cour et al., 2010; Haurilet et al., 2016). We classify a name mention into: first (e.g., "I'm Sheldon"), second (e.g., "Oh, hi, Penny"), or third person reference (e.g., "So how did it go with Leslie?"). The first person reference represents a positive constraint that allows us to label the corresponding iVector of the speaker, and his or her face if it exists during the segment duration. The second person reference represents a multi-instance constraint that suggests that the mentioned name is one of the characters present in the scene, which increases the probability of this character being one of the speakers of the surrounding segments. On the other hand, the third person reference represents a negative constraint, as it suggests that the mentioned character does not exist in the scene, which lowers the probability of that character being one of the speakers of the next or the previous subtitle segments.
To identify first, second, and third person references, we train a linear support vector classifier. The training data for the first, second, and third person reference classifier are extracted and labeled from our development dataset, and the classifier is fine tuned using 10-fold cross-validation.
Unified Optimization Framework
Given a set of data points that consist of $l$ labeled and $u$ unlabeled instances, we apply an optimization framework to infer the best prediction of speaker names. Suppose we have $l+u$ instances $X = \{x_1, x_2, ..., x_l, x_{l+1}, ..., x_{l+u}\}$ and $K$ possible character names. We also get the dialogue-based positive labels $y_i$ for instances $x_i$, where $y_i$ is a $K$-dimensional one-hot vector and $y_i^j = 1$ if $x_i$ belongs to class $j$, for every $1 \le i \le l$ and $1 \le j \le K$. To name each instance $x_i$, we want to predict another one-hot vector of naming scores $f(x_i)$ for each $x_i$, such that $\arg\max_j f^j(x_i) = z_i$, where $z_i$ is the ground truth class of instance $x_i$.
To combine the positive labels and unlabeled data, we define the objective function for predictions $f$ as follows:

$$L_{initial}(f) = \frac{1}{l} \sum_{i=1}^{l} \|f(x_i) - y_i\|^2 + \frac{1}{l+u} \sum_{i=1}^{l+u} \sum_{j=1}^{l+u} w_{ij} \|f(x_i) - f(x_j)\|^2 \qquad (1)$$

Here $w_{ij}$ is the similarity between $x_i$ and $x_j$, which is calculated as the weighted sum of the textual, acoustic, and visual similarities. The inverse Euclidean distance is used as the similarity function for each modality. The weights for the different modalities are selected as hyperparameters and tuned on the development set. This objective leads to a convex loss function which is easier to optimize over feasible predictions.
Besides the positive labels obtained from first person name references, we also introduce other semantic constraints and cues to enhance the power of our proposed approach. We implement the following four types of constraints:
Multiple Instance Constraint. Although the second person references cannot directly provide positive constraints, they imply that the mentioned characters have a high probability of being in the conversation. Following previous work (Cour et al., 2010), we incorporate the second person references as multiple instance constraints in our optimization: if $x_i$ has a second person reference $j$, we encourage $j$ to be assigned to its neighbors, i.e., its adjacent subtitles with similar timestamps. For the implementation, we simply include multiple instance constraints as a variant of positive labels with decreasing weights $s$, where $s = 1/(l - i)$ for each neighbor $x_l$.

Negative Constraint. For the third person references, the mentioned characters may not appear in the conversation, so we treat them as negative constraints: they imply that the mentioned characters should not be assigned to the corresponding instances. This constraint is formulated as follows:
$$L_{neg}(f) = \sum_{(i,j) \in N} [f^j(x_i)]^2 \qquad (2)$$
where $N$ is the set of negative constraints, i.e., pairs $(i, j)$ such that $x_i$ does not belong to class $j$.
Gender Constraint. We train a voice-based gender classifier using the subtitle segments from the four movies in our development dataset (5,543 segments of subtitles). We use the segments for which we know the speaker's name, and manually obtain the ground truth gender label from IMDB. We extract the signal energy and 20 Mel-frequency cepstral coefficients (MFCCs) along with their first and second derivatives, in addition to time- and frequency-based absolute fundamental frequency (f0) statistics, as features to represent each segment in the subtitles. The f0 statistics have been found to improve automatic gender detection performance for short speech segments (Levitan et al., 2016), which fits our case, since the median duration of the dialogue turns in our dataset is 2.6 seconds.
The MFCC features are extracted using a step size of 16 msec over a 64 msec window using the method from (Mathieu et al., 2010), while the f0 statistics are extracted using a step size of 25 msec over a 50 msec window as the default configuration in (Eyben et al., 2013). We then use these features to train a logistic regression classifier using the Scikit-learn library (Pedregosa et al., 2011). The average accuracy of the gender classifier on a 10-fold cross-validation is 0.8867.
Given the results for the gender classification of audio segments and character names, we define the gender loss to penalize inconsistency between the predicted gender and character names:
$$L_{gender}(f) = \sum_{(i,j) \in Q_1} P_{ga}(x_i)\,(1 - P_{gn}(j))\,f^j(x_i) \;+\; \sum_{(i,j) \in Q_2} (1 - P_{ga}(x_i))\,P_{gn}(j)\,f^j(x_i) \qquad (3)$$
where $P_{ga}(x_i)$ is the probability for instance $x_i$ to be male, $P_{gn}(j)$ is the probability for name $j$ to be male, and

$$Q_1 = \{(i, j) \mid P_{ga}(x_i) < 0.5,\ P_{gn}(j) > 0.5\}, \qquad Q_2 = \{(i, j) \mid P_{ga}(x_i) > 0.5,\ P_{gn}(j) < 0.5\}.$$

Distribution Constraint.
We automatically analyze the dialogue and extract the number of mentions of each character in the subtitles using Stanford CoreNLP and string matching to capture names that are missed by the named entity recognizer. We then filter the resulting counts by removing third person mention references of each name as we assume that this character does not appear in the surrounding frames. We use the results to estimate the distribution of the speaking characters and their importance in the movies. The main goal of this step is to construct a prior probability distribution for the speakers in each movie.
To encourage our predictions to be consistent with the dialogue-based priors, we penalize the squared error between the distribution of the predictions and the name-mention priors in the following equation:

$$L_{dis}(f) = \sum_{j=1}^{K} \Big( \frac{1}{l+u} \sum_{i=1}^{l+u} f^j(x_i) - d_j \Big)^2 \qquad (4)$$

where $d_j$ is the ratio of mentions of name $j$ in all subtitles.
Final Framework. Combining the loss in Eqn. 1 and multiple losses with different constraints, we obtain our unified optimization problem:
$$f^* = \arg\min_f \; \lambda_1 L_{initial}(f) + \lambda_2 L_{MI}(f) + \lambda_3 L_{neg}(f) + \lambda_4 L_{gender}(f) + \lambda_5 L_{dis}(f) \qquad (5)$$
All of the λs are hyper-parameters to be tuned on the development set. We also include the constraint that the predictions for the different character names must sum to 1. We solve this constrained optimization problem with projected gradient descent (PGD). The problem in Eqn. 5 is guaranteed to be convex, and therefore projected gradient descent is guaranteed to converge to a global optimum. PGD usually converges after 800 iterations.
Evaluation
We model our task as a classification problem, and use the unified optimization framework described earlier to assign a character name to each subtitle.
Since our dataset is highly unbalanced, with a few main characters usually dominating the entire dataset, we adopt the weighted F-score as our evaluation metric, instead of using an accuracy metric or a micro-average F-score. This allows us to take into account that most of the characters have only a few spoken subtitle segments, while at the same time placing emphasis on the main characters. Note that this sometimes leads to an average weighted F-score that does not lie between the average precision and recall.
One aspect that is important to note is that characters are often referred to using different names. For example, in the movie "The Devil's Advocate," the character Kevin Lomax is also referred to as Kevin or Kev. In more complicated situations, characters may even have multiple identities, such as the character Saul Bloom in the movie "Ocean's Eleven," who pretends to be another character named Lyman Zerga. Since our goal is to assign names to speakers, and not necessarily to solve this coreference problem, we consider the assignment of a subtitle segment to any of the speaker's aliases to be correct. Thus, during the evaluation, we map all the characters' aliases from our model's output to the names in the ground truth annotations. Our mapping does not include other referent nouns such as "Dad," "Buddy," etc.; if a segment gets assigned to any such terms, it is considered a misprediction.

We compare our model against three baselines:

B1: Most-frequently mentioned character consists of selecting the most frequently mentioned character in the dialogue as the speaker for all the subtitles. Even though it is a simple baseline, it achieves an accuracy of 27.1%, since the leading characters tend to speak the most in the movies.

B2: Distribution-driven random assignment consists of randomly assigning character names according to a distribution that reflects their fraction of mentions in all the subtitles.

B3: Gender-based distribution-driven random assignment consists of selecting the speaker names based on the voice-based gender detection classifier. This baseline randomly selects a character name that matches the speaker's gender, according to the distribution of mentions of the names in the matching gender category.

Table 3: Comparison of the macro-weighted averages of precision, recall, and F-score for the baselines and our model. * means statistically significant (t-test p-value < 0.05) when compared to baseline B3.
The results obtained with our proposed unified optimization framework and the three baselines are shown in Table 3. We also report the performance of the optimization framework using different combinations of the three modalities. The model that uses all three modalities achieves the best results, and outperforms the strongest baseline (B3) by more than 6% absolute in average weighted F-score. It also significantly outperforms the usage of the visual and acoustic features combined, which have been frequently used together in previous work, suggesting the importance of textual features in this setting.
The ineffectiveness of the iVectors might be a result of the background noise and music, which are difficult to remove from the speech signal. Figure 2 shows a t-Distributed Stochastic Neighbor Embedding (t-SNE) (Van Der Maaten, 2014) visualization of the iVectors over the whole BBT show and the movie "Titanic"; t-SNE is a nonlinear dimensionality reduction technique that models the data in such a way that similar vectors are mapped to nearby points and dissimilar ones to distant points. The BBT has almost no musical background or background noise, while Titanic has a musical background in addition to background noise, such as the screams of the drowning people. From the graph, the difference in the quality of the iVector clusters at different noise levels is clear.

Table 4 shows the effect of adding components of our loss function to the initial loss $L_{init}$. The performance of the model using only $L_{init}$ without the other parts is very low, due to the sparsity of first person references and the errors that the person reference classifier introduces.
                                       Precision   Recall   F-score
$L_{initial}$                             0.0631   0.1576    0.0775
$L_{initial} + L_{gender}$                0.1160   0.1845    0.1210
$L_{initial} + L_{negative}$              0.0825   0.0746    0.0361
$L_{initial} + L_{distribution}$          0.1050   0.1570    0.0608
$L_{initial} + L_{MultipleInstance}$      0.3058   0.2941    0.2189

Table 4: Analysis of the effect of adding each component of the loss function to the initial loss.
In order to analyze the effect of the errors that several of the modules (e.g., the gender and name reference classifiers) propagate into the system, we also test our framework by replacing each one of the components with its ground truth information. As seen in Table 5, the results obtained in this setting show a significant improvement with the replacement of each component in our framework, which suggests that additional work on these components will have positive implications on the overall system.

We also evaluate our speaker naming model extrinsically, on the question answering task of the MovieQA 2017 Challenge. Given that for many of the movies in the dataset the videos are not completely available, we develop our initial system so that it relies only on the subtitles; we thus participate in the challenge's subtitles task, which includes the dialogue (without the speaker information) as the only source of information to answer questions.
To demonstrate the effectiveness of our speaker naming approach, we design a model based on an end-to-end memory network (Sukhbaatar et al., 2015), namely the Speaker-based Convolutional Memory Network (SC-MemN2N), which relies on the MovieQA dataset and integrates the speaker naming approach as a component of the network. Specifically, we use our speaker naming framework to infer the name of the speaker for each segment of the subtitles, and prepend the predicted speaker name to each turn in the subtitles. To represent the movie subtitles, we represent each turn in the subtitles as the mean-pooling of a 300-dimensional pretrained word2vec (Mikolov et al., 2013) representation of each word in the sentence. We similarly represent the input questions and their corresponding answers. Given a question, we use the SC-MemN2N memory to find an answer. For questions asking about specific characters, we keep the memory slots that have the characters in question as speakers, or that mention them, and mask out the rest of the memory slots.
Movie: Fargo
Question: What did Mike's wife, as he says, die from?
Answers: A1: She was killed | A2: Breast cancer | A3: Leukemia | A4: Heart disease | A5: Complications due to child birth

Movie: Titanic
Question: What does Rose ask Jack to do in her room?
Answers: A1: Sketch her in her best dress | A2: Sketch her nude | A3: Take a picture of her nude | A4: Paint her nude | A5: Take a picture of her in her best dress

Table 6: Example of questions and answers from the MovieQA benchmark. The answers in bold are the correct answers to their corresponding question.
Figure 3 shows the architecture of our model. Table 7 includes the results of our system on the validation and test sets, along with the best systems introduced in previous work, showing that our SC-MemN2N achieves the best performance. Furthermore, to measure the effectiveness of adding the speaker names and masking, we test our model after removing the names from the network (C-MemN2N). As seen from the results, the gain of SC-MemN2N is statistically significant compared to a version of the system that does not include the speaker names (C-MemN2N). Figure 4 shows the performance of both the C-MemN2N and SC-MemN2N models by question type. The results suggest that our speaker naming helps the model better distinguish between characters, and that prepending the speaker names to the subtitle segments improves the ability of the memory network to correctly identify the supporting facts from the story that answer a given question.
Method                          val    test
SSCB-W2V                        24.8   23.7
SSCB-TF-IDF                     27.6   26.5
SSCB Fusion                     27.7   -
MemN2N                          38.0   36.9
Understanding visual regions    -      37.4
RWMN (Na et al., 2017)          40.4   38.5
C-MemN2N (w/o SN)               40.6   -
SC-MemN2N (Ours)                42.7   39.4

Table 7: Performance comparison for the subtitles task on the MovieQA 2017 Challenge on both the validation and test sets. We compare our models with the best existing models (from the challenge leaderboard).
Conclusion
In this paper, we proposed a unified optimization framework for the task of speaker naming in movies. We addressed this task under a difficult setup, without a cast list, without supervision from a script, and dealing with the complicated conditions of real movies. Our model includes textual, visual, and acoustic modalities, and incorporates several grammatical and acoustic constraints. Empirical experiments on a movie dataset demonstrated the effectiveness of our proposed method with respect to several competitive baselines. We also showed that an SC-MemN2N model that leverages our speaker naming model can achieve state-of-the-art results on the subtitles task of the MovieQA 2017 Challenge.
The dataset annotated with character names introduced in this paper is publicly available from http://lit.eecs.umich.edu/downloads.html.
| 4,588 |
1809.08761
|
2952137661
|
We propose a new model for speaker naming in movies that leverages visual, textual, and acoustic modalities in a unified optimization framework. To evaluate the performance of our model, we introduce a new dataset consisting of six episodes of the Big Bang Theory TV show and eighteen full movies covering different genres. Our experiments show that our multimodal model significantly outperforms several competitive baselines on the average weighted F-score metric. To demonstrate the effectiveness of our framework, we design an end-to-end memory network model that leverages our speaker naming model and achieves state-of-the-art results on the subtitles task of the MovieQA 2017 Challenge.
|
On the other hand, talking faces have been used to improve speaker recognition and diarization in TV shows @cite_28 @cite_17 @cite_29. In the case of @cite_10, the authors modeled the problem of speaker naming as a face recognition task to identify speakers in news broadcasts. This work leveraged optical character recognition to read the broadcasters' names displayed on screen, and required the faces to already be annotated.
|
{
"abstract": [
"While successful on broadcast news, meetings or telephone conversation, state-of-the-art speaker diarization techniques tend to perform poorly on TV series or movies. In this paper, we propose to rely on state-of-the-art face clustering techniques to guide acoustic speaker diarization. Two approaches are tested and evaluated on the first season of Game Of Thrones TV series. The second (better) approach relies on a novel talking-face detection module based on bi-directional long short-term memory recurrent neural network. Both audio-visual approaches outperform the audio-only baseline. A detailed study of the behavior of these approaches is also provided and paves the way to future improvements.",
"",
"Naming faces is important for news videos browsing and indexing. Although some research efforts have been contributed to it, they only use the concurrent information between the face and name or employ some clues as features and use simple heuristic method or machine learning approach to finish the task. They use little extra knowledge about the names and faces. Different from previous work, in this paper we present a novel approach to name the faces by exploring extra knowledge obtained from image google. The behind assumption is that the faces of those important persons will turn out many times in the web images and could be retrieved from image google easily. Firstly, faces are detected in the video frames; and the name entities of candidate persons are extracted from the textual information by automatic speech recognition and close caption detection. Then, these candidate person names are used as queries to find the name related person images through image google. After that, the retrieved result is analyzed and some typical faces are selected through feature density estimation. Finally, the detected faces in the news video are matched with the faces selected from the result returned by image google to label each face. Experimental results on MSNBC news and CNN news demonstrate that the proposed approach is effective.",
"Speaker diarization, usually denoted as the “who spoke when” task, turns out to be particularly challenging when applied to fictional films, where many characters talk in various acoustic conditions (background music, sound effects...). Despite this acoustic variability, such movies exhibit specific visual patterns in the dialogue scenes. In this paper, we introduce a two-step method to achieve speaker diarization in TV series: a speaker diarization is first performed locally in the scenes detected as dialogues; then, the hypothesized local speakers are merged in a second agglomerative clustering process, with the constraint that speakers locally hypothesized to be distinct must not be assigned to the same cluster. The performances of our approach are compared to those obtained by standard speaker diarization tools applied to the same data."
],
"cite_N": [
"@cite_28",
"@cite_29",
"@cite_10",
"@cite_17"
],
"mid": [
"2525932165",
"",
"2105303679",
"2004419043"
]
}
|
Speaker Naming in Movies
|
Identifying speakers and their names in movies, and videos in general, is a primary task for many video analysis problems, including automatic subtitle labeling (Hu et al., 2015), content-based video indexing and retrieval (Zhang et al., 2009), video summarization (Tapaswi et al., 2014), and video storyline understanding (Tapaswi et al., 2014). It is a very challenging task, as the visual appearance of the characters changes over the course of the movie due to several factors such as scale, clothing, illumination, and so forth (Arandjelovic and Zisserman, 2005;Everingham et al., 2006). The annotation of movie data with speakers' names can be helpful in a number of applications, such as movie question answering , automatic identification of character relationships (Zhang et al., 2009), or automatic movie captioning (Hu et al., 2015).
Most previous studies relied primarily on visual information (Arandjelovic and Zisserman, 2005;Everingham et al., 2006), and aimed at the slightly different task of face track labeling; speakers who did not appear in the video frame were not assigned any names, which is common in movies and TV shows. Other available sources of information such as scripts were only used to extract cues about the speakers' names to associate the faces in the videos with their corresponding character names (Everingham et al., 2006;Tapaswi et al., 2015;Bäuml et al., 2013;Sivic et al., 2009); however, since scripts are not always available, the applicability of these methods is somewhat limited.
Other studies focused on the problem of speaker recognition without naming, using the speech modality as a single source of information. While some of these studies attempted to incorporate the visual modality, their goal was to cluster the speech segments rather than name the speakers (Erzin et al., 2005;Bost and Linares, 2014;Kapsouras et al., 2015;Bredin and Gelly, 2016;Hu et al., 2015;Ren et al., 2016). None of these studies used textual information (e.g., dialogue), which prevented them from identifying speaker names.
In our work, we address the task of speaker naming, and propose a new multimodal model that leverages, in a unified framework, the visual, speech, and textual modalities that are naturally available while watching a movie. We do not assume the availability of a movie script or a cast list, which makes our model fully unsupervised and easily applicable to unseen movies.
The paper makes two main contributions. First, we introduce a new unsupervised system for speaker naming in movies and TV shows that exclusively depends on videos and subtitles, and relies on a novel unified optimization framework that fuses visual, textual, and acoustic modalities for speaker naming. Second, we construct and make available a dataset consisting of 24 movies with 31,019 turns manually annotated with character names. Additionally, we also evaluate the role of speaker naming when embedded in an end-to-end memory network model, achieving state-of-the-art performance results on the subtitles task of the MovieQA 2017 Challenge.
Datasets
Our dataset consists of a mix of TV show episodes and full movies. For the TV show, we use six full episodes of season one of The Big Bang Theory (BBT). The number of named characters in the BBT episodes varies between 5 and 8 characters per episode, and the background noise level is low. Additionally, we also acquired a set of eighteen full movies from different genres, to evaluate how our model works under different conditions. In this latter dataset, the number of named characters ranges between 6 and 37, and it has varied levels of background noise.
We manually annotated this dataset with the character name of each subtitle segment. To facilitate the annotation process, we built an interface that parses the movie subtitle files, collects the cast list from IMDB for each movie, and then shows one subtitle segment at a time along with the cast list so that the annotator can choose the correct character. Using this tool, human annotators watched the movies and assigned a speaker name to each subtitle segment. If a character name was not mentioned in the dialogue, the annotators labeled it as "unknown." To evaluate the quality of the annotations, five movies in our dataset were double annotated. The Cohen's Kappa inter-annotator agreement score for these five movies is 0.91, which shows a strong level of agreement.
To clean the data, we removed empty segments, as well as subtitle description parts written between brackets such as "[groaning]" and "[sniffing]". We also removed segments with two speakers at the same time. We intentionally avoided using any automatic means to split these segments, to preserve the high-quality of our gold standard. Table 1 shows the statistics of the collected data. Overall, the dataset consists of 24 videos with a total duration of 40.28 hours, a net dialogue duration of 21.99 hours, and a total of 31,019 turns spoken by 463 different speakers. Four of the movies in this dataset are used as a development set to develop supplementary systems and to fine tune our model's parameters; the remaining movies are used for evaluation.
Data Processing and Representations
We process the movies by extracting several textual, acoustic, and visual features.
Textual Features
We use the following representations for the textual content of the subtitles:

SkipThoughts uses a Recurrent Neural Network to capture the underlying semantic and syntactic properties, and map them to a vector representation (Kiros et al., 2015). We use their pretrained model to compute a 4,800-dimensional sentence representation for each line in the subtitles. 1

TF-IDF is a traditional weighting scheme in information retrieval. We represent each subtitle as a vector of tf-idf weights, where the length of the vector (i.e., the vocabulary size) and the idf scores are obtained from the movie including the subtitle.
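For concreteness, the per-movie tf-idf representation can be computed along the following lines. This is a minimal sketch using scikit-learn, which is an assumption; the paper does not name its implementation:

```python
# Sketch of the TF-IDF subtitle representation: each subtitle line becomes a
# vector of tf-idf weights, with the vocabulary and idf statistics estimated
# from the movie that contains it.
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_subtitle_vectors(subtitle_lines):
    """subtitle_lines: list of subtitle strings from a single movie."""
    vectorizer = TfidfVectorizer(lowercase=True)
    # Fit idf scores on the whole movie, then transform each line.
    matrix = vectorizer.fit_transform(subtitle_lines)  # (n_lines, vocab_size)
    return matrix, vectorizer
```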
Acoustic Features
For each movie in the dataset, we extract the audio from the center channel. The center channel is usually dedicated to the dialogue in movies, while the other audio channels carry the surrounding sounds from the environment and the musical background. Although doing this does not fully eliminate the noise in the audio signal, it still improves the speech-to-noise ratio of the signal. When a movie has stereo sound (left and right channels only), we down-mix both channels of the stereo stream into a mono channel.
In this work, we use the subtitles timestamps as an estimate of the boundaries that correspond to the uttered speech segments. Usually, each subtitle corresponds to a segment being said by a single speaker. We use the subtitle timestamps for segmentation so that we can avoid automatic speaker diarization errors and focus on the speaker naming problem.
To represent the relevant acoustic information from each spoken segment, we use iVectors, the state-of-the-art unsupervised approach in speaker verification (Dehak et al., 2011). While other deep learning-based speaker embedding models also exist, we do not have access to enough supervised data to build such models. We train unsupervised iVectors for each movie in the dataset, using the iVector extractor used in (Khorram et al., 2016). We extract iVectors of size 40 using a Gaussian Mixture Model-Universal Background Model (GMM-UBM) with 512 components. Each iVector corresponds to a speech segment uttered by a single speaker. We fine-tune the size of the iVectors and the number of GMM-UBM components using the development dataset.
Visual Features
We detect faces in the movies every five frames using the recently proposed MTCNN (Zhang et al., 2016) model, which is pretrained for face detection and facial landmark alignment. Based on the results of face detection, we apply the forward and backward tracker with an implementation of the Dlib library (King, 2009;Danelljan et al., 2014) to extract face tracks from each video clip. We represent a face track using its best face in terms of detection score, and use the activations of the fc7 layer of pretrained VGG-Face (Parkhi et al., 2015) network as visual features.
We calculate the distance between the upper lip center and the lower lip center based on the 68-point facial landmark detection implemented in the Dlib library (King, 2009;Kazemi and Sullivan, 2014). This distance is normalized by the height of the face bounding boxes and concatenated across frames to represent the amount of mouth opening. A human usually speaks with lips moving at a certain frequency (3.75 Hz to 7.5 Hz in this work) (Tapaswi et al., 2015). We apply a bandpass filter to amplify the signal of true lip motion in these segments. The overall sum of lip motion is used as the score for the talking face.
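A hedged sketch of this talking-face score follows: the per-frame normalized lip openings are band-pass filtered to the 3.75-7.5 Hz speech range and summed. The frame rate, filter order, and the use of the absolute filtered signal are assumptions, not values taken from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def talking_score(lip_openings, fs=25.0, low=3.75, high=7.5, order=4):
    """lip_openings: per-frame lip distance normalized by face-box height."""
    nyq = fs / 2.0
    # Band-pass filter that keeps only speech-band lip motion.
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    filtered = filtfilt(b, a, np.asarray(lip_openings, dtype=float))
    return np.abs(filtered).sum()  # overall amount of speech-band lip motion
```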
Unified Optimization Framework
We tackle the problem of speaker naming as a transductive learning problem with constraints. In this approach, we want to use the sparse positive labels extracted from the dialogue and the underlying topological structure of the rest of the unlabeled data. We also incorporate multiple cues extracted from both textual and multimedia information. A unified learning framework is proposed to enable the joint optimization over the automatically labeled and unlabeled data, along with multiple semantic cues.
Character Identification and Extraction
In this work, we do not consider the set of character names as given, because we want to build a model that can generalize to unseen movies. This strict setting adds to the problem's complexity. To extract the list of characters from the subtitles, we use the Named Entity Recognizer (NER) in the Stanford CoreNLP toolkit (Manning et al., 2014). The output is a long list of person names that are mentioned in the dialogue. This list is prone to errors including, but not limited to, nouns that are misclassified by the NER as a person's name such as "Dad" and "Aye", names that are irrelevant to the movie such as "Superman" or named animals, or uncaptured character names.
To clean the extracted name list of each movie, we cluster the names based on minimum string edit distance and their gender, and from each cluster we pick a representative name based on its frequency in the dialogue, as sketched below. The result of this step consists of name clusters along with their distribution in the dialogue, where the distribution of each cluster is the sum of the counts of its members. To filter out irrelevant characters, we run a name reference classifier, which classifies each name mention into a first, second, or third person reference. If a name is only mentioned in the third person throughout the whole movie, we discard it from the list of characters. We also remove any name cluster with a total count of less than three, which takes care of names whose reference types were misclassified.
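The clustering step could look like the following illustrative sketch, where the edit-distance threshold, the single-pass greedy clustering, and the gender lookup are all assumptions made for illustration:

```python
from collections import Counter

def edit_distance(a, b):
    # Standard Levenshtein dynamic program.
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[-1]

def cluster_names(mentions, gender, threshold=2):
    """mentions: raw name strings; gender: dict name -> 'm'/'f' (hypothetical)."""
    clusters = []  # each cluster is a Counter of member names
    for name in mentions:
        for cluster in clusters:
            rep = cluster.most_common(1)[0][0]  # current representative
            if (edit_distance(name.lower(), rep.lower()) <= threshold
                    and gender.get(name) == gender.get(rep)):
                cluster[name] += 1
                break
        else:
            clusters.append(Counter([name]))
    # Return (representative name, total count) per cluster.
    return [(c.most_common(1)[0][0], sum(c.values())) for c in clusters]
```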
Grammatical Cues
We use the subtitles to extract the name mentions in the dialogue. These mentions allow us to obtain cues about the speaker name and about the absence or presence of the mentioned character in the surrounding subtitles; thus, they affect the probability of the mentioned character being the speaker. We follow the same name reference categories used in (Cour et al., 2010;Haurilet et al., 2016), classifying a name mention into a first (e.g., "I'm Sheldon"), second (e.g., "Oh, hi, Penny"), or third person reference (e.g., "So how did it go with Leslie?"). A first person reference represents a positive constraint that allows us to label the corresponding iVector of the speaker, and his or her face if it exists during the segment duration. A second person reference represents a multi-instance constraint suggesting that the mentioned name is one of the characters present in the scene, which increases the probability of this character being one of the speakers of the surrounding segments. On the other hand, a third person reference represents a negative constraint, as it suggests that the mentioned character is not in the scene, which lowers the probability of that character being one of the speakers of the next or previous subtitle segments.
To identify first, second, and third person references, we train a linear support vector classifier. The training data for the first, second, and third person reference classifiers are extracted and labeled from our development dataset, and the classifiers are fine-tuned using 10-fold cross-validation.
Unified Optimization Framework
Given a set of data points that consists of $l$ labeled 2 and $u$ unlabeled instances, we apply an optimization framework to infer the best prediction of speaker names. Suppose we have $l+u$ instances $X = \{x_1, x_2, ..., x_l, x_{l+1}, ..., x_{l+u}\}$ and $K$ possible character names. We also get the dialogue-based positive labels $y_i$ for instances $x_i$, where $y_i$ is a $K$-dimensional one-hot vector and $y_i^j = 1$ if $x_i$ belongs to class $j$, for every $1 \leq i \leq l$ and $1 \leq j \leq K$. To name each instance $x_i$, we want to predict another one-hot vector of naming scores $f(x_i)$ for each $x_i$, such that $\arg\max_j f^j(x_i) = z_i$, where $z_i$ is the ground-truth class for instance $x_i$.
To combine the positive labels and unlabeled data, we define the objective function for predictions $f$ as follows:

$$L_{initial}(f) = \frac{1}{l}\sum_{i=1}^{l} \|f(x_i) - y_i\|^2 + \frac{1}{l+u}\sum_{i=1}^{l+u}\sum_{j=1}^{l+u} w_{ij}\,\|f(x_i) - f(x_j)\|^2 \quad (1)$$

Here $w_{ij}$ is the similarity between $x_i$ and $x_j$, which is calculated as the weighted sum of textual, acoustic, and visual similarities. The inverse Euclidean distance is used as the similarity function for each modality. The weights for the different modalities are selected as hyperparameters and tuned on the development set. This objective leads to a convex loss function which is easier to optimize over feasible predictions.
Besides the positive labels obtained from first person name references, we also introduce other semantic constraints and cues to enhance the power of our proposed approach. We implement the following four types of constraints:
Multiple Instance Constraint. Although the second person references cannot directly provide positive constraints, they imply that the mentioned characters have a high probability of being in the conversation. Following previous work (Cour et al., 2010), we incorporate the second person references as multiple-instance constraints into our optimization: if $x_i$ has a second person reference $j$, we encourage $j$ to be assigned to its neighbors, i.e., its adjacent subtitles with similar timestamps. For the implementation, we simply include multiple-instance constraints as a variant of positive labels with decreasing weights $s$, where $s = 1/(l - i)$ for each neighbor $x_l$.

Negative Constraint. For the third person references, the mentioned characters may not occur in the conversation and movies. So we treat them as negative constraints, which means they imply that the mentioned characters should not be assigned to the corresponding instances. This constraint is formulated as follows:
$$L_{neg}(f) = \sum_{(i,j)\in N} \left[ f^j(x_i) \right]^2 \quad (2)$$
where $N$ is the set of negative constraints, i.e., pairs $(i, j)$ such that $x_i$ does not belong to class $j$.
Gender Constraint. We train a voice-based gender classifier using the subtitle segments from the four movies in our development dataset (5,543 segments of subtitles). We use the segments for which we know the speaker's name, and manually obtain the ground truth gender label from IMDB. We extract the signal energy, 20 Mel-frequency cepstral coefficients (MFCCs) along with their first and second derivatives, and time- and frequency-based absolute fundamental frequency (f0) statistics as features to represent each segment in the subtitles. The f0 statistics have been found to improve automatic gender detection performance for short speech segments (Levitan et al., 2016), which fits our case, since the median duration of the dialogue turns in our dataset is 2.6 seconds.
The MFCC features are extracted using a step size of 16 msec over a 64 msec window using the method from (Mathieu et al., 2010), while the f0 statistics are extracted using a step size of 25 msec over a 50 msec window, following the default configuration in (Eyben et al., 2013). We then use these features to train a logistic regression classifier using the Scikit-learn library (Pedregosa et al., 2011). The average accuracy of the gender classifier under 10-fold cross-validation is 0.8867.
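A simplified stand-in for this classifier is sketched below; the use of librosa, the default window and hop sizes, and the mean-pooling of frame-level features over each segment are assumptions, since the paper uses different feature-extraction toolkits:

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def segment_features(y, sr):
    # 20 MFCCs plus first and second derivatives, mean-pooled over the segment.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    feats = np.vstack([mfcc,
                       librosa.feature.delta(mfcc),
                       librosa.feature.delta(mfcc, order=2)])
    # Rough f0 statistics from the YIN pitch tracker.
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)
    stats = [np.nanmean(f0), np.nanstd(f0), np.nanmin(f0), np.nanmax(f0)]
    return np.concatenate([feats.mean(axis=1), stats])

def train_gender_classifier(segments, labels, sr=16000):
    """segments: list of mono waveforms; labels: 'm'/'f' per segment."""
    X = np.stack([segment_features(y, sr) for y in segments])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```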
Given the results for the gender classification of audio segments and character names, we define the gender loss to penalize inconsistency between the predicted gender and character names:
$$L_{gender}(f) = \sum_{(i,j)\in Q_1} P_{ga}(x_i)\,(1 - P_{gn}(j))\,f^j(x_i) + \sum_{(i,j)\in Q_2} (1 - P_{ga}(x_i))\,P_{gn}(j)\,f^j(x_i) \quad (3)$$
where $P_{ga}(x_i)$ is the probability of instance $x_i$ being male, $P_{gn}(j)$ is the probability of name $j$ being male, and

$$Q_1 = \{(i, j) \mid P_{ga}(x_i) < 0.5,\ P_{gn}(j) > 0.5\}, \qquad Q_2 = \{(i, j) \mid P_{ga}(x_i) > 0.5,\ P_{gn}(j) < 0.5\}.$$

Distribution Constraint.
We automatically analyze the dialogue and extract the number of mentions of each character in the subtitles using Stanford CoreNLP and string matching to capture names that are missed by the named entity recognizer. We then filter the resulting counts by removing third person mention references of each name as we assume that this character does not appear in the surrounding frames. We use the results to estimate the distribution of the speaking characters and their importance in the movies. The main goal of this step is to construct a prior probability distribution for the speakers in each movie.
To encourage our predictions to be consistent with the dialogue-based priors, we penalize the squared error between the distribution of the predictions and the name-mention priors in the following equation:
$$L_{dis}(f) = \sum_{j=1}^{K} \Big( \sum_{i} f^j(x_i) - d_j \Big)^2 \quad (4)$$
where $d_j$ is the ratio of mentions of name $j$ in all subtitles.
Final Framework. Combining the loss in Eqn. 1 with the multiple constraint losses, we obtain our unified optimization problem:
$$f^* = \arg\min_f\; \lambda_1 L_{initial}(f) + \lambda_2 L_{MI}(f) + \lambda_3 L_{neg}(f) + \lambda_4 L_{gender}(f) + \lambda_5 L_{dis}(f) \quad (5)$$
All of the $\lambda$s are hyper-parameters to be tuned on the development set. We also include the constraint that the predictions for different character names must sum to 1. We solve this constrained optimization problem with projected gradient descent (PGD). The optimization problem in Eqn. 5 is guaranteed to be a convex optimization problem, and therefore projected gradient descent is guaranteed to converge to a global optimum. PGD usually converges after 800 iterations.
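A minimal sketch of this PGD loop is given below. Each row of the prediction matrix F (one row per subtitle segment, one column per character name) is projected back onto the probability simplex after every gradient step; grad_loss stands for the gradient of the combined objective in Eqn. 5, and the step size is an assumption:

```python
import numpy as np

def project_rows_to_simplex(F):
    """Euclidean projection of each row of F onto {p : p >= 0, sum(p) = 1}."""
    n, k = F.shape
    u = -np.sort(-F, axis=1)                   # sort each row in descending order
    css = np.cumsum(u, axis=1) - 1.0
    idx = np.arange(1, k + 1)
    rho = (u - css / idx > 0).sum(axis=1)      # support size per row
    theta = css[np.arange(n), rho - 1] / rho
    return np.maximum(F - theta[:, None], 0.0)

def pgd(F0, grad_loss, step=0.01, iters=800):  # ~800 iterations, as in the text
    F = project_rows_to_simplex(F0.copy())
    for _ in range(iters):
        F = project_rows_to_simplex(F - step * grad_loss(F))
    return F
```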
Evaluation
We model our task as a classification problem, and use the unified optimization framework described earlier to assign a character name to each subtitle.
Since our dataset is highly unbalanced, with a few main characters usually dominating the entire dataset, we adopt the weighted F-score as our evaluation metric, instead of an accuracy metric or a micro-averaged F-score. This allows us to take into account that most of the characters have only a few spoken subtitle segments, while at the same time placing emphasis on the main characters. This sometimes leads to an average weighted F-score that is not between the average precision and recall.
One aspect that is important to note is that characters are often referred to using different names. For example, in the movie "The Devil's Advocate," the character Kevin Lomax is also referred to as Kevin or Kev. In more complicated situations, characters may even have multiple identities, such as the character Saul Bloom in the movie "Ocean's Eleven," who pretends to be another character named Lyman Zerga. Since our goal is to assign names to speakers, and not necessarily to solve this coreference problem, we consider the assignment of the subtitle segments to any of the speaker's aliases to be correct. Thus, during the evaluation, we map all the characters' aliases from our model's output to the names in the ground truth annotations. Our mapping does not include other referent nouns such as "Dad," "Buddy," etc.; if a segment gets assigned to any such terms, it is considered a misprediction.

Table 3: Comparison of the macro-weighted averages of precision, recall, and F-score between the baselines and our model. * means statistically significant (t-test p-value < 0.05) when compared to baseline B3.

We compare our model against three baselines:

B1: Most-frequently mentioned character consists of selecting the most frequently mentioned character in the dialogue as the speaker for all the subtitles. Even though it is a simple baseline, it achieves an accuracy of 27.1%, since the leading characters tend to speak the most in the movies.

B2: Distribution-driven random assignment consists of randomly assigning character names according to a distribution that reflects their fraction of mentions in all the subtitles.

B3: Gender-based distribution-driven random assignment consists of selecting the speaker names based on the voice-based gender detection classifier. This baseline randomly selects a character name that matches the speaker's gender according to the distribution of mentions of the names in the matching gender category.
The results obtained with our proposed unified optimization framework and the three baselines are shown in Table 3. We also report the performance of the optimization framework using different combinations of the three modalities. The model that uses all three modalities achieves the best results, and outperforms the strongest baseline (B3) by more than 6% absolute in average weighted F-score. It also significantly outperforms the usage of the visual and acoustic features combined, which have been frequently used together in previous work, suggesting the importance of textual features in this setting.
The ineffectiveness of the iVectors might be a result of the background noise and music, which are difficult to remove from the speech signal. Figure 2 shows a t-Distributed Stochastic Neighbor Embedding (t-SNE) (Van Der Maaten, 2014) visualization of the iVectors over the whole BBT show and the movie "Titanic." t-SNE is a nonlinear dimensionality reduction technique that models points in such a way that similar vectors are mapped to nearby points and dissimilar ones to distant points. The BBT has almost no background music or noise, while Titanic has background music in addition to background noise such as the screams of the drowning people. The difference in the quality of the iVector clusters at these two noise levels is clear from the graph. Table 4 shows the effect of adding each component of our loss function to the initial loss $L_{initial}$.
Loss                              Precision   Recall   F-score
L_initial                         0.0631      0.1576   0.0775
L_initial + L_gender              0.1160      0.1845   0.1210
L_initial + L_negative            0.0825      0.0746   0.0361
L_initial + L_distribution        0.1050      0.1570   0.0608
L_initial + L_MultipleInstance    0.3058      0.2941   0.2189

Table 4: Analysis of the effect of adding each component of the loss function to the initial loss.
In order to analyze the effect of the errors that several of the modules (e.g., the gender and name reference classifiers) propagate into the system, we also test our framework by replacing each one of the components with its ground truth information. As seen in Table 5, the results obtained in this setting show a significant improvement with the replacement of each component in our framework, which suggests that additional work on these components will have positive implications for the overall system.

We also evaluate the usefulness of our speaker naming model on the MovieQA 2017 Challenge. Given that for many of the movies in the dataset the videos are not completely available, we develop our initial system so that it only relies on the subtitles; we thus participate in the challenge's subtitles task, which includes the dialogue (without the speaker information) as the only source of information to answer questions.
To demonstrate the effectiveness of our speaker naming approach, we design a model based on an end-to-end memory network (Sukhbaatar et al., 2015), namely the Speaker-based Convolutional Memory Network (SC-MemN2N), which relies on the MovieQA dataset and integrates the speaker naming approach as a component in the network. Specifically, we use our speaker naming framework to infer the name of the speaker for each segment of the subtitles, and prepend the predicted speaker name to each turn in the subtitles. 4 To represent the movie subtitles, we represent each turn in the subtitles as the mean-pooling of a 300-dimensional pretrained word2vec (Mikolov et al., 2013) representation of each word in the sentence. We similarly represent the input questions and their corresponding answers. Given a question, we use the SC-MemN2N memory to find an answer. For questions asking about specific characters, we keep the memory slots that have the characters in question as speakers or mentioned in them, and mask out the rest of the memory slots. Figure 3 shows the architecture of our model.

Movie: Fargo. Question: What did Mike's wife, as he says, die from? A1: She was killed; A2: Breast cancer; A3: Leukemia; A4: Heart disease; A5: Complications due to child birth
Movie: Titanic. Question: What does Rose ask Jack to do in her room? A1: Sketch her in her best dress; A2: Sketch her nude; A3: Take a picture of her nude; A4: Paint her nude; A5: Take a picture of her in her best dress

Table 6: Example of questions and answers from the MQA benchmark. The answers in bold are the correct answers to their corresponding question.

Table 7 includes the results of our system on the validation and test sets, along with the best systems introduced in previous work, showing that our SC-MemN2N achieves the best performance. Furthermore, to measure the effectiveness of adding the speaker names and masking, we test our model after removing the names from the network (C-MemN2N). As seen from the results, the gain of SC-MemN2N is statistically significant 5 compared to a version of the system that does not include the speaker names (C-MemN2N). Figure 4 shows the performance of both the C-MemN2N and SC-MemN2N models by question type. The results suggest that our speaker naming helps the model better distinguish between characters, and that prepending the speaker names to the subtitle segments improves the ability of the memory network to correctly identify the supporting facts from the story that answer a given question.
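The input preparation for SC-MemN2N described above can be sketched as follows; the word2vec lookup `w2v` (a dict from word to 300-d vector) and the exact masking rule are assumptions made for illustration:

```python
import numpy as np

def embed_turn(speaker, text, w2v, dim=300):
    # Prepend the predicted speaker name, then mean-pool word2vec vectors.
    words = ([speaker] if speaker else []) + text.lower().split()
    vecs = [w2v[w] for w in words if w in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def character_memory_slots(turns, speakers, character):
    """Indices of memory slots that involve or mention `character`;
    all other slots are masked out for character-specific questions."""
    return [i for i, (t, s) in enumerate(zip(turns, speakers))
            if s == character or character.lower() in t.lower()]
```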
Method                         Subtitles (val)   Subtitles (test)
SSCB-W2V                       24.8              23.7
SSCB-TF-IDF                    27.6              26.5
SSCB Fusion                    27.7              -
MemN2N                         38.0              36.9
Understanding visual regions   -                 37.4
RWMN (Na et al., 2017)         40.4              38.5
C-MemN2N (w/o SN)              40.6              -
SC-MemN2N (Ours)               42.7              39.4

Table 7: Performance comparison for the subtitles task on the MovieQA 2017 Challenge on both validation and test sets. We compare our models with the best existing models (from the challenge leaderboard).
Conclusion
In this paper, we proposed a unified optimization framework for the task of speaker naming in movies. We addressed this task under a difficult setup, without a cast-list, without supervision from a script, and dealing with the complicated conditions of real movies. Our model includes textual, visual, and acoustic modalities, and incorporates several grammatical and acoustic constraints. Empirical experiments on a movie dataset demonstrated the effectiveness of our proposed method with respect to several competitive baselines. We also showed that an SC-MemN2N model that leverages our speaker naming model can achieve state-of-the-art results on the subtitles task of the MovieQA 2017 Challenge.
The dataset annotated with character names introduced in this paper is publicly available from http://lit.eecs.umich.edu/downloads.html.
| 4,588 |
1810.05846
|
2897622729
|
We present Nesterov-type acceleration techniques for Alternating Least Squares (ALS) methods applied to canonical tensor decomposition. While Nesterov acceleration turns gradient descent into an optimal first-order method for convex problems by adding a momentum term with a specific weight sequence, a direct application of this method and weight sequence to ALS results in erratic convergence behaviour or divergence. This is so because the tensor decomposition problem is non-convex and ALS is accelerated instead of gradient descent. We investigate how line search or restart mechanisms can be used to obtain effective acceleration. We first consider a cubic line search (LS) strategy for determining the momentum weight, showing numerically that the combined Nesterov-ALS-LS approach is competitive with or superior to other recently developed nonlinear acceleration techniques for ALS, including acceleration by nonlinear conjugate gradients (NCG) and LBFGS. As an alternative, we consider various restarting techniques, some of which are inspired by previously proposed restarting mechanisms for Nesterov's accelerated gradient method. We study how two key parameters, the momentum weight and the restart condition, should be set. Our extensive empirical results show that the Nesterov-accelerated ALS methods with restart can be dramatically more efficient than the stand-alone ALS or Nesterov accelerated gradient method, when problems are ill-conditioned or accurate solutions are required. The resulting methods perform competitively with or superior to existing acceleration methods for ALS, and additionally enjoy the benefit of being much simpler and easier to implement. On a large and ill-conditioned 71 x 1000 x 900 tensor consisting of readings from chemical sensors used for tracking hazardous gases, the restarted Nesterov-ALS method outperforms any of the existing methods by a large factor.
|
Nesterov's technique has also been used to accelerate non-gradient based methods. In @cite_12 it was used to accelerate ADMM, and @cite_13 used it to accelerate an approximate Newton method.
|
{
"abstract": [
"Optimization plays a key role in machine learning. Recently, stochastic second-order methods have attracted much attention due to their low computational cost in each iteration. However, these algorithms might perform poorly especially if it is hard to approximate the Hessian well and efficiently. As far as we know, there is no effective way to handle this problem. In this paper, we resort to Nesterov's acceleration technique to improve the convergence performance of a class of second-order methods called approximate Newton. We give a theoretical analysis that Nesterov's acceleration technique can improve the convergence performance for approximate Newton just like for first-order methods. We accordingly propose an accelerated regularized sub-sampled Newton. Our accelerated algorithm performs much better than the original regularized sub-sampled Newton in experiments, which validates our theory empirically. Besides, the accelerated regularized sub-sampled Newton has good performance comparable to or even better than classical algorithms.",
"Alternating direction methods are a common tool for general mathematical programming and optimization. These methods have become particularly important in the field of variational image processing, which frequently requires the minimization of nondifferentiable objectives. This paper considers accelerated (i.e., fast) variants of two common alternating direction methods: the alternating direction method of multipliers (ADMM) and the alternating minimization algorithm (AMA). The proposed acceleration is of the form first proposed by Nesterov for gradient descent methods. In the case that the objective function is strongly convex, global convergence bounds are provided for both classical and accelerated variants of the methods. Numerical examples are presented to demonstrate the superior performance of the fast methods for a wide variety of problems."
],
"cite_N": [
"@cite_13",
"@cite_12"
],
"mid": [
"2766750855",
"2076261573"
]
}
| 0 |
||
1810.05846
|
2897622729
|
We present Nesterov-type acceleration techniques for Alternating Least Squares (ALS) methods applied to canonical tensor decomposition. While Nesterov acceleration turns gradient descent into an optimal first-order method for convex problems by adding a momentum term with a specific weight sequence, a direct application of this method and weight sequence to ALS results in erratic convergence behaviour or divergence. This is so because the tensor decomposition problem is non-convex and ALS is accelerated instead of gradient descent. We investigate how line search or restart mechanisms can be used to obtain effective acceleration. We first consider a cubic line search (LS) strategy for determining the momentum weight, showing numerically that the combined Nesterov-ALS-LS approach is competitive with or superior to other recently developed nonlinear acceleration techniques for ALS, including acceleration by nonlinear conjugate gradients (NCG) and LBFGS. As an alternative, we consider various restarting techniques, some of which are inspired by previously proposed restarting mechanisms for Nesterov's accelerated gradient method. We study how two key parameters, the momentum weight and the restart condition, should be set. Our extensive empirical results show that the Nesterov-accelerated ALS methods with restart can be dramatically more efficient than the stand-alone ALS or Nesterov accelerated gradient method, when problems are ill-conditioned or accurate solutions are required. The resulting methods perform competitively with or superior to existing acceleration methods for ALS, and additionally enjoy the benefit of being much simpler and easier to implement. On a large and ill-conditioned 71 x 1000 x 900 tensor consisting of readings from chemical sensors used for tracking hazardous gases, the restarted Nesterov-ALS method outperforms any of the existing methods by a large factor.
|
Nesterov's accelerated gradient method is known to exhibit oscillatory behavior on convex problems. An interesting discussion of this is provided in @cite_5, which formulates an ODE as the continuous-time analogue of Nesterov's method. Such oscillatory behavior happens when the method approaches convergence, and can be alleviated by restarting the algorithm using the current iterate as the initial solution, usually resetting the sequence of momentum weights to its initial state close to 0. Using the ODE formulation of Nesterov's accelerated gradient descent, @cite_5 also explains why resetting the momentum weight to a small value is effective. In @cite_17 the use of adaptive restarting was explored for convex problems, and @cite_9 explored the use of adaptive restarting and an adaptive momentum weight for nonlinear systems of equations resulting from finite element approximation of PDEs. Our work is the first study of a general Nesterov-accelerated ALS scheme.
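For illustration, a hedged sketch of Nesterov's accelerated gradient with a gradient-based adaptive restart in the spirit of the schemes discussed above follows; the step size, iteration budget, and restart test are assumptions, and this sketch accelerates plain gradient descent, not the paper's ALS variant:

```python
import numpy as np

def nesterov_restart(grad, x0, step, iters=1000):
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        g = grad(y)
        x_new = y - step * g                        # gradient step at lookahead y
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        momentum = x_new - x
        # Adaptive restart: if the gradient and the momentum point the same
        # way, momentum is no longer helping -- reset the weight sequence.
        if g.dot(momentum) > 0:
            t_new, momentum = 1.0, 0.0 * momentum
        y = x_new + ((t - 1.0) / t_new) * momentum  # momentum extrapolation
        x, t = x_new, t_new
    return x
```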
|
{
"abstract": [
"We derive a second-order ordinary differential equation (ODE) which is the limit of Nesterov's accelerated gradient method. This ODE exhibits approximate equivalence to Nesterov's scheme and thus can serve as a tool for analysis. We show that the continuous time ODE allows for a better understanding of Nesterov's scheme. As a byproduct, we obtain a family of schemes with similar convergence rates. The ODE interpretation also suggests restarting Nesterov's scheme leading to an algorithm, which can be rigorously proven to converge at a linear rate whenever the objective is strongly convex.",
"We present accelerated residual methods for the iterative solution of systems of equations by leveraging recent developments in accelerated gradient methods for convex optimization. The stability p...",
""
],
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_17"
],
"mid": [
"2528062157",
"2894145971",
""
]
}
| 0 |
||
1810.05846
|
2897622729
|
We present Nesterov-type acceleration techniques for Alternating Least Squares (ALS) methods applied to canonical tensor decomposition. While Nesterov acceleration turns gradient descent into an optimal first-order method for convex problems by adding a momentum term with a specific weight sequence, a direct application of this method and weight sequence to ALS results in erratic convergence behaviour or divergence. This is so because the tensor decomposition problem is non-convex and ALS is accelerated instead of gradient descent. We investigate how line search or restart mechanisms can be used to obtain effective acceleration. We first consider a cubic line search (LS) strategy for determining the momentum weight, showing numerically that the combined Nesterov-ALS-LS approach is competitive with or superior to other recently developed nonlinear acceleration techniques for ALS, including acceleration by nonlinear conjugate gradients (NCG) and LBFGS. As an alternative, we consider various restarting techniques, some of which are inspired by previously proposed restarting mechanisms for Nesterov's accelerated gradient method. We study how two key parameters, the momentum weight and the restart condition, should be set. Our extensive empirical results show that the Nesterov-accelerated ALS methods with restart can be dramatically more efficient than the stand-alone ALS or Nesterov accelerated gradient method, when problems are ill-conditioned or accurate solutions are required. The resulting methods perform competitively with or superior to existing acceleration methods for ALS, and additionally enjoy the benefit of being much simpler and easier to implement. On a large and ill-conditioned 71 x 1000 x 900 tensor consisting of readings from chemical sensors used for tracking hazardous gases, the restarted Nesterov-ALS method outperforms any of the existing methods by a large factor.
|
Several ALS-specific nonlinear acceleration techniques have been developed recently as discussed in the introduction @cite_11 @cite_8 @cite_7 . These algorithms often have complex forms and incur significant computational overhead. Our Nesterov-ALS scheme is simple and straightforward to implement, and only incurs a small amount of computational overhead.
|
{
"abstract": [
"Summary Alternating least squares (ALS) is often considered the workhorse algorithm for computing the rank-R canonical tensor approximation, but for certain problems, its convergence can be very slow. The nonlinear conjugate gradient (NCG) method was recently proposed as an alternative to ALS, but the results indicated that NCG is usually not faster than ALS. To improve the convergence speed of NCG, we consider a nonlinearly preconditioned NCG (PNCG) algorithm for computing the rank-R canonical tensor decomposition. Our approach uses ALS as a nonlinear preconditioner in the NCG algorithm. Alternatively, NCG can be viewed as an acceleration process for ALS. We demonstrate numerically that the convergence acceleration mechanism in PNCG often leads to important pay-offs for difficult tensor decomposition problems, with convergence that is significantly faster and more robust than for the stand-alone NCG or ALS algorithms. We consider several approaches for incorporating the nonlinear preconditioner into the NCG algorithm that have been described in the literature previously and have met with success in certain application areas. However, it appears that the nonlinearly PNCG approach has received relatively little attention in the broader community and remains underexplored both theoretically and experimentally. Thus, this paper serves several additional functions, by providing in one place a concise overview of several PNCG variants and their properties that have only been described in a few places scattered throughout the literature, by systematically comparing the performance of these PNCG variants for the tensor decomposition problem, and by drawing further attention to the usefulness of nonlinearly PNCG as a general tool. In addition, we briefly discuss the convergence of the PNCG algorithm. In particular, we obtain a new convergence result for one of the PNCG variants under suitable conditions, building on known convergence results for non-preconditioned NCG. Copyright © 2014 John Wiley & Sons, Ltd.",
"",
"A new algorithm is presented for computing a canonical rank- @math tensor approximation that has minimal distance to a given tensor in the Frobenius norm, where the canonical rank- @math tensor consists of the sum of @math rank-one tensors. Each iteration of the method consists of three steps. In the first step, a tentative new iterate is generated by a stand-alone one-step process, for which we use alternating least squares (ALS). In the second step, an accelerated iterate is generated by a nonlinear generalized minimal residual (GMRES) approach, recombining previous iterates in an optimal way, and essentially using the stand-alone one-step process as a preconditioner. In particular, the nonlinear extension of GMRES we use that was proposed by Washio and Oosterlee in [Electron. Trans. Numer. Anal., 15 (2003), pp. 165--185] for nonlinear partial differential equation problems (which is itself related to other existing acceleration methods for nonlinear equation systems). In the third step, a line search is perfor..."
],
"cite_N": [
"@cite_8",
"@cite_7",
"@cite_11"
],
"mid": [
"2963626304",
"",
"2963282659"
]
}
| 0 |
||
1810.04428
|
2897826647
|
Text simplification (TS) can be viewed as a monolingual translation task, translating between text variations within a single language. Recent neural TS models draw on insights from neural machine translation to learn lexical simplification and content reduction using an encoder-decoder model. But unlike neural machine translation, we cannot obtain enough ordinary-simplified sentence pairs for TS, as they are expensive and time-consuming to build. Target-side simplified sentences play an important role in boosting fluency for statistical TS, and we investigate the use of simplified sentences for training, with no changes to the network architecture. We propose to pair simple training sentences with synthetic ordinary sentences via back-translation, and to treat this synthetic data as additional training data. We train the encoder-decoder model using the synthetic sentence pairs together with the original sentence pairs, which yields substantial improvements on the available WikiLarge and WikiSmall data compared with state-of-the-art methods.
|
Compared with SMT, neural machine translation (NMT) has been shown to produce state-of-the-art results @cite_1 @cite_11. The central approach of NMT is an encoder-decoder architecture implemented by recurrent neural networks, which can represent the input sequence as a vector and then decode that vector into an output sequence. Therefore, NMT models have been used for the text simplification task and achieved good results @cite_9 @cite_10 @cite_2. The main limitation of the aforementioned NMT models for text simplification is their dependence on parallel ordinary-simplified sentence pairs. Because such pairs are expensive and time-consuming to build, the largest available dataset, EW-SEW, has only 296,402 sentence pairs, which is insufficient for an NMT model to learn the best parameters. Considering that simplified data plays an important role in boosting fluency for phrase-based text simplification, we investigate the use of simplified data for text simplification. We are the first to show that neural translation models can be effectively adapted for text simplification with simplified corpora.
|
{
"abstract": [
"",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call Dress (as shorthand for D eep RE inforcement S entence S implification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.",
"Text simplification (TS) aims to reduce the lexical and structural complexity of a text, while still retaining the semantic meaning. Current automatic TS techniques are limited to either lexical-level applications or manually defining a large amount of rules. Since deep neural networks are powerful models that have achieved excellent performance over many difficult tasks, in this paper, we propose to use the Long Short-Term Memory (LSTM) Encoder-Decoder model for sentence level TS, which makes minimal assumptions about word sequence. We conduct preliminary experiments to find that the model is able to learn operation rules such as reversing, sorting and replacing from sequence pairs, which shows that the model may potentially discover and apply rules such as modifying sentence structure, substituting words, and removing words for TS.",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier."
],
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_2",
"@cite_10",
"@cite_11"
],
"mid": [
"",
"2133564696",
"2953033958",
"2531738943",
"2949888546"
]
}
|
Improving Neural Text Simplification Model with Simplified Corpora
|
Text simplification aims to reduce the lexical and structural complexity of a text, while still retaining the semantic meaning, which can help children, non-native speakers, and people with cognitive disabilities to understand text better. Methods for automatic text simplification can be generally divided into three categories: lexical simplification (LS) (Biran et al., 2011;Paetzold and Specia, 2016), rule-based (Štajner et al., 2017), and machine translation (MT) (Zhu et al., 2010;Wang et al., 2016). LS is mainly used to simplify text by substituting infrequent and difficult words with frequent and easier words. However, there are several challenges for the LS approach: a great number of transformation rules are required for reasonable coverage; the rules should be applied based on the specific context; and the syntax and semantic meaning of the sentence are hard to retain. Rule-based approaches use hand-crafted rules for lexical and syntactic simplification, for example, substituting difficult words in a predefined vocabulary. However, such approaches need a lot of human involvement to manually define these rules, and it is impossible to give all possible simplification rules. The MT-based approach, which addresses text simplification as a monolingual machine translation problem translating from 'ordinary' to 'simplified' sentences, has attracted great attention in the last several years.
In recent years, neural machine translation (NMT) has emerged as a deep learning approach that achieves very impressive results (Rush et al., 2015;Sutskever et al., 2014). Unlike traditional phrase-based machine translation systems, which operate on small components separately, an NMT system is trained end-to-end, without the need for external decoders, language models, or phrase tables. Therefore, the existing NMT architectures have been used for text simplification (Nisioi et al., 2017;Wang et al., 2016). However, most recent work using NMT is limited by training data that are scarce and expensive to build. Language models trained on simplified corpora have played a central role in statistical text simplification (Glavaš and Štajner, 2015;Coster and Kauchak, 2011). One main reason is that the amount of available simplified corpora typically far exceeds the amount of parallel data, and the performance of models can typically be improved when trained on more data. Therefore, we expect simplified corpora to be especially helpful for NMT models.
In contrast to previous work, which uses existing NMT models as-is, we explore a strategy to include simplified training corpora in the training process without changing the neural network architecture.
We first propose to pair simplified training sentences with synthetic ordinary sentences during training, and to treat this synthetic data as additional training data. We obtain synthetic ordinary sentences through back-translation, i.e., an automatic translation of the simplified sentence into the ordinary sentence (Sennrich et al., 2015). Then, we mix the synthetic data into the original (ordinary-simplified) data to train the NMT model. Experimental results on two publicly available datasets show that we can improve the text simplification quality of NMT models by mixing simplified sentences into the training set, compared with an NMT model that uses only the original training data.
NMT Training with Simplified Corpora
Simplified Corpora
We collected a simplified dataset from Simple English Wikipedia, which is freely available 1 and has been previously used for many text simplification methods (Biran et al., 2011;Coster and Kauchak, 2011;Zhu et al., 2010). Simple English Wikipedia is much easier to understand than the normal English Wikipedia. We downloaded all articles from Simple English Wikipedia. For these articles, we removed stubs, navigation pages, and any article that consisted of a single sentence. We then split them into sentences with Stanford CoreNLP (Manning et al., 2014), and deleted sentences with fewer than 10 or more than 40 words. After removing repeated sentences, we chose 600K sentences as the simplified data, with 11.6M words and a vocabulary size of 82K.
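The filtering described above reduces to a few lines; the sketch below substitutes a simple whitespace split for Stanford CoreNLP tokenization, which is an assumption:

```python
def filter_simple_sentences(sentences, min_len=10, max_len=40):
    """Keep 10-40 word sentences and drop repeated ones."""
    seen, kept = set(), []
    for s in sentences:
        n = len(s.split())
        if min_len <= n <= max_len and s not in seen:
            seen.add(s)
            kept.append(s)
    return kept
```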
Text Simplification using Neural Machine Translation
Our work is built on attention-based NMT (Bahdanau et al., 2014) as an encoder-decoder network with recurrent neural networks (RNN), which simultaneously conducts dynamic alignment and generation of the target simplified sentence.
The encoder uses a bidirectional RNN that consists of a forward and a backward RNN. Given a source sentence $X = (x_1, x_2, ..., x_l)$, the forward RNN and backward RNN calculate the forward hidden states $(\overrightarrow{h}_1, ..., \overrightarrow{h}_l)$ and backward hidden states $(\overleftarrow{h}_1, ..., \overleftarrow{h}_l)$, respectively. The annotation vector $h_j$ is obtained by concatenating $\overrightarrow{h}_j$ and $\overleftarrow{h}_j$. The decoder is an RNN that predicts a target simplified sentence with a Gated Recurrent Unit (GRU). Given the previously generated target (simplified) words $Y = (y_1, y_2, ..., y_{t-1})$, the probability of the next target word $y_t$ is

$$P(y_t \mid X) = \mathrm{softmax}(g(e_{y_{t-1}}, s_t, c_t)) \quad (1)$$
where $g(\cdot)$ is a non-linear function, $e_{y_{t-1}}$ is the embedding of $y_{t-1}$, and $s_t$ is the decoding state for time step $t$. The state $s_t$ is calculated by

$$s_t = f(s_{t-1}, e_{y_{t-1}}, c_t) \quad (2)$$
where $f(\cdot)$ is the GRU activation function. The context vector $c_t$ is computed as a weighted sum of the annotations $h_j$:

$$c_t = \sum_{j=1}^{l} a_{tj} \, h_j \quad (3)$$
where the weight $a_{tj}$ is computed by

$$a_{tj} = \frac{\exp(e_{tj})}{\sum_{i=1}^{l} \exp(e_{ti})} \quad (4)$$

$$e_{tj} = v_a^{T} \tanh(W_a s_{t-1} + U_a h_j) \quad (5)$$
where $v_a$, $W_a$, and $U_a$ are weight matrices. The training objective is to maximize the likelihood of the training data. Beam search is employed for decoding.
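A numpy sketch of Eqs. (3)-(5) follows: alignment scores $e_{tj}$ are computed from the previous decoder state and each annotation $h_j$, softmax-normalized into weights $a_{tj}$, and used to form the context vector $c_t$. All dimensions are illustrative assumptions:

```python
import numpy as np

def attention_context(s_prev, H, W_a, U_a, v_a):
    """s_prev: (d,) decoder state; H: (l, 2d) annotations h_1..h_l."""
    e = np.tanh(s_prev @ W_a.T + H @ U_a.T) @ v_a   # (l,) scores, Eq. (5)
    a = np.exp(e - e.max())
    a /= a.sum()                                    # softmax weights, Eq. (4)
    return a @ H, a                                 # context c_t, Eq. (3)
```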
Synthetic Simplified Sentences
We train an auxiliary NMT system in the reverse direction, from simplified sentences to ordinary sentences, on the available parallel data. To leverage simplified sentences for improving the quality of the NMT model for text simplification, we propose to adapt the back-translation approach of Sennrich et al. (2015) to our scenario. More concretely, given a sentence from the simplified corpus, we use the simplified-to-ordinary system in translate mode with greedy decoding to translate it into an ordinary sentence; this procedure is denoted as back-translation. In this way, we obtain synthetic parallel simplified-ordinary sentence pairs. Both the synthetic pairs and the available parallel data are used as training data for the original NMT system.
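In sketch form, with `translate` standing in for the trained simplified-to-ordinary OpenNMT system (a placeholder, not a real API):

```python
import random

def build_training_set(parallel_pairs, simplified_corpus, translate, n_synth=100_000):
    """parallel_pairs: list of (ordinary, simplified) pairs."""
    sample = random.sample(simplified_corpus, n_synth)
    # Back-translate each simplified sentence into a synthetic ordinary one.
    synthetic = [(translate(simple), simple) for simple in sample]
    return parallel_pairs + synthetic
```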
We evaluate the performance of text simplification using neural machine translation on available parallel sentences and additional simplified sentences. Dataset. We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, and has been used as a benchmark for evaluating text simplification (Woodsend and Lapata, 2011; Wubben et al., 2012; Nisioi et al., 2017). The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also from the Wikipedia corpus; its training set contains 296,402 sentence pairs (Xu et al., 2016; Zhang and Lapata, 2017). WikiLarge includes 8 (reference) simplifications for 2,359 sentences, split into 2,000 for development and 359 for testing.
Metrics. Three text simplification metrics are chosen in this paper. BLEU (Papineni et al., 2002) is a traditional machine translation metric that assesses the degree to which translated simplifications differ from reference simplifications. FKGL measures the readability of the output (Kincaid et al., 1975); a smaller FKGL indicates simpler output. SARI is a recent text simplification metric that compares the output against both the source and the reference simplifications (Xu et al., 2016).
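For illustration, BLEU can be computed with off-the-shelf tooling such as NLTK, and FKGL follows a closed-form formula. This sketch is indicative only, with toy data, and is not necessarily the exact scorer configuration used in the experiments.

```python
from nltk.translate.bleu_score import corpus_bleu

# One hypothesis, two tokenized reference simplifications (toy data).
hypotheses = [["the", "cat", "sat", "on", "the", "mat"]]
references = [[["the", "cat", "sat", "on", "a", "mat"],
               ["a", "cat", "was", "sitting", "on", "the", "mat"]]]
print(corpus_bleu(references, hypotheses))  # value in [0, 1]

def fkgl(total_words, total_sentences, total_syllables):
    """Flesch-Kincaid Grade Level (Kincaid et al., 1975); lower = simpler."""
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words) - 15.59)
```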
We also evaluate the output of all systems using human evaluation, with a metric denoted Simplicity (Nisioi et al., 2017). Three non-native fluent English speakers are shown reference sentences and output sentences, and are asked whether the output sentence is much simpler (+2), somewhat simpler (+1), equally simple (0), somewhat more difficult (-1), or much more difficult (-2) than the reference sentence.
Methods. We use OpenNMT (Klein et al., 2017) as the implementation of the NMT system for all experiments. We generally follow the default settings and training procedure described by Klein et al. (2017). We replace out-of-vocabulary words with a special UNK symbol. At prediction time, we replace UNK words with the word receiving the highest probability score from the attention layer.
The OpenNMT system trained on the parallel data is the baseline system. To obtain a synthetic parallel training set, we back-translate a random sample of 100K sentences from the collected simplified corpora. OpenNMT trained on the parallel data plus the synthetic data is our model. The benchmarks are run on an Intel(R) Core(TM) i7-5930K [email protected] with 32GB memory, trained on 1 GeForce GTX 1080 (Pascal) GPU with CUDA v8.0.
We choose three statistical text simplification systems. PBMT-R is a phrase-based method with a reranking post-processing step (Wubben et al., 2012). Hybrid performs sentence splitting and deletion operations based on discourse representation structures, and then simplifies sentences with PBMT-R (Narayan and Gardent, 2014). SBMT-SARI (Xu et al., 2016) is a syntax-based translation model that uses the PPDB paraphrase database (Ganitkevitch et al., 2013) and modifies the tuning function (using SARI). We choose two neural text simplification systems. NMT is a basic attention-based encoder-decoder model that uses the OpenNMT framework, trained with two LSTM layers, hidden states of size 500, 500 hidden units, the SGD optimizer, and a dropout rate of 0.3 (Nisioi et al., 2017). Dress is an encoder-decoder model coupled with a deep reinforcement learning framework, with parameters chosen according to the original paper (Zhang and Lapata, 2017). For the experiments with synthetic parallel data, we back-translate a random sample of 60,000 sentences from the collected simplified sentences into ordinary sentences. Our model is trained on the synthetic data and the available parallel data, and is denoted as NMT+synthetic.
Results. Table 1 shows the results of all models on the WikiLarge dataset. We can see that our method (NMT+synthetic) obtains higher BLEU, lower FKGL and higher SARI compared with the other models, except Dress on FKGL and SBMT-SARI on SARI. This verifies that including synthetic data during training is very effective, yielding an improvement over our baseline NMT of 2.11 BLEU, 1.7 FKGL and 1.07 SARI. We also substantially outperform Dress, which previously reported state-of-the-art results. The results of our human evaluation using Simplicity are also presented in Table 1. NMT with synthetic data is significantly better than PBMT-R, Dress, and SBMT-SARI on Simplicity, indicating that our method with simplified data is effective at creating simpler output.
Results on the WikiSmall dataset are shown in Table 2. We see substantial improvements (6.37 BLEU) over NMT from adding simplified training data with synthetic ordinary sentences. Compared with statistical machine translation models (PBMT-R, Hybrid, SBMT-SARI), our method (NMT+synthetic) still achieves better BLEU, but slightly worse FKGL and SARI. Similar to the results on WikiLarge, our method outperforms the other models on the human evaluation using Simplicity. In conclusion, our method produces better results than the baselines, which demonstrates the effectiveness of adding simplified training data.
Conclusion
In this paper, we propose a simple method to use simplified corpora during training of NMT systems, with no changes to the network architecture. In experiments on two datasets, we achieve substantial gains on all tasks, and new state-of-the-art results, via back-translation of simplified sentences into ordinary sentences, treating this synthetic data as additional training data. Because we do not change the neural network architecture to integrate simplified corpora, our method can be easily applied to other Neural Text Simplification (NTS) systems. We expect that the effectiveness of our method varies not only with the quality of the NTS system used for back-translation, but also with the amount of available parallel and simplified corpora. In this paper, we have only utilized data from Wikipedia for simplified sentences. In the future, many other text sources are available, and the impact not only of size but also of domain should be investigated.
| 1,973 |
1810.03944
|
2896778177
|
Distance metric learning (DML) aims to find an appropriate way to reveal the underlying data relationship. It is critical in many machine learning, pattern recognition and data mining algorithms, and usually requires large amounts of label information (such as class labels or pair/triplet constraints) to achieve satisfactory performance. However, the label information may be insufficient in real-world applications due to the high labeling cost, and DML may fail in this case. Transfer metric learning (TML) is able to mitigate this issue for DML in the domain of interest (target domain) by leveraging knowledge from other related domains (source domains). Although it has achieved a certain level of development, TML has had limited success in various aspects such as selective transfer, theoretical understanding, handling complex data, big data and extreme cases. In this survey, we present a systematic review of the TML literature. In particular, we group TML into different categories according to different settings and metric transfer strategies, such as direct metric approximation, subspace approximation, distance approximation, and distribution approximation. A summarization and insightful discussion of the various TML approaches and their applications is presented. Finally, we indicate some challenges and provide possible future directions.
|
TML is quite related to transfer subspace learning (TSL) @cite_26 @cite_111 and transfer feature learning (TFL) @cite_19 . An early work on TSL is presented in @cite_26 , which finds a low-dimensional latent space where the distribution difference between the source and target domains is minimized. This algorithm is conducted in a transductive manner, and it is not convenient to derive a representation for new samples. This issue is tackled by @cite_41 , where a generic regularization framework is proposed for TSL based on the Bregman divergence @cite_60 . A low-rank TSL (LTSL) framework is proposed in @cite_125 @cite_113 , where the subspace is found by reconstructing the projected target data using the projected source data under the low-rank representation @cite_88 @cite_43 theme. The main advantage of the framework is that only relevant source data are utilized to find the subspace and noisy information can be filtered out; that is, it can avoid negative transfer. The framework is further extended in @cite_67 to help recover missing modality in the target domain, and improved in @cite_63 by exploiting both low-rank and sparse structures on the reconstruction matrix.
|
{
"abstract": [
"We consider an interesting problem in this paper that uses transfer learning in two directions to compensate missing knowledge from the target domain. Transfer learning tends to be exploited as a powerful tool that mitigates the discrepancy between different databases used for knowledge transfer. It can also be used for knowledge transfer between different modalities within one database. However, in either case, transfer learning will fail if the target data are missing. To overcome this, we consider knowledge transfer between different databases and modalities simultaneously in a single framework, where missing target data from one database are recovered to facilitate recognition task. We referred to this framework as Latent Low-rank Transfer Subspace Learning method (L2TSL). We first propose to use a low-rank constraint as well as dictionary learning in a learned subspace to guide the knowledge transfer between and within different databases. We then introduce a latent factor to uncover the underlying structure of the missing target data. Next, transfer learning in two directions is proposed to integrate auxiliary database for transfer learning with missing target data. Experimental results of multi-modalities knowledge transfer with missing target data demonstrate that our method can successfully inherit knowledge from the auxiliary database to complete the target domain, and therefore enhance the performance when recognizing data from the modality without any training data.",
"Transfer learning addresses the problem of how to utilize plenty of labeled data in a source domain to solve related but different problems in a target domain, even when the training and testing problems have different distributions or features. In this paper, we consider transfer learning via dimensionality reduction. To solve this problem, we learn a low-dimensional latent feature space where the distributions between the source domain data and the target domain data are the same or close to each other. Onto this latent feature space, we project the data in related domains where we can apply standard learning algorithms to train classification or regression models. Thus, the latent feature space can be treated as a bridge of transferring knowledge from the source domain to the target domain. The main contribution of our work is that we propose a new dimensionality reduction method to find a latent space, which minimizes the distance between distributions of the data in different domains in a latent space. The effectiveness of our approach to transfer learning is verified by experiments in two real world applications: indoor WiFi localization and binary text classification.",
"A class of distortions termed functional Bregman divergences is defined, which includes squared error and relative entropy. A functional Bregman divergence acts on functions or distributions, and generalizes the standard Bregman divergence for vectors and a previous pointwise Bregman divergence that was defined for functions. A recent result showed that the mean minimizes the expected Bregman divergence. The new functional definition enables the extension of this result to the continuous case to show that the mean minimizes the expected functional Bregman divergence over a set of functions or distributions. It is shown how this theorem applies to the Bayesian estimation of distributions. Estimation of the uniform distribution from independent and identically drawn samples is presented as a case study.",
"The regularization principals [31] lead approximation schemes to deal with various learning problems, e.g., the regularization of the norm in a reproducing kernel Hilbert space for the ill-posed problem. In this paper, we present a family of subspace learning algorithms based on a new form of regularization, which transfers the knowledge gained in training samples to testing samples. In particular, the new regularization minimizes the Bregman divergence between the distribution of training samples and that of testing samples in the selected subspace, so it boosts the performance when training and testing samples are not independent and identically distributed. To test the effectiveness of the proposed regularization, we introduce it to popular subspace learning algorithms, e.g., principal components analysis (PCA) for cross-domain face modeling; and Fisher's linear discriminant analysis (FLDA), locality preserving projections (LPP), marginal Fisher's analysis (MFA), and discriminative locality alignment (DLA) for cross-domain face recognition and text categorization. Finally, we present experimental evidence on both face image data sets and text data sets, suggesting that the proposed Bregman divergence-based regularization is effective to deal with cross-domain learning problems.",
"One of the most important challenges in machine learning is performing effective learning when there are limited training data available. However, there is an important case when there are sufficient training data coming from other domains (source). Transfer learning aims at finding ways to transfer knowledge learned from a source domain to a target domain by handling the subtle differences between the source and target. In this paper, we propose a novel framework to solve the aforementioned knowledge transfer problem via low-rank representation constraints. This is achieved by finding an optimal subspace where each datum in the target domain can be linearly represented by the corresponding subspace in the source domain. Extensive experiments on several databases, i.e., Yale B, CMU PIE, UB Kin Face databases validate the effectiveness of the proposed approach and show the superiority to the existing, well-established methods.",
"",
"Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.",
"",
"In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the art methods. The MATLAB code of our method will be publicly available at http: www.yongxu.org lunwen.html .",
"We propose low-rank representation (LRR) to segment data drawn from a union of multiple linear (or affine) subspaces. Given a set of data vectors, LRR seeks the lowest-rank representation among all the candidates that represent all vectors as the linear combination of the bases in a dictionary. Unlike the well-known sparse representation (SR), which computes the sparsest representation of each data vector individually, LRR aims at finding the lowest-rank representation of a collection of vectors jointly. LRR better captures the global structure of data, giving a more effective tool for robust subspace segmentation from corrupted data. Both theoretical and experimental results show that LRR is a promising tool for subspace segmentation.",
""
],
"cite_N": [
"@cite_67",
"@cite_26",
"@cite_60",
"@cite_41",
"@cite_125",
"@cite_113",
"@cite_19",
"@cite_43",
"@cite_63",
"@cite_88",
"@cite_111"
],
"mid": [
"2294193936",
"2107298017",
"1963551385",
"2162854380",
"2087107531",
"",
"2096943734",
"",
"2240559667",
"79405465",
""
]
}
|
Transfer Metric Learning: Algorithms, Applications and Outlooks
|
It is critical to evaluate the distances between samples in pattern analysis and machine learning applications. If an appropriate distance metric can be obtained, even a simple k-nearest neighbor (k-NN) classifier or k-means clustering can perform well [1], [2]. In addition, for large-scale and efficient information retrieval, the results are usually obtained directly according to the distances to the query [3], and a good distance metric is also the key to many other important applications, such as face verification [4] and person re-identification [5].
To learn a reliable distance metric, we usually need a large amount of label information, which can be the class labels or target values as used in typical machine learning approaches (such as classification or regression); it is more common to utilize pair- or triplet-based constraints [6]. Such constraints are weakly-supervised since the exact label for an individual sample is unknown. However, in real-world applications, label information is often scarce since manual labeling is labor-intensive, and it is exhausting or even impossible to collect abundant side information for a new learning problem.
Transfer learning [7], which aims to mitigate the label deficiency issue in model training, is thus introduced to improve the performance of distance metric learning (DML) when the label information is insufficient in a target domain. This leads to the so-called transfer metric learning (TML), which has been found to be very useful in many applications. For example, in face verification [8], the main step is to estimate the similarities/distances between face images. The data distributions of images captured under different scenarios vary due to the varied background, illumination, etc. Therefore, the metric learned in one scenario may not be effective in a new scenario, and TML would be helpful. In person re-identification [5], [9], the key is to estimate the similarities/distances between images of persons appearing in different cameras. The data distributions of the images captured using different cameras vary due to the varied camera settings and scenarios. In addition, the distribution for the same camera may change over time. Hence, calibration is needed to achieve satisfactory performance, and TML is able to reduce such effort. A more general example is image retrieval, where the data distributions of images in different datasets vary [10]. It would also be very useful to utilize expensive or semantic features to help learn a metric for cheap features or ones that are hard to interpret [11], [12].
In the past decade, dozens of works have been proposed in this area, and we provide a comprehensive overview of these methods in this survey. We aim to help machine learning researchers quickly grasp the TML research area, and to facilitate the choice of appropriate methods for machine learning practitioners. Besides, there are still many issues to be tackled in TML, and we hope that some new ideas can be inspired by this survey.
The rest of this survey is organized as follows. We first present the background and overview of TML in Section 2, which includes a brief history of TML, the main notations used throughout the paper, and a categorization of the TML approaches. In the subsequent two sections, we give a detailed description of the approaches in the two main categories, i.e., homogeneous and heterogeneous TML, respectively. Section 5 summarizes the different applications of TML and finally, we conclude this survey and identify some possible future directions in Section 6.

Fig. 1. Evolution of transfer metric learning, which has been studied for almost ten years.
A brief history of transfer metric learning
Transfer metric learning (TML) is a relatively new research field. Works that explicitly apply transfer learning to improve DML started around the year 2009. For example, multiple auxiliary (source) datasets are utilized in [14] to help the metric learning on the target set. The main idea is to enforce the target metric to be close to the different source metrics. An adaptive weight is learned to reflect the contribution of each source metric to the target metric. In [15], such contribution is determined by learning a covariance matrix between the different metrics. Instead of directly learning the target metric, the decomposition-based method [16] assumes that the target metric can be represented as a linear combination of multiple base metrics, which can be derived from the source metric. Hence, the metric learning is cast as learning combination coefficients, so that far fewer parameters need to be learned. We can not only use source metrics to help the target metric learning, but also make different DML tasks help each other. The latter is often called multi-task metric learning (MTML). One representative work is the multi-task extension [17] of the well-known DML algorithm LMNN [2]. Some other related works include GPMTML [18], MtMCML [5] and CP-mtML [10]. In addition, there are a few domain adaptation metric learning approaches [19], [20]. Most of the above methods can only learn a linear metric for the target domain. The domain adaptation metric learning (DAML) approach presented in [19] is able to learn a nonlinear target metric based on the kernel method. Recently, neural networks have also been employed to conduct nonlinear metric transfer [8] by taking advantage of deep learning techniques [21].
The study of heterogeneous TML started a bit later than homogeneous TML, and there are much fewer works than in the homogeneous setting. To the best of our knowledge, the first work explicitly designed for heterogeneous TML is the one presented in [22], but it is limited in that only two domains (one source and one target domain) can be handled. There exist a few tensor-based approaches [23], [24] for heterogeneous MTML, where the high-order correlations between all domains are exploited. A main disadvantage of these approaches is that the computational complexity is high. Dai et al. [11] propose an unsupervised heterogeneous TML algorithm, which aims to use some "expensive" (sophisticated, off-the-shelf) features to help learn a metric for relatively "cheap" features. This is also termed metric imitation. Recently, a general heterogeneous TML framework was proposed in [12], [25]. The framework first extracts some knowledge fragments (linear or nonlinear mappings) from a pre-trained source metric, and then uses these fragments to help the target domain learn either a linear or a nonlinear distance metric. The framework is flexible and easy to use. An illustration of the evolution of TML is shown in Fig. 1.
Notations and definitions
In this survey, we assume there are $M$ different domains, and the m'th domain is associated with a feature space $\mathcal{X}_m$ and marginal distribution $P_m(X_m)$. Without loss of generality, we assume the M'th (the last) domain is the target domain, and all the remaining ones are source domains. If there is only one source domain, we signify it using the subscript "S". In distance metric learning (DML), the task is to learn a distance function for any two instances, i.e., $d_\phi(x_i, x_j)$, which must satisfy several properties including non-negativity, identity, symmetry and the triangle inequality [6]. Here, $\phi$ is the parameter of the distance function, and we call it the distance metric in this survey. For a nonlinear distance metric, $\phi$ is often given by a nonlinear feature mapping. The linear metric is denoted as $A$, which is a positive semi-definite (PSD) matrix and is adopted in the popular Mahalanobis metric learning [1].
To learn the metric in the m'th domain, we assume there is a training set $D_m$, which contains $N_m$ samples with $x_{mi} \in \mathbb{R}^{d_m}$ as the feature representation of the i'th sample. In a fully-supervised scenario, the corresponding label $y_{mi}$ is also given. However, DML is usually conducted in a weakly-supervised manner, where only some similar/dissimilar constraints on training sample pairs $(x_{mi}, x_{mj})$ are provided. Alternatively, the constraint can be a relative comparison for a training triplet $(x_{mi}, x_{mj}, x_{mk})$, e.g., $x_{mi}$ is more similar to $x_{mj}$ than to $x_{mk}$ [6].
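As a minimal illustration of the linear (Mahalanobis) metric above, the following NumPy sketch computes $d_A(x_i, x_j) = \sqrt{(x_i - x_j)^T A (x_i - x_j)}$ for a hand-picked PSD matrix $A$; the numerical values are toy assumptions.

```python
import numpy as np

def mahalanobis_dist(x_i, x_j, A):
    """Mahalanobis distance parameterized by a PSD matrix A;
    with A = I it reduces to the Euclidean distance."""
    d = x_i - x_j
    return float(np.sqrt(d @ A @ d))

x_i, x_j = np.array([1.0, 2.0]), np.array([0.0, 1.0])
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # a PSD example that reweights/correlates features
print(mahalanobis_dist(x_i, x_j, A))
```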
In traditional DML, we are often provided with abundant labeled data (such as samples with similar/dissimilar constraints) so that the learned metric $A^*$ can well separate semantically similar data from dissimilar ones, such as "zebra" and "tiger" shown in Fig. 2. In real-world applications, however, the learned target metric $A_M$ may not be satisfactory since the labeled data are insufficient in the target domain. For example, it may be hard to distinguish "zebra" from "tiger" by observing only a few labeled samples due to the very similar stripe texture. But this task can be much easier if we have enough labeled samples to well distinguish "horse" from "cat". The source metric cannot be directly used in the target domain due to the different data distributions [14] or representations [22] between the source and target domains. Therefore, (homogeneous or heterogeneous) transfer metric learning (TML) is developed to improve the target metric by transferring knowledge (particularly, the metric information) from the source domain. A summarization and discussion of the various TML methods is given as follows.

Fig. 2. An illustration of traditional distance metric learning (DML) and transfer metric learning (TML). Given abundant labeled data, DML aims to learn a distance function between samples so that their distance is small if they are semantically similar and large otherwise. TML improves DML when the labeled data are insufficient in the target domain by utilizing information from related source domains, which have better distance estimations between samples. For example, it may be hard to distinguish "zebra" from "tiger" by observing only a few labeled samples due to the very similar stripe texture. But this task can be much easier if we have enough labeled samples to well distinguish "horse" from "cat". The sample images are from the NUS-WIDE [13] dataset.
A categorization of transfer metric learning techniques
As shown in Fig. 3, we can classify TML into different categories according to various principles. Firstly, TML can be broadly grouped into homogeneous TML and heterogeneous TML according to the feature setting. In the former group, the samples of different domains lie in the same feature space ($\mathcal{X}_1 = \mathcal{X}_2 = \cdots = \mathcal{X}_M$), and only the data distributions vary ($P_1(X_1) \neq P_2(X_2) \neq \cdots \neq P_M(X_M)$).
Whereas in heterogeneous TML, the feature spaces are different ($\mathcal{X}_1 \neq \mathcal{X}_2 \neq \cdots \neq \mathcal{X}_M$) and there may be a semantic gap between the source and target domains. For example, in the problem of image matching, we may have only a few labeled images in a new scenario due to the high labeling cost, but there are large amounts of labeled images in some other scenarios. The data distributions of different scenarios vary since there are different backgrounds, illuminations, etc. Besides, web images are usually associated with text descriptions, and it is useful to utilize the semantic textual features to help learn a better distance metric for visual features [22]. The data representations are quite different for the textual and visual domains.
We can also categorize the different TML approaches as inductive TML, transductive TML, and unsupervised TML according to whether the label information is available in the source or target domains. The relationship of the three learning settings is summarized in Table 1. This is similar to the categorization of transfer learning presented in [7]. Furthermore, we summarize the TML approaches into four different cases according to the utilized transfer strategies. Some early works of TML directly enforce the target metric to be close to the source metric, and we thus refer to this as TML via metric approximation. Since the main difference between the source and target domains in homogeneous TML is the distribution divergence, some approaches enable metric transfer by minimizing the distribution difference. We refer to this case as TML via distribution approximation. There are a large number of TML approaches that enable knowledge transfer by finding a common subspace for the source and target domains, especially in heterogeneous TML. This case is referred to as TML via subspace approximation. Finally, there are a few works that let the distance functions of different domains share some common parts or enforce the distances of corresponding sample pairs to agree with each other in different domains, and we refer to this as TML via distance approximation. The former two cases are usually used in homogeneous TML, and the latter two cases can be adopted for heterogeneous TML. Table 2 gives a brief description of these cases.

TABLE 2: Brief descriptions of the four metric transfer strategies.
- Metric approximation: Use the target metric to approximate the source metric [14], [15], [16], [17], [18], [26].
- Distribution approximation: Conduct metric transfer by minimizing the difference between the data distributions of different domains [8], [19], [20], [27], [28].
- Subspace approximation: Conduct metric transfer by finding a common subspace for different domains [12], [22], [23], [24], [25], [29].
- Distance approximation: Share common parts between distance functions, or enforce agreement between distances of corresponding sample pairs in different domains [10], [11], [30].
In Table 3, we show which strategies are currently employed for the different settings. In homogeneous TML, most of the current algorithms are inductive, and the transductive ones are usually conducted via distribution approximation. There is still no unsupervised method, and a possible solution is to extend some unsupervised DML (e.g., [31]) or transfer learning (e.g., [32]) algorithms for unsupervised TML. One challenge is how to ensure that the metric learned in the source domain is better, since there are no labeled data in either the source or target domains. In the heterogeneous setting [33], since the feature dimensions of different domains do not have correspondences, it is inappropriate to conduct TML via direct metric approximation. Most of the current heterogeneous TML approaches first find a common subspace for different domains, and then conduct knowledge transfer in the subspace. Unsupervised heterogeneous TML can be easily extended to the transductive heterogeneous setting by further utilizing source labels, and it is possible to adopt the distribution approximation strategy in the heterogeneous setting by first finding a common representation for the different domains.
HOMOGENEOUS TRANSFER METRIC LEARNING
In homogeneous TML, the utilized features (data representations) are the same, but the data distributions vary across domains. For example, in sentiment classification as shown in Fig. 4, we would like to determine the sentiment polarity (positive, negative or neutral) of a review of electronics. The performance of a sentiment classifier depends much on the distance estimation between reviews. To obtain reliable distance estimates, we usually need large amounts of labeled reviews to learn a good distance metric. However, we may only have a few labeled electronics reviews due to the high labeling cost, and thus the obtained metric is not satisfactory. Fortunately, we may have abundant labeled book reviews, which are often easier to collect. Directly applying the metric learned using the labeled book reviews to the sentiment classification of electronics reviews is not appropriate due to the distribution difference between the electronics and book reviews. Transfer metric learning is able to deal with this issue and learn an improved distance metric for the target sentiment classification of electronics reviews by using labeled book reviews.

Fig. 4. An example of homogeneous transfer metric learning. In sentiment classification, the distance metric learned for target (such as electronics) reviews may not be satisfactory due to insufficient labeled data. Homogeneous TML improves the metric by using abundant labeled source (such as book) reviews, where the data distribution is different from that of the target reviews.
Inductive TML
Under the inductive setting, we are provided with a few labeled data in the target domain. The number of labeled data in the source domain is large enough so that a good distance metric can be obtained, i.e., $N_S \gg N_M > 0$. In inductive transfer learning [7], there may be no labeled source data ($N_S = 0$), but we have not seen such works in homogeneous TML.
TML via metric approximation
An intuitive idea for homogeneous TML is to first use the source domain data $\{D_m\}$ to learn the source distance metrics $\{\phi_m\}$ beforehand, and then enforce the target metric to be close to the pre-trained source metrics. Therefore, the general formulation for learning the target metric $\phi_M$ is given by
$$\arg\min_{\phi_M} \varepsilon(\phi_M) = L(\phi_M; D_M) + \gamma R(\phi_M; \phi_1, \cdots, \phi_{M-1}), \quad (1)$$
where $L(\phi_M; D_M)$ is the empirical loss w.r.t. the metric, $R(\phi_M; \phi_1, \cdots, \phi_{M-1})$ is a regularization term that exploits the relationship between the source and target metrics, and $\gamma \geq 0$ is a trade-off hyper-parameter. Any loss function used in standard DML can be adopted, and the key is how to design an appropriate regularization term. In [14], two different regularization terms are developed. The first minimizes the LogDet divergence [34] between the source and target Mahalanobis metrics, i.e.,
$$R(A_M; A_1, \cdots, A_{M-1}) = \sum_{m=1}^{M-1} \alpha_m D_{LD}(A_M, A_m) = \sum_{m=1}^{M-1} \alpha_m \left( \mathrm{tr}(A_m^{-1} A_M) - \mathrm{logdet}(A_M) \right). \quad (2)$$
Here, $\{A_m \succeq 0\}_{m=1}^{M}$ are constrained to be PSD matrices, and $D_{LD}(\cdot, \cdot)$ denotes the LogDet divergence of two matrices. This is more appropriate than the Frobenius norm of the matrix difference due to the desirable properties of the LogDet divergence, such as scale invariance [34]. The coefficients $\{\alpha_m\}$, which satisfy $\alpha_m \geq 0$ and $\sum_{m=1}^{M-1} \alpha_m = 1$, are learned to reflect the contributions of the different source metrics to the target metric. Secondly, to exploit the geometric structure of the data distribution, Zha et al. [14] propose a regularization term based on manifold regularization [35]:
$$R(A_M; A_1, \cdots, A_{M-1}) = \sum_{m=1}^{M-1} \alpha_m \, \mathrm{tr}\left( X^U L_m (X^U)^T A_M \right), \quad (3)$$
where $X^U$ is the feature matrix of the unlabeled data, and $L_m$ is the Laplacian matrix of the data adjacency graph calculated based on the metric $A_m$. In [15], the importance of the source metrics to the target metric is exploited by learning a task covariance matrix over the metrics. The matrix can model the correlations between different tasks. This approach allows negative and zero transfer.
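For concreteness, the LogDet divergence underlying Eq. (2) can be evaluated as below. This sketch uses the standard Burg form; the constant terms that depend only on the fixed source metric are dropped in Eq. (2).

```python
import numpy as np

def logdet_div(A, B):
    """Burg (LogDet) divergence D_LD(A, B) = tr(A B^{-1}) - logdet(A B^{-1}) - d
    for d x d positive definite A, B; it is zero iff A == B."""
    d = A.shape[0]
    M = A @ np.linalg.inv(B)
    sign, logdet = np.linalg.slogdet(M)
    return float(np.trace(M) - logdet - d)

# Toy check: divergence of a metric from a slightly perturbed copy.
A = np.array([[2.0, 0.3], [0.3, 1.0]])
print(logdet_div(A, A))                    # ~0
print(logdet_div(A + 0.1 * np.eye(2), A))  # small positive value
```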
Both of the above approaches incorporate the source metrics into a regularization term to penalize the target metric learning. Different from them, a novel decomposition-based TML method is proposed in [16], which constructs the target metric using base metrics derived from the source metrics, that is,

$$A_M = U_M \mathrm{diag}(\theta) U_M^T = \sum_{r=1}^{N_B} \theta_{Mr} u_{Mr} u_{Mr}^T = \sum_{r=1}^{N_B} \theta_{Mr} B_{Mr}, \quad (4)$$

where the base metrics $B_{Mr} = u_{Mr} u_{Mr}^T$ are given by the columns of $U_M$. Hence, the number of parameters to be learned is reduced significantly, and the performance can be improved since the labeled samples in the target domain are scarce. Another advantage of the model is that the PSD constraint on the target metric is automatically satisfied, and thus the computational cost is low. A semi-supervised extension was presented in [26] by combining it with manifold regularization.
In addition to utilizing source metrics to help the target metric learning, there exist multi-task metric learning (MTML) approaches that enable different metrics to help each other during learning. A representative work is large margin multi-task metric learning (mtLMNN) [17], which is a multi-task extension of the well-known DML algorithm large margin nearest neighbor (LMNN) [2]. In mtLMNN, all the metrics are learned simultaneously by assuming that each metric consists of a common metric $A_0$ and a task-specific metric $\hat{A}_m$, i.e., $A_m = A_0 + \hat{A}_m$. Based on the same idea, a semi-supervised MTML method is developed in [36], where the unlabeled data are utilized by designing a loss to preserve the neighborhood relationship. A regularization term is then designed to control the amount of information to be shared among all tasks. In [15], an MTML approach is presented by first vectorizing the Mahalanobis metrics and then using a task covariance matrix to exploit the task relationship. Similarly, the metrics are vectorized in [5], but the different metrics are enforced to be close under the graph-based regularization theme [37]. In addition, a general MTML framework is proposed in [18], which enables knowledge transfer by enforcing the different metrics $\{A_m\}$ to be close to a common metric $A_0$. The general Bregman matrix divergence [38] is introduced to measure the difference between two metrics. The framework incorporates mtLMNN as a special case, and the geometry is preserved in the transfer by adopting a special Bregman divergence, i.e., the von Neumann divergence [38].
TML via subspace approximation
Most of the TML approaches via direct metric approximation share a main drawback: when the feature dimension is high, the model is prone to overfitting due to the large number of parameters to be learned. This also leads to high computational cost in both training and prediction. To tackle this issue, some low-rank TML methods have been proposed. They usually decompose the metric as $A_m = U_m U_m^T$, where $U_m \in \mathbb{R}^{d_m \times r}$ is a low-rank transformation matrix. This leads to a common subspace for different domains, and the knowledge transfer is conducted in the subspace. For example, a low-rank multi-task metric learning framework is proposed in [29], [39], which assumes that each transformation is a product of a common transformation and a task-specific one, i.e., $U_m = \hat{U}_m U_0$. As a special case, large margin component analysis (LMCA) [40] is extended to multi-task LMCA (mtLMCA), which is shown to be superior to mtLMNN.
TML via distance approximation
Both the models of mtLMNN and mtLMCA are trained based on labeled sample triplets. Different from them, CP-mtML [10] learn the metrics using labeled pairs, which are often easier to collect. Similar to mtLMCA, CP-mtML decomposes the metric as A m = U m U T m , but the different projections {U m } are coupled by assuming that the distance function consists of a common part and task-specific one, i.e.,
$$d^2_{U_m}(x_i, x_j) = d^2_{\hat{U}_m}(x_i, x_j) + d^2_{U_0}(x_i, x_j). \quad (5)$$
A main advantage of CP-mtML is that the optimization problem can be solved efficiently using stochastic gradient descent (SGD), and hence the model is scalable to high-dimensional features and large amounts of training data. Besides, the learned transformation can be used to derive low-dimensional features, which are desirable in large-scale information retrieval.
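The coupled distance of Eq. (5) is easy to state in code: the squared distance is a sum of a task-specific projection term and a shared one, each of the form $\|U^T(x_i - x_j)\|^2$. The sketch below is our reading of the decomposition, with toy shapes.

```python
import numpy as np

def coupled_sq_dist(x_i, x_j, U_task, U_common):
    """Eq. (5): squared distance = task-specific term + shared term."""
    d = x_i - x_j
    return float(np.sum((U_task.T @ d) ** 2) + np.sum((U_common.T @ d) ** 2))

# Toy usage: 5-dimensional features projected to rank-2 subspaces.
rng = np.random.default_rng(0)
x_i, x_j = rng.normal(size=5), rng.normal(size=5)
U_task, U_common = rng.normal(size=(5, 2)), rng.normal(size=(5, 2))
print(coupled_sq_dist(x_i, x_j, U_task, U_common))
```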
Transductive TML
Under the transductive setting, there are no labeled data in the target domain and we only have large amounts of labeled source data, i.e., $N_S \gg N_M = 0$.
TML via distribution approximation
In homogeneous TML, the data distributions vary across domains. Therefore, we can minimize the distribution difference between the source and target domains, so that the source domain samples can be reused in the target metric learning. In [19], a domain adaptation metric learning (DAML) approach is proposed. In DAML, the distance metric is parameterized by a feature mapping $\phi_M$. The mapping is learned by first transforming the samples in the source and target domains using the mapping, and then minimizing the distribution difference between the source and target domains in the transformed space. At the same time, $\phi_M$ is learned to make the transformed samples satisfy the similar/dissimilar constraints in the source domain. The general formulation for learning $\phi_M$ is given by
$$\arg\min_{\phi_M} \varepsilon(\phi_M) = L(\phi_M; D_S) + \gamma D_{PD}\left(P_M(X_M), P_S(X_S)\right), \quad (6)$$
where $D_{PD}(\cdot, \cdot)$ is a measure of the difference between two probability distributions. Maximum mean discrepancy (MMD) [41] is adopted as the measure in DAML. The nonlinear mapping $\phi_M$ is learned in a reproducing kernel Hilbert space (RKHS), and the solution is found using the kernel method. Since the source and target samples in the transformed space follow similar distributions, the mapping learned using the source label information is also discriminative in the target domain. The same idea is adopted in deep TML (DTML) [8]; the main difference is that the nonlinear mapping is assumed to be a multi-layer neural network. The knowledge transfer is conducted at the output layer and each hidden layer, and some weight hyper-parameters are set to balance the importance of the losses in different layers. A major limitation of these works is that they only consider the marginal distribution difference. This limitation is overcome in [27], where a novel TML method is developed that simultaneously reduces the marginal and conditional distribution divergences between the source and target domains. The conditional distribution divergence is reduced by first assigning pseudo labels to the target domain data using classifiers trained on the source domain data, and then applying the class-wise MMD [42].
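As an illustration of the MMD measure used by DAML-style methods, a biased empirical estimate with an RBF kernel can be computed as follows. This is a generic sketch, not DAML's exact kernelized objective.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased empirical MMD^2 between samples X (n x d) and Y (m x d):
    mean k(X,X) + mean k(Y,Y) - 2 mean k(X,Y) with an RBF kernel."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))
    return float(k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean())

# Shifted target samples yield a larger MMD than matched ones.
rng = np.random.default_rng(1)
X_S = rng.normal(size=(100, 3))
X_M = rng.normal(loc=0.5, size=(100, 3))
print(rbf_mmd2(X_S, X_S[:50]), rbf_mmd2(X_S, X_M))
```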
Different from these methods, which reduce the distribution difference in a new space, importance sampling [43] is introduced in [20] to handle DML under covariate shift. The formulation is given as follows:
$$\arg\min_{A_M \succeq 0} \varepsilon(A_M) = \sum_{i,j} w_{ij} \, l(A_M; x_{Si}, x_{Sj}, y_{Sij}), \quad (7)$$

where $l(\cdot)$ is some pre-defined loss function over a training pair $(x_{Si}, x_{Sj})$, with $y_{Sij} = \pm 1$ indicating whether the two samples are similar or not. The weight $w_{ij} = \frac{P_M(x_{Si}) P_M(x_{Sj})}{P_S(x_{Si}) P_S(x_{Sj})}$ indicates the importance of the pair in the source domain for learning the target metric. Intuitively, if a pair of source samples has a large probability of occurring in the target domain, it should contribute strongly to the target metric learning. In particular, for a distance (such as the popular Mahalanobis distance) which is induced by a norm, i.e., $d(x_i, x_j) = \varphi(x_i - x_j)$, we can calculate the weight as $w_{ij} = \frac{P_M(\delta_{Sij})}{P_S(\delta_{Sij})}$, where $\delta_{Sij} = x_{Si} - x_{Sj}$. In [20], the weights and target metric are learned separately, which may lead to error propagation between them. This issue is tackled by [28], where the weights and target metric are learned simultaneously in a unified framework.
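As a rough illustration of the importance sampling strategy, the pair weights can be approximated by plugging density estimates into the ratio above. The kernel density estimation here is our own simplistic stand-in, not the density-ratio estimator that [20] actually uses.

```python
import numpy as np
from scipy.stats import gaussian_kde

def pair_weights(deltas_S, deltas_M):
    """Estimate w_ij = P_M(delta_Sij) / P_S(delta_Sij) for source pair
    differences delta_Sij = x_Si - x_Sj (rows of deltas_S), using Gaussian
    KDEs fit on source and target pair differences as density stand-ins."""
    p_S = gaussian_kde(deltas_S.T)   # gaussian_kde expects shape (d, n)
    p_M = gaussian_kde(deltas_M.T)
    return p_M(deltas_S.T) / np.maximum(p_S(deltas_S.T), 1e-12)
```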
Discussion
TML via metric approximation is straightforward in that the divergence between the source and target metrics (parameterized by PSD matrices) is directly minimized. A major difference among the various metric approximation based approaches is that the source and target metrics are enforced to be close in different ways, e.g., by adopting different types of divergence. These approaches are often limited in that the training complexity is high due to the PSD constraint, and the distance calculation in the inference stage is not efficient for high-dimensional data. Subspace approximation based TML compensates for these shortcomings by reformulating the metric learning as learning a transformation or mapping. The PSD constraint is automatically satisfied, and the learned transformation can be used to derive compressed representations, which facilitate efficient distance estimation and sample matching, where hashing techniques [44] can be involved. This is critical in many applications, such as information retrieval. The main disadvantage of the subspace approximation based methods is that their optimization problems are often non-convex, and hence only a local optimum can be obtained. The recent work [10] based on distance approximation also learns a projection instead of the metric, but the optimization is more efficient. All of these approaches do not explicitly deal with the distribution difference, which is the main issue that transfer learning aims to tackle. Distribution approximation based methods focus on this point, usually by minimizing the MMD measure or utilizing the importance sampling strategy.
Related work
TML is quite related to transfer subspace learning (TSL) [45], [46] and transfer feature learning (TFL) [47]. An early work on TSL is presented in [45], which finds a low-dimensional latent space where the distribution difference between the source and target domains is minimized. This algorithm is conducted in a transductive manner, and it is not convenient to derive a representation for new samples. This issue is tackled by Si et al. [46], where a generic regularization framework is proposed for TSL based on the Bregman divergence [48]. A low-rank TSL (LTSL) framework is proposed in [49], [50], where the subspace is found by reconstructing the projected target data using the projected source data under the low-rank representation [51], [52] theme. The main advantage of the framework is that only relevant source data are utilized to find the subspace and noisy information can be filtered out; that is, it can avoid negative transfer. The framework is further extended in [53] to help recover missing modality in the target domain, and improved in [54] by exploiting both low-rank and sparse structures on the reconstruction matrix.
TFL is very similar to TSL, and a representative method is presented in [47], where the typical MMD is modified to take both the marginal and class-conditional distributions into consideration. More recent works on TFL are built upon powerful deep feature learning. For example, considering that the features in deep neural networks are usually general in the first layers and task-specific in higher layers, Long et al. [55] propose deep adaptation networks (DAN), which freeze the general layers in convolutional neural networks (CNN) [56] and only conduct adaptation in the task-specific layers. Besides, multi-kernel MMD (MK-MMD) [57] is employed to improve kernel selection in MMD. In DAN, only the marginal distribution difference between the source and target domains is exploited. This is improved by the joint adaptation networks (JAN) [58], which reduce the joint distribution divergence using a proposed joint MMD (JMMD). JMMD can involve both the input features and the output labels in domain adaptation. The constrained deep TSL [59] method can also exploit the joint distribution, and the target domain knowledge is incorporated gradually during a progressive transfer procedure.
All of these TSL and TFL approaches have very close relationships to the subspace and distribution approximation based TML. Although they do not aim to learn metrics, it is not hard to adapt them for TML by adopting some metric learning loss in these models.

Fig. 5. An example of heterogeneous transfer metric learning. In multi-lingual sentiment classification, the distance metric learned for target reviews (such as the ones written in Spanish) may not be satisfactory due to insufficient labeled data. Heterogeneous TML improves the metric by using abundant labeled source reviews (such as the ones written in English), where the data representation is different from the target reviews (e.g., due to the different vocabularies).
HETEROGENEOUS TRANSFER METRIC LEARNING
In heterogeneous TML, the different domains have different features (data representations), and sometimes there is a semantic gap, such as between the textual and visual domains. A typical example is multi-lingual sentiment classification, as shown in Fig. 5, where we would like to determine the sentiment polarity of a review written in Spanish. Labeled Spanish reviews may be scarce, but it is much easier to collect abundant labeled reviews written in English. Directly applying the metric learned using the labeled English reviews to the sentiment classification of Spanish reviews is infeasible, since the representations of Spanish and English reviews differ due to the varied vocabularies. This issue can be tackled by heterogeneous TML, which improves the distance metric for the target sentiment classification of Spanish reviews using labeled English reviews.
Inductive heterogeneous TML
Different from the inductive homogeneous setting, the number of labeled data in the source domain can be zero under the inductive heterogeneous setting. This is because the source feature may have much stronger representational power than that of the target domain, and thus no labeled data are required to obtain a good distance function in the source domain.
Heterogeneous TML via subspace approximation
To the best of our knowledge, heterogeneous TML under the inductive setting has only been studied in recent years. For example, a heterogeneous multi-task metric learning (HMTML) method is proposed in [23]. HMTML assumes that the similar/dissimilar constraints are limited in multiple heterogeneous domains, but there are large amounts of unlabeled data that have representations in all domains, i.e.,
$$D_U = \{(x^U_{1n}, \cdots, x^U_{Mn})\}_{n=1}^{N_U}. \quad (8)$$
Since the different representations correspond to the same (unlabeled) samples, the transformed representations should be close to each other in the subspace. By minimizing the divergence of the transformed representations (or equivalently, maximizing their correlations), each transformation is learned using information from all domains. This results in improved transformations, and thus better metrics, than learning them separately. In [23], a tensor-based regularization term is designed to exploit the high-order correlations between different domains. A variant of the model is presented in [24], which uses the class labels to build the domain connection.
In [25], a general heterogeneous TML approach is proposed based on the knowledge fragments transfer [60] strategy. The optimization problem is given by
$$\arg\min_{\phi_M} \varepsilon(\phi_M) = L(\phi_M; D_M) + \gamma R(\{\phi_{Mc}(\cdot)\}, \{\varphi_{Sc}(\cdot)\}; D_U), \quad (9)$$
where $\phi_{Mc}(\cdot)$ is the c'th coordinate of the mapping $\phi_M$, and $\varphi_{Sc}(\cdot)$ is the c'th fragment of the knowledge in the source domain. The source knowledge fragments are represented by some mapping functions, which are learned by applying existing DML algorithms in the source domain beforehand. The target metric (which also consists of multiple mapping functions) is then enforced to agree with the source fragments on the unlabeled corresponding data. This helps learn an improved metric in the target domain, since the pre-trained source distance function is assumed to be superior to the target distance function without knowledge transfer. Intuitively, the target subspace is enforced to approach a better source subspace. An improvement of the model is presented in [12], where the locality of the geometric structure of the data distribution is preserved via manifold regularization [35].
Heterogeneous TML via distance approximation
We can not only enforce the subspace representations of corresponding samples in different domains to be close, but also let the distances of corresponding sample pairs agree with each other across domains. For example, an online heterogeneous TML approach is proposed in [30], which also assumes that there are abundant unlabeled corresponding data, but the labeled target sample pairs are provided in a sequential manner (one by one). Given a new labeled training pair, the target metric is updated as:
$$A_M^{k+1} = \arg\min_{A_M \succeq 0} \varepsilon(A_M) = L(A_M) + \gamma_A D_{LD}(A_M, A_M^k) + \gamma_I R(d_{A_M}, d_{A_S}; D_U), \quad (10)$$

where $L(A_M)$ is the empirical loss w.r.t. the current labeled pair, $D_{LD}(\cdot, \cdot)$ is the LogDet divergence [34], and $R(d_{A_M}, d_{A_S}; D_U)$ is a regularization term that enforces agreement between the source and target distances (of corresponding pairs). Here, $A_M^k$ is the target metric obtained previously, initialized as an identity matrix. The source metric $A_S$ can be an identity matrix if the source feature is much more powerful than the target feature. By pre-calculating $A_S$ and formulating the term $R(\cdot)$ under the manifold regularization theme [35], an online algorithm is developed to update the target metric $A_M$ efficiently.
Unsupervised heterogeneous TML
There exist a few unsupervised heterogeneous TML approaches that utilize unlabeled corresponding data for metric transfer, where no label information is provided in either the source or target domains ($N_S = N_M = 0$). Under this unsupervised paradigm, the utilized source feature should be more expressive or interpretable than the target feature, so that the distance estimates in the source domain are better than those in the target domain.
Heterogeneous TML via subspace approximation
An early work is presented in [22], where the main idea is to maximize the similarity of unlabeled corresponding pairs in a common subspace, i.e.,

$$\arg\min_{A_M \succeq 0} \varepsilon(A_M) = \sum_{n=1}^{N_U} l(\varphi(\theta)), \quad (11)$$

where $\varphi(\theta) = \frac{1}{1+\exp(-\theta)}$ with $\theta = (x^U_{Mn})^T G x^U_{Sn}$ and $G = U_M^T U_S$. Here, $l(\cdot)$ is chosen to be the negative logistic loss, and the proximal gradient method is adopted for optimization. A main disadvantage of this TML approach is that the computational complexity is high, since a costly singular value decomposition (SVD) is involved in each iteration of the optimization.
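A small sketch of the objective in Eq. (11), under the assumption that $l(\cdot)$ is the negative log-sigmoid loss. The shape convention (each $U_m$ mapping its domain into an r-dimensional subspace, so that $G = U_M^T U_S$ is dimensionally valid) is our own reading of the extracted formula.

```python
import numpy as np

def unsup_pair_loss(U_M, U_S, X_M, X_S):
    """Mean negative log-sigmoid loss over unlabeled corresponding pairs.
    Assumed shapes: U_M (r, d_M), U_S (r, d_S), X_M (n, d_M), X_S (n, d_S);
    row n of X_M and X_S are corresponding samples."""
    G = U_M.T @ U_S                                  # (d_M, d_S) coupling matrix
    theta = np.einsum('nd,de,ne->n', X_M, G, X_S)    # theta_n = x_Mn^T G x_Sn
    return float(np.mean(np.log1p(np.exp(-theta))))  # -log(sigmoid(theta_n))
```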
Heterogeneous TML via distance approximation
Instead of directly maximizing the likelihood between unlabeled sample pairs, Dai et al. [11] propose to use the target samples to approximate the source manifold. The method is inspired by locally linear embedding (LLE) [61], and metric transfer is conducted by enforcing embeddings of target samples to preserve local properties in the source domain. The optimization problem is given by
$$\arg\min_{U_M} \varepsilon(U_M) = \sum_{i=1}^{N_U} \left\| U_M^T x^U_{Mi} - \sum_{j=1}^{N_U} w_{Sij} \, U_M^T x^U_{Mj} \right\|^2, \quad (12)$$
where $w_{Sij}$ is the weight in the adjacency graph calculated using the source domain features. This enables the distances between samples in the source and target domains to agree with each other on the manifold. The optimization is much more efficient than [22], since only a generalized eigenvector problem needs to be solved.
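For illustration, minimizing Eq. (12) in the LLE style reduces to a generalized eigenvalue problem. The sketch below is our reading of the objective, with an assumed scale constraint $U^T X X^T U = I$ that is standard for such embeddings but not stated in the extracted text.

```python
import numpy as np
from scipy.linalg import eigh

def metric_imitation_projection(X_target, W_source, r):
    """Find U_M whose projections of target features preserve LLE-style
    reconstruction weights W_source computed in the source feature space.
    X_target: d x n feature matrix; W_source: n x n weight matrix.
    Minimizes tr(U^T X M X^T U) with M = (I - W)^T (I - W)."""
    n = X_target.shape[1]
    Mmat = (np.eye(n) - W_source).T @ (np.eye(n) - W_source)
    A = X_target @ Mmat @ X_target.T
    B = X_target @ X_target.T + 1e-6 * np.eye(X_target.shape[0])  # regularized
    vals, vecs = eigh(A, B)            # generalized symmetric eigenproblem
    return vecs[:, :r]                 # bottom-r eigenvectors form U_M
```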
Discussion
It is natural to conduct heterogeneous TML via subspace approximation, since the representations of different domains vary and finding a common representation facilitates the knowledge transfer. Similar to the homogeneous setting, the main drawback is that the optimization problem is usually non-convex. Although this drawback can be remedied by directly learning a PSD matrix, such as with the distance approximation strategy, it is nontrivial to perform efficient distance inference for high-dimensional data and to extend the algorithm to learn a nonlinear metric. Due to the strong ability and rapid development of deep learning, it may be more promising to learn a transformation or mapping than a PSD matrix in TML, based on either subspace or distance approximation.
Related work
Some early heterogeneous transfer learning approaches are not specially designed for DML, but the feature transformation or mapping learned for each domain can be used to derive a metric. For example, in the work on heterogeneous domain adaptation via manifold alignment (DAMA) [62], the class labels are utilized to align different domains. A mapping function is learned for each domain and all functions are learned together. After being projected into a common subspace, the samples should be close to each other if they belong to the same class and separated otherwise. This holds for all samples, whether from the same domain or from different domains. The label information of all the different domains can be utilized to learn the shared subspace, and thus better embeddings (representations) can be learned for the different domains than by learning them separately. In [63], a multi-task discriminant analysis (MTDA) approach is proposed to deal with heterogeneous feature spaces in different domains. MTDA assumes the linear transformation of the m'th domain is given by $U_m = W_m H$, which consists of a task-specific part $W_m$ and a part $H$ common to all tasks. All the transformations are then learned in a single optimization problem, which is similar to that of the well-known linear discriminant analysis (LDA) [64]. In [65], a multi-task nonnegative matrix factorization (MTNMF) approach is proposed to learn the different mappings for all domains by simultaneously factorizing their data representation and feature-class correlation matrices. The factorized class representation matrix is assumed to be shared by all tasks. This leads to a common subspace for the different domains.
All of these approaches are closely related to subspace approximation based heterogeneous TML, but they mainly utilize fully-supervised class labels to learn feature mappings for the different domains. As mentioned previously, it is common in DML to utilize weakly-supervised pair/triplet constraints, and it is not hard to adapt these approaches for heterogeneous TML by adopting some metric learning loss w.r.t. pair/triplet constraints in these models.
APPLICATIONS
In general, for any application where DML is appropriate, TML is a good candidate when the label information is scarce or hard to collect. In Table 4, we summarize the different applications in which TML has been utilized.
Homogeneous TML
Computer vision
Similar to DML [66], most TML approaches are applied in computer vision. For example, the effectiveness of many homogeneous TML methods is verified on the common image classification application, which includes handwritten letter/digit classification [15], [16], [18], [39], face recognition [14], [19], natural scene categorization and object recognition [16], [19], [27].
DML is particularly suitable and crucial for some applications, such as face verification [8], person re-identification [5] and image retrieval [10]. This is because in these applications, results can be directly inferred from the distances between samples. Face verification aims to decide whether two face images belong to the same person or not. In [8], TML is applied to face verification across different datasets, where the distributions vary. The goal of person re-identification is to decide whether people appearing in multiple cameras are the same person or not, where the cameras often do not have overlapping views. The data distributions of the images captured by different cameras vary due to the varying illumination, background, etc. Besides, the distribution may change over time even for the same camera. Hence, TML can be very useful in person re-identification [5], [8], [67]. An efficient MTML approach is proposed in [10] to make use of auxiliary datasets for face retrieval, where the tasks vary across datasets. Stochastic gradient descent (SGD) is adopted for optimization and the algorithm is scalable to large amounts of training data and high-dimensional features.
Speech recognition
Different groups of speakers utter English alphabet letters in different ways. In [17], [18], [36], alphabet recognition in each group is regarded as a task, and MTML is employed to learn the metrics of different groups together. Similarly, since men and women have different pronunciation styles, vowel classification is performed for two different groups according to gender, and MTML is adopted to learn their metrics simultaneously by making use of all available labeled data [18].
Other applications
In [68], MTML is used for predictions in social networks. For example, citation prediction is to predict the referencing between articles given their contents. The citation patterns of different areas (such as computer science and engineering) are different but related, and thus MTML is adopted to learn the prediction models of multiple areas simultaneously. Social circle prediction is to assign a person to appropriate social circles given his/her profile. Different types of social circles (such as family members and colleagues) are different but related to each other, and hence MTML is applied to improve the performance. In [17], [18], [29], MTML is applied to customer information prediction in an insurance company. There are multiple variables that can be used to predict the interest of a person in buying a certain insurance policy. Each variable is a discrete value and can be predicted using the other variables. The predictions of the different variables can be conducted together since they are correlated with each other.
Heterogeneous TML
Computer vision
Similar to homogeneous TML, heterogeneous TML is also mainly applied in the computer vision community, e.g., to image classification including face recognition [63], natural scene categorization [12], [24], [25] and object recognition [12], [23], [25], image clustering [11], image retrieval [11], [12], [69], and face verification [12]. In these applications, either the feature dimensions vary or different types of features are extracted for the source and target domains. In particular, expensive features (which have strong representation power but high computational cost, such as CNN features [70]) can be used to guide learning an improved metric for relatively cheap features (such as LBP [71]), and interpretable text features can help the metric learning of visual features, which are often hard to interpret [22], [65].
In [11], heterogeneous TML is adopted to improve image super-resolution, which is to generate a high-resolution (HR) image from its low-resolution (LR) counterpart. The method is based on JOR [72], which is an example-based super-resolution approach. JOR needs to find the nearest neighbors of the LR images, and a metric is learned in [11] to replace the Euclidean metric in the k-NN search by leveraging information from the HR domain.
Text analysis
In the text analysis area, heterogeneous TML is mainly applied by using labeled documents written in one language (such as English) to help the analysis of documents in another language (such as Spanish). The utilized vocabularies vary across languages, and thus the data representations are heterogeneous for the different domains. Some typical examples include text categorization [23], [62], sentiment classification [65] and document retrieval [62]. In [65], heterogeneous MTML is applied to email spam detection, since the vocabularies of different persons' emails vary.
CONCLUSION AND DISCUSSION
Summary
In this survey, we provide a comprehensive and structured overview of transfer metric learning (TML) methods and their applications. We generally group TML into homogeneous and heterogeneous TML according to the feature setting. Similar to [7], the TML approaches can also be classified into inductive, transductive and unsupervised TML according to the label setting. According to the transfer strategy, we further categorize the TML approaches into four contexts, i.e., TML via metric approximation, TML via distribution approximation, TML via subspace approximation and TML via distance approximation.
Homogeneous TML has been studied extensively under the inductive setting, and various transfer strategies can be adopted. In the transductive setting, TML is mainly conducted by distribution approximation, and there are still no unsupervised methods for homogeneous TML. Unsupervised TML can be carried out under the heterogeneous setting. This is because if a more powerful feature is utilized in the source domain, the distance estimation there can be better than that in the target domain [11]. Since the data representations vary across domains in heterogeneous TML, most of these approaches find a common subspace for knowledge transfer.
Challenges and future directions
We finally identify some challenges in TML and speculate several possible future directions.
Selective transfer in TML
Current transfer learning and TML algorithms usually assume that the source tasks or domain samples are positively related to the target ones. However, this assumption may not hold in real-world applications [15], [73]. The TML algorithm presented in [15] can leverage negatively correlated tasks by learning a task correlation matrix. In [74], the relations of 26 popular visual learning tasks are learned using a large image dataset, where each image has annotations in all tasks. This leads to a task taxonomy map, which can be used to guide the choice of appropriate supervision policies in transfer learning. Different from these approaches, which consider selective transfer [75] at the task level, a heterogeneous transfer learning method based on the attention mechanism is proposed in [73], which can avoid negative transfer at the instance level. The low-rank TML model presented in [50] can also avoid negative transfer to some extent by filtering out noisy information in the source domain.
Task correlations have been exploited for metric approximation based TML [15], and the attention scheme can be used for subspace approximation based TML following [73]. It is still unclear how to conduct selective transfer in distribution and distance approximation based TML. Adopting the attention scheme may be one choice, but it cannot make use of negative transfer. Therefore, a promising future direction may be to conduct selective transfer at the hypothesis-space level, so that both positive and negative transfer can be effectively utilized.
More theoretical understanding of TML
There is a theoretical study in [12], which shows that the generalization ability of the target metric can be improved by directly enforcing the source feature mappings to agree with the target mappings. But there is still a lack of a general analysis scheme (such as [76], [77], [78]) and of theoretical results for TML. In particular, more theoretical studies should be conducted to understand when and how the source domain knowledge can help the target metric learning.
TML for handling complex data
Most current TML approaches only learn linear metrics (such as the Mahalanobis metric). However, there may be nonlinear structure in the data, e.g., in most visual feature representations. A linear metric may fail to capture such structure, and hence it is desirable to learn a nonlinear metric for the target domain in TML. There have been several works on nonlinear homogeneous TML based on neural networks [8], [55], [58]. But all of them are mainly designed for continuous real-valued data and learn real-valued metrics. More studies can be conducted on histogram data or on learning binary target metrics. Histogram data are popular in visual analytics based applications, and binary metrics are efficient in distance calculation. To the best of our knowledge, there is only one nonlinear TML work under the heterogeneous setting [25] (with an extension presented in [12]), where gradient boosting regression trees (GBRT) [79], [80] are adopted to learn a nonlinear metric in the target domain. Some other nonlinear learning techniques can be investigated, and binary metrics can also be learned to accelerate prediction. In addition, when the structure of the target data distribution is very complex, it could be a good choice to learn a Riemannian metric [81] or multiple local metrics [82] to approximate the geodesic distance in the target domain.
TML for handling changing and big data
In TML, all the training data in the source and target domains are usually assumed to be provided at once, and a fixed target metric is learned. However, in real-world applications, the data usually come in a sequential order and the data distribution may change over time. For example, tremendous amounts of data are uploaded to the web every day, and for a robot, the environment changes over time and feedback is provided continuously. Therefore, it is desirable to develop TML algorithms that make the metric adapt to such changes. Some closely related topics include online learning [83], [84] and lifelong learning [85]. There is a recent attempt in [30], where an online heterogeneous TML approach is developed. However, this approach needs abundant unlabeled corresponding data in the source and target domains for knowledge transfer. Hence, the approach may not be efficient when vast amounts of unlabeled data are needed to achieve satisfactory accuracy.
Although the number of training data in the target domain is often assumed to be small, continuously changing data is "big" in the long term. In addition, when the feature dimension is high, the computational cost of the distances between vast amounts of samples based on a learned Mahalanobis metric is intolerable. A typical example is information retrieval. Therefore, it is desirable to learn a target metric that is efficient in distance calculation, e.g., to learn a Hamming distance metric [86], [87] or feature hashing [44], [88], [89], [90] in the target domain.
TML for handling extreme cases
One-shot learning [91] and zero-shot learning [92] are two extreme cases of transfer learning. In these cases, the number of labeled data in the target domain is very small (perhaps only one) and possibly even zero. The main goal is to recognize rare or unseen classes [93], where some additional knowledge (such as descriptions of the relations between existing and unseen classes) may be provided. This is more like human learning, and very useful in practice. These settings are closely related to the concept of domain generalization [94], [95], [96].
DML has been found to be useful in learning unknown classifiers [97] (with an extension in [98]), but it does not aim to learn a metric in the target domain. In [99], an unbiased metric is learned across different domains, but no specific information about the target domain is leveraged. Although some existing TML algorithms allow no labeled data in the target domain [8], [11], they need large amounts of unlabeled target data, which can be regarded as additional knowledge. If we do not have unlabeled data, is it possible to utilize other semantic information to help the target metric learning? There is an attempt in [100], where the ColorChecker Chart is utilized as additional information for person re-identification under the one-shot setting. But such information is not easy to access and not general across applications. Hence, more common and easily accessible knowledge should be identified and explored for general TML under the one/zero-shot setting.
| 8,941 |
1810.03944
|
2896778177
|
Distance metric learning (DML) aims to find an appropriate way to reveal the underlying data relationship. It is critical in many machine learning, pattern recognition and data mining algorithms, and usually requires a large amount of label information (such as class labels or pair/triplet constraints) to achieve satisfactory performance. However, the label information may be insufficient in real-world applications due to the high labeling cost, and DML may fail in this case. Transfer metric learning (TML) is able to mitigate this issue for DML in the domain of interest (target domain) by leveraging knowledge from other related domains (source domains). Although it has achieved a certain level of development, TML has limited success in various aspects such as selective transfer, theoretical understanding, handling complex data, big data and extreme cases. In this survey, we present a systematic review of the TML literature. In particular, we group TML into different categories according to different settings and metric transfer strategies, such as direct metric approximation, subspace approximation, distance approximation, and distribution approximation. A summarization and insightful discussion of the various TML approaches and their applications will be presented. Finally, we indicate some challenges and provide possible future directions.
|
TFL is very similar to TSL and a representative method is presented in @cite_19 , where the typical MMD is modified to take both the marginal and class-conditional distributions into consideration. More recent works on TFL are built upon powerful deep feature learning. For example, considering that the features in deep neural networks are usually general in the first layers and task-specific in higher layers, @cite_84 propose the deep adaptation network (DAN), which freezes the general layers in convolutional neural networks (CNN) @cite_17 and only conducts adaptation in the task-specific layers. Besides, multi-kernel MMD (MK-MMD) @cite_38 is employed to improve kernel selection in MMD. In DAN, only the marginal distribution difference between the source and target domains is exploited. This is improved by the joint adaptation networks (JAN) @cite_0 , which are able to reduce the joint distribution divergence using a proposed joint MMD (JMMD). The JMMD can involve both the input features and output labels in domain adaptation. The constrained deep TSL @cite_120 method can also exploit the joint distribution, and the target domain knowledge is incorporated gradually during a progressive transfer procedure.
|
{
"abstract": [
"Given samples from distributions p and q, a two-sample test determines whether to reject the null hypothesis that p = q, based on the value of a test statistic measuring the distance between the samples. One choice of test statistic is the maximum mean discrepancy (MMD), which is a distance between embeddings of the probability distributions in a reproducing kernel Hilbert space. The kernel used in obtaining these embeddings is critical in ensuring the test has high power, and correctly distinguishes unlike distributions with high probability. A means of parameter selection for the two-sample test based on the MMD is proposed. For a given test level (an upper bound on the probability of making a Type I error), the kernel is chosen so as to maximize the test power, and minimize the probability of making a Type II error. The test statistic, test threshold, and optimization over the kernel parameters are obtained with cost linear in the sample size. These properties make the kernel selection and test procedures suited to data streams, where the observations cannot all be stored in memory. In experiments, the new kernel selection approach yields a more powerful test than earlier kernel selection heuristics.",
"Feature learning with deep models has achieved impressive results for both data representation and classification for various vision tasks. Deep feature learning, however, typically requires a large amount of training data, which may not be feasible for some application domains. Transfer learning can be one of the approaches to alleviate this problem by transferring data from data-rich source domain to data-scarce target domain. Existing transfer learning methods typically perform one-shot transfer learning and often ignore the specific properties that the transferred data must satisfy. To address these issues, we introduce a constrained deep transfer feature learning method to perform simultaneous transfer learning and feature learning by performing transfer learning in a progressively improving feature space iteratively in order to better narrow the gap between the target domain and the source domain for effective transfer of the data from the source domain to target domain. Furthermore, we propose to exploit the target domain knowledge and incorporate such prior knowledge as a constraint during transfer learning to ensure that the transferred data satisfies certain properties of the target domain. To demonstrate the effectiveness of the proposed constrained deep transfer feature learning method, we apply it to thermal feature learning for eye detection by transferring from the visible domain. We also applied the proposed method for cross-view facial expression recognition as a second application. The experimental results demonstrate the effectiveness of the proposed method for both applications.",
"Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.",
"Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state of the art results on standard datasets.",
"Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry."
],
"cite_N": [
"@cite_38",
"@cite_120",
"@cite_84",
"@cite_0",
"@cite_19",
"@cite_17"
],
"mid": [
"2110097068",
"2962474440",
"2951670162",
"2408201877",
"2096943734",
"2163605009"
]
}
|
Transfer Metric Learning: Algorithms, Applications and Outlooks
|
It is critical to evaluate the distances between samples in pattern analysis and machine learning applications. If an appropriate distance metric can be obtained, even the simple k-nearest neighbor (k-NN) classifier or k-means clustering can perform well [1], [2]. In addition, for large-scale and efficient information retrieval, the results are usually obtained directly according to the distances to the query [3], and a good distance metric is also the key to many other important applications, such as face verification [4] and person re-identification [5].
To learn a reliable distance metric, we usually need a large amount of label information, which can be the class labels or target values as used in typical machine learning approaches (such as classification or regression); it is more common to utilize pair- or triplet-based constraints [6]. Such constraints are weakly-supervised, since the exact label of an individual sample is unknown. However, in real-world applications, label information is often scarce, since manual labeling is labor-intensive and it is exhausting or even impossible to collect abundant side information for a new learning problem.
Transfer learning [7], which aims to mitigate the label deficiency issue in model training, is thus introduced to improve the performance of distance metric learning (DML) when the label information is insufficient in a target domain. This leads to the so-called transfer metric learning (TML), which has been found to be very useful in many applications. For example, in face verification [8], the main step is to estimate the similarities/distances between face images. The data distributions of images captured under different scenarios vary due to the varied background, illumination, etc. Therefore, the metric learned in one scenario may not be effective in a new scenario, and TML would be helpful. In person re-identification [5], [9], the key is to estimate the similarities/distances between images of persons appearing in different cameras. The data distributions of the images captured using different cameras vary due to the varied camera settings and scenarios. In addition, the distribution may change over time for the same camera. Hence, calibration is needed to achieve satisfactory performance, and TML is able to reduce this effort. A more general example is image retrieval, where the data distributions of images in different datasets vary [10]. It would also be very useful to utilize expensive or semantic features to help learn a metric for cheap features or for features that are hard to interpret [11], [12].
In the past decade, dozens of works have been proposed in this area, and we provide in this survey a comprehensive overview of these methods. We aim to help machine learners quickly grasp the TML research area, and to facilitate the choice of appropriate methods for machine learning practitioners. Besides, there are still many issues to be tackled in TML, and we hope that some new ideas can be inspired by this survey.
The rest of this survey is organized as follows. We first present the background and overview of TML in Section 2, which includes a brief history of TML, the main notations used throughout the paper, and a categorization of the TML approaches. In the subsequent two sections, we give a detailed description of the approaches in the two main categories, i.e., homogeneous and heterogeneous TML, respectively. Section 5 summarizes the different applications of TML. Finally, we conclude this survey and identify some possible future directions in Section 6.

Fig. 1. Evolution of transfer metric learning, which has been studied for almost ten years.
A brief history of transfer metric learning
Transfer metric learning (TML) is a relatively new research field. Works that explicitly apply transfer learning to improve DML started around 2009. For example, multiple auxiliary (source) datasets are utilized in [14] to help the metric learning on the target set. The main idea is to enforce the target metric to be close to the different source metrics. An adaptive weight is learned to reflect the contribution of each source metric to the target metric. In [15], such contributions are determined by learning a covariance matrix between the different metrics. Instead of directly learning the target metric, the decomposition based method [16] assumes that the target metric can be represented as a linear combination of multiple base metrics, which can be derived from the source metric. Hence, the metric learning is cast as learning combination coefficients, where the number of parameters to be learned can be much smaller. We can not only use source metrics to help the target metric learning, but also make different DML tasks help each other. The latter is often called multi-task metric learning (MTML). One representative work is the multi-task extension [17] of the well-known DML algorithm LMNN [2]. Some other related works include GPMTML [18], MtMCML [5] and CP-mtML [10]. In addition, there are a few domain adaptation metric learning approaches [19], [20]. Most of the above methods can only learn a linear metric for the target domain. The domain adaptation metric learning (DAML) approach presented in [19] is able to learn a nonlinear target metric based on the kernel method. Recently, neural networks have also been employed to conduct nonlinear metric transfer [8] by taking advantage of deep learning techniques [21].
The study of heterogeneous TML started a bit later than homogeneous TML, and there are far fewer works than in the homogeneous setting. To the best of our knowledge, the first work explicitly designed for heterogeneous TML is the one presented in [22], but it is limited in that only two domains (one source and one target domain) can be handled. There exist a few tensor based approaches [23], [24] for heterogeneous MTML, where the high-order correlations between all domains are exploited. A main disadvantage of these approaches is that the computational complexity is high. Dai et al. [11] propose an unsupervised heterogeneous TML algorithm, which aims to use some "expensive" (sophisticated, off-the-shelf) features to help learn a metric for relatively "cheap" features. This is also termed metric imitation. Recently, a general heterogeneous TML framework was proposed in [12], [25]. The framework first extracts some knowledge fragments (linear or nonlinear mappings) from a pre-trained source metric, and then uses these fragments to help the target domain learn either a linear or nonlinear distance metric. The framework is flexible and easy to use. An illustration of the evolution of TML is shown in Fig. 1.
Notations and definitions
In this survey, we assume there are $M$ different domains, and the m'th domain is associated with a feature space $\mathcal{X}_m$ and marginal distribution $P_m(X_m)$. Without loss of generality, we assume the M'th (the last) domain is the target domain, and all the remaining ones are source domains. If there is only one source domain, we signify it using the subscript "S". In distance metric learning (DML), the task is to learn a distance function for any two instances, i.e., $d_\phi(x_i, x_j)$, which must satisfy several properties including nonnegativity, identity, symmetry and the triangle inequality [6]. Here, $\phi$ is the parameter of the distance function, and we call it the distance metric in this survey. For a nonlinear distance metric, $\phi$ is often given by a nonlinear feature mapping. A linear metric is denoted as $A$, which is a positive semidefinite (PSD) matrix adopted in the popular Mahalanobis metric learning [1].
To learn the metric in the m'th domain, we assume there is a training set $\mathcal{D}_m$, which contains $N_m$ samples, with $x_{mi} \in \mathbb{R}^{d_m}$ being the feature representation of the i'th sample. In a fully-supervised scenario, the corresponding label $y_{mi}$ is also given. However, DML is usually conducted in a weakly-supervised manner, where only some similar/dissimilar constraints on training sample pairs $(x_{mi}, x_{mj})$ are provided. Alternatively, the constraint can be a relative comparison for a training triplet $(x_{mi}, x_{mj}, x_{mk})$, e.g., $x_{mi}$ is more similar to $x_{mj}$ than to $x_{mk}$ [6].
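Since the linear (Mahalanobis) metric $A = U U^T$ appears throughout this survey, a small self-contained sketch may help fix the notation; all names here are illustrative.

```python
import numpy as np

def mahalanobis_distance(x_i, x_j, A):
    """d_A(x_i, x_j) = sqrt((x_i - x_j)^T A (x_i - x_j)) for a PSD matrix A."""
    delta = x_i - x_j
    return np.sqrt(delta @ A @ delta)

rng = np.random.default_rng(0)
U = rng.normal(size=(5, 3))              # d = 5 features, r = 3 latent dimensions
A = U @ U.T                              # A = U U^T is PSD by construction
x_i, x_j = rng.normal(size=5), rng.normal(size=5)

# The metric distance equals the Euclidean distance between the mapped points U^T x,
# which is why learning a PSD metric and learning a linear mapping are interchangeable views.
assert np.isclose(mahalanobis_distance(x_i, x_j, A),
                  np.linalg.norm(U.T @ x_i - U.T @ x_j))
```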
In traditional DML, we are often provided with abundant labeled data (such as samples with similar/dissimilar constraints) so that the learned metric $A^*$ can well separate semantically similar data from dissimilar ones, such as "zebra" and "tiger" shown in Fig. 2. In real-world applications, however, the learned target metric $A_M$ may not be satisfactory, since the labeled data are insufficient in the target domain. For example, it may be hard to distinguish "zebra" from "tiger" given only a few labeled samples, since the two types of animals have very similar stripe texture. To mitigate the label deficiency issue in the target metric learning, we may utilize information from another related source domain, where the distance metric $A_S^*$ is good enough or a good metric can be learned using large amounts of labeled data. For example, if we have enough labeled samples to well distinguish "horse" from "cat", then it may be very easy for us to recognize "zebra" and "tiger" by observing only a few labeled samples. The source metric cannot be directly used in the target domain due to the different data distributions [14] or representations [22] between the source and target domains. Therefore, (homogeneous or heterogeneous) transfer metric learning (TML) is developed to improve the target metric by transferring knowledge (particularly, the metric information) from the source domain. A summarization and discussion of the various TML methods is given as follows.

Fig. 2. An illustration of traditional distance metric learning (DML) and transfer metric learning (TML). Given abundant labeled data, DML aims to learn a distance function between samples so that their distance is small if they are semantically similar and large otherwise. TML improves DML when the labeled data are insufficient in the target domain by utilizing information from related source domains, which have better distance estimations between samples. For example, it may be hard to distinguish "zebra" from "tiger" by observing only a few labeled samples due to the very similar stripe texture. But this task can be much easier if we have enough labeled samples to well distinguish "horse" from "cat". The sample images are from the NUS-WIDE [13] dataset.
A categorization of transfer metric learning techniques
As shown in Fig. 3, we can classify TML into different categories according to various principles. Firstly, TML can be generally grouped into homogeneous TML and heterogeneous TML according to the feature setting. In the former group, the samples of different domains lie in the same feature space ($\mathcal{X}_1 = \mathcal{X}_2 = \cdots = \mathcal{X}_M$), and only the data distributions vary ($P_1(X_1) \neq P_2(X_2) \neq \cdots \neq P_M(X_M)$).

Whereas in heterogeneous TML, the feature spaces are different ($\mathcal{X}_1 \neq \mathcal{X}_2 \neq \cdots \neq \mathcal{X}_M$) and there may be a semantic gap between the source and target domains. For example, in the problem of image matching, we may have only a few labeled images in a new scenario due to the high labeling cost, but there are large amounts of labeled images in some other scenarios. The data distributions of different scenarios vary since there are different backgrounds, illuminations, etc. Besides, web images are usually associated with text descriptions, and it is useful to utilize the semantic textual features to help learn a better distance metric for visual features [22]. The data representations are quite different for the textual and visual domains.
We can also categorize the different TML approaches as inductive TML, transductive TML, and unsupervised TML according to whether the label information is available in the source or target domains. The relationships of the three learning settings are summarized in Table 1. This is similar to the categorization of transfer learning presented in [7]. Furthermore, we summarize the TML approaches into four different cases according to the utilized transfer strategies. Some early works of TML directly enforce the target metric to be close to the source metric, and we thus refer to this as TML via metric approximation. Since the main difference between the source and target domains in homogeneous TML is the distribution divergence, some approaches enable metric transfer by minimizing the distribution difference. We refer to this case as TML via distribution approximation. There is a large number of TML approaches that enable knowledge transfer by finding a common subspace for the source and target domains, especially in heterogeneous TML. This context is referred to as TML via subspace approximation. Finally, there are a few works that let the distance functions of different domains share some common parts, or enforce the distances of corresponding sample pairs to agree with each other in different domains; we refer to this as TML via distance approximation. The former two cases are usually used in homogeneous TML, and the latter two cases can be adopted for heterogeneous TML. Table 2 gives a brief description of these cases.

TABLE 2: Brief descriptions of the four metric transfer strategies.

TML approaches | Brief description
Metric approximation | Use the target metric to approximate the source metric [14], [15], [16], [17], [18], [26].
Distribution approximation | Conduct metric transfer by minimizing the divergence between the data distributions of different domains [8], [19], [20], [27], [28].
Subspace approximation | Conduct metric transfer by finding a common subspace for different domains [12], [22], [23], [24], [25], [29].
Distance approximation | Share common parts between distance functions or enforce agreement between distances of corresponding sample pairs in different domains [10], [11], [30].
In Table 3, we show which strategies are currently employed for the different settings. In homogeneous TML, most of the current algorithms are inductive, and the transductive ones are usually conducted via distribution approximation. There is still no unsupervised method, and a possible solution is to extend some unsupervised DML (e.g., [31]) or transfer learning (e.g., [32]) algorithms for unsupervised TML. One challenge is how to ensure the metric learned in the source domain is better, since there are no labeled data in either the source or target domain. In the heterogeneous setting [33], since the feature dimensions of different domains do not have correspondences, it is inappropriate to conduct TML via direct metric approximation. Most of the current heterogeneous TML approaches first find a common subspace for the different domains, and then conduct knowledge transfer in the subspace. Unsupervised heterogeneous TML can be easily extended to the transductive heterogeneous setting by further utilizing source labels, and it is possible to adopt the distribution approximation strategy in the heterogeneous setting by first finding a common representation for the different domains.
HOMOGENEOUS TRANSFER METRIC LEARNING
In homogeneous TML, the utilized features (data representations) are the same, but the data distributions vary across domains. For example, in sentiment classification as shown in Fig. 4, we would like to determine the sentiment polarity (positive, negative or neutral) of a review of electronics. The performance of a sentiment classifier depends much on the distance estimation between reviews. To obtain reliable distance estimates, we usually need large amounts of labeled reviews to learn a good distance metric. However, we may only have a few labeled electronics reviews due to the high labeling cost, and thus the obtained metric is not satisfactory. Fortunately, we may have abundant labeled book reviews, which are often easier to collect. Directly applying the metric learned using the labeled book reviews to the sentiment classification of electronics reviews is not appropriate due to the distribution difference between the electronics and book reviews. Transfer metric learning is able to deal with this issue and learn an improved distance metric for the target sentiment classification of electronics reviews by using the labeled book reviews.

Fig. 4. An example of homogeneous transfer metric learning. In sentiment classification, the distance metric learned for target (such as electronics) reviews may not be satisfactory due to insufficient labeled data. Homogeneous TML improves the metric by using abundant labeled source (such as book) reviews, where the data distribution is different from that of the target reviews.
Inductive TML
Under the inductive setting, we are provided with a few labeled data in the target domain. The number of labeled data in the source domain is large enough so that a good distance metric can be obtained, i.e., $N_S \gg N_M > 0$. In inductive transfer learning [7], there may be no labeled source data ($N_S = 0$), but we have not seen such works in homogeneous TML.
TML via metric approximation
An intuitive idea for homogeneous TML is to first use the source domain data $\{\mathcal{D}_m\}$ to learn the source distance metrics $\{\phi_m\}$ beforehand, and then enforce the target metric to be close to the pre-trained source metrics. Therefore, the general formulation for learning the target metric $\phi_M$ is given by
$$\arg\min_{\phi_M} \varepsilon(\phi_M) = L(\phi_M; \mathcal{D}_M) + \gamma R(\phi_M; \phi_1, \cdots, \phi_{M-1}), \quad (1)$$
where $L(\phi_M; \mathcal{D}_M)$ is the empirical loss w.r.t. the metric, $R(\phi_M; \phi_1, \cdots, \phi_{M-1})$ is a regularization term that exploits the relationship between the source and target metrics, and $\gamma \geq 0$ is a trade-off hyper-parameter. Any loss function used in standard DML can be adopted, and the key is how to design an appropriate regularization term. In [14], two different regularization terms are developed. The first one minimizes the LogDet divergence [34] between the source and target Mahalanobis metrics, i.e.,
$$R(A_M; A_1, \cdots, A_{M-1}) = \sum_{m=1}^{M-1} \alpha_m D_{LD}(A_M, A_m) = \sum_{m=1}^{M-1} \alpha_m \left[ \mathrm{tr}(A_m^{-1} A_M) - \mathrm{logdet}(A_M) \right]. \quad (2)$$
Here, $\{A_m \succeq 0\}_{m=1}^{M}$ are constrained to be PSD matrices, and $D_{LD}(\cdot, \cdot)$ indicates the LogDet divergence of two matrices. This is more appropriate than the Frobenius norm of the matrix difference due to the desirable properties of the LogDet divergence, such as scale invariance [34]. The coefficients $\{\alpha_m\}$, which satisfy $\alpha_m \geq 0$ and $\sum_{m=1}^{M-1} \alpha_m = 1$, are learned to reflect the contributions of the different source metrics to the target metric. Secondly, to exploit the geometric structure of the data distribution, Zha et al. [14] propose a regularization term based on manifold regularization [35]:
$$R(A_M; A_1, \cdots, A_{M-1}) = \sum_{m=1}^{M-1} \alpha_m \mathrm{tr}\left( X^U L_m (X^U)^T A_M \right), \quad (3)$$
where $X^U$ is the feature matrix of the unlabeled data, and $L_m$ is the Laplacian matrix of the data adjacency graph calculated based on the metric $A_m$. In [15], the importance of the source metrics to the target metric is exploited by learning a task covariance matrix over the metrics. The matrix can model the correlations between different tasks. This approach allows negative and zero transfer.
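As a concrete illustration of the metric approximation strategy, the following sketch evaluates the weighted LogDet regularizer exactly as written in Eq. (2); it assumes strictly positive definite metrics so that the inverse and log-determinant exist, and the names are illustrative.

```python
import numpy as np

def logdet_regularizer(A_M, source_metrics, alphas):
    """Weighted LogDet regularizer of Eq. (2): sum_m alpha_m [tr(A_m^{-1} A_M) - logdet(A_M)].
    All metrics are assumed positive definite."""
    _, logdet_M = np.linalg.slogdet(A_M)
    return sum(a * (np.trace(np.linalg.solve(A_m, A_M)) - logdet_M)
               for a, A_m in zip(alphas, source_metrics))
```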
Both of the above approaches incorporate the source metrics into a regularization term to penalize the target metric learning. Different from them, a novel decomposition-based TML method is proposed in [16], which constructs the target metric by using base metrics derived from the source metrics, that is,

$$A_M = U_M \mathrm{diag}(\theta) U_M^T = \sum_{r=1}^{N_B} \theta_{Mr} u_{Mr} u_{Mr}^T = \sum_{r=1}^{N_B} \theta_{Mr} B_{Mr}. \quad (4)$$

Hence, the number of parameters to be learned is reduced significantly, and the performance can be improved since the labeled samples in the target domain are scarce. Another advantage of the model is that the PSD constraint on the target metric can be automatically satisfied, and thus the computational cost is low. A semi-supervised extension was presented in [26] by combining it with manifold regularization.
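A minimal sketch of the decomposition idea of Eq. (4) follows; with nonnegative coefficients the PSD constraint holds by construction, which is the computational advantage mentioned above (names are illustrative).

```python
import numpy as np

def compose_target_metric(base_vectors, theta):
    """A_M = sum_r theta_r u_r u_r^T (Eq. (4)); base_vectors is (d, N_B) with one base
    direction u_r per column, theta holds the N_B combination coefficients to be learned."""
    d = base_vectors.shape[0]
    A_M = np.zeros((d, d))
    for theta_r, u_r in zip(theta, base_vectors.T):
        A_M += theta_r * np.outer(u_r, u_r)   # each B_r = u_r u_r^T is a base metric
    return A_M                                # PSD whenever all theta_r >= 0
```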
In addition to utilizing the source metrics to help the target metric learning, there exist some multi-task metric learning (MTML) approaches that enable different metrics to help each other during metric learning. A representative work is large margin multi-task metric learning (mtLMNN) [17], which is a multi-task extension of the well-known DML algorithm large margin nearest neighbor (LMNN) [2]. In mtLMNN, all the different metrics are learned simultaneously by assuming that each metric consists of a common metric $A_0$ and a task-specific part $\tilde{A}_m$, i.e., $A_m = A_0 + \tilde{A}_m$. Based on the same idea, a semi-supervised MTML method is developed in [36], where the unlabeled data is utilized by designing a loss that preserves the neighborhood relationship. A regularization term is then designed to control the amount of information to be shared among all tasks. In [15], an MTML approach is presented by first vectorizing the Mahalanobis metrics and then using a task covariance matrix to exploit the task relationship. Similarly, the metrics are vectorized in [5], but the different metrics are enforced to be close under the graph-based regularization scheme [37]. In addition, a general MTML framework is proposed in [18], which enables knowledge transfer by enforcing the different metrics $\{A_m\}$ to be close to a common metric $A_0$. The general Bregman matrix divergence [38] is introduced to measure the difference between two metrics. The framework incorporates mtLMNN as a special case, and the geometry is preserved during transfer by adopting a special Bregman divergence, i.e., the von Neumann divergence [38].
TML via subspace approximation
Most of the TML approaches via direct metric approximation share a main drawback: when the feature dimension is high, the model is prone to overfitting due to the large number of parameters to be learned. This also leads to a high computational cost in both training and prediction. To tackle this issue, some low-rank TML methods have been proposed. They usually decompose the metric as $A_m = U_m U_m^T$, where $U_m \in \mathbb{R}^{d_m \times r}$ is a low-rank transformation matrix. This leads to a common subspace for the different domains, and the knowledge transfer is conducted in the subspace. For example, a low-rank multi-task metric learning framework is proposed in [29], [39], which assumes that each transformation is the product of a common transformation and a task-specific one, i.e., $U_m = \tilde{U}_m U_0$. As a special case, large margin component analysis (LMCA) [40] is extended to multi-task LMCA (mtLMCA), which is shown to be superior to mtLMNN.
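The shared/specific factorization can be sketched as follows; the shapes and names are illustrative, and the actual training objectives of [29], [39] (e.g., the LMCA loss) are omitted.

```python
import numpy as np

# Illustrative shapes: d input features, a shared r0-dimensional space, r task dimensions.
d, r0, r, n_tasks = 100, 20, 10, 3
rng = np.random.default_rng(0)
L_0 = rng.normal(size=(r0, d))                               # common transformation (shared)
L_spec = [rng.normal(size=(r, r0)) for _ in range(n_tasks)]  # task-specific transformations

def task_distance(m, x_i, x_j):
    """Distance of task m; the metric A_m = (L_spec[m] L_0)^T (L_spec[m] L_0) is never formed."""
    L_m = L_spec[m] @ L_0                                    # composed low-rank transformation
    return np.linalg.norm(L_m @ (x_i - x_j))
```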
TML via distance approximation
Both mtLMNN and mtLMCA are trained based on labeled sample triplets. Different from them, CP-mtML [10] learns the metrics using labeled pairs, which are often easier to collect. Similar to mtLMCA, CP-mtML decomposes the metric as $A_m = U_m U_m^T$, but the different projections $\{U_m\}$ are coupled by assuming that the distance function consists of a common part and a task-specific one, i.e.,
$$d_{U_m}^2(x_i, x_j) = d_{\tilde{U}_m}^2(x_i, x_j) + d_{U_0}^2(x_i, x_j). \quad (5)$$
A main advantage of CP-mtML is that the optimization problem can be solved efficiently using stochastic gradient descent (SGD), and hence the model is scalable to high-dimensional features and large amounts of training data. Besides, the learned transformation can be used to derive low-dimensional features, which are desirable in large-scale information retrieval.
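The coupled distance of Eq. (5) is straightforward to evaluate; a minimal sketch with illustrative names is:

```python
import numpy as np

def coupled_squared_distance(x_i, x_j, U_common, U_task):
    """Eq. (5): squared distance as a shared part plus a task-specific part.
    U_common and U_task are (d, r) projection matrices."""
    delta = x_i - x_j
    return np.sum((U_common.T @ delta) ** 2) + np.sum((U_task.T @ delta) ** 2)
```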
Transductive TML
Under the transductive setting, there are no labeled data in the target domain and we only have large amounts of labeled source data, i.e., $N_S \gg N_M = 0$.
TML via distribution approximation
In homogeneous TML, the data distributions vary across domains. Therefore, we can minimize the distribution difference between the source and target domains, so that the source domain samples can be reused in the target metric learning. In [19], a domain adaptation metric learning (DAML) approach is proposed. In DAML, the distance metric is parameterized by a feature mapping $\phi_M$. The mapping is learned by first transforming the samples in the source and target domains using the mapping, and then minimizing the distribution difference between the source and target domains in the transformed space. At the same time, $\phi_M$ is learned to make the transformed samples satisfy the similar/dissimilar constraints in the source domain. The general formulation for learning $\phi_M$ is given by
$$\arg\min_{\phi_M} \varepsilon(\phi_M) = L(\phi_M; \mathcal{D}_S) + \gamma D_{PD}\left(P_M(X_M), P_S(X_S)\right), \quad (6)$$
where $D_{PD}(\cdot, \cdot)$ is a measure of the difference between two probability distributions. Maximum mean discrepancy (MMD) [41] is adopted as the measure in DAML. The nonlinear mapping $\phi_M$ is learned in a reproducing kernel Hilbert space (RKHS), and the solution is found using the kernel method. Since the source and target samples in the transformed space follow similar distributions, the mapping learned using the source label information is also discriminative in the target domain. The same idea is adopted in deep TML (DTML) [8]; the main difference is that the nonlinear mapping is assumed to be a multi-layer neural network. The knowledge transfer is conducted at the output layer and each hidden layer, and some weight hyper-parameters are set to balance the importance of the losses in the different layers. A major limitation of these works is that they only consider the marginal distribution difference. This limitation is overcome in [27], where a novel TML method is developed by simultaneously reducing the marginal and conditional distribution divergences between the source and target domains. The conditional distribution divergence is reduced by first assigning pseudo labels to the target domain data using classifiers trained on the source domain data, and then applying the class-wise MMD [42].
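For concreteness, a standard (biased) empirical estimate of the squared MMD with a Gaussian kernel can be sketched as below; in DAML this quantity would be computed on the transformed samples $\phi_M(x)$ and minimized jointly with the source loss.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """Gaussian kernel matrix between the rows of X and Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2(X_src, X_tgt, sigma=1.0):
    """Biased empirical estimate of the squared MMD between two samples."""
    return (rbf_kernel(X_src, X_src, sigma).mean()
            + rbf_kernel(X_tgt, X_tgt, sigma).mean()
            - 2 * rbf_kernel(X_src, X_tgt, sigma).mean())
```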
Different from these methods, which reduce the distribution difference in a new space, importance sampling [43] is introduced in [20] to handle DML under covariate shift. The formulation is given as follows:
$$\arg\min_{A_M \succeq 0} \varepsilon(A_M) = \sum_{i,j} w_{ij}\, l(A_M; x_{Si}, x_{Sj}, y_{Sij}), \quad (7)$$
where $l(\cdot)$ is some pre-defined loss function over a training pair $(x_{Si}, x_{Sj})$ with $y_{Sij} = \pm 1$ indicating whether the two samples are similar or not. The weight $w_{ij} = \frac{P_M(x_{Si}) P_M(x_{Sj})}{P_S(x_{Si}) P_S(x_{Sj})}$ indicates the importance of the pair in the source domain for learning the target metric. Intuitively, if a pair of source samples has a large probability of occurring in the target domain, it should contribute highly to the target metric learning. In particular, for a distance (such as the popular Mahalanobis distance) that is induced by a norm, i.e., $d(x_i, x_j) = \varphi(x_i - x_j)$, we can calculate the weight as $w_{ij} = \frac{P_M(\delta_{Sij})}{P_S(\delta_{Sij})}$, where $\delta_{Sij} = x_{Si} - x_{Sj}$. In [20], the weights and target metric are learned separately, and this may lead to error propagation between them. This issue is tackled in [28], where the weights and target metric are learned simultaneously in a unified framework.
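The weights in Eq. (7) require a density ratio over difference vectors; the sketch below estimates it with kernel density estimation, which is one simple choice and not necessarily the estimator used in [20] (names are illustrative).

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def pair_weights(X_src, X_tgt, src_pairs, bandwidth=1.0, n_ref_pairs=2000, seed=0):
    """Estimate w_ij = P_M(delta_ij) / P_S(delta_ij) over difference vectors delta = x_i - x_j."""
    rng = np.random.default_rng(seed)

    def random_diffs(X):
        idx = rng.integers(0, len(X), size=(n_ref_pairs, 2))
        return X[idx[:, 0]] - X[idx[:, 1]]

    # Fit a density over difference vectors in each domain (KDE is one simple choice).
    kde_src = KernelDensity(bandwidth=bandwidth).fit(random_diffs(X_src))
    kde_tgt = KernelDensity(bandwidth=bandwidth).fit(random_diffs(X_tgt))

    deltas = np.array([X_src[i] - X_src[j] for i, j in src_pairs])
    log_ratio = kde_tgt.score_samples(deltas) - kde_src.score_samples(deltas)
    return np.exp(log_ratio)                  # weights for the loss in Eq. (7)
```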
Discussion
TML via metric approximation is straightforward in that the divergence between the source and target metrics (parameterized by PSD matrices) is directly minimized. A major difference between the various metric approximation based approaches is how the source and target metrics are enforced to be close, e.g., by adopting different types of divergence. These approaches are often limited in that the training complexity is high due to the PSD constraint, and the distance calculation at the inference stage is not efficient for high-dimensional data. Subspace approximation based TML compensates for these shortcomings by reformulating the metric learning as learning a transformation or mapping. The PSD constraint is automatically satisfied, and the learned transformation can be used to derive compressed representations, which facilitate efficient distance estimation or sample matching, where hashing techniques [44] can be involved. This is critical in many applications, such as information retrieval. The main disadvantage of the subspace approximation based methods is that their optimization problems are often non-convex and hence only a local optimum can be obtained. The recent work [10] based on distance approximation also learns a projection instead of the metric, but the optimization is more efficient. All of these approaches do not explicitly deal with the distribution difference, which is the main issue that transfer learning aims to tackle. Distribution approximation based methods focus on this point, usually by minimizing the MMD measure or utilizing the importance sampling strategy.
Related work
TML is quite related to transfer subspace learning (TSL) [45], [46] and transfer feature learning (TFL) [47]. An early work on TSL is presented in [45], which finds a low-dimensional latent space where the distribution difference between the source and target domains is minimized. This algorithm is conducted in a transductive manner, and it is not convenient to derive representations for new samples. This issue is tackled by Si et al. [46], where a generic regularization framework is proposed for TSL based on the Bregman divergence [48]. A low-rank TSL (LTSL) framework is proposed in [49], [50], where the subspace is found by reconstructing the projected target data using the projected source data under the low-rank representation [51], [52] scheme. The main advantage of the framework is that only relevant source data are utilized to find the subspace and noisy information can be filtered out. That is, it can avoid negative transfer. The framework is further extended in [53] to help recover a missing modality in the target domain, and improved in [54] by exploiting both low-rank and sparse structures on the reconstruction matrix.
TFL is very similar to TSL, and a representative method is presented in [47], where the typical MMD is modified to take both the marginal and class-conditional distributions into consideration. More recent works on TFL are built upon powerful deep feature learning. For example, considering that the features in deep neural networks are usually general in the first layers and task-specific in higher layers, Long et al. [55] propose the deep adaptation network (DAN), which freezes the general layers in convolutional neural networks (CNN) [56] and only conducts adaptation in the task-specific layers. Besides, multi-kernel MMD (MK-MMD) [57] is employed to improve kernel selection in MMD. In DAN, only the marginal distribution difference between the source and target domains is exploited. This is improved by the joint adaptation networks (JAN) [58], which are able to reduce the joint distribution divergence using a proposed joint MMD (JMMD). The JMMD can involve both the input features and output labels in domain adaptation. The constrained deep TSL [59] method can also exploit the joint distribution, and the target domain knowledge is incorporated gradually during a progressive transfer procedure.
All of these TSL or TFL approaches have very close relationships to subspace and distribution approximation based TML. Although they do not aim to learn metrics, it is not hard to adapt them for TML by adopting some metric learning loss in these models.

Fig. 5. An example of heterogeneous transfer metric learning. In multi-lingual sentiment classification, the distance metric learned for target reviews (such as those written in Spanish) may not be satisfactory due to insufficient labeled data. Heterogeneous TML improves the metric by using abundant labeled source reviews (such as those written in English), where the data representation is different from that of the target reviews (e.g., due to the different vocabularies).
HETEROGENEOUS TRANSFER METRIC LEARNING
In heterogeneous TML, the different domains have different features (data representations), and sometimes there is a semantic gap, such as between the textual and visual domains. A typical example is multi-lingual sentiment classification as shown in Fig. 5, where we would like to determine the sentiment polarity of a review written in Spanish. The labeled Spanish reviews may be scarce, but it is much easier to collect abundant labeled reviews written in English. Directly applying the metric learned using the labeled English reviews to the sentiment classification of Spanish reviews is infeasible, since the representations of Spanish and English reviews are different due to the varied vocabularies. This issue can be tackled by heterogeneous TML, which improves the distance metric for the target sentiment classification of Spanish reviews using the labeled English reviews.
Inductive heterogeneous TML
Different from the inductive homogeneous setting, the number of labeled data in the source domain can be zero under the inductive heterogeneous setting. This is because the source features may have much stronger representation power than those of the target domain, so no labeled data are required to obtain a good distance function in the source domain.
Heterogeneous TML via subspace approximation
To the best of our knowledge, heterogeneous TML under the inductive setting has only been studied in recent years. For example, a heterogeneous multi-task metric learning (HMTML) method is proposed in [23]. HMTML assumes that the similar/dissimilar constraints are limited in the multiple heterogeneous domains, but that there are large amounts of unlabeled data that have representations in all domains, i.e.,
$$\mathcal{D}_U = \{(x^U_{1n}, \cdots, x^U_{Mn})\}_{n=1}^{N_U}. \qquad (8)$$
Since the different representations correspond to the same (unlabeled) sample, the transformed representations should be close to each other in the subspace. By minimizing the divergence of the transformed representations (or equivalently, maximizing their correlations), each transformation is learned using the information from all domains. This results in improved transformations, and thus better metrics, than learning them separately. In [23], a tensor-based regularization term is designed to exploit the high-order correlations between the different domains. A variant of the model is presented in [24], which uses the class labels to build the domain connection.
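As a rough illustration of this joint learning, the following sketch couples two domain transformations through unlabeled corresponding samples, using a plain Frobenius-norm agreement term in place of the tensor-based regularizer of [23]; the function name, shapes, and gradient callbacks are illustrative assumptions, not part of the cited work.

```python
import numpy as np

def hmtml_step(U1, U2, X1_u, X2_u, grad_metric1, grad_metric2, lr=0.01, gamma=1.0):
    """One gradient step coupling two domain transformations U1 (d1 x d) and
    U2 (d2 x d) through unlabeled corresponding samples (columns of X1_u, X2_u)."""
    # Divergence of the transformed representations of corresponding samples
    diff = U1.T @ X1_u - U2.T @ X2_u   # d x N_U
    g1 = gamma * (X1_u @ diff.T)       # gradient of ||diff||_F^2 w.r.t. U1 (up to a factor 2)
    g2 = -gamma * (X2_u @ diff.T)      # ... and w.r.t. U2
    # Add the per-domain metric learning gradients from the labeled constraints
    U1 = U1 - lr * (grad_metric1(U1) + g1)
    U2 = U2 - lr * (grad_metric2(U2) + g2)
    return U1, U2
```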
In [25], a general heterogeneous TML approach is proposed based on the knowledge fragments transfer [60] strategy. The optimization problem is given by
$$\arg\min_{\phi_M} \varepsilon(\phi_M) = L(\phi_M; \mathcal{D}_M) + \gamma R(\{\phi_{Mc}(\cdot)\}, \{\varphi_{Sc}(\cdot)\}; \mathcal{D}_U), \qquad (9)$$
where $\phi_{Mc}(\cdot)$ is the $c$'th coordinate of the mapping $\phi_M$, and $\varphi_{Sc}(\cdot)$ is the $c$'th fragment of the knowledge in the source domain. The source knowledge fragments are represented by mapping functions, which are learned by applying existing DML algorithms in the source domain beforehand. The target metric (which also consists of multiple mapping functions) is then enforced to agree with the source fragments on the unlabeled corresponding data. This helps learn an improved metric in the target domain, since the pre-trained source distance function is assumed to be superior to a target distance function learned without knowledge transfer. Intuitively, the target subspace is enforced to approach a better source subspace. An improvement of the model is presented in [12], where the locality of the geometric structure of the data distribution is preserved via manifold regularization [35].
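As a rough sketch (not the exact model of [25]), the following evaluates an Eq. (9)-style objective for a linear target mapping $\phi_M(x) = U_M^T x$, using a hinge triplet loss for $L$ and a squared agreement penalty between the target coordinates and the pre-trained source fragments; `phi_S`, the triplet loss, and all shapes are assumptions.

```python
import numpy as np

def fragment_transfer_objective(U_M, X_M_lab, triplets, phi_S, X_M_u, X_S_u, gamma=1.0):
    """Eq. (9)-style objective: metric loss on labeled target data plus
    agreement with pre-trained source knowledge fragments phi_S."""
    Z = U_M.T @ X_M_lab                        # target embeddings, d x N_M
    loss = 0.0
    for a, p, n in triplets:                   # hinge loss: d(a,p) + 1 <= d(a,n)
        d_ap = np.sum((Z[:, a] - Z[:, p]) ** 2)
        d_an = np.sum((Z[:, a] - Z[:, n]) ** 2)
        loss += max(0.0, d_ap - d_an + 1.0)
    # Agreement with the source fragments on unlabeled corresponding data
    frag_target = U_M.T @ X_M_u                # c'th row = phi_Mc on target data
    frag_source = phi_S(X_S_u)                 # pre-trained source fragments, d x N_U
    reg = np.sum((frag_target - frag_source) ** 2)
    return loss + gamma * reg
```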
Heterogeneous TML via distance approximation
We can not only enforce the subspace representations of corresponding samples in different domains to be close, but also let the distances of corresponding sample pairs agree with each other across domains. For example, an online heterogeneous TML approach is proposed in [30], which also assumes that there are abundant unlabeled corresponding data, but that the labeled target sample pairs are provided in a sequential manner (one by one). Given a new labeled training pair, the target metric is updated as:
$$A_M^{k+1} = \arg\min_{A_M \succeq 0} \varepsilon(A_M) = L(A_M) + \gamma_A D_{LD}(A_M, A_M^k) + \gamma_I R(d_{A_M}, d_{A_S}; \mathcal{D}_U), \qquad (10)$$
where $L(A_M)$ is the empirical loss w.r.t. the current labeled pair, $D_{LD}(\cdot, \cdot)$ is the LogDet divergence [34], and $R(d_{A_M}, d_{A_S}; \mathcal{D}_U)$ is a regularization term that enforces agreement between the source and target distances (of corresponding pairs). Here, $A_M^k$ is the target metric obtained in the previous round, initialized as an identity matrix. The source metric $A_S$ can be an identity matrix if the source features are much more powerful than the target features. By pre-calculating $A_S$ and formulating the term $R(\cdot)$ under the manifold regularization theme [35], an online algorithm is developed to update the target metric $A_M$ efficiently.
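The sketch below illustrates the structure of such an update; the LogDet proximity of Eq. (10) is approximated by a small gradient step away from $A_M^k$ followed by a projection onto the PSD cone, and the manifold-regularized term $R(\cdot)$ is reduced to a sampled distance-agreement penalty, so this is an illustration rather than the algorithm of [30].

```python
import numpy as np

def online_tml_update(A_k, x1, x2, y, A_S, Xu_M, Xu_S, lr=0.05, g_I=0.1):
    """Simplified Eq. (10)-style update for a newly arriving labeled pair
    (x1, x2) with y = +1 (similar) or -1 (dissimilar)."""
    d = x1 - x2
    # Hinge loss max(0, y * (d^T A d - 1)) on the new pair; gradient y*dd^T when active
    gL = y * np.outer(d, d) if y * (1.0 - d @ A_k @ d) < 0 else np.zeros_like(A_k)
    # Distance agreement on (a few) unlabeled corresponding pairs
    gR = np.zeros_like(A_k)
    for i in range(Xu_M.shape[1] - 1):
        dm = Xu_M[:, i] - Xu_M[:, i + 1]
        ds = Xu_S[:, i] - Xu_S[:, i + 1]
        gap = dm @ A_k @ dm - ds @ A_S @ ds
        gR += 2.0 * gap * np.outer(dm, dm)
    A = A_k - lr * (gL + g_I * gR)
    # Project back onto the PSD cone (stand-in for the LogDet proximity)
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 0.0, None)) @ V.T
```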
Unsupervised heterogeneous TML
There exist a few unsupervised heterogeneous TML approaches that utilize unlabeled corresponding data for metric transfer, with no label information provided in either the source or the target domain ($N_S = N_M = 0$). Under this unsupervised paradigm, the utilized source features should be more expressive or interpretable than the target features, so that the estimated distances in the source domain can be better than those in the target domain.
Heterogeneous TML via subspace approximation
An early work is presented in [22], where the main idea is to maximize the similarity of the unlabeled corresponding pairs in a common subspace, i.e.,
$$\arg\min_{A_M \succeq 0} \varepsilon(A_M) = \sum_{n=1}^{N_U} l(\varphi(\theta_n)), \qquad (11)$$
where $\varphi(\theta) = \frac{1}{1 + \exp(-\theta)}$ with $\theta_n = (x^U_{Mn})^T G\, x^U_{Sn}$ and $G = U_M^T U_S$. Here, $l(\cdot)$ is chosen to be the negative logistic loss and the proximal gradient method is adopted for optimization. A main disadvantage of this TML approach is its high computational complexity, since a costly singular value decomposition (SVD) is involved in each iteration of the optimization.
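The loss and its gradient are easy to express directly; the sketch below omits the low-rank structure and the SVD-based proximal step of [22], and assumes $U_M \in \mathbb{R}^{d \times d_M}$, $U_S \in \mathbb{R}^{d \times d_S}$ so that the embeddings are $U x$.

```python
import numpy as np

def similarity_loss_and_grad(U_M, U_S, X_M_u, X_S_u):
    """Eq. (11)-style negative logistic loss over corresponding columns of
    X_M_u (d_M x N_U) and X_S_u (d_S x N_U)."""
    Z_M = U_M @ X_M_u                        # (d, N_U) target embeddings
    Z_S = U_S @ X_S_u                        # (d, N_U) source embeddings
    theta = np.einsum('dn,dn->n', Z_M, Z_S)  # theta_n = x_Mn^T G x_Sn
    p = 1.0 / (1.0 + np.exp(-theta))         # varphi(theta_n)
    loss = -np.sum(np.log(p + 1e-12))        # sum_n l(varphi(theta_n))
    # d loss / d theta_n = p_n - 1; chain rule through theta_n
    grad_U_M = (Z_S * (p - 1.0)) @ X_M_u.T   # (d, d_M)
    return loss, grad_U_M
```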
Heterogeneous TML via distance approximation
Instead of directly maximizing the likelihood of unlabeled corresponding pairs, Dai et al. [11] propose to use the target samples to approximate the source manifold. The method is inspired by locally linear embedding (LLE) [61], and metric transfer is conducted by enforcing the embeddings of the target samples to preserve the local properties of the source domain. The optimization problem is given by
$$\arg\min_{U_M} \varepsilon(U_M) = \sum_{i=1}^{N_U} \Big\| U_M^T x^U_{Mi} - \sum_{j=1}^{N_U} w_{Sij} \big(U_M^T x^U_{Mj}\big) \Big\|^2, \qquad (12)$$
where $w_{Sij}$ is the weight in the adjacency graph calculated using the source domain features. This enables the distances between samples in the source and target domains to agree with each other on the manifold. The optimization is much more efficient than that of [22], since only a generalized eigenvector problem needs to be solved.
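A compact sketch of this scheme follows: LLE-style reconstruction weights are computed from the source features, and $U_M$ is then obtained from the resulting generalized eigenvector problem; the neighborhood size, the regularization constant, and the output dimension are arbitrary choices here.

```python
import numpy as np
from scipy.linalg import eigh, lstsq

def lle_metric_transfer(X_S, X_M, n_neighbors=5, d_out=10):
    """Eq. (12)-style transfer: source-side LLE weights w_S, then U_M such
    that the target embeddings U_M.T @ x preserve them (cf. LLE [61])."""
    n = X_S.shape[1]
    W = np.zeros((n, n))
    for i in range(n):
        dists = np.sum((X_S - X_S[:, [i]]) ** 2, axis=0)
        nbrs = np.argsort(dists)[1:n_neighbors + 1]
        # Local least-squares reconstruction of x_i from its source neighbors
        w, *_ = lstsq(X_S[:, nbrs], X_S[:, i])
        W[i, nbrs] = w / (w.sum() + 1e-12)
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    # min tr(U^T X_M M X_M^T U) s.t. U^T X_M X_M^T U = I: generalized eig problem
    A = X_M @ M @ X_M.T
    B = X_M @ X_M.T + 1e-6 * np.eye(X_M.shape[0])
    vals, vecs = eigh(A, B)
    return vecs[:, :d_out]                   # columns span the target subspace
```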
Discussion
It is natural to conduct heterogeneous TML via subspace approximation, since the representations of the different domains vary and finding a common representation can facilitate knowledge transfer. Similar to the homogeneous setting, the main drawback is that the optimization problem is usually non-convex. Although this drawback can be remedied by directly learning a PSD matrix, such as with the distance approximation strategy, it is nontrivial to perform efficient distance inference for high-dimensional data or to extend the algorithm to learn nonlinear metrics. Due to the strong ability and rapid development of deep learning, it may be more promising to learn a transformation or mapping than a PSD matrix in TML, based on either subspace or distance approximation.
Related work
Some early heterogeneous transfer learning approaches are not specially designed for DML, but the feature transformation or mapping learned for each domain can be used to derive a metric. For example, in the work on heterogeneous domain adaptation via manifold alignment (DAMA) [62], the class labels are utilized to align the different domains. A mapping function is learned for each domain, and all functions are learned together. After being projected into a common subspace, samples should be close to each other if they belong to the same class and separated otherwise. This is enforced for all samples, whether from the same domain or from different domains. The label information of all the different domains can thus be utilized to learn the shared subspace, and better embeddings (representations) can be learned for the different domains than by learning them separately. In [63], a multi-task discriminant analysis (MTDA) approach is proposed to deal with heterogeneous feature spaces in different domains. MTDA assumes the linear transformation of the $m$'th domain is given by $U_m = W_m H$, which consists of a task-specific part $W_m$ and a part $H$ common to all tasks. All the transformations are then learned in a single optimization problem, which is similar to that of the well-known linear discriminant analysis (LDA) [64]. In [65], a multi-task nonnegative matrix factorization (MTNMF) approach is proposed to learn the different mappings for all domains by simultaneously factorizing their data representation and feature-class correlation matrices. The factorized class representation matrix is assumed to be shared by all tasks, which leads to a common subspace for the different domains.
All of these approaches have very close relationships to subspace approximation based heterogeneous TML, but they mainly utilize fully-supervised class labels to learn the feature mappings for the different domains. As mentioned previously, it is common to utilize weakly-supervised pair/triplet constraints in DML, and it is not hard to adapt these approaches for heterogeneous TML by adopting a metric learning loss w.r.t. pair/triplet constraints in these models.
APPLICATIONS
In general, for any application where DML is appropriate, TML is a good candidate when the label information is scarce or hard to collect. In Table 4, we summarize the different applications in which TML has been utilized.
Homogeneous TML
Computer vision
Similar to DML [66], most TML approaches are applied in computer vision. For example, the effectiveness of many homogeneous TML methods is verified on the common image classification task, including handwritten letter/digit classification [15], [16], [18], [39], face recognition [14], [19], natural scene categorization, and object recognition [16], [19], [27].
DML is particularly suitable and crucial for some applications, such as face verification [8], person re-identification [5], and image retrieval [10], because in these applications the results can be directly inferred from the distances between samples. Face verification aims to decide whether two face images belong to the same person or not. In [8], TML is applied to face verification across different datasets, whose distributions vary. The goal of person re-identification is to decide whether the people appearing in multiple cameras are the same person or not, where the cameras often do not have overlapping views. The data distributions of the images captured by different cameras vary due to varying illumination, background, etc. Moreover, the distribution may change over time even for the same camera. Hence, TML can be very useful in person re-identification [5], [8], [67]. An efficient MTML approach is proposed in [10] to make use of auxiliary datasets for face retrieval, where the tasks vary across the different datasets. Stochastic gradient descent (SGD) is adopted for optimization, and the algorithm is scalable to large amounts of training data and high-dimensional features.
Speech recognition
Different groups of speakers utter English alphabet letters in different ways. In [17], [18], [36], alphabet recognition in each group is regarded as a task, and MTML is employed to learn the metrics of the different groups together. Similarly, since men and women have different pronunciation styles, vowel classification is performed for two groups according to gender, and MTML is adopted to learn their metrics simultaneously by making use of all the available labeled data [18].
Other applications
In [68], MTML is used for prediction in social networks. For example, citation prediction aims to predict the referencing between articles given their contents. The citation patterns of different areas (such as computer science and engineering) are different but related, and thus MTML is adopted to learn the prediction models of multiple areas simultaneously. Social circle prediction assigns a person to appropriate social circles given his/her profile. Different types of social circles (such as family members and colleagues) are different but related with each other, and hence MTML is applied to improve the performance. In [17], [18], [29], MTML is applied to customer information prediction in an insurance company. There are multiple variables that can be used to predict the interest of a person in buying a certain insurance policy. Each variable takes discrete values and can be predicted using the other variables. The predictions of the different variables can be conducted together since they are correlated with each other.
Heterogeneous TML
Computer vision
Similar to homogeneous TML, heterogeneous TML is also mainly applied in the computer vision community, including image classification tasks such as face recognition [63], natural scene categorization [12], [24], [25], and object recognition [12], [23], [25], as well as image clustering [11], image retrieval [11], [12], [69], and face verification [12]. In these applications, either the feature dimensions vary or different types of features are extracted for the source and target domains. In particular, expensive features (with strong representation power but high computational cost, such as CNN features [70]) can be used to guide the learning of an improved metric for relatively cheap features (such as LBP [71]), and interpretable text features can help the metric learning of visual features, which are often hard to interpret [22], [65].
In [11], heterogeneous TML is adopted to improve image super-resolution, i.e., generating a high-resolution (HR) image from its low-resolution (LR) counterpart. The method is based on JOR [72], an example-based super-resolution approach. JOR needs to find the nearest neighbors of the LR images, and a metric is learned in [11] to replace the Euclidean metric in the k-NN search by leveraging information from the HR domain.
Text analysis
In the text analysis area, heterogeneous TML is mainly applied by using labeled documents written in one language (such as English) to help the analysis of documents written in another language (such as Spanish). The utilized vocabularies vary across languages, and thus the data representations are heterogeneous for the different domains. Typical examples include text categorization [23], [62], sentiment classification [65], and document retrieval [62]. In [65], heterogeneous MTML is applied to email spam detection, since the vocabularies of different persons' emails vary.
CONCLUSION AND DISCUSSION
Summary
In this survey, we provide a comprehensive and structured overview of transfer metric learning (TML) methods and their applications. We broadly group TML into homogeneous and heterogeneous TML according to the feature setting. Similar to [7], the TML approaches can also be classified into inductive, transductive, and unsupervised TML according to the label setting. According to the transfer strategy, we further categorize the TML approaches into four contexts: TML via metric approximation, TML via distribution approximation, TML via subspace approximation, and TML via distance approximation.
Homogeneous TML has been studied extensively under the inductive setting, where various transfer strategies can be adopted. In the transductive setting, TML is mainly conducted by distribution approximation, and there are still no unsupervised methods for homogeneous TML. Unsupervised TML can be carried out under the heterogeneous setting: if more powerful features are utilized in the source domain, then the distance estimates there can be better than those in the target domain [11]. Since the data representations vary across domains in heterogeneous TML, most of these approaches find a common subspace for knowledge transfer.
Challenges and future directions
We finally identify some challenges in TML and speculate on several possible future directions.
Selective transfer in TML
Current transfer learning and TML algorithms usually assume that the source tasks or domain samples are positively related to the target ones. However, this assumption may not hold in real-world applications [15], [73]. The TML algorithm presented in [15] can leverage negatively correlated tasks by learning a task correlation matrix. In [74], the relations of 26 popular visual learning tasks are learned using a large image dataset, where each image has annotations in all tasks. This leads to a task taxonomy map, which can be used to guide the choice of appropriate supervision policies in transfer learning. Different from these approaches, which consider selective transfer [75] at the task level, a heterogeneous transfer learning method based on the attention mechanism is proposed in [73], which can avoid negative transfer at the instance level. The low-rank TML model presented in [50] can also avoid negative transfer to some extent by filtering out noisy information in the source domain.
Task correlations have been exploited for metric approximation based TML [15], and the attention scheme can be used for subspace approximation based TML following [73]. It is still unclear how to conduct selective transfer in distribution and distance approximation based TML. Adopting the attention scheme may be one choice, but this scheme cannot make use of negative transfer. Therefore, a promising future direction may be to conduct selective transfer at the hypothesis-space level, so that both positive and negative transfer can be effectively utilized.
More theoretical understanding of TML
There is a theoretical study in [12], which shows that the generalization ability of the target metric can be improved by directly enforcing the source feature mappings to agree with the target mappings. But there is still a lack of general analysis schemes (such as [76], [77], [78]) and theoretical results for TML. In particular, more theoretical studies should be conducted to understand when and how the source domain knowledge can help the target metric learning.
TML for handling complex data
Most current TML approaches only learn linear metrics (such as the Mahalanobis metric). However, there may be nonlinear structure in the data, e.g., in most visual feature representations. A linear metric may fail to capture such structure, and hence it is desirable to learn a nonlinear metric for the target domain in TML. There have been several works on nonlinear homogeneous TML based on neural networks [8], [55], [58], but all of them are mainly designed for continuous real-valued data and learn real-valued metrics. More studies could be conducted on histogram data or on learning binary target metrics: histogram data are popular in visual analytics applications, and binary metrics are efficient in distance calculation. As far as we are aware, there is only one nonlinear TML work under the heterogeneous setting [25] (with an extension presented in [12]), where gradient boosting regression trees (GBRT) [79], [80] are adopted to learn a nonlinear metric in the target domain. Other nonlinear learning techniques could be investigated, and binary metrics could be learned to accelerate prediction. In addition, when the structure of the target data distribution is very complex, it could be a good choice to learn a Riemannian metric [81] or multiple local metrics [82] to approximate the geodesic distance in the target domain.
TML for handling changing and big data
In TML, all the training data in the source and target domains are usually assumed to be provided at once, and a fixed target metric is learned. However, in real-world applications, the data usually come in a sequential order, and the data distribution may change over time. For example, tremendous amounts of data are uploaded to the web every day, and for a robot, the environment changes over time and feedback is provided continuously. Therefore, it is desirable to develop TML algorithms that make the metric adapt to such changes. Closely related topics include online learning [83], [84] and lifelong learning [85]. There is a recent attempt in [30], where an online heterogeneous TML method is developed. However, this approach needs abundant unlabeled corresponding data in the source and target domains for knowledge transfer, and hence it may not be efficient when vast amounts of unlabeled data are required to achieve satisfactory accuracy.
Although the number of training data in the target domain is often assumed to be small, the continuously changing data are "big" in the long term. In addition, when the feature dimension is high, the computational cost of evaluating the distances between vast amounts of samples with a learned Mahalanobis metric is intolerable; a typical example is information retrieval. Therefore, it is desirable to learn a target metric that is efficient in distance calculation, e.g., a Hamming distance metric [86], [87] or feature hashing [44], [88], [89], [90] in the target domain.
TML for handling extreme cases
One-shot learning [91] and zero-shot learning [92] are two extreme cases of transfer learning. In these cases, the number of labeled data in the target domain is very small (sometimes only one) or even zero. The main goal is to recognize rare or unseen classes [93], where some additional knowledge (such as descriptions of the relations between existing and unseen classes) may be provided. This is more like human learning, and is very useful in practice. These settings are quite related to the concept of domain generalization [94], [95], [96].
DML has been found to be useful in learning unknown classifiers [97] (with an extension in [98]), but it does not aim to learn a metric in the target domain. In [99], an unbiased metric is learned across different domains, but no specific information about the target domain is leveraged. Although some existing TML algorithms allow no labeled data in the target domain [8], [11], they need large amounts of unlabeled target data, which can be regarded as additional knowledge. If we do not have unlabeled data, is it possible to utilize other semantic information to help the target metric learning? There is an attempt in [100], where the ColorChecker Chart is utilized as additional information for person re-identification under the one-shot setting. But such information is not easy to access and not general across applications. Hence, more common and easily accessible knowledge should be identified and explored for general TML under the one/zero-shot setting.
| 8,941 |
1810.03979
|
2966206672
|
After the tremendous success of convolutional neural networks in image classification, object detection, speech recognition, etc., there is now rising demand for deployment of these compute-intensive ML models on tightly power constrained embedded and mobile systems at low cost as well as for pushing the throughput in data centers. This has triggered a wave of research towards specialized hardware accelerators. Their performance is often constrained by I/O bandwidth and the energy consumption is dominated by I/O transfers to off-chip memory. We introduce and evaluate a novel, hardware-friendly compression scheme for the feature maps present within convolutional neural networks. We show that an average compression ratio of 4.4× relative to uncompressed data and a gain of 60% over existing methods can be achieved for ResNet-34 with a compression block requiring <300 bit of sequential cells and minimal combinational logic.
|
Several methods describe hardware accelerators which exploit feature map sparsity to reduce computation: Cnvlutin @cite_3 , SCNN @cite_29 , Cambricon-X @cite_28 , NullHop @cite_13 , Eyeriss @cite_8 , and EIE @cite_26 . Their focus is on power-gating or skipping some of the operations and memory accesses. While this automatically entails defining a scheme to feed the data into the system, minimizing the bandwidth was not the primary objective of any of them. They all use one of three methods. Zero-RLE (used in SCNN): a simple run-length encoding of the zero values, i.e., a single prefix bit followed by either the number of zero values or the non-zero value. Zero-free neuron array format (ZFNAf) (used in Cnvlutin): similarly to the widely-used compressed sparse row (CSR) format, non-zero elements are encoded with an offset and their value. Compressed column storage (CCS) format (e.g., used in EIE): similar to ZFNAf, but the offsets are stored in relative form, thus requiring fewer bits to store them. Few bits are then sufficient, and in case they are all exhausted, a zero value can be encoded as if it were non-zero.
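To make the offset scheme concrete, here is a small sketch of a CCS-style encoder with relative offsets; the 4-bit offset width and the overflow-as-zero trick follow the EIE description, while the function name and output format are our own illustration.

```python
def ccs_encode(values, offset_bits=4):
    """CCS-style relative-offset encoding: each non-zero is stored with its
    distance to the previous non-zero; on offset overflow, a padding zero
    is emitted as if it were a non-zero value."""
    max_off = (1 << offset_bits) - 1
    pairs, last = [], -1
    for i, v in enumerate(values):
        if v == 0:
            continue
        off = i - last - 1
        while off > max_off:                 # offset field exhausted
            pairs.append((max_off, 0))       # encode a zero as a 'non-zero'
            off -= max_off + 1
        pairs.append((off, v))
        last = i
    return pairs                             # list of (relative offset, value)
```

For example, `ccs_encode([0] * 20 + [7])` yields `[(15, 0), (4, 7)]`: the 4-bit offset saturates at 15, so a padding zero is inserted before the actual value.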
|
{
"abstract": [
"State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations, and dominates the required power. Previously proposed 'Deep Compression' makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE 120× energy saving; Exploiting sparsity saves 10×; Weight sharing gives 8×; Skipping zero activations from ReLU saves another 3×. Evaluated on nine DNN benchmarks, EIE is 189× and 13× faster when compared to CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS working directly on a compressed network, corresponding to 3 TOPS on an uncompressed network, and processes FC layers of AlexNet at 1.88×104 frames sec with a power dissipation of only 600mW. It is 24,000× and 3,400× more energy efficient than a CPU and GPU respectively. Compared with DaDianNao, EIE has 2.9×, 19× and 3× better throughput, energy efficiency and area efficiency.",
"Deep convolutional neural networks (CNNs) are widely used in modern AI systems for their superior accuracy but at the cost of high computational complexity. The complexity comes from the need to simultaneously process hundreds of filters and channels in the high-dimensional convolutions, which involve a significant amount of data movement. Although highly-parallel compute paradigms, such as SIMD SIMT, effectively address the computation requirement to achieve high throughput, energy consumption still remains high as data movement can be more expensive than computation. Accordingly, finding a dataflow that supports parallel processing with minimal data movement cost is crucial to achieving energy-efficient CNN processing without compromising accuracy. In this paper, we present a novel dataflow, called row-stationary (RS), that minimizes data movement energy consumption on a spatial architecture. This is realized by exploiting local data reuse of filter weights and feature map pixels, i.e., activations, in the high-dimensional convolutions, and minimizing data movement of partial sum accumulations. Unlike dataflows used in existing designs, which only reduce certain types of data movement, the proposed RS dataflow can adapt to different CNN shape configurations and reduces all types of data movement through maximally utilizing the processing engine (PE) local storage, direct inter-PE communication and spatial parallelism. To evaluate the energy efficiency of the different dataflows, we propose an analysis framework that compares energy cost under the same hardware area and processing parallelism constraints. Experiments using the CNN configurations of AlexNet show that the proposed RS dataflow is more energy efficient than existing dataflows in both convolutional (1.4× to 2.5×) and fully-connected layers (at least 1.3× for batch size larger than 16). The RS dataflow has also been demonstrated on a fabricated chip, which verifies our energy analysis.",
"Neural networks (NNs) have been demonstrated to be useful in a broad range of applications such as image recognition, automatic translation and advertisement recommendation. State-of-the-art NNs are known to be both computationally and memory intensive, due to the ever-increasing deep structure, i.e., multiple layers with massive neurons and connections (i.e., synapses). Sparse neural networks have emerged as an effective solution to reduce the amount of computation and memory required. Though existing NN accelerators are able to efficiently process dense and regular networks, they cannot benefit from the reduction of synaptic weights. In this paper, we propose a novel accelerator, Cambricon-X, to exploit the sparsity and irregularity of NN models for increased efficiency. The proposed accelerator features a PE-based architecture consisting of multiple Processing Elements (PE). An Indexing Module (IM) efficiently selects and transfers needed neurons to connected PEs with reduced bandwidth requirement, while each PE stores irregular and compressed synapses for local computation in an asynchronous fashion. With 16 PEs, our accelerator is able to achieve at most 544 GOP s in a small form factor (6.38 mm2 and 954 mW at 65 nm). Experimental results over a number of representative sparse networks show that our accelerator achieves, on average, 7.23x speedup and 6.43x energy saving against the state-of-the-art NN accelerator.",
"Convolutional Neural Networks (CNNs) have emerged as a fundamental technology for machine learning. High performance and extreme energy efficiency are critical for deployments of CNNs, especially in mobile platforms such as autonomous vehicles, cameras, and electronic personal assistants. This paper introduces the Sparse CNN (SCNN) accelerator architecture, which improves performance and energy efficiency by exploiting the zero-valued weights that stem from network pruning during training and zero-valued activations that arise from the common ReLU operator. Specifically, SCNN employs a novel dataflow that enables maintaining the sparse weights and activations in a compressed encoding, which eliminates unnecessary data transfers and reduces storage requirements. Furthermore, the SCNN dataflow facilitates efficient delivery of those weights and activations to a multiplier array, where they are extensively reused; product accumulation is performed in a novel accumulator array. On contemporary neural networks, SCNN can improve both performance and energy by a factor of 2.7x and 2.3x, respectively, over a comparably provisioned dense CNN accelerator.",
"This work observes that a large fraction of the computations performed by Deep Neural Networks (DNNs) are intrinsically ineffectual as they involve a multiplication where one of the inputs is zero. This observation motivates Cnvlutin (CNV), a value-based approach to hardware acceleration that eliminates most of these ineffectual operations, improving performance and energy over a state-of-the-art accelerator with no accuracy loss. CNV uses hierarchical data-parallel units, allowing groups of lanes to proceed mostly independently enabling them to skip over the ineffectual computations. A co-designed data storage format encodes the computation elimination decisions taking them off the critical path while avoiding control divergence in the data parallel units. Combined, the units and the data storage format result in a data-parallel architecture that maintains wide, aligned accesses to its memory hierarchy and that keeps its data lanes busy. By loosening the ineffectual computation identification criterion, CNV enables further performance and energy efficiency improvements, and more so if a loss in accuracy is acceptable. Experimental measurements over a set of state-of-the-art DNNs for image classification show that CNV improves performance over a state-of-the-art accelerator from 1.24× to 1.55× and by 1.37× on average without any loss in accuracy by removing zero-valued operand multiplications alone. While CNV incurs an area overhead of 4.49 , it improves overall EDP (Energy Delay Product) and ED2P (Energy Delay Squared Product) on average by 1.47× and 2.01×, respectively. The average performance improvements increase to 1.52× without any loss in accuracy with a broader ineffectual identification policy. Further improvements are demonstrated with a loss in accuracy.",
"Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though graphical processing units are most often used in training and deploying CNNs, their power efficiency is less than 10 GOp s W for single-frame runtime inference. We propose a flexible and efficient CNN accelerator architecture called NullHop that implements SOA CNNs useful for low-power and low-latency application scenarios. NullHop exploits the sparsity of neuron activations in CNNs to accelerate the computation and reduce memory requirements. The flexible architecture allows high utilization of available computing resources across kernel sizes ranging from @math to @math . NullHop can process up to 128 input and 128 output feature maps per layer in a single pass. We implemented the proposed architecture on a Xilinx Zynq field-programmable gate array (FPGA) platform and presented the results showing how our implementation reduces external memory transfers and compute time in five different CNNs ranging from small ones up to the widely known large VGG16 and VGG19 CNNs. Postsynthesis simulations using Mentor Modelsim in a 28-nm process with a clock frequency of 500 MHz show that the VGG19 network achieves over 450 GOp s. By exploiting sparsity, NullHop achieves an efficiency of 368 , maintains over 98 utilization of the multiply–accumulate units, and achieves a power efficiency of over 3 TOp s W in a core area of 6.3 mm2. As further proof of NullHop’s usability, we interfaced its FPGA implementation with a neuromorphic event camera for real-time interactive demonstrations."
],
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_3",
"@cite_13"
],
"mid": [
"2285660444",
"2442974303",
"2565851976",
"2625457103",
"2516141709",
"2623629680"
]
}
|
Extended Bit-Plane Compression for Convolutional Neural Network Accelerators
|
Computer vision has become a key ingredient for automated data analysis over a broad range of real-world applications: medical diagnostics [1], industrial quality assurance [2], video surveillance [3], advanced driver assistance systems [4], and many others. Many of these applications have only recently become feasible due to the tremendous increases in accuracy, even surpassing human capabilities [5], that have come with the rise of deep learning, and particularly, convolutional neural networks (CNNs, ConvNets).
While CNN-based methods often require a significant computational effort, many of these applications should run in real time on embedded and mobile systems. This has driven the development of specialized platforms, dedicated hardware accelerators, and optimized algorithms to reduce the number of compute operations as well as the precision requirements for the arithmetic operations [6]- [15].
When looking at these hardware platforms, the energy associated with loading and storing intermediate results/feature maps (and gradients during training) in external memory is not only significant, but often clearly higher than the energy used in computation and on-chip data buffering. This is even more striking when looking at networks optimized to reduce the computation energy by quantizing the weights to one bit, two bits, or power-of-two values, thereby eliminating the need for high-precision multiplications [16]- [20].
Many compression methods for CNNs have been proposed over the last few years. However, many of them focus exclusively on 1) compressing the parameters/weights, which make up only a small share of the energy-intensive off-chip communication [21]- [23], 2) exploiting the sparsity of intermediate results, which is not always present (e.g., in partial results of a convolution layer, or otherwise before the activation function is applied) and is not optimal in the sense that the non-uniform value distribution is not capitalized on [24]- [26], or 3) very complex methods requiring large dictionaries, or otherwise not suitable for a small, energy-efficient hardware implementation, often targeting the efficient distribution and storage of trained models to mobile devices or the transmission of intermediate feature maps from/to mobile devices over a costly communication link [23]. In this paper, we propose and evaluate a simple compression scheme for intermediate feature maps that exploits sparsity as well as the distribution of the remaining values. It is suitable for a very small and energy-efficient implementation in hardware (<300 bit of registers), and could be inserted as a stream (de-)compressor before/after a DMA controller to compress the data by 4.4× for 8-bit AlexNet.

(Footnote: The authors would like to thank armasuisse Science & Technology for funding this research. This project was supported in part by the EU's H2020 programme under grant no. 732631 (OPRECOMP).)
III. COMPRESSION ALGORITHM
An overview of the proposed algorithm is shown in Fig. 2. The value stream is decomposed into a zero/non-zero stream, on which we apply run-length encoding to compress the zero bursts commonly occurring in the data, and a stream of non-zero values, which we encode using bit-plane compression. The latter compresses a fixed number of words n jointly, and the resulting compressed bit-stream is injected immediately after at least n non-zero values have been compressed.
A. Zero/Non-Zero Encoding with RLE
The run-length encoder simply compresses bursts of 0s with a single 0 followed by a fixed number of bits which encode the burst length. Non-zero values, which at this point are single 1-bits, are not run-length encoded, i.e., a 1 is emitted for each of them. If the length of a zero burst exceeds the corresponding maximum burst length, the maximum is encoded and the remaining bits are encoded independently, i.e., in the next code symbol.
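A behavioral sketch of this encoder is given below; the exact width of the burst-length field and the bit-level packing are assumptions, as they are parametrizable in the design.

```python
def zero_rle_encode(flags, burst_bits=4):
    """Zero/non-zero stream encoder of Sec. III-A: a '1' per non-zero flag,
    a '0' plus a burst_bits-wide field (burst length - 1) per zero burst.
    Bursts longer than the maximum start a new code symbol."""
    max_burst = 1 << burst_bits              # encodable burst lengths: 1..max_burst
    out, i = [], 0
    while i < len(flags):
        if flags[i] == 1:
            out.append('1')
            i += 1
        else:
            run = 1
            while i + run < len(flags) and flags[i + run] == 0 and run < max_burst:
                run += 1
            out.append('0' + format(run - 1, f'0{burst_bits}b'))
            i += run
    return ''.join(out)
```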
B. Bit-Plane Compression
An overview of the bit-plane compressor (BPC) used to compress the non-zero values is shown in Fig. 1. For BPC, a set of n words of m bit each (a data block) is compressed by first building the differences between each two consecutive words and storing the first word as the base. This exploits the fact that neighboring values are often similar.
The data items storing these differences are then viewed as m + 1 bit-planes of n − 1 bit each (delta bit-planes, DBPs). Neighboring DBPs are XOR-ed, the results being called DBX, and the DBP of the most significant bit is kept as the base-DBP. The results are fed into bit-plane encoders, which compress the DBX and DBP values to a bit-stream following Table I. Most of these encodings are applied independently per DBX symbol. However, the first can be used to jointly encode multiple consecutive bit-planes at once, if they are all zero. This is where the correlation of neighboring values is best exploited. Note also the importance of the XOR-ing step: it maps two's-complement negative values close to zero to words consisting mostly of zero-bits as well. The proposed compression method can be applied to integers of various word widths, but also to floating-point data types, although this affects the compression ratio negatively.
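The following is a software sketch of this front-end transform; words are non-negative m-bit integers, and the deltas are assumed to fit in m + 1 bits, so numpy's arithmetic right shift yields their two's-complement bit-planes directly.

```python
import numpy as np

def bpc_transform(words, m=16):
    """BPC front-end of Sec. III-B: deltas of consecutive words (first word
    kept as base), viewed as m+1 bit-planes, then neighboring delta
    bit-planes XOR-ed into DBX symbols."""
    words = np.asarray(words, dtype=np.int64)
    base = int(words[0])
    deltas = np.diff(words)                                  # n-1 signed differences
    # Two's-complement bits m..0 of each delta, MSB first
    planes = (deltas[:, None] >> np.arange(m, -1, -1)) & 1   # (n-1) x (m+1)
    dbp = planes.T                                           # m+1 delta bit-planes
    dbx = dbp[:-1] ^ dbp[1:]                                 # XOR of neighboring DBPs
    return base, dbp[0], dbx                                 # base word, base-DBP, DBX
```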
C. Hardware Suitability
The proposed algorithm is very hardware friendly: no codebook needs to be stored, and only a few data words need to be kept in memory. From the overview (cf. Fig. 2), the Zero-RLE mostly consists of a counter, and the non-zero check is also negligible in size. The buffer and packer assembles the bit-stream and needs very little logic and a few bits of storage to pack the resulting bit-stream into words. The last remaining unit, the bit-plane encoder, is shown in Fig. 3. In terms of registers, only the base value (m bit), the previous value to build the differences (m bit), and a (n − 1) × (m + 1) bit shift register are needed (with e.g. n = m = 16, a total of <300 bit). Only very little logic is required as well: a single subtractor, a simple zero-RLE encoder, and the DBP encoder unit realizing the mapping in Table I. The logic operations are also very regular and fairly low-cost in terms of size and energy. The resulting compression reduces the energy spent on interfaces to DRAM and on inter-chip or back-plane communication (the corresponding standards specify very efficient power-down modes [32], [33]), potentially saves DRAM refresh cycles for the saved memory area [1], and provides an alternative to increasing the bandwidth of such interfaces, which would imply more expensive packages, circuit boards, and additional on-chip circuits (e.g., PLLs, on-chip termination, etc.) [32], [33].
IV. RESULTS
A. Experimental Setup
Where not otherwise stated, we perform our experiments on AlexNet, using images from the ILSVRC validation set. All models we used were pre-trained and downloaded from the PyTorch/Torchvision data repository. Some of the experiments are performed with fixed-point data types (default: 16-bit fixed-point). For these, the feature maps were normalized to exploit the full range, i.e., the worst-case scenario from a compression point of view. All feature maps were extracted after the ReLU activations.
B. Sparsity, Activation Histogram & Data Layout
Neural networks are known to have sparse feature maps after a ReLU activation layer, which can be applied on-the-fly after the convolution layer and, possibly, batch normalization. However, the sparsity varies significantly across the layers within a network as well as across different CNNs. Sparsity is a key aspect when compressing feature maps, and we analyze it in Fig. 4.
The sparse values are not independently distributed but rather occur in bursts when the 4D data tensor is laid out in one of the obvious formats. The most commonly used formats are NCHW and NHWC, which are those supported by most frameworks and the widely used Nvidia cuDNN backend. NCHW is the preferred format for cuDNN and the default memory layout; it means that neighboring values in the horizontal direction are stored next to each other in memory, before the vertical, channel, and batch dimensions. NHWC is the default format of TensorFlow, has long been used in computer vision, and has the advantage of simple non-strided computation of inner products in the channel (i.e., feature map) dimension. Further reasonable options, which we include in our analysis, are CHWN and HWCN, although most use cases with hardware acceleration target real-time, low-latency inference and thus operate with a batch size of 1. We analyze the distribution of the length of zero bursts for these four data layouts at various depths within the network in Fig. 5.
The results clearly show that having the spatial dimensions (H, W) next to each other in the data stream provides the longest zero bursts (lowest cumulative distribution curve) and thus better compressibility than the other formats. This is also aligned with intuition: feature map values mark the presence of certain features and can be expected to be smooth. Inspecting the feature maps of CNNs commonly shows that they behave like 'heat maps' marking the presence of certain geometric features nearby. Based on these results, we perform all the following evaluations with the NCHW data layout. Note also that the bursts of non-zero values are mostly very short, such that there is limited gain in applying RLE to the one-bits as well.
To compress further, beyond exploiting the sparsity, the data has to remain compressible. This is definitely the case, as can be seen in the histograms of the activation distributions shown in Fig. 6, and it is a strong indication that additional compression of the non-zero data is possible.
C. Selecting Parameters
The proposed method has two parameters: the maximum length of a zero sequence that can be encoded with a single code symbol of the Zero-RLE, and the BPC block size (n, the number of non-zero words encoded jointly).
Max. Zero Burst Length: We first analyze the effect of varying the maximum zero burst length of Zero-RLE on the compression ratio for various data word widths in Table II. The optimal value is arguably the same for our proposed method, since a constant offset in compressing the non-zero values does not affect the optimal choice of this parameter (just like the word width has no effect on it). The results also serve as a baseline for Zero-RLE and ZVC. It is worth noting that ZVC corresponds to Zero-RLE with a max. burst length of 1, yet breaks the trend shown in Table II. This is due to an inefficiency of Zero-RLE in this corner: for a zero burst of length 1, ZVC requires 1 bit, whereas Zero-RLE with a max. burst length of 2 takes 2 bit. For a zero burst of length 2, ZVC encodes 2 symbols of 1 bit each, and Zero-RLE takes 2 bit as well. ZVC thus always performs at least as well for such a short max. burst length.
BPC Block Size: We analyze the effect of the BPC block size parameter in Fig. 7 at various depths within the network. The best compression ratio is achieved with a block size of 16 across all layers. A block size of 8 might also be considered to minimize the resources of the (de-)compression hardware block, at a small drop in compression ratio.
D. Total Compression Factor
We analyze the total compression factor over all feature maps of AlexNet, ResNet-34, and SqueezeNet in Fig. 8. For AlexNet, we note the high compression ratio of around 3× already achieved by Zero-RLE and ZVC, which is very similar for all data types. We further see that pure BPC is not suitable, since it introduces too much overhead when encoding only zero values. For ResNet-34 and SqueezeNet, the gains from exploiting only the sparsity are significantly lower, at around 1.55× and 1.7×. The proposed method clearly outperforms previous approaches, with compression ratios of 4.45×, 2.45×, and 2.8× (for 8-bit fixed-point), respectively.
The gains for 8-bit fixed-point data are significantly higher than for the other data formats. For most input data, CNN feature maps included, the most important information is carried in the more significant bits, and in the case of floats, in the exponent. The less significant bits appear mostly as noise to the encoder and cannot be compressed without accuracy loss, so this behavior, a lower compression ratio for wider word widths, is expected.
V. CONCLUSION

We have presented and evaluated a novel compression method for CNN feature maps. The proposed algorithm achieves an average compression ratio of 4.4× on AlexNet (+35% over previous methods), 2.45× on ResNet-34 (+60%), and 2.8× on SqueezeNet (+65%) for 8 bit data, and thus clearly outperforms the state-of-the-art, while fitting a very tight hardware resource budget with <300 bit of data and very little compute logic.
| 2,057 |
1810.03979
|
2966206672
|
After the tremendous success of convolutional neural networks in image classification, object detection, speech recognition, etc., there is now rising demand for deployment of these compute-intensive ML models on tightly power constrained embedded and mobile systems at low cost as well as for pushing the throughput in data centers. This has triggered a wave of research towards specialized hardware accelerators. Their performance is often constrained by I/O bandwidth and the energy consumption is dominated by I/O transfers to off-chip memory. We introduce and evaluate a novel, hardware-friendly compression scheme for the feature maps present within convolutional neural networks. We show that an average compression ratio of 4.4× relative to uncompressed data and a gain of 60% over existing methods can be achieved for ResNet-34 with a compression block requiring <300 bit of sequential cells and minimal combinational logic.
|
Most compression methods focus on minimizing the model size. Most of them are very complex (in area) to implement in hardware and need large dictionaries. One such method, deep compression @cite_30 , combines pruning, trained clustering-based quantization, and Huffman coding. Most of the steps involved cannot be applied to the intermediate feature maps, which change for every inference, as opposed to the weights, which are static and can be optimized off-line. Furthermore, applying Huffman coding, while optimal, implies storing a large dictionary (typically several MB). Similar issues arise when using Lempel-Ziv-Welch (LZW) coding @cite_20 @cite_25 as present in, e.g., the ZIP compression scheme, where the dictionary is encoded in the compressed data stream. This makes these methods unsuitable for a lightweight and energy-efficient VLSI implementation @cite_1 @cite_7 .
|
{
"abstract": [
"Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.",
"LZW algorithm is one of the most famous dictionary-based compression and decompression algorithms. The main contribution of this paper is to present a hardware LZW decompression algorithm and to implement it in an FPGA. The experimental results show that one proposed module on Virtex-7 family FPGA XC7VX485T-2 runs up to 2.16 times faster than sequential LZW decompression on a single CPU, where the frequency of FPGA is 301.02MHz. Since the proposed module is compactly designed and uses a few resources of the FPGA, we have succeeded to implement 150 identical modules which works in parallel on the FPGA, where the frequency of FPGA is 245.4MHz. In other words, our implementation runs up to 264 times faster than a sequential implementation on a single CPU.",
"In this paper, we propose a new two-stage hardware architecture that combines the features of both parallel dictionary LZW (PDLZW) and an approximated adaptive Huffman (AH) algorithms. In this architecture, an ordered list instead of the tree-based structure is used in the AH algorithm for speeding up the compression data rate. The resulting architecture shows that it not only outperforms the AH algorithm at the cost of only one-fourth the hardware resource but it is also competitive to the performance of LZW algorithm (compress). In addition, both compression and decompression rates of the proposed architecture are greater than those of the AH algorithm even in the case realized by software",
"Compressibility of individual sequences by the class of generalized finite-state information-lossless encoders is investigated. These encoders can operate in a variable-rate mode as well as a fixed-rate one, and they allow for any finite-state scheme of variable-length-to-variable-length coding. For every individual infinite sequence x a quantity (x) is defined, called the compressibility of x , which is shown to be the asymptotically attainable lower bound on the compression ratio that can be achieved for x by any finite-state encoder. This is demonstrated by means of a constructive coding theorem and its converse that, apart from their asymptotic significance, also provide useful performance criteria for finite and practical data-compression tasks. The proposed concept of compressibility is also shown to play a role analogous to that of entropy in classical information theory where one deals with probabilistic ensembles of sequences rather than with individual sequences. While the definition of (x) allows a different machine for each different sequence to be compressed, the constructive coding theorem leads to a universal algorithm that is asymptotically optimal for all sequences.",
""
],
"cite_N": [
"@cite_30",
"@cite_7",
"@cite_1",
"@cite_25",
"@cite_20"
],
"mid": [
"2964299589",
"2487148512",
"2162310490",
"2122962290",
"1990653637"
]
}
|
Extended Bit-Plane Compression for Convolutional Neural Network Accelerators
|
Computer vision has become a key ingredient for automated data analysis over a broad range of real-world applications: medical diagnostics [1], industrial quality assurance [2], video surveillance [3], advanced driver assistance systems [4], and many others. Many of these applications have only recently become feasible due to the tremendous increases in accuracy, even surpassing human capabilities [5], that have come with the rise of deep learning, and particularly, convolutional neural networks (CNNs, ConvNets).
While CNN-based methods often require a significant computational effort, many of these applications should run in real time on embedded and mobile systems. This has driven the development of specialized platforms, dedicated hardware accelerators, and optimized algorithms to reduce the number of compute operations as well as the precision requirements for the arithmetic operations [6]- [15].
When looking at these hardware platforms, the energy associated with loading and storing intermediate results/feature maps (and gradients during training) in external memory is not only significant, but often clearly higher than the energy used in computation and on-chip data buffering. This is even more striking when looking at networks optimized to reduce the computation energy by quantizing the weights to one bit, two bits, or power-of-two values, thereby eliminating the need for high-precision multiplications [16]- [20].
Many compression methods for CNNs have been proposed over the last few years. However, many of them focus exclusively on 1) compressing the parameters/weights, which make up only a small share of the energy-intensive off-chip communication [21]- [23], 2) exploiting the sparsity of intermediate results, which is not always present (e.g., in partial results of a convolution layer, or otherwise before the activation function is applied) and is not optimal in the sense that the non-uniform value distribution is not capitalized on [24]- [26], or 3) very complex methods requiring large dictionaries, or otherwise not suitable for a small, energy-efficient hardware implementation, often targeting the efficient distribution and storage of trained models to mobile devices or the transmission of intermediate feature maps from/to mobile devices over a costly communication link [23]. In this paper, we propose and evaluate a simple compression scheme for intermediate feature maps that exploits sparsity as well as the distribution of the remaining values. It is suitable for a very small and energy-efficient implementation in hardware (<300 bit of registers), and could be inserted as a stream (de-)compressor before/after a DMA controller to compress the data by 4.4× for 8-bit AlexNet.

(Footnote: The authors would like to thank armasuisse Science & Technology for funding this research. This project was supported in part by the EU's H2020 programme under grant no. 732631 (OPRECOMP).)
III. COMPRESSION ALGORITHM
An overview of the proposed algorithm is shown in Fig. 2. The value stream is decomposed into a zero/non-zero stream, on which we apply run-length encoding to compress the zero bursts commonly occurring in the data, and a stream of non-zero values, which we encode using bit-plane compression. The latter compresses a fixed number of words n jointly, and the resulting compressed bit-stream is injected immediately after at least n non-zero values have been compressed.
A. Zero/Non-Zero Encoding with RLE
The run-length encoder simply compresses bursts of 0s with a single 0 followed by a fixed number of bits which encode the burst length. Non-zero values, which at this point are single 1-bits, are not run-length encoded, i.e., a 1 is emitted for each of them. If the length of a zero burst exceeds the corresponding maximum burst length, the maximum is encoded and the remaining bits are encoded independently, i.e., in the next code symbol.
B. Bit-Plane Compression
An overview of the bit-plane compressor (BPC) used to compress the non-zero values is shown in Fig. 1. For BPC, a set of n words of m bit each (a data block) is compressed by first taking the differences between consecutive words and storing the first word as the base. This exploits the fact that neighboring values are often similar.
The data items storing these differences are then viewed as m + 1 bit-planes of n bit each (delta bit-planes, DBPs). Neighboring DBPs are XOR-ed (the results are called DBX), and the DBP of the most significant bit is kept as the base-DBP. The results are fed into bit-plane encoders, which compress the DBX and DBP values into a bit-stream following Table I. Most of these encodings are applied independently per DBX symbol. However, the first can be used to jointly encode multiple consecutive bit-planes at once if they are all zero; this is where the correlation of neighboring values is exploited best. Note also the importance of the XOR-ing step: it maps two's-complement negative values close to zero to words consisting mostly of zero-bits. The proposed compression method can be applied to integers of various word widths, but also to floating-point data types, although this affects the compression ratio negatively.
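The delta/bit-plane/XOR front-end can be sketched as follows; the symbol encoding of Table I is omitted, and the handling of the deltas' sign bit is a simplification for illustration:

```python
# Sketch of the BPC front-end: delta coding, bit-plane transposition, and
# XOR of neighboring planes. The Table I symbol encoding is omitted.
def bpc_transform(words, m=16):
    base = words[0]
    mask = (1 << (m + 1)) - 1        # deltas need m+1 bits (two's complement)
    deltas = [(words[i] - words[i - 1]) & mask for i in range(1, len(words))]

    # DBP p collects bit p of every delta; iterate from MSB down to LSB
    dbps = []
    for p in range(m, -1, -1):
        plane = 0
        for j, d in enumerate(deltas):
            plane |= ((d >> p) & 1) << j
        dbps.append(plane)

    base_dbp = dbps[0]               # MSB plane is kept as the base-DBP
    dbx = [dbps[i] ^ dbps[i + 1] for i in range(len(dbps) - 1)]
    return base, base_dbp, dbx       # DBX planes are then symbol-encoded

# A block of slowly varying values yields mostly all-zero DBX planes:
base, base_dbp, dbx = bpc_transform([100, 101, 103, 102], m=16)
print(base, base_dbp, dbx.count(0), "of", len(dbx), "DBX planes are zero")
```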
C. Hardware Suitability
The proposed algorithm is very hardware friendly: no codebook needs to be stored, and just a few data words need to be kept in memory. From the overview (cf. Fig. 2), the Zero-RLE mostly consists of a counter, and the non-zero check is also negligible in size. The buffer and packer assembles the resulting bit-stream and needs very little logic and a few bits of storage to pack it into words. The last remaining unit, the bit-plane encoder, is shown in Fig. 3. In terms of registers, only the base value (m bit), the previous value to build the differences (m bit), and a (n − 1) × (m + 1) bit shift register are needed (with e.g. n = m = 16, a total of <300 bit). Only very little logic is required as well: a single subtractor, a simple zero-RLE encoder, and the DBP encoder unit realizing the mapping in Table I. The logic operations are also very regular and fairly low-cost in terms of size and energy. The resulting compression reduces the energy spent on interfaces to DRAM and on inter-chip or back-plane communication (the corresponding standards specify very efficient power-down modes [32], [33]), potentially saves DRAM refresh cycles for the saved memory area [1], and provides an alternative to increasing the bandwidth of such interfaces, which would imply more expensive packages, circuit boards, and additional on-chip circuits (e.g. PLLs, on-chip termination, etc.) [32], [33].
IV. RESULTS
A. Experimental Setup
Where not otherwise stated, we perform our experiments on AlexNet, using images from the ILSVRC validation set. All models we used were pre-trained and downloaded from the PyTorch/Torchvision data repository. Some of the experiments are performed with fixed-point data types (default: 16-bit fixed-point). For these, the feature maps were normalized to exploit the full range, i.e. the worst-case scenario from a compression point of view. All feature maps were extracted after the ReLU activations.
B. Sparsity, Activation Histogram & Data Layout
Neural networks are known to have sparse feature maps after applying a ReLU activation layer, which can be applied on-the-fly after the convolution layer and possibly batch normalization. However, the sparsity varies significantly across the layers within a network as well as across different CNNs. Sparsity is a key aspect when compressing feature maps, and we analyze it in Fig. 4.
The zero values are not independently distributed but rather occur in bursts when the 4D data tensor is laid out in one of the obvious formats. The most commonly used formats are NCHW and NHWC, which are those supported by most frameworks and the widely used Nvidia cuDNN backend. NCHW is the preferred format for cuDNN and the default memory layout; it means that neighboring values in the horizontal direction are stored next to each other in memory, before the vertical, channel, and batch dimensions. NHWC is the default format of TensorFlow, has long been used in computer vision, and has the advantage of simple non-strided computation of inner products in the channel (i.e. feature map) dimension. Further reasonable options, which we include in our analysis, are CHWN and HWCN, although most use cases with hardware acceleration target real-time low-latency inference and thus operate with a batch size of 1. We analyze the distribution of the length of zero bursts for these four data layouts at various depths within the network in Fig. 5.
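The measurement itself is straightforward; a hypothetical sketch is shown below. Note that the tensor here is synthetic random data, so it will not reproduce the layout differences that real ReLU feature maps exhibit; it only illustrates the procedure:

```python
# Sketch for measuring zero-burst lengths under different serializations of
# a 4D activation tensor. The tensor here is synthetic; only real feature
# maps show the layout-dependent burst behavior discussed in the text.
import numpy as np

fmap = np.random.rand(1, 64, 56, 56)        # layout: N, C, H, W
fmap[fmap < 0.6] = 0.0                      # emulate ReLU-induced sparsity

def zero_burst_lengths(t, order):
    axes = ["NCHW".index(c) for c in order] # permute to requested layout
    flat = np.transpose(t, axes).ravel()    # serialize, last dim fastest
    bursts, run = [], 0
    for v in flat:
        if v == 0.0:
            run += 1
        elif run:
            bursts.append(run)
            run = 0
    if run:
        bursts.append(run)
    return bursts

for order in ("NCHW", "NHWC", "CHWN", "HWCN"):
    b = zero_burst_lengths(fmap, order)
    print(order, "mean zero-burst length: %.2f" % np.mean(b))
```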
The results clearly show that having the spatial dimensions (H, W) next to each other in the data stream provides the longest zero bursts (lowest cumulative distribution curve) and thus better compressibility than the other formats. This is also aligned with intuition: feature map values mark the presence of certain features and can be expected to be smooth. Inspecting the feature maps of CNNs commonly shows that they behave like 'heat maps' marking the presence of certain geometric features nearby. Based on these results, we perform all the following evaluations with the NCHW data layout. Note also that the burst length of non-zero values is mostly very short, such that there is limited gain in applying RLE to the one-bits as well.
To compress further beyond exploiting the sparsity, the data has to remain compressible. The histograms of the activation distributions shown in Fig. 6 confirm that this is the case and are a strong indication that additional compression of the non-zero data is possible.
C. Selecting Parameters
The proposed method has two parameters: the maximum length of a zero sequence that can be encoded with a single code symbol of the Zero-RLE, and the BPC block size (n, the number of non-zero words encoded jointly).
Max. Zero Burst Length: We first analyze the effect of varying the maximum zero burst length for Zero-RLE alone (i.e., without BPC) on the compression ratio for various data word widths in Table II. The optimal value is arguably the same for our proposed method, since a constant offset in compressing the non-zero values does not affect the optimal choice of this parameter (just as the word width has no effect on it). The results also serve as a baseline for Zero-RLE and ZVC. It is worth noting that ZVC corresponds to Zero-RLE with a max. burst length of 1, yet breaks the trend shown in Table II. This is due to an inefficiency of Zero-RLE in this corner case: for a zero burst of length 1, ZVC requires 1 bit, whereas Zero-RLE with a max. burst length of 2 takes 2 bit. For a zero burst of length 2, ZVC encodes 2 symbols of 1 bit each, and Zero-RLE takes 2 bit as well. ZVC thus always performs at least as well for such a short max. burst length.
BPC Block Size: We analyze the effect of the BPC block size parameter in Fig. 7 at various depths within the network. The best compression ratio is achieved with a block size of 16 across all the layers. A block size of 8 might also be considered to minimize the resources of the (de-)compression hardware block at a small drop in compression ratio.
D. Total Compression Factor
We analyze the total compression factor over all feature maps of AlexNet, ResNet-34, and SqueezeNet in Fig. 8. For AlexNet, Zero-RLE and ZVC alone already achieve a high compression ratio of around 3×, which is very similar for all data types. We further see that pure BPC is not suitable, since it introduces too much overhead when encoding zero values. For ResNet-34 and SqueezeNet, the gains from exploiting only the sparsity are significantly lower, at around 1.55× and 1.7×. The proposed method clearly outperforms previous approaches, with compression ratios of 4.45×, 2.45×, and 2.8× (for 8-bit fixed-point), respectively.
The gains for 8-bit fixed-point data are significantly higher than for the other data formats. Most input data, including CNN feature maps, carry the most important information in the more significant bits and, in the case of floats, in the exponent. The less significant bits appear mostly as noise to the encoder and cannot be compressed without accuracy loss, such that this behavior (a lower compression ratio for wider word widths) is expected.
V. CONCLUSION
We have presented and evaluated a novel compression method for CNN feature maps. The proposed algorithm achieves an average compression ratio of 4.4× on AlexNet (+35% over previous methods), 2.45× on ResNet-34 (+60%), and 2.8× on SqueezeNet (+65%) for 8 bit data, and thus clearly outperforms the state of the art, while fitting a very tight hardware resource budget with <300 bit of data and very little compute logic.
| 2,057 |
1810.03979
|
2966206672
|
After the tremendous success of convolutional neural networks in image classification, object detection, speech recognition, etc., there is now rising demand for the deployment of these compute-intensive ML models on tightly power-constrained embedded and mobile systems at low cost, as well as for pushing the throughput in data centers. This has triggered a wave of research towards specialized hardware accelerators. Their performance is often constrained by I/O bandwidth, and their energy consumption is dominated by I/O transfers to off-chip memory. We introduce and evaluate a novel, hardware-friendly compression scheme for the feature maps present within convolutional neural networks. We show that an average compression ratio of 4.4× relative to uncompressed data and a gain of 60% over existing methods can be achieved for ResNet-34 with a compression block requiring <300 bit of sequential cells and minimal combinational logic.
|
The most directly comparable approach, cDMA @cite_14 , describes a hardware-friendly compression scheme to reduce the data size of intermediate feature maps. Their target application differs in that their main goal is to allow faster offloading of the feature maps from GPU to CPU memory through the PCIe bandwidth bottleneck during training, thereby enabling larger batch sizes and deeper and wider networks without sacrificing performance. They propose to use zero-value compression (ZVC), which takes a block of 32 activation values and generates a 32-bit mask in which only the bits corresponding to the non-zero values are set. The non-zero values are transferred after the mask. The main advantage over Zero-RLE is that the resulting data volume is independent of how the values of the feature maps are serialized, while also providing small compression ratio advantages. Note that this is a special case of Zero-RLE with a maximum zero burst length of 1.
|
{
"abstract": [
"Popular deep learning frameworks require users to fine-tune their memory usage so that the training data of a deep neural network (DNN) fits within the GPU physical memory. Prior work tries to address this restriction by virtualizing the memory usage of DNNs, enabling both CPU and GPU memory to be utilized for memory allocations. Despite its merits, virtualizing memory can incur significant performance overheads when the time needed to copy data back and forth from CPU memory is higher than the latency to perform DNN computations. We introduce a high-performance virtualization strategy based on a \"compressing DMA engine\" (cDMA) that drastically reduces the size of the data structures that are targeted for CPU-side allocations. The cDMA engine offers an average 2.6x (maximum 13.8x) compression ratio by exploiting the sparsity inherent in offloaded data, improving the performance of virtualized DNNs by an average 53 (maximum 79 ) when evaluated on an NVIDIA Titan Xp."
],
"cite_N": [
"@cite_14"
],
"mid": [
"2962821792"
]
}
|
Extended Bit-Plane Compression for Convolutional Neural Network Accelerators
|
| 2,057 |
1810.03979
|
2966206672
|
After the tremendous success of convolutional neural networks in image classification, object detection, speech recognition, etc., there is now rising demand for the deployment of these compute-intensive ML models on tightly power-constrained embedded and mobile systems at low cost, as well as for pushing the throughput in data centers. This has triggered a wave of research towards specialized hardware accelerators. Their performance is often constrained by I/O bandwidth, and their energy consumption is dominated by I/O transfers to off-chip memory. We introduce and evaluate a novel, hardware-friendly compression scheme for the feature maps present within convolutional neural networks. We show that an average compression ratio of 4.4× relative to uncompressed data and a gain of 60% over existing methods can be achieved for ResNet-34 with a compression block requiring <300 bit of sequential cells and minimal combinational logic.
|
For this work, we build on a method known from the area of texture compression for GPUs, @cite_27 , fuse it with sparsity-focused compression methods, and evaluate the resulting compression algorithm on intermediate feature maps, showing compression ratios of 4.4× and 2.8× for 8-bit AlexNet and SqueezeNet, respectively.
|
{
"abstract": [
"As key applications become more data-intensive and the computational throughput of processors increases, the amount of data to be transferred in modern memory subsystems grows. Increasing physical bandwidth to keep up with the demand growth is challenging, however, due to strict area and energy limitations. This paper presents a novel and lightweight compression algorithm, Bit-Plane Compression (BPC), to increase the effective memory bandwidth. BPC aims at homogeneously-typed memory blocks, which are prevalent in many-core architectures, and applies a smart data transformation to both improve the inherent data compressibility and to reduce the complexity of compression hardware. We demonstrate that BPC provides superior compression ratios of 4.1:1 for integer benchmarks and reduces memory bandwidth requirements significantly."
],
"cite_N": [
"@cite_27"
],
"mid": [
"2516109628"
]
}
|
Extended Bit-Plane Compression for Convolutional Neural Network Accelerators
|
| 2,057 |
1810.03736
|
2897082611
|
Moral responsibility is a major concern in automated decision-making, with applications ranging from self-driving cars to kidney exchanges. From the viewpoint of automated systems, the urgent questions are: (a) How can models of moral scenarios and blameworthiness be extracted and learnt automatically from data? (b) How can judgements be computed tractably, given the split-second decision points faced by the system? By building on deep tractable probabilistic learning, we propose a learning regime for inducing models of such scenarios automatically from data and reasoning tractably from them. We report on experiments that compare our system with human judgement in three illustrative domains: lung cancer staging, teamwork management, and trolley problems.
|
As mentioned before, we do not motivate new definitions for moral responsibility here, but draw on HK, which, in turn, is based upon @cite_20 and the work on causality in @cite_7 . Their framework is also related to the intentions model in @cite_6 which considers predictions about the moral permissibility of actions via influence diagrams, though there is no emphasis on learning or tractability. In fact, the use of tractable architectures for decision-making itself is recent (see, e.g. @cite_10 @cite_17 ). The authors in @cite_4 learn PSDDs over preference rankings (as opposed to decision-making scenarios more generally), though their approach does not take account of different preferences in different contexts.
|
{
"abstract": [
"Probabilistic sentential decision diagrams (PSDDs) are a tractable representation of structured probability spaces, which are characterized by complex logical constraints on what constitutes a possible world. We develop general-purpose techniques for probabilistic reasoning and learning with PSDDs, allowing one to compute the probabilities of arbitrary logical formulas and to learn PSDDs from incomplete data. We illustrate the effectiveness of these techniques in the context of learning preference distributions, to which considerable work has been devoted in the past. We show, analytically and empirically, that our proposed framework is general enough to support diverse and complex data and query types. In particular, we show that it can learn maximum-likelihood models from partial rankings, pairwise preferences, and arbitrary preference constraints. Moreover, we show that it can efficiently answer many queries exactly, from expected and most likely rankings, to the probability of pairwise preferences, and diversified recommendations. This case study illustrates the effectiveness and flexibility of the developed PSDD framework as a domain-independent tool for learning and reasoning with structured probability spaces.",
"We propose a new definition of actual causes, using structural equations to model counterfactuals. We show that the definition yields a plausible and elegant account of causation that handles well examples which have caused problems for other definitions and resolves major difficulties in the traditional account.",
"The actions of a rational agent reveal information about its mental states. These inferred mental states, particularly the agent’s intentions, play an important role in the evaluation of moral permissibility. While previous computational models have shown that beliefs and desires can be inferred from behavior under the assumption of rational action they have critically lacked a third mental state, intentions. In this work, we develop a novel formalism for intentions and show how they can be inferred as counterfactual contrasts over influence diagrams. This model is used to quantitatively explain judgments about intention and moral permissibility in classic and novel trolley problems.",
"Although a number of related algorithms have been developed to evaluate influence diagrams, exploiting the conditional independence in the diagram, the exact solution has remained intractable for many important problems. In this paper we introduce decision circuits as a means to exploit the local structure usually found in decision problems and to improve the performance of influence diagram analysis. This work builds on the probabilistic inference algorithms using arithmetic circuits to represent Bayesian belief networks [Darwiche, 2003]. Once compiled, these arithmetic circuits efficiently evaluate probabilistic queries on the belief network, and methods have been developed to exploit both the global and local structure of the network. We show that decision circuits can be constructed in a similar fashion and promise similar benefits.",
"",
"Investigations into probabilistic graphical models for decision making have predominantly centered on influence diagrams (IDs) and decision circuits (DCs) for representation and computation of decision rules that maximize expected utility. Since IDs are typically handcrafted and DCs are compiled from IDs, in this paper we propose an approach to learn the structure and parameters of decision-making problems directly from data. We present a new representation called sum-product-max network (SPMN) that generalizes a sum-product network (SPN) to the class of decision-making problems and whose solution, analogous to DCs, scales linearly in the size of the network. We show that SPMNs may be reduced to DCs linearly and present a first method for learning SPMNs from data. This approach is significant because it facilitates a novel paradigm of tractable decision making driven by data."
],
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_6",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"1848089821",
"2137819283",
"2400975081",
"2951953286",
"",
"2577239123"
]
}
|
Deep Tractable Probabilistic Models for Moral Responsibility
|
Moral responsibility is a major concern in automated decision-making. In applications ranging from self-driving cars to kidney exchanges [Conitzer et al., 2017], contextualising and enabling judgements of morality and blame is becoming a difficult challenge, owing in part to the philosophically vexing nature of these notions. In the infamous trolley problem [Thomson, 1985], for example, a putative agent encounters a runaway trolley headed towards five individuals who are unable to escape the trolley's path. Their death is certain if the trolley were to collide with them. The agent, however, can divert the trolley to a side track by means of a switch, but at the cost of the death of a sixth individual, who happens to be on this latter track. While one would hope that in practice the situations encountered by, say, self-driving cars would not involve such extreme choices, providing a decision-making framework with the capability of reasoning about blame seems prudent.
Moral reasoning has been actively studied by philosophers, lawyers, and psychologists for many decades. Especially when considering quantitative frameworks (see footnote 1), a definition of responsibility that is based on causality has been argued to be particularly appealing [Chockler and Halpern, 2004]. But most of these definitions are motivated and instantiated by carefully constructed examples designed by the expert, and so are not necessarily viable in large-scale applications. Indeed, problematic situations encountered by automated systems are likely to be in a high-dimensional setting, with hundreds and thousands of latent variables capturing the low-level aspects of the application domain. Thus, the urgent questions are:
(a) How can models of moral scenarios and blameworthiness be extracted and learnt automatically from data?
(b) How can judgements be computed tractably, given the split-second decision points faced by the system?
In this work, we propose a learning regime for inducing models of moral scenarios and blameworthiness automatically from data, and for reasoning tractably with them. To the best of our knowledge, this is the first such proposal. The regime leverages the tractable learning paradigm [Poon and Domingos, 2011, Choi et al., 2015, Kisa et al., 2014], which can induce both high- and low-treewidth graphical models with latent variables, and thus realises a deep probabilistic architecture [Pronobis et al., 2017]. We remark that we do not motivate any new definitions for moral responsibility, but show how an existing model can be embedded in the learning framework. We suspect it should be possible to analogously embed other definitions from the literature too. We then study the computational features of this regime. Finally, we report on experiments regarding the alignment between automated morally-responsible decision-making and human judgement in three illustrative domains: lung cancer staging, teamwork management, and trolley problems.
(Footnote 1: The quantitative nature of the framework used in this work implicitly takes a consequentialist stance when it comes to the normative ethical theory used to assess responsibility and blame, and we also rely on our utility functions being cardinal as opposed to merely ordinal. See, for example, [Sinnott-Armstrong, 2015] and [Strotz, 1953] for definitions and discussions on these stances.)
We use the word blameworthiness to capture an important part of what can more broadly be described as moral responsibility, and consider a set of definitions (taken directly from the original work, with slight changes in notation for the sake of clarity and conciseness) put forward by [Halpern and Kleiman-Weiner, 2018] (henceforth HK). In HK, environments are modelled in terms of variables and structural equations relating their values [Halpern and Pearl, 2005]. More formally, the variables are partitioned into exogenous variables $\mathcal{X}$, external to the model in question, and endogenous variables $\mathcal{V}$ that are internal to the model and whose values are determined by those of the exogenous variables. A range function $R$ maps every variable to the set of possible values it may take. In any model, there exists one structural equation $F_V : \times_{Y \in \mathcal{X} \cup \mathcal{V} \setminus \{V\}} R(Y) \to R(V)$ for each $V \in \mathcal{V}$.
Definition 1. A causal model $M$ is a pair $(\mathcal{S}, \mathcal{F})$ where $\mathcal{S}$ is a signature $(\mathcal{U}, \mathcal{V}, R)$ and $\mathcal{F}$ is a set of modifiable structural equations $\{F_V : V \in \mathcal{V}\}$. A causal setting is a pair $(M, \vec{X})$ where $\vec{X} \in \times_{X \in \mathcal{X}} R(X)$ is a context.
In general, we denote an assignment of values to the variables in a set $Y$ as $\vec{Y}$. Following HK, we restrict our considerations to recursive models, in which, given a context $\vec{X}$, the values of all variables in $\mathcal{V}$ are uniquely determined.
Definition 2. A primitive event is an equation of the form $V = v$ for some $V \in \mathcal{V}$, $v \in R(V)$. A causal formula is denoted $[Y \leftarrow \vec{Y}]\varphi$, where $Y \subseteq \mathcal{V}$ and $\varphi$ is a Boolean formula of primitive events. This says that if the variables in $Y$ were set to values $\vec{Y}$ (i.e. by intervention), then $\varphi$ would hold. For a causal formula $\psi$ we write $(M, \vec{X}) \models \psi$ if $\psi$ is satisfied in the causal setting $(M, \vec{X})$.
An agent's epistemic state is given by $(\Pr, K, U)$, where $K$ is a set of causal settings, $\Pr$ is a probability distribution over this set, and $U$ is a utility function $U : W \to \mathbb{R}_{\geq 0}$ on the set of worlds, where a world $w \in W$ is defined as a setting of values to all variables in $\mathcal{V}$. $w_{M,\vec{X}}$ denotes the unique world determined by the causal setting $(M, \vec{X})$.
Definition 3. We define how much more likely it is that $\varphi$ will result from performing $a$ than from $a'$ using:
$$\delta_{a,a',\varphi} = \max\Bigg( \sum_{(M,\vec{X}) \in [\![ [A \leftarrow a]\varphi ]\!]} \Pr(M, \vec{X}) \; - \sum_{(M,\vec{X}) \in [\![ [A \leftarrow a']\varphi ]\!]} \Pr(M, \vec{X}),\; 0 \Bigg)$$
where $A \in \mathcal{V}$ is a variable identified in order to capture an action of the agent, and $[\![ \psi ]\!] = \{(M, \vec{X}) \in K : (M, \vec{X}) \models \psi\}$ is the set of causal settings in which a causal formula $\psi$ is satisfied.
The costs of actions are measured with respect to a set of outcome variables $O \subseteq \mathcal{V}$ whose values are determined by an assignment to all other variables. In a given causal setting $(M, \vec{X})$, $\vec{O}_{A \leftarrow a}$ denotes the setting of the outcome variables when action $a$ is performed, and $w_{M, O \leftarrow \vec{O}_{A \leftarrow a}, \vec{X}}$ denotes the corresponding world.
Definition 4. The (expected) cost of $a$ relative to $O$ is:
$$c(a) = \sum_{(M,\vec{X}) \in K} \Pr(M, \vec{X}) \Big( U(w_{M,\vec{X}}) - U(w_{M, O \leftarrow \vec{O}_{A \leftarrow a}, \vec{X}}) \Big)$$
Finally, HK introduce one last quantity, $N$, to measure how important the costs of actions are when attributing blame (this varies according to the scenario). Specifically, as $N \to \infty$, $db_N(a, a', \varphi) \to \delta_{a,a',\varphi}$, i.e. the larger $N$, the less we care about cost. Note that blame is assumed to be non-negative, and so it is required that $N > \max_{a \in R(A)} c(a)$.
Definition 5. The degree of blameworthiness of $a$ for $\varphi$ relative to $a'$ (given $c$ and $N$) is:
$$db_N(a, a', \varphi) = \delta_{a,a',\varphi} \, \frac{N - \max(c(a') - c(a), 0)}{N}$$
The overall degree of blameworthiness of $a$ for $\varphi$ is then:
$$db_N(a, \varphi) = \max_{a' \in R(A) \setminus \{a\}} db_N(a, a', \varphi)$$
For reasons of space we omit an example here, but include several when reporting the results of our experiments. For further examples and discussions, we refer the reader to HK.
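Still, to make Definitions 3-5 concrete, the following minimal sketch instantiates them on a hypothetical two-action, trolley-style scenario; all probabilities and utilities are invented for illustration and are not taken from HK:

```python
# Hypothetical instantiation of Definitions 3-5. Each element of K stands
# for a causal setting (M, X): its probability Pr(M, X), whether phi holds
# under each action, the utility of the world reached by each action, and
# the baseline utility U(w_{M,X}).
K = [
    {"pr": 0.7, "phi": {"divert": False, "stay": True},
     "util": {"divert": 0.8, "stay": 0.1}, "base": 0.1},
    {"pr": 0.3, "phi": {"divert": True, "stay": True},
     "util": {"divert": 0.2, "stay": 0.1}, "base": 0.1},
]
ACTIONS = ("divert", "stay")

def prob_phi(a):
    # sum of Pr(M, X) over settings in which [A <- a]phi is satisfied
    return sum(s["pr"] for s in K if s["phi"][a])

def delta(a, a2):
    # Definition 3: how much more likely phi is under a than under a2
    return max(prob_phi(a) - prob_phi(a2), 0.0)

def cost(a):
    # Definition 4: expected drop in utility caused by performing a
    return sum(s["pr"] * (s["base"] - s["util"][a]) for s in K)

def db(a, N, a2=None):
    # Definition 5; with a2=None, the overall degree of blameworthiness
    if a2 is None:
        return max(db(a, N, b) for b in ACTIONS if b != a)
    return delta(a, a2) * (N - max(cost(a2) - cost(a), 0.0)) / N

# N must exceed the maximum action cost (here max cost is 0)
print(db("stay", N=1.0))   # blame for phi ("the five die") when staying: 0.7
```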
PSDDs
Since, in general, probabilistic inference is intractable [Bacchus et al., 2009], tractable learning has emerged as a recent paradigm where one attempts to learn classes of Arithmetic Circuits (ACs) for which inference is tractable [Lowd and Pedro, 2013, Kisa et al., 2014]. In particular, we use Probabilistic Sentential Decision Diagrams (PSDDs) [Kisa et al., 2014], which are tractable representations of a probability distribution over a propositional logic theory (a set of sentences in propositional logic) represented by a Sentential Decision Diagram (SDD). Space precludes us from discussing SDDs and PSDDs in detail, but the main idea behind SDDs is to factor the theory recursively as a binary tree: terminal nodes are either 1 or 0, and decision nodes are of the form $(p_1, s_1), \ldots, (p_k, s_k)$, where the primes $p_1, \ldots, p_k$ are SDDs corresponding to the left branch, the subs $s_1, \ldots, s_k$ are SDDs corresponding to the right branch, and $p_1, \ldots, p_k$ form a partition (the primes are consistent, mutually exclusive, and their disjunction $p_1 \vee \ldots \vee p_k$ is valid). In PSDDs, each prime $p_i$ in a decision node $(p_1, s_1), \ldots, (p_k, s_k)$ is associated with a non-negative parameter $\theta_i$ such that $\sum_{i=1}^{k} \theta_i = 1$ and $\theta_i = 0$ if and only if $s_i = \bot$. Each terminal node also has a parameter $\theta$ such that $0 < \theta < 1$, and together these parameters can be used to capture probability distributions. Most significantly, probabilistic queries, such as conditionals and marginals, can be computed in time linear in the size of the model. PSDDs can be learnt from data [Liang et al., 2017], possibly with the inclusion of logical constraints standing for background knowledge. The ability to encode logical constraints into the model directly enforces sparsity, which in turn can lead to increased accuracy and decreased size. In our setting, we can draw parallels between these logical constraints and deontological ethical principles (e.g. it is forbidden to kill another human being), and between learnt distributions over decision-making scenarios (which can encode preferences) and the utility functions used in consequentialist ethical theories.
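As a rough illustration of why such circuits admit linear-time queries, consider the toy evaluator below. The circuit, its parameters, and the class names are invented; real PSDDs additionally impose structural properties (decomposability, determinism) that this sketch does not enforce:

```python
# Toy arithmetic-circuit evaluator: evidence is asserted at the leaves and
# a single bottom-up pass yields its probability (linear in circuit size).
class Leaf:
    def __init__(self, var, positive, theta=1.0):
        self.var, self.positive, self.theta = var, positive, theta
    def value(self, evidence):
        v = evidence.get(self.var)           # None = unobserved
        return self.theta if (v is None or v == self.positive) else 0.0

class Sum:   # decision node: weighted mixture over (theta, child) branches
    def __init__(self, children):
        self.children = children
    def value(self, evidence):
        return sum(th * n.value(evidence) for th, n in self.children)

class Product:   # conjoins sub-circuits over disjoint sets of variables
    def __init__(self, children):
        self.children = children
    def value(self, evidence):
        out = 1.0
        for n in self.children:
            out *= n.value(evidence)
        return out

# Pr(A, B) with Pr(A=1)=0.6, Pr(B=1|A=1)=0.9, Pr(B=1|A=0)=0.2
root = Sum([
    (0.6, Product([Leaf("A", True),  Sum([(0.9, Leaf("B", True)),
                                          (0.1, Leaf("B", False))])])),
    (0.4, Product([Leaf("A", False), Sum([(0.2, Leaf("B", True)),
                                          (0.8, Leaf("B", False))])])),
])
pr_b = root.value({"B": True})                                 # marginal
pr_b_given_a = root.value({"A": True, "B": True}) / root.value({"A": True})
print(pr_b, pr_b_given_a)   # ~0.62, ~0.9
```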
BLAMEWORTHINESS VIA PSDDS
We aim to leverage the learning of PSDDs, their tractable query interface, and their ability to handle domain constraints for inducing models of moral scenarios (see footnote 2). This is made possible by means of an embedding that we sketch below, while also discussing assumptions and choices. At the outset, we reiterate that we do not introduce new definitions here, but show how an existing one, that of HK, can be embedded within a learning regime.
Variables
We first distinguish between scenarios in which we do and do not model outcome variables. In both cases we have exogenous variables X, but in the former the endogenous variables V are partitioned into decision variables D and outcome variables O, and in the latter we have V = D = O (this does not affect the notation in our definitions, however). This is because we do not assume that outcomes can always be recorded, and in some scenarios it makes sense to think of decisions as an end in themselves.
The range function $R$ is defined by the scenario we model, but in practice we one-hot encode the variables, and so the range of each is simply $\{0, 1\}$. A subset (possibly empty) of the structural equations in $\mathcal{F}$ is implicitly encoded within the structure of the SDD underlying the PSDD, consisting of the logical constraints that remain true in every causal model $M$. The remaining equations are those that vary depending on the causal model. Each possible assignment $\vec{D}, \vec{O}$ given $\vec{X}$ corresponds to a set of structural equations that combine with those encoded by the SDD to determine the values of the variables in $\mathcal{V}$ given $\vec{X}$. The PSDD then corresponds to the probability distribution over $K$, compacting everything neatly into a single structure.
(Footnote 2: Our technical development can leverage both parameter and (possibly partial) structure learning for PSDDs. Of course, learning causal models is a challenging problem [Acharya et al., 2018], and in this regard, probabilistic structure learning is not assumed to be a recipe for causal discovery in general [Pearl, 1998]. Rather, under the assumptions discussed later, we are able to use our probabilistic model for causal reasoning.)
Our critical assumption here is that the signature S = (U, V, R) (the variables and the values they may take) remains the same in all models, although the structural equations F (the ways in which said variables are related) may vary. Given that each model represents an agent's uncertain view of a decision-making scenario we do not think it too restrictive to keep the elements of this scenario the same across the potential eventualities, so long as the way these elements interact may differ. Indeed, learning PSDDs from decision-making data requires that the data points measure the same variables each time.
Probabilities
Thus, our distribution $\Pr : \times_{Y \in \mathbf{X} \cup \mathbf{D} \cup \mathbf{O}} \mathcal{R}(Y) \to [0, 1]$ ranges over assignments to variables instead of K. As a slight abuse of notation we write $\Pr(\mathbf{X}, \mathbf{D}, \mathbf{O})$. The key observation needed to translate between these two distributions (we denote the original as $\Pr_{HK}$), and which relies on our assumption above, is that each set of structural equations $\mathcal{F}$ together with a context $\mathbf{X}$ deterministically leads to a unique, complete assignment $\mathbf{V}$ of the endogenous variables, which we write (abusing notation slightly) as $(\mathcal{F}, \mathbf{X}) \models \mathbf{V}$, though there may be many such sets of equations that lead to the same assignment. Hence, for any context $\mathbf{X}$ and any assignment $\mathbf{Y}$ for $\mathbf{Y} \subseteq \mathbf{V}$ we have:
$$\Pr(\mathbf{X}, \mathbf{Y}) = \sum_{\mathcal{F} : (\mathcal{F}, \mathbf{X}) \models \mathbf{Y}} \Pr_{HK}(\mathcal{F}, \mathbf{X})$$
We view a Boolean formula of primitive events (possibly resulting from decision A) as a function $\varphi : \times_{Y \in \mathbf{O} \cup \mathbf{D} \setminus \{A\}} \mathcal{R}(Y) \to \{0, 1\}$ that returns 1 if the original formula is satisfied by the assignment, or 0 otherwise. We write $\mathbf{D}_{\setminus a}$ for a general vector of values over $\mathbf{D} \setminus \{A\}$, and hence $\varphi(\mathbf{D}_{\setminus a}, \mathbf{O})$. Here, the probability of $\varphi$ occurring given that action a is performed (i.e. conditioning on intervention), given by HK as $\sum_{(M,\mathbf{X}) \in \llbracket [A \leftarrow a]\varphi \rrbracket} \Pr(M, \mathbf{X})$, can also be written as $\Pr(\varphi \mid do(a))$. In general, it is not the case that $\Pr(\varphi \mid do(a)) = \Pr(\varphi \mid a)$, but by assuming that the direct causes of action a are captured by the context $\mathbf{X}$, and that the other decisions and outcomes $\mathbf{D}_{\setminus a}$ and $\mathbf{O}$ are in turn caused by $\mathbf{X}$ and a, we may use the back-door criterion [Pearl, 2009] with $\mathbf{X}$ as a sufficient set to write:
$$\Pr(\mathbf{D}_{\setminus a}, \mathbf{O} \mid do(a)) = \sum_{\mathbf{X}} \Pr(\mathbf{D}_{\setminus a}, \mathbf{O} \mid a, \mathbf{X}) \Pr(\mathbf{X})$$
and thus may use $\sum_{\mathbf{D}_{\setminus a}, \mathbf{O}, \mathbf{X}} \varphi(\mathbf{D}_{\setminus a}, \mathbf{O}) \Pr(\mathbf{D}_{\setminus a}, \mathbf{O} \mid a, \mathbf{X}) \Pr(\mathbf{X})$ for $\Pr(\varphi \mid do(a))$. In order not to re-learn a separate model for each scenario, we also allow the user of our system the option of specifying a current, alternative distribution over contexts $\Pr'(\mathbf{X})$.
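To make this adjustment concrete, here is a minimal Python sketch of computing $\Pr(\varphi \mid do(a))$ by enumeration. The names `model_prob` and `pr_context`, and the variable layout, are hypothetical stand-ins for the PSDD's conditional query interface, not actual package functions.

```python
from itertools import product

def prob_phi_do(phi, a, contexts, pr_context, model_prob, rest_vars):
    """Back-door adjustment with the context X as the sufficient set:
    Pr(phi | do(a)) = sum_X Pr(X) sum_{D_-a, O} phi(D_-a, O) Pr(D_-a, O | a, X)."""
    total = 0.0
    for x in contexts:
        for values in product([0, 1], repeat=len(rest_vars)):
            assignment = dict(zip(rest_vars, values))  # values for D \ {A} and O
            if phi(assignment):  # phi: Boolean function of the assignment
                total += pr_context(x) * model_prob(assignment, a, x)
    return total
```

In practice the PSDD's zero-probability worlds drop out of the inner sum, which is the source of the speed-ups discussed in the complexity section below.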
Utilities
We now consider the utility function U, the output of which we assume is normalised to the range [0, 1]. We avoid unnecessary extra notation by defining the utility function in terms of $\mathbf{X}$, $\mathbf{D}$, and $\mathbf{O} = (O_1, \ldots, O_n)$ instead of worlds w. In our implementation we allow the user to input an existing utility function or to learn one from data. In the latter case the user further specifies whether or not the function should be context-relative, i.e. whether we have $U(\mathbf{O})$ or $U(\mathbf{O}; \mathbf{X})$ (our notation) as, in some cases, how good a certain outcome $\mathbf{O}$ is depends on the context $\mathbf{X}$. Similarly, the user also decides whether the function should be linear in the outcome variables, in which case the final utility is $U(\mathbf{O}) = \sum_i U_i(O_i)$ or $U(\mathbf{O}; \mathbf{X}) = \sum_i U_i(O_i; \mathbf{X})$ respectively (where we assume that each $U_i(O_i; \mathbf{X}), U_i(O_i) \geq 0$).
Here the utility function is simply a vector of weights and the total utility of an outcome is the dot product of this vector with the vector of outcome variables.
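As a toy illustration of the dot-product form, with invented weights:

```python
import numpy as np

# Toy illustration of a linear utility: invented weights U_i for three
# one-hot outcome indicators O_1..O_3.
weights = np.array([0.1, 0.7, 0.2])
outcome = np.array([0, 1, 0])    # one-hot: outcome O_2 occurred
print(float(weights @ outcome))  # U(O) = 0.7
```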
When learning utility functions, the key assumption we make (before normalisation) is that the probability of a certain decision being made given a context is linearly proportional to the expected utility of that decision in the context. Note that here a decision is a general assignment D and not a single action a. For example, in the case where there are outcome variables, and the utility function is both linear and context-relative, we assume that
$$\Pr(\mathbf{D} \mid \mathbf{X}) \propto \sum_i U_i(O_i; \mathbf{X}) \Pr(O_i \mid \mathbf{D}, \mathbf{X})$$

The linearity of this relationship is neither critical to our work nor imposes any real restrictions on it, but simplifies our calculations somewhat and means that we do not have to make any further assumptions about the noisiness of the decision-making scenario, or how sophisticated the agent is with respect to making utility-maximising decisions. The existence of a proportionality relationship itself is far more important. However, we believe this is, in fact, relatively uncontroversial and can be restated as the simple principle that an agent is more likely to choose a decision that leads to a higher expected utility than one that leads to a lower expected utility. If we view decisions as guided by a utility function, then it follows that the decisions should, on average, be consistent with and representative of that utility function.
Costs and Blameworthiness
We also adapt the cost function given in HK, denoted here by $c_{HK}$. As actions do not deterministically lead to outcomes in our work, we cannot use $\mathbf{O}_{A \leftarrow a}$ to represent the specific outcome when decision a is made (in some context). For our purposes it suffices to use $c(a) = -\sum_{\mathbf{O}, \mathbf{X}} U(\mathbf{O}; \mathbf{X}) \Pr(\mathbf{O} \mid a, \mathbf{X}) \Pr(\mathbf{X})$ or $c(a) = -\sum_{\mathbf{O}, \mathbf{X}} U(\mathbf{O}) \Pr(\mathbf{O} \mid a, \mathbf{X}) \Pr(\mathbf{X})$, depending on whether U is context-relative or not. This is simply the negative expected utility over all contexts, conditioning by intervention on decision $A \leftarrow a$. Using our conversion between $\Pr_{HK}$ and $\Pr$, the back-door criterion [Pearl, 2009], and our assumption that action a is not caused by the other endogenous variables (i.e. $\mathbf{X}$ is a sufficient set for A), it is straightforward to show that this cost function is equivalent to the one in HK (with respect to determining blameworthiness scores). Again, we also give the user the option of updating the distribution over contexts $\Pr(\mathbf{X})$ to some other distribution $\Pr'(\mathbf{X})$ so that the current model can be re-used in different scenarios. Given $\delta_{a,a',\varphi}$ and c, both $db_N(a, a', \varphi)$ and $db_N(a, \varphi)$ are computed as in HK, although we instead require that $N > -\min_a c(a)$ (the equivalence of this condition to the one in HK is an easy exercise). With this the embedding is complete.
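A minimal sketch of the embedded cost and of the modified non-negativity condition on N might look as follows; `pr_outcome` (standing for $\Pr(\mathbf{O} \mid a, \mathbf{X})$) and `pr_context` are hypothetical hooks into the learnt model rather than our package's actual interface.

```python
def cost(a, outcomes, contexts, utility, pr_outcome, pr_context):
    """Embedded cost: c(a) = -sum_{O,X} U(O; X) Pr(O | a, X) Pr(X)."""
    return -sum(utility(o, x) * pr_outcome(o, a, x) * pr_context(x)
                for o in outcomes for x in contexts)

def check_N(N, costs):
    """Blame scores are non-negative only if N > -min_a c(a)."""
    if N <= -min(costs.values()):
        raise ValueError("N must exceed the negative minimum action cost")
```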
COMPLEXITY RESULTS
Given our concerns over tractability we provide several computational complexity results for our embedding. Basic results were given in [Halpern and Kleiman-Weiner, 2018], but only in terms of the computations being polynomial in |M|, |K|, and |R(A)|. Here we provide more detailed results that are specific to our embedding and to the properties of PSDDs. The complexity of calculating blameworthiness scores depends on whether the user specifies an alternative distribution $\Pr'$, although in practice this is unlikely to have a major effect on tractability. Finally, note that we assume here that the PSDD and utility function are given in advance and so we do not consider the computational cost of learning. A summary of our results is given in Table 1. We observe that all of the final time complexities are exponential in the size of at least some subset of the variables. This is a result of the Boolean representation; our results are, in fact, more tightly bounded versions of those in HK, which are polynomial in the size of $|\mathcal{K}| = O(2^{|\mathbf{X}|+|\mathbf{D}|+|\mathbf{O}|})$. In practice, however, we only sum over worlds with non-zero probability of occurring. Using PSDDs allows us to exploit this fact in ways that other models cannot, as we can logically constrain the model to have zero probability on any impossible world. Thus, when calculating blameworthiness we can ignore a great many of the terms in each sum and speed up computation dramatically. To give some concrete examples, the model counts of the PSDDs in our experiments were 52, 4800, and 180 out of $2^{12}$, $2^{21}$, and $2^{23}$ possible variable assignments, respectively.
Table 1: Time complexity of each term in our embedding.

Term: Time Complexity
$\delta_{a,a',\varphi}$: $O(2^{|\mathbf{X}|+|\mathbf{D}|+|\mathbf{O}|}(|\varphi| + |\mathcal{P}|))$
$c(a)$: $O(2^{|\mathbf{X}|+|\mathbf{O}|}(U + |\mathcal{P}|))$
$db_N(a, a', \varphi)$: $O(2^{|\mathbf{X}|+|\mathbf{O}|}(U + 2^{|\mathbf{D}|}(|\varphi| + |\mathcal{P}|)))$
$db_N(a, \varphi)$: $O(|\mathcal{R}(A)| \, 2^{|\mathbf{X}|+|\mathbf{O}|}(U + 2^{|\mathbf{D}|}(|\varphi| + |\mathcal{P}|)))$
IMPLEMENTATION
The underlying motivation behind our system was that a user should be able to go from any stage of creating a model to generating blameworthiness scores as conveniently and as straightforwardly as possible. With this in mind our package runs from the command line and prompts the user for a series of inputs including: data; existing PSDDs, SDDs, or vtrees; logical constraints; utility function specifications; variable descriptions; and finally the decisions, outcomes, and other details needed to compute a particular blameworthiness score. These inputs and any outputs from the system are saved and thus each model and its results can be easily accessed and re-used if needed. Note that we assume each datum is a sequence of fully observed values for binary (possibly as a result of one-hot encoding) variables that correspond to the context, the decisions made, and the resulting outcome, if recorded.
Our implementation makes use of two existing resources: [The SDD Package 2.0, 2018], an open-source system for creating and managing SDDs, including compiling them from logical constraints; and LearnPSDD [Liang et al., 2017], a recently developed set of algorithms that can be used to learn the parameters and structure of PSDDs from data, learn vtrees from data, and to convert SDDs into PSDDs. The resulting functionalities of our system can then be broken down into four broad areas:
• Building and managing models, including converting logical constraints specified by the user in simple infix notation to restrictions upon the learnt model. For example, (A ∧ B) ↔ C can be entered as a command line prompt using = (&(A,B),C).
• Performing inference by evaluating the model or by calculating the MPE, both possibly given partial evidence. Each of our inference algorithms is linear in the size of the model, and they are based on pseudocode given in [Kisa et al., 2014] and [Peharz et al., 2017] respectively.
• Learning utility functions from data, whose properties (such as being linear or being context-relative) are specified by the user in advance. This learning is done by forming a matrix equation representing our assumed proportionality relationship across all decisions and contexts, then solving to find utilities using non-negative linear regression with L2 regularisation (equivalent to solving a quadratic program); a sketch of this reduction is given after this list.
• Computing blameworthiness by efficiently calculating the key quantities from our embedding, using parameters from particular queries given by the user. Results are then displayed in natural language and automatically saved for future reference.
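The reduction mentioned in the utility-learning bullet above can be realised by augmenting the design matrix and solving ordinary non-negative least squares. The snippet below is a sketch of that standard trick (with invented example numbers), not the package's actual code.

```python
import numpy as np
from scipy.optimize import nnls

def nonneg_ridge(A, b, lam):
    """min_{u >= 0} ||A u - b||^2 + lam ||u||^2, via NNLS on an augmented system."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    u, _ = nnls(A_aug, b_aug)
    return u

# Hypothetical example: rows pair up (context, decision), columns index outcome
# variables; entries approximate Pr(O_i | D, X) and targets approximate Pr(D | X).
A = np.array([[0.8, 0.2], [0.3, 0.7], [0.5, 0.5]])
b = np.array([0.6, 0.4, 0.5])
print(nonneg_ridge(A, b, lam=0.1))
```

Appending $\sqrt{\lambda} I$ to the design matrix and zeros to the targets makes the squared residual of the augmented system equal to the ridge-regularised objective, so plain NNLS suffices.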
A high-level overview of the complete structure of the system and full documentation are included in a package, which will be made available online.
EXPERIMENTS AND RESULTS
Using our implementation we learnt several models using a selection of datasets from varying domains in order to test our hypotheses.
In particular we answer three questions in each case:
(Q1) Does our system learn the correct overall probability distribution?
(Q2) Does our system capture the correct utility function?
(Q3) Does our system produce reasonable blameworthiness scores?
Full datasets are available as part of the package and summaries of each (including the domain constraints underlying our datasets) are given in the appendix.
Lung Cancer Staging
We use a synthetic dataset generated with the lung cancer staging influence diagram given in [Nease Jr and Owens, 1997]. The data was generated assuming that the overall decision strategy recommended in the original paper is followed with some high probability at each decision point. In this strategy, a thoracotomy is the usual treatment unless the patient has mediastinal metastases, in which case a thoracotomy will not result in greater life expectancy than the lower risk option of radiation therapy, which is then the preferred treatment.
The first decision made is whether a CT scan should be performed to test for mediastinal metastases, the second is whether to perform a mediastinoscopy. If the CT scan results are positive for mediastinal metastases then a mediastinoscopy is usually recommended in order to provide a second check, but if the CT scan result is negative then a mediastinoscopy is not seen as worth the extra risk involved in the operation. Possible outcomes are determined by variables that indicate whether the patient survives the diagnosis procedure and survives the treatment, and utility is measured by life expectancy.
For (Q1) we measure the overall log likelihood of the models learnt by our system on training, validation, and test datasets (see Table 2). A full comparison across a range of similar models and learning techniques is beyond the scope of our work here, although to provide some evidence of the competitiveness of PSDDs we include the log likelihood scores of a sum-product network (SPN) as a benchmark.
We follow a similar pattern in our remaining experiments, each time using Tachyon [Kalra, 2017] (an open source library for SPNs) to produce an SPN using the same training, validation, and test sets of our data, with the standard learning parameters as given in the Tachyon documentation example. We also compare the sizes (measured by the number of nodes) and the log likelihoods of PSDDs learnt with and without logical constraints in order to demonstrate the effectiveness of the former approach. Our model is able to recover the artificial decision-making strategy well (see Figure 1); at most points of the staging procedure the model learns a very similar distribution over decisions, and in all cases the correct decision is made the majority of times.

Table 2: Log likelihoods and sizes of the constrained PSDDs (the models we use in our system, indicated by the * symbol), unconstrained PSDDs, and the SPNs learnt in our three experiments.
Answering (Q2) here is more difficult, as the given utilities are not necessarily such that our decisions are linearly proportional to the expected utility of that decision. However, our strategy was chosen so as to maximise expected utility in the majority of cases. Thus, when comparing the given life expectancies with the learnt utility function, we still expect the same ordinality of utility values, even if not the same cardinality. In particular, our function assigns maximal utility (1.000) to the successful performing of a thoracotomy when the patient does not have mediastinal metastases (the optimal scenario), and any scenario in which the patient dies has markedly lower utility (mean value 0.134).
In attempting to answer (Q3) we divide our question into two parts: does the system attribute no blame in the correct cases, and does the system attribute more blame in the cases we would expect it to (and less in others)? Needless to say, it is very difficult (perhaps even impossible, at least without an extensive survey of human opinions) to produce an appropriate metric for how correct our attributions of blame are, but we suggest that these two criteria are the most fundamental and capture the core of what we want to evaluate. We successfully queried our model in a variety of settings corresponding to the two questions above and present representative examples below (we follow this same pattern in our second and third experiments). Regarding the first part of (Q3), one case in which we have blameworthiness scores of zero is when performing the action being judged is less likely to result in the outcome we are concerned with than the action(s) we are comparing it to. The chance of the patient dying in the diagnostic process ($\neg S_{DP}$) is increased if a mediastinoscopy (M) is performed, hence the blameworthiness for such a death due to not performing a mediastinoscopy should be zero. As expected, our model assigns $db_N(\neg M, M, \neg S_{DP}) = 0$. To answer the second part of (Q3), we show that the system produces higher blameworthiness scores when a negative outcome is more likely to occur (assuming the actions being compared have relatively similar costs). For example, in the case where the patient does not have mediastinal metastases, the best treatment is a thoracotomy, but a thoracotomy will not be performed if the result of the last diagnostic test performed is positive. The specificity of a mediastinoscopy is higher than that of a CT scan, hence a CT scan is more likely to produce a false positive and thus (assuming no mediastinoscopy is performed as a second check) lead to the wrong treatment. In the case where only one diagnostic procedure is performed we therefore have a higher degree of blame attributed to the decision to conduct a CT scan (0.013) as opposed to a mediastinoscopy (0.000), where we use N = 1.
Teamwork Management
Our second experiment uses a recently collected dataset of human decision-making in teamwork management [Yu et al., 2017]. This data was recorded from over 1000 participants as they played a game that simulates task allocation processes in a management environment. In each level of the game the player has different tasks to allocate to a group of virtual workers that have different attributes and capabilities. The tasks vary in difficulty, value, and time requirements, and the player gains feedback from the virtual workers as tasks are completed. At the end of the level the player receives a score based on the quality and timeliness of their work. Finally, the player is asked to record their emotional response to the result of the game in terms of scores corresponding to six basic emotions. We simplify matters slightly by considering only the self-declared management strategy of the player as our decisions. Within the game this is recorded by five check-boxes at the end of the level that are not mutually exclusive, giving 32 possible overall strategies. These strategy choices concern methods of task allocation such as load-balancing (keeping each worker's workload roughly even) and skill-based (assigning tasks by how likely the worker is to complete the task well and on time), amongst others. We also measure utility purely by the self-reported happiness of the player, rather than any other emotions. As part of our answer to (Q1) we investigate how often the model would employ each of the 32 possible strategies (where a strategy is represented by an assignment of values to the binary indicator decision variables) compared to the average participant (across all contexts), which can be seen in Figure 2. In general the learnt probabilities are similar to the actual proportions in the data, though noisier. The discrepancies are more noticeable (though understandably so) for decisions that were made very rarely, perhaps only once or twice in the entire dataset. These differences are also partly due to smoothing (i.e. all strategies have a nonzero probability of being played). For (Q2) we use the self-reported happiness scores to investigate our assumption that the number of times a decision is made is (linearly) proportional to the expected utility based on that decision. In order to do this we split the data up based on the context (game level) and produce a scatter plot (Figure 3) of the proportion of times a set of decisions is made against the average utility (happiness score) of that decision. Overall there is no obvious positive linear correlation as our original assumption would imply, although this could be because of any one or combination of the following reasons: players do not play enough rounds of the game to find out which strategies reliably lead to higher scores and thus (presumably) higher utilities; players do not accurately self-report their strategies; or players' strategies have relatively little impact on their overall utility based on the result of the game. We recall here that our assumption essentially comes down to supposing that people more often make decisions that result in greater utilities. The eminent plausibility of this statement, along with the relatively high likelihood of at least one of the factors in the list above means we do not have enough evidence here to refute the statement, although certainly further empirical work is required in order to demonstrate its truth.
Investigating this discrepancy further, we learnt a utility function (linear and context-relative) from the data and inspected the average weights given to the outcome variables (see right plot in Figure 4). A correct function should place higher weights on the outcome variables corresponding to higher ratings, which is true for timeliness, but not quite true for quality as the top rating is weighted only third highest. We found that the learnt utility weights are in fact almost identical to the distribution of the outcomes in the data (see left plot in Figure 4). Because our utility weights were learnt on the assumption that players more often use strategies that will lead to better expected outcomes, the similarity between these two graphs adds further weight to our suggestion that, in fact, the self-reported strategies of players have very little to do with the final outcome. To answer (Q3) we examine cases in which the blameworthiness score should be zero, and then compare cases that should have lower or higher scores with respect to one another. Once again, comprehensive descriptions of each of our tested queries are omitted for reasons of space, but here we present some representative examples. Firstly, we considered level 1 of the game by choosing an alternative distribution $\Pr'$ over contexts when generating our scores.
[Figure 4, left panel: Distribution of Outcomes]
Here a player is less likely to receive a low rating for quality ($Q_1$ or $Q_2$) if they employ a skill-based strategy where tasks are more frequently allocated to better workers (S). As expected, our system returns $db_N(S, \neg S, Q_1 \vee Q_2) = 0$. Secondly, we look at the timeliness outcomes. A player is less likely to obtain the top timeliness rating ($T_5$) if they do not use a strategy that uniformly allocates tasks (U) compared to their not using a random strategy of allocation (R). Accordingly, we find that $db_N(\neg U, \neg T_5) > db_N(\neg R, \neg T_5)$, and more specifically we have $db_N(\neg U, \neg T_5) = 0.002$ and $db_N(\neg R, \neg T_5) = 0$ (i.e. a player should avoid using a random strategy completely if they wish to obtain the top timeliness rating).
Trolley Problems
We also devised our own experimental setup with human participants, using a small-scale survey (the relevant documents and data are included in the package) to gather data about hypothetical moral decision-making scenarios. These scenarios took the form of variants on the infamous trolley problem [Thomson, 1985]. We extended this idea, as is not uncommon in the literature (see, e.g. [Moral Machine, 2016]), by introducing a series of different characters that might be on either track: one person, five people, 100 people, one's pet, one's best friend, and one's family. We also added two further decision options: pushing whoever is on the side track into the way of the train in order to save whoever is on the main track, and sacrificing oneself by jumping in front of the train, saving both characters in the process. The survey then took the form of asking each participant which of the four actions they would perform (the fourth being inaction) given each possible permutation of the six characters on the main and side tracks (we assume that a character could not appear on both tracks in the same scenario). The general setup can be seen in Figure 5, with locations A and B denoting the locations of the characters on the main track and side track respectively. Last of all, we added a probabilistic element (which was explained in advance to participants) to our scenarios whereby the switch only works with probability 0.6, and pushing the character at location B onto the main track in order to stop the train succeeds with probability 0.8. This was used to account for the fact that people are generally more averse to actively pushing someone than to flipping a switch [Singer, 2005], and people are certainly more averse to sacrificing themselves than doing either of the former. However, depending on how much one values the character on the main track's life, one might be prepared to perform a less desirable action in order to increase their chance of survival.
In answering (Q1) we investigate how well our model serves as a representation of the aggregated decision preferences of participants by calculating how likely the system would be to make particular decisions in each of the 30 contexts and comparing this with the average across participants in the survey. For reasons of space we focus here on a representative subset of these comparisons: namely, the five possible scenarios in which the best friend character is on the main track (see Figure 6). In general, the model's predictions are similar to the answers given in the survey, although the effect of smoothing our distribution during learning is noticeable, especially due to the fact that the model was learnt with relatively few data points. Despite this handicap, the most likely decision in any of the 30 contexts according to the model is in fact the majority decision in the survey, with the ranking of other decisions in each context also highly accurate. Unlike our other two experiments, the survey data does not explicitly contain any utility information, meaning our system was forced to learn a utility function by using the probability distribution encoded by the PSDD. Within the decision-making scenarios we presented, it is plausible that the decisions made by participants were guided by weights that they assigned to the lives of each of the six characters and to their own life. Given that each of these is captured by a particular outcome variable we chose to construct a utility function that was linear in said variables. We also chose to make the utility function insensitive to context, as we would not expect how much one values the life of a particular character to depend on which track that character was on, or whether they were on a track at all.
For (Q2), with no existing utility data against which to compare our learnt function, we interpreted the survival rates of each character as the approximate weight assigned to their lives by the participants. While the survival rate is a non-deterministic function of the decisions made in each context, we assume that over the experiment these rates average out enough for us to make a meaningful comparison with the weights learnt by our model. A visual representation of this comparison can be seen in Figure 7. It is immediately obvious that our system has captured the correct utility function to a high degree of accuracy. With that said, our assumption about using survival rates as a proxy for real utility weights does lend itself to favourable comparison with a utility function learnt from a probability distribution over contexts, decisions, and outcomes (which thus includes survival rates). Given the setup of the experiment, however, this assumption seems justified and, furthermore, to be in line with how most of the participants answered the survey.

Figure 7: A comparison between the average survival rates of the seven characters (including the participants in the survey), normalised to sum to one, and the corresponding utility function weights learnt by our system.
Because of the symmetric nature of the set of contexts in our experiment, the probability of a particular character surviving as a result of a particular action across all contexts is just the same as the probability of that character not surviving. Hence in answering (Q3) we use our system's feature of being able to accept particular distributions $\Pr'$ over the contexts in which we wish to attribute blame, allowing us to focus only on particular scenarios. Clearly, in any of the possible contexts one should not be blamed at all for the death of the character on the main track for flipping the switch (F) as opposed to inaction (I), because in the latter case they will die with certainty, but not in the former. Choosing a scenario arbitrarily to illustrate this point, with one person on the side track and five people on the main track, we have $db_N(F, I, \neg L_5) = 0$ and $db_N(F, \neg L_5) = 0.307$ (with our measure of cost importance N = 0.762, 1.1 times the negative minimum cost of any action). Now consider the scenario in which there is a large crowd of a hundred or so people on the main track, but one is unable to tell from a distance if the five or so people on the side track are strangers or one's family. Of course, the more likely it is that the family is on the side track, the more responsible one is for their deaths ($\neg L_{Fa}$) if one, say, flips the switch (F) to divert the train. Conversely, we would also expect there to be less blame for the deaths of the 100 people ($\neg L_{100}$), say, if one did nothing (I), the more likely it is that the family is on the side track (because the cost, for the participant at least, of somehow diverting the train is higher). We compare cases where there is a 0.3 probability that the family is on the side track against a 0.6 probability, and for all calculations use the cost importance measure N = 1. Therefore, not only would we expect the blame for the death of the family to be higher when pulling the switch in the latter case, we would expect the value to be approximately twice as high as in the former case. Accordingly, we compute values $db_N(F, \neg L_{Fa}) = 0.264$ and $db_N(F, \neg L_{Fa}) = 0.554$ respectively. Similarly, when considering blame for the deaths of the 100 people due to inaction, we find that $db_N(I, \neg L_{100}) = 0.153$ in the former case and that $db_N(I, \neg L_{100}) = 0.110$ in the latter case (when the cost of performing any other action is higher).
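As a quick sanity check of the first zero score above, note that with the stated switch success probability of 0.6, the five on the main track die with probability 0.4 under F but with certainty under I (the arithmetic below is ours, using the definitions from the embedding):

$$\delta_{F,I,\neg L_5} = \max\bigl(\Pr(\neg L_5 \mid do(F)) - \Pr(\neg L_5 \mid do(I)),\ 0\bigr) = \max(0.4 - 1,\ 0) = 0$$

so $db_N(F, I, \neg L_5) = \delta_{F,I,\neg L_5} \cdot (N - \max(c(I) - c(F), 0))/N = 0$ for any valid N, independently of the costs.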
CONCLUSION
Our system utilises the specification of decision-making scenarios in HK, and at the same time exploits many of the desirable properties of PSDDs (such as tractability, semantically meaningful parameters, and the ability to be both learnt from data and include logical constraints). The system is flexible in its usage, allowing various inputs and specifications. In general, the models in our experiments are accurate representations of the distributions over the moral scenarios that they are learnt from. Our learnt utility functions, while simple in nature, are still able to capture subtle details and in some scenarios are able to match human preferences with high accuracy using very little data. With these two elements we are able to generate blameworthiness scores that are, prima facie, in line with human intuitions. We hope that our work here goes some way towards bridging the gap between the existing philosophical work on moral responsibility and the existing technical work on decision-making in automated systems.

Table 3: A summary of the trolley problem data used in our third experiment.
No. data points: 7446
No. variables: 21
X variables: Level 1 ($L_1$), ..., Level 6 ($L_6$)
D variables: Other (O), Load-balancing (L), Uniform (U), Skill-based (S), Random (R)
O variables: Timeliness 1 ($T_1$), ..., Timeliness 5 ($T_5$), Quality 1 ($Q_1$), ..., Quality 5 ($Q_5$)
Constraints: $\bigvee_{i \in \{1,\ldots,6\}} L_i$; $L_i \to \neg \bigvee_{j \in \{1,\ldots,6\} \setminus \{i\}} L_j$ for all $i \in \{1, \ldots, 6\}$; $\bigvee_{i \in \{1,\ldots,5\}} T_i$; $T_i \to \neg \bigvee_{j \in \{1,\ldots,5\} \setminus \{i\}} T_j$ for all $i \in \{1, \ldots, 5\}$; $\bigvee_{i \in \{1,\ldots,5\}} Q_i$; $Q_i \to \neg \bigvee_{j \in \{1,\ldots,5\} \setminus \{i\}} Q_j$ for all $i \in \{1, \ldots, 5\}$
Model count: 4800
Utilities given?: Yes (self-reported happiness score)
| 7,390 |
1810.03736
|
2897082611
|
Moral responsibility is a major concern in automated decision-making, with applications ranging from self-driving cars to kidney exchanges. From the viewpoint of automated systems, the urgent questions are: (a) How can models of moral scenarios and blameworthiness be extracted and learnt automatically from data? (b) How can judgements be computed tractably, given the split-second decision points faced by the system? By building on deep tractable probabilistic learning, we propose a learning regime for inducing models of such scenarios automatically from data and reasoning tractably from them. We report on experiments that compare our system with human judgement in three illustrative domains: lung cancer staging, teamwork management, and trolley problems.
|
An important part of learning a model of moral decision-making is in learning a utility function. This is often referred to as inverse reinforcement learning (IRL) @cite_23 or inverse planning @cite_29. Our current implementation considers a simple approach for learning utilities (similar to @cite_12), but more involved paradigms such as those above could indeed have been used.
|
{
"abstract": [
"Humans are adept at inferring the mental states underlying other agents’ actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents’ behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent’s behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an ‘‘intentional stance” [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a ‘‘teleological stance” [Gergely, G., Nadasdy, Z., Csibra, G., & Biro, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165–193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.",
"When modeling a decision problem using the influence diagram framework, the quantitative part rests on two principal components: probabilities for representing the decision maker's uncertainty about the domain and utilities for representing preferences. Over the last decade, several methods have been developed for learning the probabilities from a database. However, methods for learning the utilities have only received limited attention in the computer science community.A promising approach for learning a decision maker's utility function is to take outset in the decision maker's observed behavioral patterns, and then find a utility function which (together with a domain model) can explain this behavior. That is, it is assumed that decision maker's preferences are reflected in the behavior. Standard learning algorithms also assume that the decision maker is behavioral consistent, i.e., given a model of the decision problem, there exists a utility function which can account for all the observed behavior. Unfortunately, this assumption is rarely valid in real-world decision problems, and in these situations existing learning methods may only identify a trivial utility function. In this paper we relax this consistency assumption, and propose two algorithms for learning a decision maker's utility function from possibly inconsistent behavior; inconsistent behavior is interpreted as random deviations from an underlying (true) utility function. The main difference between the two algorithms is that the first facilitates a form of batch learning whereas the second focuses on adaptation and is particularly well-suited for scenarios where the DM's preferences change over time. Empirical results demonstrate the tractability of the algorithms, and they also show that the algorithms converge toward the true utility function for even very small sets of observations.",
"Objective—To evaluate the pharmacokinetics of a novel commercial formulation of ivermectin after administration to goats. Animals—6 healthy adult goats. Procedure—Ivermectin (200 μg kg) was initially administered IV to each goat, and plasma samples were obtained for 36 days. After a washout period of 3 weeks, each goat received a novel commercial formulation of ivermectin (200 μg kg) by SC injection. Plasma samples were then obtained for 42 days. Drug concentrations were quantified by use of high-performance liquid chromatography with fluorescence detection. Results—Pharmacokinetics of ivermectin after IV administration were best described by a 2-compartment open model; values for main compartmental variables included volume of distribution at a steady state (9.94 L kg), clearance (1.54 L kg d), and area under the plasma concentration-time curve (AUC; 143 [ng•d] mL). Values for the noncompartmental variables included mean residence time (7.37 days), AUC (153 [ng•d] mL), and clearance (1.43 L kg d). After ..."
],
"cite_N": [
"@cite_29",
"@cite_12",
"@cite_23"
],
"mid": [
"2151516755",
"2109628722",
"2061562262"
]
}
|
Deep Tractable Probabilistic Models for Moral Responsibility
|
Moral responsibility is a major concern in automated decision-making. In applications ranging from self-driving cars to kidney exchanges [Conitzer et al., 2017], contextualising and enabling judgements of morality and blame is becoming a difficult challenge, owing in part to the philosophically vexing nature of these notions. In the infamous trolley problem [Thomson, 1985], for example, a putative agent encounters a runaway trolley headed towards five individuals who are unable to escape the trolley's path. Their death is certain if the trolley were to collide with them. The agent, however, can divert the trolley to a side track by means of a switch, but at the cost of the death of a sixth individual, who happens to be on this latter track. While one would hope that in practice the situations encountered by, say, self-driving cars would not involve such extreme choices, providing a decision-making framework with the capability of reasoning about blame seems prudent.
Moral reasoning has been actively studied by philosophers, lawyers, and psychologists for many decades. Especially when considering quantitative frameworks (see Footnote 1), a definition of responsibility that is based on causality has been argued to be particularly appealing [Chockler and Halpern, 2004]. But most of these definitions are motivated and instantiated by carefully constructed examples designed by the expert, and so are not necessarily viable in large-scale applications. Indeed, problematic situations encountered by automated systems are likely to be in a high-dimensional setting, with hundreds and thousands of latent variables capturing the low-level aspects of the application domain. Thus, the urgent questions are:
(a) How can models of moral scenarios and blameworthiness be extracted and learnt automatically from data?
(b) How can judgements be computed tractably, given the split-second decision points faced by the system?
In this work, we propose a learning regime for inducing models of moral scenarios and blameworthiness automatically from data, and reasoning tractably from them. To the best of our knowledge, this is the first of such proposals. The regime leverages the tractable learning paradigm [Poon and Domingos, 2011, Choi et al., 2015, Kisa et al., 2014], which can induce both high- and low-treewidth graphical models with latent variables, and thus realises a deep probabilistic architecture [Pronobis et al., 2017]. We remark that we do not motivate any new definitions for moral responsibility, but show how an existing model can be embedded in the learning framework. We suspect it should be possible to analogously embed other definitions from the literature too. We then study the computational features of this regime. Finally, we report on experiments regarding the alignment between automated morally-responsible decision-making and human judgement in three illustrative domains: lung cancer staging, teamwork management, and trolley problems.

Footnote 1: The quantitative nature of the framework used in this work implicitly takes a consequentialist stance when it comes to the normative ethical theory used to assess responsibility and blame, and we also rely on our utility functions being cardinal as opposed to merely ordinal. See, for example, [Sinnott-Armstrong, 2015] and [Strotz, 1953] for definitions and discussions on these stances.

We use the word blameworthiness to capture an important part of what can more broadly be described as moral responsibility, and consider a set of definitions (taken directly from the original work, with slight changes in notation for the sake of clarity and conciseness) put forward by [Halpern and Kleiman-Weiner, 2018] (henceforth HK). In HK, environments are modelled in terms of variables and structural equations relating their values [Halpern and Pearl, 2005]. More formally, the variables are partitioned into exogenous variables X external to the model in question, and endogenous variables V that are internal to the model and whose values are determined by those of the exogenous variables. A range function R maps every variable to the set of possible values it may take. In any model, there exists one structural equation
$F_V : \times_{Y \in \mathbf{X} \cup \mathbf{V} \setminus \{V\}} \mathcal{R}(Y) \to \mathcal{R}(V)$ for each $V \in \mathbf{V}$.

Definition 1. A causal model M is a pair $(\mathcal{S}, \mathcal{F})$ where $\mathcal{S}$ is a signature $(\mathbf{U}, \mathbf{V}, \mathcal{R})$ and $\mathcal{F}$ is a set of modifiable structural equations $\{F_V : V \in \mathbf{V}\}$. A causal setting is a pair $(M, \mathbf{X})$ where $\mathbf{X} \in \times_{X \in \mathbf{X}} \mathcal{R}(X)$ is a context.
In general we denote an assignment of values to variables in a set $\mathbf{Y}$ as $\mathbf{Y}$. Following HK, we restrict our considerations to recursive models, in which, given a context $\mathbf{X}$, the values of all variables in $\mathbf{V}$ are uniquely determined.

Definition 2. A primitive event is an equation of the form $V = v$ for some $V \in \mathbf{V}$, $v \in \mathcal{R}(V)$. A causal formula is denoted $[\mathbf{Y} \leftarrow \mathbf{Y}]\varphi$ where $\mathbf{Y} \subseteq \mathbf{V}$ and $\varphi$ is a Boolean formula of primitive events. This says that if the variables in $\mathbf{Y}$ were set to values $\mathbf{Y}$ (i.e. by intervention) then $\varphi$ would hold. For a causal formula $\psi$ we write $(M, \mathbf{X}) \models \psi$ if $\psi$ is satisfied in causal setting $(M, \mathbf{X})$.
An agent's epistemic state is given by $(\Pr, \mathcal{K}, U)$ where $\mathcal{K}$ is a set of causal settings, $\Pr$ is a probability distribution over this set, and $U : \mathcal{W} \to \mathbb{R}_{\geq 0}$ is a utility function on the set of worlds, where a world $w \in \mathcal{W}$ is defined as a setting of values to all variables in $\mathbf{V}$. $w_{M,\mathbf{X}}$ denotes the unique world determined by the causal setting $(M, \mathbf{X})$.

Definition 3. We define how much more likely it is that $\varphi$ will result from performing a than from $a'$ using:

$$\delta_{a,a',\varphi} = \max\left( \sum_{(M,\mathbf{X}) \in \llbracket [A \leftarrow a]\varphi \rrbracket} \Pr(M, \mathbf{X}) - \sum_{(M,\mathbf{X}) \in \llbracket [A \leftarrow a']\varphi \rrbracket} \Pr(M, \mathbf{X}),\ 0 \right)$$

where $A \in \mathbf{V}$ is a variable identified in order to capture an action of the agent and $\llbracket \psi \rrbracket = \{(M, \mathbf{X}) \in \mathcal{K} : (M, \mathbf{X}) \models \psi\}$ is the set of causal settings in which $\psi$ (a causal formula) is satisfied.
The costs of actions are measured with respect to a set of outcome variables $\mathbf{O} \subseteq \mathbf{V}$ whose values are determined by an assignment to all other variables. In a given causal setting $(M, \mathbf{X})$, $\mathbf{O}_{A \leftarrow a}$ denotes the setting of the outcome variables when action a is performed, and $w_{M, \mathbf{O} \leftarrow \mathbf{O}_{A \leftarrow a}, \mathbf{X}}$ denotes the corresponding world.

Definition 4. The (expected) cost of a relative to $\mathbf{O}$ is:

$$c(a) = \sum_{(M,\mathbf{X}) \in \mathcal{K}} \Pr(M, \mathbf{X}) \left( U(w_{M,\mathbf{X}}) - U(w_{M, \mathbf{O} \leftarrow \mathbf{O}_{A \leftarrow a}, \mathbf{X}}) \right)$$
Finally, HK introduce one last quantity N to measure how important the costs of actions are when attributing blame (this varies according to the scenario). Specifically, as $N \to \infty$ we have $db_N(a, a', \varphi) \to \delta_{a,a',\varphi}$, and thus the less we care about cost. Note that blame is assumed to be non-negative and so it is required that $N > \max_a c(a)$.
Definition 5. The degree of blameworthiness of a for $\varphi$ relative to $a'$ (given c and N) is:

$$db_N(a, a', \varphi) = \delta_{a,a',\varphi} \cdot \frac{N - \max(c(a') - c(a), 0)}{N}$$

The overall degree of blameworthiness of a for $\varphi$ is then:

$$db_N(a, \varphi) = \max_{a' \in \mathcal{R}(A) \setminus \{a\}} db_N(a, a', \varphi)$$
For reasons of space we omit an example here, but include several when reporting the results of our experiments. For further examples and discussions, we refer the reader to HK.
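For readers who prefer code to notation, the following is a minimal Python sketch of Definitions 3-5 over a small explicit set of causal settings; the dictionary-based representation and the `satisfies` oracle are our own simplifications, not part of HK's formulation.

```python
def delta(a, a_alt, phi, K, pr, satisfies):
    """Definition 3. `satisfies(s, action, phi)` checks (M, X) |= [A <- action]phi
    for the causal setting s = (M, X); `pr[s]` is the agent's probability of s."""
    p_a   = sum(pr[s] for s in K if satisfies(s, a, phi))
    p_alt = sum(pr[s] for s in K if satisfies(s, a_alt, phi))
    return max(p_a - p_alt, 0.0)

def cost(a, K, pr, utility, world, world_after):
    """Definition 4: expected utility lost by intervening with A <- a."""
    return sum(pr[s] * (utility(world(s)) - utility(world_after(s, a))) for s in K)

def db(a, a_alt, phi, K, pr, satisfies, costs, N):
    """Definition 5: degree of blameworthiness of a for phi relative to a_alt."""
    d = delta(a, a_alt, phi, K, pr, satisfies)
    return d * (N - max(costs[a_alt] - costs[a], 0.0)) / N

def db_overall(a, actions, phi, K, pr, satisfies, costs, N):
    """Overall blameworthiness: maximum over alternative actions."""
    return max(db(a, a2, phi, K, pr, satisfies, costs, N)
               for a2 in actions if a2 != a)
```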
PSDDs
Since, in general, probabilistic inference is intractable [Bacchus et al., 2009], tractable learning has emerged as a recent paradigm where one attempts to learn classes of Arithmetic Circuits (ACs) for which inference is tractable [Gens and Domingos, 2013, Kisa et al., 2014]. In particular, we use Probabilistic Sentential Decision Diagrams (PSDDs) [Kisa et al., 2014], which are tractable representations of a probability distribution over a propositional logic theory (a set of sentences in propositional logic) represented by a Sentential Decision Diagram (SDD). Space precludes us from discussing SDDs and PSDDs in detail, but the main idea behind SDDs is to factor the theory recursively as a binary tree: terminal nodes are either 1 or 0, and decision nodes are of the form $(p_1, s_1), \ldots, (p_k, s_k)$, where the primes $p_1, \ldots, p_k$ are SDDs corresponding to the left branch, the subs $s_1, \ldots, s_k$ are SDDs corresponding to the right branch, and $p_1, \ldots, p_k$ form a partition (the primes are consistent, mutually exclusive, and their disjunction $p_1 \vee \ldots \vee p_k$ is valid). In PSDDs, each prime $p_i$ in a decision node $(p_1, s_1), \ldots, (p_k, s_k)$ is associated with a non-negative parameter $\theta_i$ such that $\sum_{i=1}^{k} \theta_i = 1$ and $\theta_i = 0$ if and only if $s_i = \bot$. Each terminal node also has a parameter $\theta$ such that $0 < \theta < 1$, and together these parameters can be used to capture probability distributions. Most significantly, probabilistic queries, such as conditionals and marginals, can be computed in time linear in the size of the model. PSDDs can be learnt from data [Liang et al., 2017], possibly with the inclusion of logical constraints standing for background knowledge. The ability to encode logical constraints into the model directly enforces sparsity, which in turn can lead to increased accuracy and decreased size. In our setting, we can draw parallels between these logical constraints and deontological ethical principles (e.g. it is forbidden to kill another human being), and between learnt distributions over decision-making scenarios (which can encode preferences) and the utility functions used in consequentialist ethical theories.
BLAMEWORTHINESS VIA PSDDS
We aim to leverage the learning of PSDDs, their tractable query interface, and their ability to handle domain constraints for inducing models of moral scenarios (see Footnote 2). This is made possible by means of an embedding that we sketch below, while also discussing assumptions and choices. At the outset, we reiterate that we do not introduce new definitions here, but show how an existing one, that of HK, can be embedded within a learning regime.
Variables
We first distinguish between scenarios in which we do and do not model outcome variables. In both cases we have exogenous variables X, but in the former the endogenous variables V are partitioned into decision variables D and outcome variables O, and in the latter we have V = D = O (this does not affect the notation in our definitions, however). This is because we do not assume that outcomes can always be recorded, and in some scenarios it makes sense to think of decisions as an end in themselves.
The range function R is defined by the scenario we model, but in practice we one-hot encode the variables and so the range of each is simply {0, 1}. A subset (possibly empty) of the structural equations in F are implicitly encoded within the structure of the SDD underlying the PSDD, consisting of the logical constraints that remain true in every causal model M. The remaining equations are those that vary depending on the causal model. Each possible assignment D, O given X corresponds to a set of structural equations that combine with those encoded by the SDD to determine the values of the variables in V given X. The PSDD then corresponds to the probability distribution over K, compacting everything neatly into a single structure.

Footnote 2: Our technical development can leverage both parameter and (possibly partial) structure learning for PSDDs. Of course, learning causal models is a challenging problem [Acharya et al., 2018], and in this regard, probabilistic structure learning is not assumed to be a recipe for causal discovery in general [Pearl, 1998]. Rather, under the assumptions discussed later, we are able to use our probabilistic model for causal reasoning.
Our critical assumption here is that the signature S = (U, V, R) (the variables and the values they may take) remains the same in all models, although the structural equations F (the ways in which said variables are related) may vary. Given that each model represents an agent's uncertain view of a decision-making scenario we do not think it too restrictive to keep the elements of this scenario the same across the potential eventualities, so long as the way these elements interact may differ. Indeed, learning PSDDs from decision-making data requires that the data points measure the same variables each time.
Probabilities
Thus, our distribution $\Pr : \times_{Y \in \mathbf{X} \cup \mathbf{D} \cup \mathbf{O}} \mathcal{R}(Y) \to [0, 1]$ ranges over assignments to variables instead of K. As a slight abuse of notation we write $\Pr(\mathbf{X}, \mathbf{D}, \mathbf{O})$. The key observation needed to translate between these two distributions (we denote the original as $\Pr_{HK}$), and which relies on our assumption above, is that each set of structural equations $\mathcal{F}$ together with a context $\mathbf{X}$ deterministically leads to a unique, complete assignment $\mathbf{V}$ of the endogenous variables, which we write (abusing notation slightly) as $(\mathcal{F}, \mathbf{X}) \models \mathbf{V}$, though there may be many such sets of equations that lead to the same assignment. Hence, for any context $\mathbf{X}$ and any assignment $\mathbf{Y}$ for $\mathbf{Y} \subseteq \mathbf{V}$ we have:
$$\Pr(\mathbf{X}, \mathbf{Y}) = \sum_{\mathcal{F} : (\mathcal{F}, \mathbf{X}) \models \mathbf{Y}} \Pr_{HK}(\mathcal{F}, \mathbf{X})$$
We view a Boolean formula of primitive events (possibly resulting from decision A) as a function $\varphi : \times_{Y \in \mathbf{O} \cup \mathbf{D} \setminus \{A\}} \mathcal{R}(Y) \to \{0, 1\}$ that returns 1 if the original formula is satisfied by the assignment, or 0 otherwise. We write $\mathbf{D}_{\setminus a}$ for a general vector of values over $\mathbf{D} \setminus \{A\}$, and hence $\varphi(\mathbf{D}_{\setminus a}, \mathbf{O})$. Here, the probability of $\varphi$ occurring given that action a is performed (i.e. conditioning on intervention), given by HK as $\sum_{(M,\mathbf{X}) \in \llbracket [A \leftarrow a]\varphi \rrbracket} \Pr(M, \mathbf{X})$, can also be written as $\Pr(\varphi \mid do(a))$. In general, it is not the case that $\Pr(\varphi \mid do(a)) = \Pr(\varphi \mid a)$, but by assuming that the direct causes of action a are captured by the context $\mathbf{X}$, and that the other decisions and outcomes $\mathbf{D}_{\setminus a}$ and $\mathbf{O}$ are in turn caused by $\mathbf{X}$ and a, we may use the back-door criterion [Pearl, 2009] with $\mathbf{X}$ as a sufficient set to write:
$$\Pr(\mathbf{D}_{\setminus a}, \mathbf{O} \mid do(a)) = \sum_{\mathbf{X}} \Pr(\mathbf{D}_{\setminus a}, \mathbf{O} \mid a, \mathbf{X}) \Pr(\mathbf{X})$$
and thus may use $\sum_{\mathbf{D}_{\setminus a}, \mathbf{O}, \mathbf{X}} \varphi(\mathbf{D}_{\setminus a}, \mathbf{O}) \Pr(\mathbf{D}_{\setminus a}, \mathbf{O} \mid a, \mathbf{X}) \Pr(\mathbf{X})$ for $\Pr(\varphi \mid do(a))$. In order not to re-learn a separate model for each scenario, we also allow the user of our system the option of specifying a current, alternative distribution over contexts $\Pr'(\mathbf{X})$.
Utilities
We now consider the utility function U, the output of which we assume is normalised to the range [0, 1]. We avoid unnecessary extra notation by defining the utility function in terms of $\mathbf{X}$, $\mathbf{D}$, and $\mathbf{O} = (O_1, \ldots, O_n)$ instead of worlds w. In our implementation we allow the user to input an existing utility function or to learn one from data. In the latter case the user further specifies whether or not the function should be context-relative, i.e. whether we have $U(\mathbf{O})$ or $U(\mathbf{O}; \mathbf{X})$ (our notation) as, in some cases, how good a certain outcome $\mathbf{O}$ is depends on the context $\mathbf{X}$. Similarly, the user also decides whether the function should be linear in the outcome variables, in which case the final utility is $U(\mathbf{O}) = \sum_i U_i(O_i)$ or $U(\mathbf{O}; \mathbf{X}) = \sum_i U_i(O_i; \mathbf{X})$ respectively (where we assume that each $U_i(O_i; \mathbf{X}), U_i(O_i) \geq 0$).
Here the utility function is simply a vector of weights and the total utility of an outcome is the dot product of this vector with the vector of outcome variables.
When learning utility functions, the key assumption we make (before normalisation) is that the probability of a certain decision being made given a context is linearly proportional to the expected utility of that decision in the context. Note that here a decision is a general assignment D and not a single action a. For example, in the case where there are outcome variables, and the utility function is both linear and context-relative, we assume that
$$\Pr(\mathbf{D} \mid \mathbf{X}) \propto \sum_i U_i(O_i; \mathbf{X}) \Pr(O_i \mid \mathbf{D}, \mathbf{X})$$

The linearity of this relationship is neither critical to our work nor imposes any real restrictions on it, but simplifies our calculations somewhat and means that we do not have to make any further assumptions about the noisiness of the decision-making scenario, or how sophisticated the agent is with respect to making utility-maximising decisions. The existence of a proportionality relationship itself is far more important. However, we believe this is, in fact, relatively uncontroversial and can be restated as the simple principle that an agent is more likely to choose a decision that leads to a higher expected utility than one that leads to a lower expected utility. If we view decisions as guided by a utility function, then it follows that the decisions should, on average, be consistent with and representative of that utility function.
Costs and Blameworthiness
We also adapt the cost function given in HK, denoted here by $c_{HK}$. As actions do not deterministically lead to outcomes in our work, we cannot use $\mathbf{O}_{A \leftarrow a}$ to represent the specific outcome when decision a is made (in some context). For our purposes it suffices to use $c(a) = -\sum_{\mathbf{O}, \mathbf{X}} U(\mathbf{O}; \mathbf{X}) \Pr(\mathbf{O} \mid a, \mathbf{X}) \Pr(\mathbf{X})$ or $c(a) = -\sum_{\mathbf{O}, \mathbf{X}} U(\mathbf{O}) \Pr(\mathbf{O} \mid a, \mathbf{X}) \Pr(\mathbf{X})$, depending on whether U is context-relative or not. This is simply the negative expected utility over all contexts, conditioning by intervention on decision $A \leftarrow a$. Using our conversion between $\Pr_{HK}$ and $\Pr$, the back-door criterion [Pearl, 2009], and our assumption that action a is not caused by the other endogenous variables (i.e. $\mathbf{X}$ is a sufficient set for A), it is straightforward to show that this cost function is equivalent to the one in HK (with respect to determining blameworthiness scores). Again, we also give the user the option of updating the distribution over contexts $\Pr(\mathbf{X})$ to some other distribution $\Pr'(\mathbf{X})$ so that the current model can be re-used in different scenarios. Given $\delta_{a,a',\varphi}$ and c, both $db_N(a, a', \varphi)$ and $db_N(a, \varphi)$ are computed as in HK, although we instead require that $N > -\min_a c(a)$ (the equivalence of this condition to the one in HK is an easy exercise). With this the embedding is complete.
COMPLEXITY RESULTS
Given our concerns over tractability we provide several computational complexity results for our embedding. Basic results were given in [Halpern and Kleiman-Weiner, 2018], but only in terms of the computations being polynomial in |M|, |K|, and |R(A)|. Here we provide more detailed results that are specific to our embedding and to the properties of PSDDs. The complexity of calculating blameworthiness scores depends on whether the user specifies an alternative distribution $\Pr'$, although in practice this is unlikely to have a major effect on tractability. Finally, note that we assume here that the PSDD and utility function are given in advance and so we do not consider the computational cost of learning. A summary of our results is given in Table 1. We observe that all of the final time complexities are exponential in the size of at least some subset of the variables. This is a result of the Boolean representation; our results are, in fact, more tightly bounded versions of those in HK, which are polynomial in the size of $|\mathcal{K}| = O(2^{|\mathbf{X}|+|\mathbf{D}|+|\mathbf{O}|})$. In practice, however, we only sum over worlds with non-zero probability of occurring. Using PSDDs allows us to exploit this fact in ways that other models cannot, as we can logically constrain the model to have zero probability on any impossible world. Thus, when calculating blameworthiness we can ignore a great many of the terms in each sum and speed up computation dramatically. To give some concrete examples, the model counts of the PSDDs in our experiments were 52, 4800, and 180 out of $2^{12}$, $2^{21}$, and $2^{23}$ possible variable assignments, respectively.
Table 1: Time complexity of each term in our embedding.

Term: Time Complexity
$\delta_{a,a',\varphi}$: $O(2^{|\mathbf{X}|+|\mathbf{D}|+|\mathbf{O}|}(|\varphi| + |\mathcal{P}|))$
$c(a)$: $O(2^{|\mathbf{X}|+|\mathbf{O}|}(U + |\mathcal{P}|))$
$db_N(a, a', \varphi)$: $O(2^{|\mathbf{X}|+|\mathbf{O}|}(U + 2^{|\mathbf{D}|}(|\varphi| + |\mathcal{P}|)))$
$db_N(a, \varphi)$: $O(|\mathcal{R}(A)| \, 2^{|\mathbf{X}|+|\mathbf{O}|}(U + 2^{|\mathbf{D}|}(|\varphi| + |\mathcal{P}|)))$
IMPLEMENTATION
The underlying motivation behind our system was that a user should be able to go from any stage of creating a model to generating blameworthiness scores as conveniently and as straightforwardly as possible. With this in mind our package runs from the command line and prompts the user for a series of inputs including: data; existing PSDDs, SDDs, or vtrees; logical constraints; utility function specifications; variable descriptions; and finally the decisions, outcomes, and other details needed to compute a particular blameworthiness score. These inputs and any outputs from the system are saved and thus each model and its results can be easily accessed and re-used if needed. Note that we assume each datum is a sequence of fully observed values for binary (possibly as a result of one-hot encoding) variables that correspond to the context, the decisions made, and the resulting outcome, if recorded.
Our implementation makes use of two existing resources: [The SDD Package 2.0, 2018], an open-source system for creating and managing SDDs, including compiling them from logical constraints; and LearnPSDD [Liang et al., 2017], a recently developed set of algorithms that can be used to learn the parameters and structure of PSDDs from data, learn vtrees from data, and convert SDDs into PSDDs. The resulting functionalities of our system can be broken down into four broad areas:
• Building and managing models, including converting logical constraints specified by the user in simple infix notation to restrictions upon the learnt model. For example, (A ∧ B) ↔ C can be entered as a command line prompt using = (&(A,B),C).
• Performing inference by evaluating the model or by calculating the MPE, both possibly given partial evidence. Each of our inference algorithms is linear in the size of the model, and they are based on pseudocode given in [Kisa et al., 2014] and [Peharz et al., 2017] respectively.
• Learning utility functions from data, whose properties (such as being linear or being context-relative) are specified by the user in advance. This is done by forming a matrix equation representing our assumed proportionality relationship across all decisions and contexts, then solving for the utilities using non-negative linear regression with L2 regularisation (equivalent to solving a quadratic program); a sketch of this step is given after this list.
• Computing blameworthiness by efficiently calculating the key quantities from our embedding, using parameters from particular queries given by the user. Results are then displayed in natural language and automatically saved for future reference.
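The following Python sketch illustrates the utility-learning step above under our stated assumptions; the matrix and target values are illustrative, and we realise the L2-regularised non-negative regression by the standard trick of augmenting an ordinary non-negative least squares problem (here via scipy.optimize.nnls).

    import numpy as np
    from scipy.optimize import nnls

    # Rows: Pr(O_i | D, X) for each (context, decision) pair;
    # targets: Pr(D | X). All numbers below are illustrative.
    M = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.4]])
    p = np.array([0.7, 0.3, 0.5])

    lam = 0.1  # L2 regularisation strength
    # min ||Mu - p||^2 + lam ||u||^2 subject to u >= 0, via system augmentation.
    M_aug = np.vstack([M, np.sqrt(lam) * np.eye(M.shape[1])])
    p_aug = np.concatenate([p, np.zeros(M.shape[1])])

    u, residual = nnls(M_aug, p_aug)   # non-negative utility weights
    u = u / u.sum()                    # normalise the learnt weights
    print(u)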
A high-level overview of the complete structure of the system and full documentation are included in a package, which will be made available online.
EXPERIMENTS AND RESULTS
Using our implementation we learnt several models using a selection of datasets from varying domains in order to test our hypotheses.
In particular we answer three questions in each case:
(Q1) Does our system learn the correct overall probability distribution?
(Q2) Does our system capture the correct utility function?
(Q3) Does our system produce reasonable blameworthiness scores?
Full datasets are available as part of the package and summaries of each (including the domain constraints underlying our datasets) are given in the appendix.
Lung Cancer Staging
We use a synthetic dataset generated with the lung cancer staging influence diagram given in [Nease Jr and Owens, 1997]. The data was generated assuming that the overall decision strategy recommended in the original paper is followed with some high probability at each decision point. In this strategy, a thoracotomy is the usual treatment unless the patient has mediastinal metastases, in which case a thoracotomy will not result in greater life expectancy than the lower-risk option of radiation therapy, which is then the preferred treatment.
The first decision made is whether a CT scan should be performed to test for mediastinal metastases; the second is whether to perform a mediastinoscopy. If the CT scan results are positive for mediastinal metastases then a mediastinoscopy is usually recommended in order to provide a second check, but if the CT scan result is negative then a mediastinoscopy is not seen as worth the extra risk involved in the operation. Possible outcomes are determined by variables that indicate whether the patient survives the diagnostic procedure and survives the treatment, and utility is measured by life expectancy.
For (Q1) we measure the overall log likelihood of the models learnt by our system on training, validation, and test datasets (see Table 2). A full comparison across a range of similar models and learning techniques is beyond the scope of our work here, although to provide some evidence of the competitiveness of PSDDs we include the log likelihood scores of a sum-product network (SPN) as a benchmark.
We follow a similar pattern in our remaining experiments, each time using Tachyon [Kalra, 2017] (an open-source library for SPNs) to produce an SPN using the same training, validation, and test sets of our data, with the standard learning parameters as given in the Tachyon documentation example. We also compare the sizes (measured by the number of nodes) and the log likelihoods of PSDDs learnt with and without logical constraints in order to demonstrate the effectiveness of the former approach. Our model is able to recover the artificial decision-making strategy well (see Figure 1); at most points of the staging procedure the model learns a very similar distribution over decisions, and in all cases the correct decision is made the majority of times.

Table 2: Log likelihoods and sizes of the constrained PSDDs (the models we use in our system, indicated by the * symbol), unconstrained PSDDs, and the SPNs learnt in our three experiments.
Answering (Q2) here is more difficult, as the given utilities do not necessarily satisfy our assumption that decisions are made with probability linearly proportional to their expected utility. However, our strategy was chosen so as to maximise expected utility in the majority of cases. Thus, when comparing the given life expectancies with the learnt utility function, we still expect the same ordering of utility values, even if not the same magnitudes. In particular, our function assigns maximal utility (1.000) to the successful performing of a thoracotomy when the patient does not have mediastinal metastases (the optimal scenario), and any scenario in which the patient dies has markedly lower utility (mean value 0.134).
In attempting to answer (Q3) we divide our question into two parts: does the system attribute no blame in the correct cases, and does the system attribute more blame in the cases we would expect it to (and less in others)? Needless to say, it is very difficult (perhaps even impossible, at least without an extensive survey of human opinions) to produce an appropriate metric for how correct our attributions of blame are, but we suggest that these two criteria are the most fundamental and capture the core of what we want to evaluate. We successfully queried our model in a variety of settings corresponding to the two questions above and present representative examples below (we follow this same pattern in our second and third experiments). Regarding the first part of (Q3), one case in which we have blameworthiness scores of zero is when performing the action being judged is less likely to result in the outcome we are concerned with than the action(s) we are comparing it to. The chance of the patient dying in the diagnostic process ($\neg S_{DP}$) is increased if a mediastinoscopy ($M$) is performed, hence the blameworthiness for such a death due to not performing a mediastinoscopy should be zero. As expected, our model assigns $db_N(\neg M, M, \neg S_{DP}) = 0$. To answer the second part of (Q3), we show that the system produces higher blameworthiness scores when a negative outcome is more likely to occur (assuming the actions being compared have relatively similar costs). For example, in the case where the patient does not have mediastinal metastases the best treatment is a thoracotomy, but a thoracotomy will not be performed if the result of the last diagnostic test performed is positive. The specificity of a mediastinoscopy is higher than that of a CT scan, hence a CT scan is more likely to produce a false positive and thus (assuming no mediastinoscopy is performed as a second check) lead to the wrong treatment. In the case where only one diagnostic procedure is performed we therefore have a higher degree of blame attributed to the decision to conduct a CT scan (0.013) as opposed to a mediastinoscopy (0.000), where we use $N = 1$.
Teamwork Management
Our second experiment uses a recently collected dataset of human decision-making in teamwork management [Yu et al., 2017]. This data was recorded from over 1000 participants as they played a game that simulates task allocation processes in a management environment. In each level of the game the player has different tasks to allocate to a group of virtual workers that have different attributes and capabilities. The tasks vary in difficulty, value, and time requirements, and the player gains feedback from the virtual workers as tasks are completed. At the end of the level the player receives a score based on the quality and timeliness of their work. Finally, the player is asked to record their emotional response to the result of the game in terms of scores corresponding to six basic emotions. We simplify matters slightly by considering only the self-declared management strategy of the player as our decisions. Within the game this is recorded by five check-boxes at the end of the level that are not mutually exclusive, giving 32 possible overall strategies. These strategy choices concern methods of task allocation such as load-balancing (keeping each worker's workload roughly even) and skill-based allocation (assigning tasks by how likely the worker is to complete the task well and on time), amongst others. We also measure utility purely by the self-reported happiness of the player, rather than by any other emotions.

As part of our answer to (Q1) we investigate how often the model would employ each of the 32 possible strategies (where a strategy is represented by an assignment of values to the binary indicator decision variables) compared to the average participant (across all contexts), which can be seen in Figure 2. In general the learnt probabilities are similar to the actual proportions in the data, though noisier. The discrepancies are more noticeable (though understandably so) for decisions that were made very rarely, perhaps only once or twice in the entire dataset. These differences are also partly due to smoothing (i.e. all strategies have a nonzero probability of being played).

For (Q2) we use the self-reported happiness scores to investigate our assumption that the number of times a decision is made is (linearly) proportional to the expected utility based on that decision. In order to do this we split the data up based on the context (game level) and produce a scatter plot (Figure 3) of the proportion of times a set of decisions is made against the average utility (happiness score) of that decision. Overall there is no obvious positive linear correlation as our original assumption would imply, although this could be because of any one or combination of the following reasons: players do not play enough rounds of the game to find out which strategies reliably lead to higher scores and thus (presumably) higher utilities; players do not accurately self-report their strategies; or players' strategies have relatively little impact on their overall utility based on the result of the game. We recall here that our assumption essentially comes down to supposing that people more often make decisions that result in greater utilities. The evident plausibility of this statement, along with the relatively high likelihood of at least one of the factors in the list above, means we do not have enough evidence here to refute the statement, although further empirical work is certainly required in order to demonstrate its truth.
Investigating this discrepancy further, we learnt a utility function (linear and context-relative) from the data and inspected the average weights given to the outcome variables (see right plot in Figure 4). A correct function should place higher weights on the outcome variables corresponding to higher ratings, which is true for timeliness, but not quite true for quality, as the top rating is weighted only third highest. We found that the learnt utility weights are in fact almost identical to the distribution of the outcomes in the data (see left plot in Figure 4). Because our utility weights were learnt on the assumption that players more often use strategies that will lead to better expected outcomes, the similarity between these two graphs adds further weight to our suggestion that the self-reported strategies of players in fact have very little to do with the final outcome.

To answer (Q3) we examine cases in which the blameworthiness score should be zero, and then compare cases that should have lower or higher scores with respect to one another. Once again, comprehensive descriptions of each of our tested queries are omitted for reasons of space, but here we present some representative examples. Firstly, we considered level 1 of the game by choosing an alternative distribution $\Pr'$ over contexts when generating our scores.
Here a player is less likely to receive a low rating for quality ($Q_1$ or $Q_2$) if they employ a skill-based strategy where tasks are more frequently allocated to better workers ($S$). As expected, our system returns $db_N(S, \neg S, Q_1 \vee Q_2) = 0$. Secondly, we look at the timeliness outcomes. A player is less likely to obtain the top timeliness rating ($T_5$) if they do not use a strategy that uniformly allocates tasks ($U$) compared to their not using a random strategy of allocation ($R$). Accordingly, we find that $db_N(\neg U, \neg T_5) > db_N(\neg R, \neg T_5)$; more specifically we have $db_N(\neg U, \neg T_5) = 0.002$ and $db_N(\neg R, \neg T_5) = 0$ (i.e. a player should avoid using a random strategy completely if they wish to obtain the top timeliness rating).
Trolley Problems
We also devised our own experimental setup with human participants, using a small-scale survey (the relevant documents and data are included in the package) to gather data about hypothetical moral decision-making scenarios. These scenarios took the form of variants on the infamous trolley problem [Thomson, 1985]. We extended this idea, as is not uncommon in the literature (see, e.g. [Moral Machine, 2016]), by introducing a series of different characters that might be on either track: one person, five people, 100 people, one's pet, one's best friend, and one's family. We also added two further decision options: pushing whoever is on the side track into the way of the train in order to save whoever is on the main track, and sacrificing oneself by jumping in front of the train, saving both characters in the process. The survey then took the form of asking each participant which of the four actions they would perform (the fourth being inaction) given each possible permutation of the six characters on the main and side tracks (we assume that a character could not appear on both tracks in the same scenario). The general setup can be seen in Figure 5, with locations A and B denoting the locations of the characters on the main track and side track respectively.

Last of all, we added a probabilistic element (which was explained in advance to participants) to our scenarios, whereby the switch only works with probability 0.6 and pushing the character at location B onto the main track in order to stop the train succeeds with probability 0.8. This was used to account for the fact that people are generally more averse to actively pushing someone than to flipping a switch [Singer, 2005], and people are certainly more averse to sacrificing themselves than doing either of the former. However, depending on how much one values the life of the character on the main track, one might be prepared to perform a less desirable action in order to increase that character's chance of survival.
In answering (Q1) we investigate how well our model serves as a representation of the aggregated decision preferences of participants by calculating how likely the system would be to make particular decisions in each of the 30 contexts and comparing this with the average across participants in the survey. For reasons of space we focus here on a representative subset of these comparisons: namely, the five possible scenarios in which the best friend character is on the main track (see Figure 6). In general, the model's predictions are similar to the answers given in the survey, although the effect of smoothing our distribution during learning is noticeable, especially as the model was learnt from relatively few data points. Despite this handicap, the most likely decision in any of the 30 contexts according to the model is in fact the majority decision in the survey, and the ranking of the other decisions in each context is also highly accurate.

Unlike our other two experiments, the survey data does not explicitly contain any utility information, meaning our system was forced to learn a utility function by using the probability distribution encoded by the PSDD. Within the decision-making scenarios we presented, it is plausible that the decisions made by participants were guided by weights that they assigned to the lives of each of the six characters and to their own life. Given that each of these is captured by a particular outcome variable, we chose to construct a utility function that was linear in said variables. We also chose to make the utility function insensitive to context, as we would not expect how much one values the life of a particular character to depend on which track that character was on, or whether they were on a track at all.
For (Q2), with no existing utility data against which to compare our learnt function, we interpreted the survival rates of each character as the approximate weight assigned to their lives by the participants. While the survival rate is a non-deterministic function of the decisions made in each context, we assume that over the experiment these rates average out enough for us to make a meaningful comparison with the weights learnt by our model. A visual representation of this comparison can be seen in Figure 7. It is immediately obvious that our system has captured the correct utility function to a high degree of accuracy. With that said, our assumption about using survival rates as a proxy for real utility weights does lend itself to favourable comparison with a utility function learnt from a probability distribution over contexts, decisions, and outcomes (which thus includes survival rates). Given the setup of the experiment, however, this assumption seems justified and, furthermore, in line with how most of the participants answered the survey.

Figure 7: A comparison between the average survival rates of the seven characters (including the participants in the survey), normalised to sum to one, and the corresponding utility function weights learnt by our system.
Because of the symmetric nature of the set of contexts in our experiment, the probability of a particular character surviving as a result of a particular action across all contexts is just the same as the probability of that character not surviving. Hence in answering (Q3) we use our system's feature of being able to accept particular distributions $\Pr'$ over the contexts in which we wish to attribute blame, allowing us to focus only on particular scenarios. Clearly, in any of the possible contexts one should not be blamed at all for the death of the character on the main track for flipping the switch ($F$) as opposed to inaction ($I$), because in the latter case they will die with certainty, but not in the former. Choosing a scenario arbitrarily to illustrate this point, with one person on the side track and five people on the main track, we have $db_N(F, I, \neg L_5) = 0$ and $db_N(F, \neg L_5) = 0.307$ (with our measure of cost importance $N = 0.762$, which is 1.1 times the negative minimum cost of any action).

Now consider the scenario in which there is a large crowd of a hundred or so people on the main track, but one is unable to tell from a distance whether the five or so people on the side track are strangers or one's family. Of course, the more likely it is that the family is on the side track, the more responsible one is for their deaths ($\neg L_{Fa}$) if one, say, flips the switch ($F$) to divert the train. Conversely, we would also expect there to be less blame for the deaths of the 100 people ($\neg L_{100}$), say, if one did nothing ($I$), the more likely it is that the family is on the side track (because the cost, for the participant at least, of somehow diverting the train is higher). We compare cases where there is a 0.3 probability that the family is on the side track against a 0.6 probability, and for all calculations use the cost importance measure $N = 1$. Therefore, not only would we expect the blame for the death of the family to be higher when pulling the switch in the latter case, we would expect the value to be approximately twice as high as in the former case. Accordingly, we compute $db_N(F, \neg L_{Fa}) = 0.264$ in the former case and $db_N(F, \neg L_{Fa}) = 0.554$ in the latter. Similarly, when considering blame for the deaths of the 100 people due to inaction, we find that $db_N(I, \neg L_{100}) = 0.153$ in the former case and $db_N(I, \neg L_{100}) = 0.110$ in the latter case (when the cost of performing any other action is higher).
CONCLUSION
Our system utilises the specification of decision-making scenarios in HK, and at the same time exploits many of the desirable properties of PSDDs (such as tractability, semantically meaningful parameters, and the ability both to be learnt from data and to include logical constraints). The system is flexible in its usage, allowing various inputs and specifications. In general, the models in our experiments are accurate representations of the distributions over the moral scenarios that they are learnt from. Our learnt utility functions, while simple in nature, are still able to capture subtle details, and in some scenarios are able to match human preferences with high accuracy using very little data. With these two elements we are able to generate blameworthiness scores that are, prima facie, in line with human intuitions. We hope that our work here goes some way towards bridging the gap between the existing philosophical work on moral responsibility and the existing technical work on decision-making in automated systems.

Table 3: A summary of the teamwork management data used in our second experiment.
No. data points | 7446
No. variables | 21
X variables | Level 1 ($L_1$), ..., Level 6 ($L_6$)
D variables | Other ($O$), Load-balancing ($L$), Uniform ($U$), Skill-based ($S$), Random ($R$)
O variables | Timeliness 1 ($T_1$), ..., Timeliness 5 ($T_5$), Quality 1 ($Q_1$), ..., Quality 5 ($Q_5$)
Constraints | $\bigvee_{i \in \{1,...,6\}} L_i$; $L_i \rightarrow \neg\bigvee_{j \in \{1,...,6\} \setminus \{i\}} L_j$ for all $i \in \{1,...,6\}$; $\bigvee_{i \in \{1,...,5\}} T_i$; $T_i \rightarrow \neg\bigvee_{j \in \{1,...,5\} \setminus \{i\}} T_j$ for all $i \in \{1,...,5\}$; $\bigvee_{i \in \{1,...,5\}} Q_i$; $Q_i \rightarrow \neg\bigvee_{j \in \{1,...,5\} \setminus \{i\}} Q_j$ for all $i \in \{1,...,5\}$ (i.e. exactly one level, one timeliness rating, and one quality rating hold)
Model count | 4800
Utilities given? | Yes (self-reported happiness score)
| 7,390 |
1810.03736
|
2897082611
|
Moral responsibility is a major concern in automated decision-making, with applications ranging from self-driving cars to kidney exchanges. From the viewpoint of automated systems, the urgent questions are: (a) How can models of moral scenarios and blameworthiness be extracted and learnt automatically from data? (b) How can judgements be computed tractably, given the split-second decision points faced by the system? By building on deep tractable probabilistic learning, we propose a learning regime for inducing models of such scenarios automatically from data and reasoning tractably from them. We report on experiments that compare our system with human judgement in three illustrative domains: lung cancer staging, teamwork management, and trolley problems.
|
Our contributions here are related to the body of work surrounding MIT's Moral Machine website @cite_3 . For example, @cite_8 build on the theory of @cite_30 by developing a computational model of moral decision-making whose predictions they test against Moral Machine data. Their focus is on learning abstract moral principles via hierarchical Bayesian inference; although our framework can be used to these ends, it is also flexible with respect to different contexts, and allows constraints on learnt models. @cite_13 develop a method of aggregating the preferences of all participants (again, a secondary feature of our system) in order to make a given decision. However, due to the large number of such preference orderings, tractability issues arise and so sampling must be used.
|
{
"abstract": [
"We introduce a computational framework for understanding the structure and dynamics of moral learning, with a focus on how people learn to trade off the interests and welfare of different individuals in their social groups and the larger society. We posit a minimal set of cognitive capacities that together can solve this learning problem: (1) an abstract and recursive utility calculus to quantitatively represent welfare trade-offs; (2) hierarchical Bayesian inference to understand the actions and judgments of others; and (3) meta-values for learning by value alignment both externally to the values of others and internally to make moral theories consistent with one’s own attachments and feelings. Our model explains how children can build from sparse noisy observations of how a small set of individuals make moral decisions to a broad moral competence, able to support an infinite range of judgments and decisions that generalizes even to people they have never met and situations they have not been in or observed. It also provides insight into the causes and dynamics of moral change across time, including cases when moral change can be rapidly progressive, changing values significantly in just a few generations, and cases when it is likely to move more slowly.",
"We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societ al preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.",
"",
"We introduce a new computational model of moral decision making, drawing on a recent theory of commonsense moral learning via social dynamics. Our model describes moral dilemmas as a utility function that computes trade-offs in values over abstract moral dimensions, which provide interpretable parameter values when implemented in machine-led ethical decision-making. Moreover, characterizing the social structures of individuals and groups as a hierarchical Bayesian model, we show that a useful description of an individual's moral values - as well as a group's shared values - can be inferred from a limited amount of observed data. Finally, we apply and evaluate our approach to data from the Moral Machine, a web application that collects human judgments on moral dilemmas involving autonomous vehicles."
],
"cite_N": [
"@cite_30",
"@cite_13",
"@cite_3",
"@cite_8"
],
"mid": [
"2601920874",
"2758041078",
"",
"2783136805"
]
}
|
Deep Tractable Probabilistic Models for Moral Responsibility
|
Moral responsibility is a major concern in automated decision-making. In applications ranging from self-driving cars to kidney exchanges [Conitzer et al., 2017], contextualising and enabling judgements of morality and blame is becoming a difficult challenge, owing in part to the philosophically vexing nature of these notions. In the infamous trolley problem [Thomson, 1985], for example, a putative agent encounters a runaway trolley headed towards five individuals who are unable to escape the trolley's path. Their death is certain if the trolley collides with them. The agent, however, can divert the trolley to a side track by means of a switch, but at the cost of the death of a sixth individual, who happens to be on this latter track. While one would hope that in practice the situations encountered by, say, self-driving cars would not involve such extreme choices, providing a decision-making framework with the capability of reasoning about blame seems prudent.
Moral reasoning has been actively studied by philosophers, lawyers, and psychologists for many decades. Especially when considering quantitative frameworks, a definition of responsibility that is based on causality has been argued to be particularly appealing [Chockler and Halpern, 2004]. But most of these definitions are motivated and instantiated by carefully constructed examples designed by the expert, and so are not necessarily viable in large-scale applications. Indeed, problematic situations encountered by automated systems are likely to arise in high-dimensional settings, with hundreds or thousands of latent variables capturing the low-level aspects of the application domain. Thus, the urgent questions are:
(a) How can models of moral scenarios and blameworthiness be extracted and learnt automatically from data?
(b) How can judgements be computed tractably, given the split-second decision points faced by the system?
In this work, we propose a learning regime for inducing models of moral scenarios and blameworthiness automatically from data, and for reasoning tractably from them. To the best of our knowledge, this is the first such proposal. The regime leverages the tractable learning paradigm [Poon and Domingos, 2011, Choi et al., 2015, Kisa et al., 2014], which can induce both high- and low-treewidth graphical models with latent variables, and thus realises a deep probabilistic architecture [Pronobis et al., 2017]. We remark that we do not motivate any new definitions for moral responsibility, but show how an existing model can be embedded in the learning framework. We suspect it should be possible to analogously embed other definitions from the literature too. We then study the computational features of this regime. Finally, we report on experiments regarding the alignment between automated morally-responsible decision-making and human judgement in three illustrative domains: lung cancer staging, teamwork management, and trolley problems.

(The quantitative nature of the framework used in this work implicitly takes a consequentialist stance when it comes to the normative ethical theory used to assess responsibility and blame, and we also rely on our utility functions being cardinal as opposed to merely ordinal. See, for example, [Sinnott-Armstrong, 2015] and [Strotz, 1953] for definitions and discussions of these stances.)

We use the word blameworthiness to capture an important part of what can more broadly be described as moral responsibility, and consider a set of definitions (taken directly from the original work, with slight changes in notation for the sake of clarity and conciseness) put forward by [Halpern and Kleiman-Weiner, 2018] (henceforth HK). In HK, environments are modelled in terms of variables and structural equations relating their values [Halpern and Pearl, 2005]. More formally, the variables are partitioned into exogenous variables $X$ external to the model in question, and endogenous variables $V$ that are internal to the model and whose values are determined by those of the exogenous variables. A range function $R$ maps every variable to the set of possible values it may take. In any model, there exists one structural equation
$F_V : \times_{Y \in X \cup V \setminus \{V\}} R(Y) \to R(V)$ for each $V \in V$. Definition 1. A causal model $M$ is a pair $(S, F)$ where $S$ is a signature $(X, V, R)$ and $F$ is a set of modifiable structural equations $\{F_V : V \in V\}$. A causal setting is a pair $(M, X)$ where $X \in \times_{X \in X} R(X)$ is a context.
In general we denote an assignment of values to the variables in a set $Y$ as $Y$. Following HK, we restrict our considerations to recursive models, in which, given a context $X$, the values of all variables in $V$ are uniquely determined. Definition 2. A primitive event is an equation of the form $V = v$ for some $V \in V$, $v \in R(V)$. A causal formula is denoted $[Y \leftarrow Y]\phi$ where $Y \subseteq V$ and $\phi$ is a Boolean formula of primitive events. This says that if the variables in $Y$ were set to values $Y$ (i.e. by intervention) then $\phi$ would hold. For a causal formula $\psi$ we write $(M, X) \models \psi$ if $\psi$ is satisfied in the causal setting $(M, X)$.
An agent's epistemic state is given by $(\Pr, K, U)$ where $K$ is a set of causal settings, $\Pr$ is a probability distribution over this set, and $U$ is a utility function $U : W \to \mathbb{R}_{\geq 0}$ on the set of worlds, where a world $w \in W$ is defined as a setting of values to all variables in $V$. $w_{M,X}$ denotes the unique world determined by the causal setting $(M, X)$. Definition 3. We define how much more likely it is that $\phi$ will result from performing $a$ than from $a'$ using:

$\delta_{a,a',\phi} = \max\left( \sum_{(M,X) \in \llbracket [A \leftarrow a]\phi \rrbracket} \Pr(M, X) - \sum_{(M,X) \in \llbracket [A \leftarrow a']\phi \rrbracket} \Pr(M, X),\ 0 \right)$
where $A \in V$ is a variable identified in order to capture an action of the agent and $\llbracket \psi \rrbracket = \{(M, X) \in K : (M, X) \models \psi\}$ is the set of causal settings in which $\psi$ (a causal formula) is satisfied.
The costs of actions are measured with respect to a set of outcome variables $O \subseteq V$ whose values are determined by an assignment to all other variables. In a given causal setting $(M, X)$, $O_{A \leftarrow a}$ denotes the setting of the outcome variables when action $a$ is performed, and $w_{M, O \leftarrow O_{A \leftarrow a}, X}$ denotes the corresponding world.
Definition 4. The (expected) cost of $a$ relative to $O$ is:

$c(a) = \sum_{(M,X) \in K} \Pr(M, X) \left( U(w_{M,X}) - U(w_{M, O \leftarrow O_{A \leftarrow a}, X}) \right)$
Finally, HK introduce one last quantity $N$ to measure how important the costs of actions are when attributing blame (this varies according to the scenario). Specifically, as $N \to \infty$ we have $db_N(a, a', \phi) \to \delta_{a,a',\phi}$, and thus the larger $N$ is, the less we care about cost. Note that blame is assumed to be non-negative and so it is required that $N > \max_{a \in R(A)} c(a)$.
Definition 5. The degree of blameworthiness of $a$ for $\phi$ relative to $a'$ (given $c$ and $N$) is:

$db_N(a, a', \phi) = \delta_{a,a',\phi} \cdot \frac{N - \max(c(a') - c(a), 0)}{N}$

The overall degree of blameworthiness of $a$ for $\phi$ is then:

$db_N(a, \phi) = \max_{a' \in R(A) \setminus \{a\}} db_N(a, a', \phi)$
For reasons of space we do not reproduce HK's examples here, but include several of our own when reporting the results of our experiments; for further examples and discussions, we refer the reader to HK. A small computational illustration of these definitions follows.
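As a concrete illustration of Definitions 3-5, the following self-contained Python sketch computes $\delta$, $c$, and $db_N$ for a toy epistemic state with two causal settings; all names and numbers are our own illustrative choices, not drawn from HK.

    # Each causal setting maps an action to the world (outcome assignment)
    # it determines; Pr is the agent's distribution over settings.
    settings = {
        "k1": {"flip": {"dies": False}, "noop": {"dies": True}},
        "k2": {"flip": {"dies": True},  "noop": {"dies": True}},
    }
    Pr = {"k1": 0.6, "k2": 0.4}

    def utility(world):
        # Toy utility: worlds with a death are worth 0, others 1.
        return 0.0 if world["dies"] else 1.0

    def prob_outcome(action, phi):
        # Total probability of settings in which performing `action`
        # satisfies phi.
        return sum(Pr[k] for k, w in settings.items() if phi(w[action]))

    def delta(a, a2, phi):
        # Def. 3: how much more likely phi is under a than under a2.
        return max(prob_outcome(a, phi) - prob_outcome(a2, phi), 0.0)

    def cost(a):
        # Def. 4: expected utility difference between each setting's
        # reference world and the world where a is performed (here we take
        # "noop" as the reference world, an illustrative simplification).
        return sum(Pr[k] * (utility(settings[k]["noop"]) -
                            utility(settings[k][a]))
                   for k in settings)

    def db(a, a2, phi, N):
        # Def. 5: degree of blameworthiness of a for phi relative to a2;
        # N must exceed max_a cost(a).
        return delta(a, a2, phi) * (N - max(cost(a2) - cost(a), 0.0)) / N

    dies = lambda w: w["dies"]
    print(delta("noop", "flip", dies))    # 0.6: inaction makes death likelier
    print(db("noop", "flip", dies, N=1))  # 0.6: blame for death by inaction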
PSDDs
Since, in general, probabilistic inference is intractable [Bacchus et al., 2009], tractable learning has emerged as a recent paradigm where one attempts to learn classes of Arithmetic Circuits (ACs), for which inference is tractable [Gens and Pedro, 2013, Kisa et al., 2014]. In particular, we use Probabilistic Sentential Decision Diagrams (PSDDs) [Kisa et al., 2014], which are tractable representations of a probability distribution over a propositional logic theory (a set of sentences in propositional logic) represented by a Sentential Decision Diagram (SDD). Space precludes us from discussing SDDs and PSDDs in detail, but the main idea behind SDDs is to factor the theory recursively as a binary tree: terminal nodes are either 1 or 0, and the decision nodes are of the form $(p_1, s_1), \ldots, (p_k, s_k)$, where the primes $p_1, \ldots, p_k$ are SDDs corresponding to the left branch, the subs $s_1, \ldots, s_k$ are SDDs corresponding to the right branch, and $p_1, \ldots, p_k$ form a partition (the primes are consistent, mutually exclusive, and their disjunction $p_1 \vee \ldots \vee p_k$ is valid). In PSDDs, each prime $p_i$ in a decision node $(p_1, s_1), \ldots, (p_k, s_k)$ is associated with a non-negative parameter $\theta_i$ such that $\sum_{i=1}^{k} \theta_i = 1$ and $\theta_i = 0$ if and only if $s_i = \bot$. Each terminal node also has a parameter $\theta$ such that $0 < \theta < 1$, and together these parameters can be used to capture probability distributions. Most significantly, probabilistic queries, such as conditionals and marginals, can be computed in time linear in the size of the model. PSDDs can be learnt from data [Liang et al., 2017], possibly with the inclusion of logical constraints standing for background knowledge. The ability to encode logical constraints into the model directly enforces sparsity, which in turn can lead to increased accuracy and decreased size. In our setting, we can draw parallels between these logical constraints and deontological ethical principles (e.g. it is forbidden to kill another human being), and between learnt distributions over decision-making scenarios (which can encode preferences) and the utility functions used in consequentialist ethical theories.
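To make the linear-time query interface concrete, here is a minimal Python sketch of bottom-up PSDD evaluation on a complete assignment; the dictionary-based node encoding is a simplified stand-in of our own, not the representation used by any particular PSDD package.

    def evaluate(node, assignment):
        # One bottom-up pass: each node is visited a constant number of
        # times, so the query is linear in the size of the circuit.
        kind = node["kind"]
        if kind == "literal":                 # X or not-X: probability 1 or 0
            return 1.0 if assignment[node["var"]] == node["value"] else 0.0
        if kind == "terminal":                # Bernoulli over one variable
            theta = node["theta"]             # Pr(var = 1), with 0 < theta < 1
            return theta if assignment[node["var"]] else 1.0 - theta
        # Decision node: elements (prime, sub, theta) whose primes form a
        # partition, so at most one element contributes a nonzero term.
        return sum(theta * evaluate(p, assignment) * evaluate(s, assignment)
                   for p, s, theta in node["elements"])

    # Toy PSDD over A, B encoding Pr(A = 1) = 0.3 and the logical constraint
    # A <-> B (worlds violating it get probability zero).
    psdd = {"kind": "decision", "elements": [
        ({"kind": "literal", "var": "A", "value": 1},
         {"kind": "literal", "var": "B", "value": 1}, 0.3),
        ({"kind": "literal", "var": "A", "value": 0},
         {"kind": "literal", "var": "B", "value": 0}, 0.7),
    ]}
    print(evaluate(psdd, {"A": 1, "B": 1}))   # 0.3
    print(evaluate(psdd, {"A": 1, "B": 0}))   # 0.0 (violates the constraint)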
BLAMEWORTHINESS VIA PSDDS
We aim to leverage the learning of PSDDs, their tractable query interface, and their ability to handle domain constraints for inducing models of moral scenarios. This is made possible by means of an embedding that we sketch below, while also discussing assumptions and choices. At the outset, we reiterate that we do not introduce new definitions here, but show how an existing one, that of HK, can be embedded within a learning regime.
Variables
We first distinguish between scenarios in which we do and do not model outcome variables. In both cases we have exogenous variables X, but in the former the endogenous variables V are partitioned into decision variables D and outcome variables O, and in the latter we have V = D = O (this does not affect the notation in our definitions, however). This is because we do not assume that outcomes can always be recorded, and in some scenarios it makes sense to think of decisions as an end in themselves.
The range function $R$ is defined by the scenario we model, but in practice we one-hot encode the variables and so the range of each is simply $\{0, 1\}$. A subset (possibly empty) of the structural equations in $F$ is implicitly encoded within the structure of the SDD underlying the PSDD, consisting of the logical constraints that remain true in every causal model $M$. The remaining equations are those that vary depending on the causal model. Each possible assignment $D, O$ given $X$ corresponds to a set of structural equations that combine with those encoded by the SDD to determine the values of the variables in $V$ given $X$. The PSDD then corresponds to the probability distribution over $K$, compacting everything neatly into a single structure.

(Our technical development can leverage both parameter and (possibly partial) structure learning for PSDDs. Of course, learning causal models is a challenging problem [Acharya et al., 2018], and in this regard, probabilistic structure learning is not assumed to be a recipe for causal discovery in general [Pearl, 1998]. Rather, under the assumptions discussed later, we are able to use our probabilistic model for causal reasoning.)
Our critical assumption here is that the signature $S = (X, V, R)$ (the variables and the values they may take) remains the same in all models, although the structural equations $F$ (the ways in which said variables are related) may vary. Given that each model represents an agent's uncertain view of a decision-making scenario, we do not think it too restrictive to keep the elements of this scenario the same across the potential eventualities, so long as the ways these elements interact may still differ. Indeed, learning PSDDs from decision-making data requires that the data points measure the same variables each time.
Probabilities
Thus, our distribution $\Pr : \times_{Y \in X \cup D \cup O} R(Y) \to [0, 1]$ ranges over assignments to variables instead of over $K$. As a slight abuse of notation we write $\Pr(X, D, O)$. The key observation needed to translate between these two distributions (we denote the original as $\Pr_{HK}$), and which relies on our assumption above, is that each set of structural equations $F$ together with a context $X$ deterministically leads to a unique, complete assignment $V$ of the endogenous variables, which we write (abusing notation slightly) as $(F, X) \models V$, though there may be many such sets of equations that lead to the same assignment. Hence, for any context $X$ and any assignment $Y$ for $Y \subseteq V$ we have:

$\Pr(X, Y) = \sum_{F : (F, X) \models Y} \Pr_{HK}(F, X)$
We view a Boolean formula of primitive events (possibly resulting from decision $A$) as a function $\phi : \times_{Y \in O \cup D \setminus \{A\}} R(Y) \to \{0, 1\}$ that returns 1 if the original formula is satisfied by the assignment, and 0 otherwise. We write $D_{\setminus a}$ for a general vector of values over $D \setminus \{A\}$, and hence $\phi(D_{\setminus a}, O)$. Here, the probability of $\phi$ occurring given that action $a$ is performed (i.e. conditioning on intervention), given in HK by $\sum_{(M,X) \in \llbracket [A \leftarrow a]\phi \rrbracket} \Pr(M, X)$, can also be written as $\Pr(\phi \mid do(a))$. In general, it is not the case that $\Pr(\phi \mid do(a)) = \Pr(\phi \mid a)$, but by assuming that the direct causes of action $a$ are captured by the context $X$, and that the other decisions and outcomes $D_{\setminus a}$ and $O$ are in turn caused by $X$ and $a$, we may use the back-door criterion [Pearl, 2009] with $X$ as a sufficient set to write:

$\Pr(D_{\setminus a}, O \mid do(a)) = \sum_{X} \Pr(D_{\setminus a}, O \mid a, X) \Pr(X)$

and thus may use $\sum_{D_{\setminus a}, O, X} \phi(D_{\setminus a}, O) \Pr(D_{\setminus a}, O \mid a, X) \Pr(X)$ for $\Pr(\phi \mid do(a))$. In order not to re-learn a separate model for each scenario, we also allow the user of our system the option of specifying a current, alternative distribution over contexts $\Pr'(X)$.
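The computation above can be illustrated concretely. In the following Python sketch we hand-code a toy joint distribution (a stand-in for the one encoded by a PSDD) and evaluate $\Pr(\phi \mid do(a))$ by the back-door adjustment; all variables and numbers are illustrative.

    from itertools import product

    # Toy joint Pr(X, A, O) over a binary context X, action A, and outcome O:
    # A tends to follow X, and O tends to follow A.
    joint = {}
    for X, A, O in product([0, 1], repeat=3):
        pX = 0.5
        pA = 0.8 if A == X else 0.2
        pO = 0.9 if O == A else 0.1
        joint[(X, A, O)] = pX * pA * pO

    def pr(pred):
        return sum(p for xao, p in joint.items() if pred(*xao))

    def pr_phi_do(a, phi):
        # Back-door adjustment with X as a sufficient set:
        # sum_X Pr(phi | a, X) Pr(X).
        total = 0.0
        for x in [0, 1]:
            pX = pr(lambda X, A, O: X == x)
            paX = pr(lambda X, A, O: X == x and A == a)
            p_phi_aX = pr(lambda X, A, O: X == x and A == a and phi(O))
            if paX > 0:
                total += (p_phi_aX / paX) * pX
        return total

    phi = lambda O: O == 1
    print(pr_phi_do(1, phi))   # Pr(O=1 | do(A=1)) = 0.9
    # In this toy case Pr(O=1 | A=1) happens to agree, since O depends on X
    # only through A; with a direct X -> O effect the two would differ.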
Utilities
We now consider the utility function $U$, the output of which we assume is normalised to the range $[0, 1]$. We avoid unnecessary extra notation by defining the utility function in terms of $X$, $D$, and $O = (O_1, \ldots, O_n)$ instead of worlds $w$. In our implementation we allow the user to input an existing utility function or to learn one from data. In the latter case the user further specifies whether or not the function should be context-relative, i.e. whether we have $U(O)$ or $U(O; X)$ (our notation), as in some cases how good a certain outcome $O$ is depends on the context $X$. Similarly, the user also decides whether the function should be linear in the outcome variables, in which case the final utility is $U(O) = \sum_i U_i(O_i)$ or $U(O; X) = \sum_i U_i(O_i; X)$ respectively (where we assume that each $U_i(O_i; X), U_i(O_i) \geq 0$).
Here the utility function is simply a vector of weights and the total utility of an outcome is the dot product of this vector with the vector of outcome variables.
When learning utility functions, the key assumption we make (before normalisation) is that the probability of a certain decision being made given a context is linearly proportional to the expected utility of that decision in the context. Note that here a decision is a general assignment D and not a single action a. For example, in the case where there are outcome variables, and the utility function is both linear and context-relative, we assume that
$\Pr(D \mid X) \propto \sum_i U_i(O_i; X) \Pr(O_i \mid D, X)$.

The linearity of this relationship is neither critical to our work nor imposes any real restrictions on it, but it simplifies our calculations somewhat and means that we do not have to make any further assumptions about the noisiness of the decision-making scenario, or about how sophisticated the agent is with respect to making utility-maximising decisions. The existence of a proportionality relationship itself is far more important. However, we believe this is, in fact, relatively uncontroversial and can be restated as the simple principle that an agent is more likely to choose a decision that leads to a higher expected utility than one that leads to a lower expected utility. If we view decisions as guided by a utility function, then it follows that the decisions should, on average, be consistent with and representative of that utility function.
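Concretely, for a fixed context $X$, stacking this assumed proportionality over the possible decisions $D^{(1)}, \ldots, D^{(m)}$ (the superscript indexing is our own notation) yields the matrix equation solved in our implementation, with the proportionality constant absorbed into the normalisation of the weights:

$$
\begin{pmatrix} \Pr(D^{(1)} \mid X) \\ \vdots \\ \Pr(D^{(m)} \mid X) \end{pmatrix}
\approx
\begin{pmatrix}
\Pr(O_1 \mid D^{(1)}, X) & \cdots & \Pr(O_n \mid D^{(1)}, X) \\
\vdots & & \vdots \\
\Pr(O_1 \mid D^{(m)}, X) & \cdots & \Pr(O_n \mid D^{(m)}, X)
\end{pmatrix}
\begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix}
$$

with unknown weights $u_i = U_i(O_i; X) \geq 0$; when the function is not context-relative, the rows for every context are stacked into a single system.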
Costs and Blameworthiness
We also adapt the cost function given in HK, denoted here by $c_{HK}$. As actions do not deterministically lead to outcomes in our work, we cannot use $O_{A \leftarrow a}$ to represent the specific outcome when decision $a$ is made (in some context). For our purposes it suffices to use $c(a) = -\sum_{O,X} U(O; X) \Pr(O \mid a, X) \Pr(X)$ or $c(a) = -\sum_{O,X} U(O) \Pr(O \mid a, X) \Pr(X)$, depending on whether $U$ is context-relative or not. This is simply the negative expected utility over all contexts, conditioning by intervention on decision $A \leftarrow a$. Using our conversion between $\Pr_{HK}$ and $\Pr$, the back-door criterion [Pearl, 2009], and our assumption that action $a$ is not caused by the other endogenous variables (i.e. $X$ is a sufficient set for $A$), it is straightforward to show that this cost function is equivalent to the one in HK (with respect to determining blameworthiness scores). Again, we also give the user the option of updating the distribution over contexts $\Pr(X)$ to some other distribution $\Pr'(X)$ so that the current model can be re-used in different scenarios. Given $\delta_{a,a',\phi}$ and $c$, both $db_N(a, a', \phi)$ and $db_N(a, \phi)$ are computed as in HK, although we instead require that $N > -\min_{a \in R(A)} c(a)$ (the equivalence of this condition to the one in HK is an easy exercise). With this the embedding is complete. A sketch of this cost computation follows.
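To illustrate, the following minimal Python sketch evaluates this cost function on a toy model; the probabilities, utilities, and action names are our own illustrative stand-ins, not values from any of our experiments.

    # Toy sketch of c(a) = -sum_{O,X} U(O; X) Pr(O | a, X) Pr(X): the negative
    # expected utility of intervening with A <- a, averaged over contexts.
    contexts = {"x0": 0.7, "x1": 0.3}            # Pr(X), illustrative values
    pr_O_given = {                               # Pr(O = 1 | a, X), illustrative
        ("a", "x0"): 0.9, ("a", "x1"): 0.4,
        ("a'", "x0"): 0.2, ("a'", "x1"): 0.1,
    }

    def U(o, x):
        # Context-relative utility of outcome O = o (toy values in [0, 1]).
        return 1.0 if o == 1 else 0.25

    def cost(a):
        # Negative expected utility of performing a, over outcomes and contexts.
        return -sum(
            (U(1, x) * pr_O_given[(a, x)]
             + U(0, x) * (1 - pr_O_given[(a, x)])) * pX
            for x, pX in contexts.items()
        )

    print(cost("a"), cost("a'"))   # lower (more negative) cost = better action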
COMPLEXITY RESULTS
Given our concerns over tractability we provide several computational complexity results for our embedding. Basic results were given in [Halpern and Kleiman-Weiner, 2018], but only in terms of the computations being polynomial in $|M|$, $|K|$, and $|R(A)|$. Here we provide more detailed results that are specific to our embedding and to the properties of PSDDs. The complexity of calculating blameworthiness scores depends on whether the user specifies an alternative distribution $\Pr'$, although in practice this is unlikely to have a major effect on tractability. Finally, note that we assume here that the PSDD and utility function are given in advance and so we do not consider the computational cost of learning. A summary of our results is given in Table 1. We observe that all of the final time complexities are exponential in the size of at least some subset of the variables. This is a result of the Boolean representation; our results are, in fact, more tightly bounded versions of those in HK, which are polynomial in the size of $|K| = O(2^{|X|+|D|+|O|})$. In practice, however, we only sum over worlds with non-zero probability of occurring. Using PSDDs allows us to exploit this fact in ways that other models cannot, as we can logically constrain the model to have zero probability on any impossible world. Thus, when calculating blameworthiness we can ignore a great many of the terms in each sum and speed up computation dramatically (a short illustration follows Table 1). To give some concrete examples, the model counts of the PSDDs in our experiments were 52, 4800, and 180 out of $2^{12}$, $2^{21}$, and $2^{23}$ possible variable assignments, respectively.
Table 1: Time complexities for the key quantities in our embedding.

Term | Time Complexity
$\delta_{a,a',\phi}$ | $O(2^{|X|+|D|+|O|}(|\phi| + |P|))$
$c(a)$ | $O(2^{|X|+|O|}(U + |P|))$
$db_N(a, a', \phi)$ | $O(2^{|X|+|O|}(U + 2^{|D|}(|\phi| + |P|)))$
$db_N(a, \phi)$ | $O(|R(A)| \cdot 2^{|X|+|O|}(U + 2^{|D|}(|\phi| + |P|)))$
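The following toy Python snippet illustrates this pruning effect: enumerating only the assignments that satisfy the logical constraints (the PSDD's models) shrinks the summation dramatically. The constraint used here is an illustrative one-hot restriction, not one from our experiments.

    # Why constrained models speed up the blameworthiness sums: only worlds
    # satisfying the logical theory carry nonzero probability, so the rest
    # can be skipped outright.
    from itertools import product

    n_vars = 12  # cf. our first experiment, with 2**12 = 4096 candidate worlds

    def satisfies_constraints(world):
        # Stand-in for the SDD's logical theory; here, a toy constraint
        # saying exactly one of the first three variables is true.
        return sum(world[:3]) == 1

    worlds = [w for w in product([0, 1], repeat=n_vars)
              if satisfies_constraints(w)]
    print(len(worlds), "of", 2 ** n_vars)  # the sums range over far fewer terms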
IMPLEMENTATION
The underlying motivation behind our system was that a user should be able to go from any stage of creating a model to generating blameworthiness scores as conveniently and as straightforwardly as possible. With this in mind our package runs from the command line and prompts the user for a series of inputs including: data; existing PSDDs, SDDs, or vtrees; logical constraints; utility function specifications; variable descriptions; and finally the decisions, outcomes, and other details needed to compute a particular blameworthiness score. These inputs and any outputs from the system are saved, so each model and its results can easily be accessed and re-used as needed. Note that we assume each datum is a sequence of fully observed values for binary variables (binary possibly as a result of one-hot encoding) that correspond to the context, the decisions made, and the resulting outcome, if recorded.
Our implementation makes use of two existing resources: [The SDD Package 2.0, 2018], an open-source system for creating and managing SDDs, including compiling them from logical constraints; and LearnPSDD [Liang et al., 2017], a recently developed set of algorithms that can be used to learn the parameters and structure of PSDDs from data, learn vtrees from data, and convert SDDs into PSDDs. The resulting functionalities of our system can be broken down into four broad areas:
• Building and managing models, including converting logical constraints specified by the user in simple infix notation to restrictions upon the learnt model. For example, (A ∧ B) ↔ C can be entered as a command line prompt using = (&(A,B),C).
• Performing inference by evaluating the model or by calculating the MPE, both possibly given partial evidence. Each of our inference algorithms is linear in the size of the model, and they are based on pseudocode given in [Kisa et al., 2014] and [Peharz et al., 2017] respectively.
• Learning utility functions from data, whose properties (such as being linear or being context-relative) are specified by the user in advance. This is done by forming a matrix equation representing our assumed proportionality relationship across all decisions and contexts, then solving for the utilities using non-negative linear regression with L2 regularisation (equivalent to solving a quadratic program); a sketch of this step is given after this list.
• Computing blameworthiness by efficiently calculating the key quantities from our embedding, using parameters from particular queries given by the user. Results are then displayed in natural language and automatically saved for future reference.
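The following Python sketch illustrates the utility-learning step above under our stated assumptions; the matrix and target values are illustrative, and we realise the L2-regularised non-negative regression by the standard trick of augmenting an ordinary non-negative least squares problem (here via scipy.optimize.nnls).

    import numpy as np
    from scipy.optimize import nnls

    # Rows: Pr(O_i | D, X) for each (context, decision) pair;
    # targets: Pr(D | X). All numbers below are illustrative.
    M = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.4]])
    p = np.array([0.7, 0.3, 0.5])

    lam = 0.1  # L2 regularisation strength
    # min ||Mu - p||^2 + lam ||u||^2 subject to u >= 0, via system augmentation.
    M_aug = np.vstack([M, np.sqrt(lam) * np.eye(M.shape[1])])
    p_aug = np.concatenate([p, np.zeros(M.shape[1])])

    u, residual = nnls(M_aug, p_aug)   # non-negative utility weights
    u = u / u.sum()                    # normalise the learnt weights
    print(u)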
A high-level overview of the complete structure of the system and full documentation are included in a package, which will be made available online.
EXPERIMENTS AND RESULTS
Using our implementation we learnt several models using a selection of datasets from varying domains in order to test our hypotheses.
In particular we answer three questions in each case:
(Q1) Does our system learn the correct overall probability distribution?
(Q2) Does our system capture the correct utility function?
(Q3) Does our system produce reasonable blameworthiness scores?
Full datasets are available as part of the package and summaries of each (including the domain constraints underlying our datasets) are given in the appendix.
Lung Cancer Staging
We use a synthetic dataset generated with the lung cancer staging influence diagram given in [Nease Jr and Owens, 1997]. The data was generated assuming that the overall decision strategy recommended in the original paper is followed with some high probability at each decision point. In this strategy, a thoracotomy is the usual treatment unless the patient has mediastinal metastases, in which case a thoracotomy will not result in greater life expectancy than the lower-risk option of radiation therapy, which is then the preferred treatment.
The first decision made is whether a CT scan should be performed to test for mediastinal metastases; the second is whether to perform a mediastinoscopy. If the CT scan results are positive for mediastinal metastases then a mediastinoscopy is usually recommended in order to provide a second check, but if the CT scan result is negative then a mediastinoscopy is not seen as worth the extra risk involved in the operation. Possible outcomes are determined by variables that indicate whether the patient survives the diagnostic procedure and survives the treatment, and utility is measured by life expectancy.
For (Q1) we measure the overall log likelihood of the models learnt by our system on training, validation, and test datasets (see Table 2). A full comparison across a range of similar models and learning techniques is beyond the scope of our work here, although to provide some evidence of the competitiveness of PSDDs we include the log likelihood scores of a sum-product network (SPN) as a benchmark.
We follow a similar pattern in our remaining experiments, each time using Tachyon [Kalra, 2017] (an open-source library for SPNs) to produce an SPN using the same training, validation, and test sets of our data, with the standard learning parameters as given in the Tachyon documentation example. We also compare the sizes (measured by the number of nodes) and the log likelihoods of PSDDs learnt with and without logical constraints in order to demonstrate the effectiveness of the former approach. Our model is able to recover the artificial decision-making strategy well (see Figure 1); at most points of the staging procedure the model learns a very similar distribution over decisions, and in all cases the correct decision is made the majority of times.

Table 2: Log likelihoods and sizes of the constrained PSDDs (the models we use in our system, indicated by the * symbol), unconstrained PSDDs, and the SPNs learnt in our three experiments.
Answering (Q2) here is more difficult, as the given utilities do not necessarily satisfy our assumption that decisions are made with probability linearly proportional to their expected utility. However, our strategy was chosen so as to maximise expected utility in the majority of cases. Thus, when comparing the given life expectancies with the learnt utility function, we still expect the same ordering of utility values, even if not the same magnitudes. In particular, our function assigns maximal utility (1.000) to the successful performing of a thoracotomy when the patient does not have mediastinal metastases (the optimal scenario), and any scenario in which the patient dies has markedly lower utility (mean value 0.134).
In attempting to answer (Q3) we divide our question into two parts: does the system attribute no blame in the correct cases, and does the system attribute more blame in the cases we would expect it to (and less in others)? Needless to say, it is very difficult (perhaps even impossible, at least without an extensive survey of human opinions) to produce an appropriate metric for how correct our attributions of blame are, but we suggest that these two criteria are the most fundamental and capture the core of what we want to evaluate. We successfully queried our model in a variety of settings corresponding to the two questions above and present representative examples below (we follow this same pattern in our second and third experiments). Regarding the first part of (Q3), one case in which we have blameworthiness scores of zero is when performing the action being judged is less likely to result in the outcome we are concerned with than the action(s) we are comparing it to. The chance of the patient dying in the diagnostic process ($\neg S_{DP}$) is increased if a mediastinoscopy ($M$) is performed, hence the blameworthiness for such a death due to not performing a mediastinoscopy should be zero. As expected, our model assigns $db_N(\neg M, M, \neg S_{DP}) = 0$. To answer the second part of (Q3), we show that the system produces higher blameworthiness scores when a negative outcome is more likely to occur (assuming the actions being compared have relatively similar costs). For example, in the case where the patient does not have mediastinal metastases the best treatment is a thoracotomy, but a thoracotomy will not be performed if the result of the last diagnostic test performed is positive. The specificity of a mediastinoscopy is higher than that of a CT scan, hence a CT scan is more likely to produce a false positive and thus (assuming no mediastinoscopy is performed as a second check) lead to the wrong treatment. In the case where only one diagnostic procedure is performed we therefore have a higher degree of blame attributed to the decision to conduct a CT scan (0.013) as opposed to a mediastinoscopy (0.000), where we use $N = 1$.
Teamwork Management
Our second experiment uses a recently collected dataset of human decision-making in teamwork management [Yu et al., 2017]. This data was recorded from over 1000 participants as they played a game that simulates task allocation processes in a management environment. In each level of the game the player has different tasks to allocate to a group of virtual workers that have different attributes and capabilities. The tasks vary in difficulty, value, and time requirements, and the player gains feedback from the virtual workers as tasks are completed. At the end of the level the player receives a score based on the quality and timeliness of their work. Finally, the player is asked to record their emotional response to the result of the game in terms of scores corresponding to six basic emotions. We simplify matters slightly by considering only the self-declared management strategy of the player as our decisions. Within the game this is recorded by five check-boxes at the end of the level that are not mutually exclusive, giving 32 possible overall strategies. These strategy choices concern methods of task allocation such as load-balancing (keeping each worker's workload roughly even) and skill-based allocation (assigning tasks by how likely the worker is to complete the task well and on time), amongst others. We also measure utility purely by the self-reported happiness of the player, rather than by any other emotions.

As part of our answer to (Q1) we investigate how often the model would employ each of the 32 possible strategies (where a strategy is represented by an assignment of values to the binary indicator decision variables) compared to the average participant (across all contexts), which can be seen in Figure 2. In general the learnt probabilities are similar to the actual proportions in the data, though noisier. The discrepancies are more noticeable (though understandably so) for decisions that were made very rarely, perhaps only once or twice in the entire dataset. These differences are also partly due to smoothing (i.e. all strategies have a nonzero probability of being played).

For (Q2) we use the self-reported happiness scores to investigate our assumption that the number of times a decision is made is (linearly) proportional to the expected utility based on that decision. In order to do this we split the data up based on the context (game level) and produce a scatter plot (Figure 3) of the proportion of times a set of decisions is made against the average utility (happiness score) of that decision. Overall there is no obvious positive linear correlation as our original assumption would imply, although this could be because of any one or combination of the following reasons: players do not play enough rounds of the game to find out which strategies reliably lead to higher scores and thus (presumably) higher utilities; players do not accurately self-report their strategies; or players' strategies have relatively little impact on their overall utility based on the result of the game. We recall here that our assumption essentially comes down to supposing that people more often make decisions that result in greater utilities. The evident plausibility of this statement, along with the relatively high likelihood of at least one of the factors in the list above, means we do not have enough evidence here to refute the statement, although further empirical work is certainly required in order to demonstrate its truth.
Investigating this discrepancy further, we learnt a utility function (linear and context-relative) from the data and inspected the average weights given to the outcome variables (see the right plot in Figure 4). A correct function should place higher weights on the outcome variables corresponding to higher ratings, which is true for timeliness, but not quite true for quality, as the top rating is weighted only third highest. We found that the learnt utility weights are in fact almost identical to the distribution of the outcomes in the data (see the left plot in Figure 4). Because our utility weights were learnt on the assumption that players more often use strategies that lead to better expected outcomes, the similarity between these two graphs adds further weight to our suggestion that the self-reported strategies of players in fact have very little to do with the final outcome.

To answer (Q3) we examine cases in which the blameworthiness score should be zero, and then compare cases that should have lower or higher scores with respect to one another. Once again, comprehensive descriptions of each of our tested queries are omitted for reasons of space, but we present some representative examples here. Firstly, we considered level 1 of the game by choosing an alternative distribution Pr over contexts when generating our scores.
Figure 4: The distribution of outcomes in the data (left) and the learnt utility function weights (right).
Here a player is less likely to receive a low rating for quality (Q_1 or Q_2) if they employ a skill-based strategy where tasks are more frequently allocated to better workers (S). As expected, our system returns db_N(S, ¬S, Q_1 ∨ Q_2) = 0. Secondly, we look at the timeliness outcomes. A player is less likely to obtain the top timeliness rating (T_5) if they do not use a strategy that uniformly allocates tasks (U) compared to their not using a random strategy of allocation (R). Accordingly, we find that db_N(¬U, ¬T_5) > db_N(¬R, ¬T_5); more specifically, we have db_N(¬U, ¬T_5) = 0.002 and db_N(¬R, ¬T_5) = 0 (i.e. a player should avoid using a random strategy completely if they wish to obtain the top timeliness rating).
Trolley Problems
We also devised our own experimental setup with human participants, using a small-scale survey (the relevant documents and data are included in the package) to gather data about hypothetical moral decision-making scenarios. These scenarios took the form of variants on the infamous trolley problem [Thomson, 1985]. We extended this idea, as is not uncommon in the literature (see, e.g., [Moral Machine, 2016]), by introducing a series of different characters that might be on either track: one person, five people, 100 people, one's pet, one's best friend, and one's family. We also added two further decision options: pushing whoever is on the side track into the way of the train in order to save whoever is on the main track, and sacrificing oneself by jumping in front of the train, saving both characters in the process. The survey then took the form of asking each participant which of the four actions they would perform (the fourth being inaction) given each possible permutation of the six characters on the main and side tracks (we assume that a character cannot appear on both tracks in the same scenario). The general setup can be seen in Figure 5, with locations A and B denoting the positions of the characters on the main track and side track respectively. Last of all, we added a probabilistic element (explained in advance to participants) to our scenarios, whereby the switch only works with probability 0.6, and pushing the character at location B onto the main track in order to stop the train succeeds with probability 0.8. This was used to account for the fact that people are generally more averse to actively pushing someone than to flipping a switch [Singer, 2005], and people are certainly more averse to sacrificing themselves than to doing either of the former. However, depending on how much one values the life of the character on the main track, one might be prepared to perform a less desirable action in order to increase that character's chance of survival.
In answering (Q1) we investigate how well our model serves as a representation of the aggregated decision preferences of participants by calculating how likely the system would be to make particular decisions in each of the 30 contexts and comparing this with the average across participants in the survey. For reasons of space we focus here on a representative subset of these comparisons: namely, the five possible scenarios in which the best friend character is on the main track (see Figure 6). In general, the model's predictions are similar to the answers given in the survey, although the effect of smoothing our distribution during learning is noticeable, especially because the model was learnt from relatively few data points. Despite this handicap, the most likely decision in any of the 30 contexts according to the model is in fact the majority decision in the survey, with the ranking of other decisions in each context also highly accurate.

Unlike our other two experiments, the survey data does not explicitly contain any utility information, meaning our system was forced to learn a utility function using the probability distribution encoded by the PSDD. Within the decision-making scenarios we presented, it is plausible that the decisions made by participants were guided by weights that they assigned to the lives of each of the six characters and to their own life. Given that each of these is captured by a particular outcome variable, we chose to construct a utility function that is linear in said variables. We also chose to make the utility function insensitive to context, as we would not expect how much one values the life of a particular character to depend on which track that character is on, or whether they are on a track at all.
Figure 6: Model predictions compared with the aggregated survey answers for the five scenarios in which the best friend character is on the main track.
For (Q2), with no existing utility data to compare our learnt function against, we interpreted the survival rates of each character as the approximate weights assigned to their lives by the participants. While the survival rate is a non-deterministic function of the decisions made in each context, we assume that over the experiment these rates average out enough for us to make a meaningful comparison with the weights learnt by our model (a computational sketch follows below). A visual representation of this comparison can be seen in Figure 7. It is immediately obvious that our system has captured the correct utility function to a high degree of accuracy. With that said, our assumption about using survival rates as a proxy for real utility weights does lend itself to favourable comparison with a utility function learnt from a probability distribution over contexts, decisions, and outcomes (which thus includes survival rates). Given the setup of the experiment, however, this assumption seems justified and, furthermore, in line with how most of the participants answered the survey.

Figure 7: A comparison between the average survival rates of the seven characters (including the participants in the survey), normalised to sum to one, and the corresponding utility function weights learnt by our system.
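The proxy comparison can be made concrete in a few lines; the count arrays below are hypothetical tallies from replaying the recorded decisions, not data from the paper.

```python
import numpy as np

def proxy_vs_learnt(survivals, exposures, learnt_w):
    """survivals / exposures: per-character survival and appearance counts
    (hypothetical inputs); learnt_w: the model's learnt utility weights.
    Returns the normalised survival-rate weights and their correlation
    with the learnt weights, as compared in Figure 7."""
    rates = survivals / exposures
    proxy_w = rates / rates.sum()            # normalised to sum to one
    learnt_n = learnt_w / learnt_w.sum()
    return proxy_w, np.corrcoef(proxy_w, learnt_n)[0, 1]
```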
Because of the symmetric nature of the set of contexts in our experiment, the probability of a particular character surviving as a result of a particular action across all contexts is just the same as the probability of that character not surviving. Hence in answering (Q3) we use our system's feature of being able to accept particular distributions Pr over the contexts in which we wish to attribute blame, allowing us to focus only on particular scenarios. Clearly, in any of the possible contexts one should not be blamed at all for the death of the character on the main track for flipping the switch (F) as opposed to inaction (I), because in the latter case they will die with certainty, but not in the former. Choosing a scenario arbitrarily to illustrate this point, with one person on the side track and five people on the main track, we have db_N(F, I, ¬L_5) = 0 and db_N(F, ¬L_5) = 0.307 (with our measure of cost importance N = 0.762, 1.1 times the negative minimum cost of any action).

Now consider the scenario in which there is a large crowd of a hundred or so people on the main track, but one is unable to tell from a distance whether the five or so people on the side track are strangers or one's family. Of course, the more likely it is that the family is on the side track, the more responsible one is for their deaths (¬L_Fa) if one, say, flips the switch (F) to divert the train. Conversely, we would also expect there to be less blame for the deaths of the 100 people (¬L_100), say, due to inaction (I), the more likely it is that the family is on the side track (because the cost, for the participant at least, of somehow diverting the train is higher). We compare cases where there is a 0.3 probability that the family is on the side track against a 0.6 probability, and for all calculations use the cost importance measure N = 1. Therefore, not only would we expect the blame for the death of the family to be higher when pulling the switch in the latter case, we would expect the value to be approximately twice as high as in the former case. Accordingly, we compute values db_N(F, ¬L_Fa) = 0.264 and db_N(F, ¬L_Fa) = 0.554 respectively. Similarly, when considering blame for the deaths of the 100 people due to inaction, we find that db_N(I, ¬L_100) = 0.153 in the former case and db_N(I, ¬L_100) = 0.110 in the latter case (when the cost of performing any other action is higher).
CONCLUSION
Our system utilises the specification of decision-making scenarios in HK, and at the same time exploits many of the desirable properties of PSDDs (such as tractability, semantically meaningful parameters, and the ability both to be learnt from data and to include logical constraints). The system is flexible in its usage, allowing various inputs and specifications. In general, the models in our experiments are accurate representations of the distributions over the moral scenarios that they are learnt from. Our learnt utility functions, while simple in nature, are still able to capture subtle details, and in some scenarios are able to match human preferences with high accuracy using very little data. With these two elements we are able to generate blameworthiness scores that are, prima facie, in line with human intuitions. We hope that our work here goes some way towards bridging the gap between the existing philosophical work on moral responsibility and the existing technical work on decision-making in automated systems.

Table: A summary of the teamwork management data used in our second experiment.
No. data points: 7446
No. variables: 21
X variables: Level 1 (L_1), ..., Level 6 (L_6)
D variables: Other (O), Load-balancing (L), Uniform (U), Skill-based (S), Random (R)
O variables: Timeliness 1 (T_1), ..., Timeliness 5 (T_5); Quality 1 (Q_1), ..., Quality 5 (Q_5)
Constraints: ∨_{i∈{1,...,6}} L_i and L_i → ¬∨_{j∈{1,...,6}\{i}} L_j for all i (exactly one level is active); analogous exactly-one constraints hold over T_1, ..., T_5 and over Q_1, ..., Q_5
Model count: 4800
Utilities given? Yes (self-reported happiness score)
| 7,390 |
1810.03393
|
2894597154
|
This paper focuses on density-based clustering, particularly the Density Peak (DP) algorithm and the density-connectivity-based DBSCAN, and proposes a new method which takes advantage of the individual strengths of these two methods to yield a density-based hierarchical clustering algorithm. Our investigation begins by formally defining the types of clusters DP and DBSCAN are designed to detect, and then identifies the kinds of distributions in which DP and DBSCAN individually fail to detect all clusters in a dataset. These identified weaknesses inspire us to formally define a new kind of cluster and to propose a new method called DC-HDP which overcomes these weaknesses to identify clusters with arbitrary shapes and varied densities. In addition, the new method produces a richer clustering result, in the form of a hierarchy or dendrogram, for a better understanding of cluster structures. Our empirical evaluation results show that DC-HDP produces the best clustering results on 14 datasets in comparison with 7 state-of-the-art clustering algorithms.
|
Many variants of DBSCAN have attempted to overcome its weakness in detecting clusters with varied densities. OPTICS @cite_21 draws a "reachability" plot based on the @math -nearest-neighbour distance. On the @math -axis of the plot, adjacent points follow close to each other, such that point @math is the closest to @math in terms of the "reachability distance" (the "reachability distance" of object @math to object @math is the greater of the "core distance" of @math and the distance between @math and @math ; the "core distance" of @math is the minimum @math that makes @math a "core" object, i.e., the distance to its @math -nearest neighbour, @math ). The reachability distance of each point is shown on the @math -axis. Since a cluster centre normally has a higher density, and hence a lower reachability distance, than the cluster boundaries, each cluster is visible as a "valley" in this plot. A hierarchical method can then be used to extract the clusters. The overall clustering performance depends on the hierarchical method employed on the reachability plot.
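As a concrete illustration of the reachability plot described above, the following is a minimal sketch using scikit-learn's OPTICS implementation; the synthetic data and the min_samples value are placeholders rather than settings from @cite_21.

```python
import numpy as np
from sklearn.cluster import OPTICS

X = np.random.rand(500, 2)                  # placeholder data
optics = OPTICS(min_samples=10).fit(X)      # min_samples plays the role of MinPts

# Reachability distances in cluster order; "valleys" in this curve correspond
# to clusters, which a hierarchical method can then extract from the ordering.
reach = optics.reachability_[optics.ordering_]
```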
|
{
"abstract": [
"Cluster analysis is a primary method for database mining. It is either used as a stand-alone tool to get insight into the distribution of a data set, e.g. to focus further analysis and data processing, or as a preprocessing step for other algorithms operating on the detected clusters. Almost all of the well-known clustering algorithms require input parameters which are hard to determine but have a significant influence on the clustering result. Furthermore, for many real-data sets there does not even exist a global parameter setting for which the result of the clustering algorithm describes the intrinsic clustering structure accurately. We introduce a new algorithm for the purpose of cluster analysis which does not produce a clustering of a data set explicitly; but instead creates an augmented ordering of the database representing its density-based clustering structure. This cluster-ordering contains information which is equivalent to the density-based clusterings corresponding to a broad range of parameter settings. It is a versatile basis for both automatic and interactive cluster analysis. We show how to automatically and efficiently extract not only 'traditional' clustering information (e.g. representative points, arbitrary shaped clusters), but also the intrinsic clustering structure. For medium sized data sets, the cluster-ordering can be represented graphically and for very large data sets, we introduce an appropriate visualization technique. Both are suitable for interactive exploration of the intrinsic clustering structure offering additional insights into the distribution and correlation of the data."
],
"cite_N": [
"@cite_21"
],
"mid": [
"2160642098"
]
}
|
Hierarchical clustering that takes advantage of both density-peak and density-connectivity
|
Clustering is an important and useful tool in data mining and knowledge discovery. It has been widely used for partitioning instances in a dataset such that similar instances are grouped together to form a cluster [1]. It is the most common unsupervised knowledge discovery technique for automatic data-labelling in various areas, e.g., information retrieval, image segmentation, and pattern recognition [2]. Depending on the basis of categorisation, clustering methods can be divided into several kinds, e.g., partitioning clustering versus hierarchical clustering; and density-based clustering versus representative-based clustering [3].
Partitioning clustering methods are the simplest and most fundamental clustering methods [1].
They are relatively fast, and easy to understand and implement. They organise the data points in a given dataset into k non-overlapping partitions, where each partition represents a cluster and each point belongs to one cluster only [1]. However, traditional distance-based partitioning methods such as k-means [4] and k-medoids [5], which are representative-based clustering methods, usually cannot find clusters with arbitrary shapes [6]. In contrast, density-based clustering algorithms can find clusters with arbitrary sizes and shapes while effectively separating noise; thus, this kind of clustering is attracting more research and development. DBSCAN [7] and DENCLUE [8] are examples of an important class of density-based clustering algorithms. They define clusters as regions of high density which are separated by regions of low density. However, these algorithms have difficulty finding clusters with widely varied densities because a global density threshold is used to identify high-density regions [9,10,11,12,13].

Rodriguez et al. proposed a clustering algorithm based on density peaks (DP) [14]. It identifies cluster modes (regarded as "cluster centres" in the original DP paper [14]) which have local maximum density and are well separated, and then assigns each remaining point in the dataset to a cluster mode via a linking scheme. Compared with the classic density-based clustering algorithms (e.g., DBSCAN and DENCLUE), DP has a better capability of detecting clusters with varied densities. Despite this improved capability, Chen et al. [15] have recently identified a condition under which DP fails to detect all clusters with varied densities. They proposed a new measure called Local Contrast (LC), used instead of density, to enhance DP such that the resultant algorithm, LC-DP, is more robust to clusters with varied densities.
It is important to note that the progression from DBSCAN or DENCLUE to DP, and subsequently LC-DP, with improved clustering performance, is achieved without formally defining the types of clusters DP and LC-DP can detect.
In this paper, we are motivated to formally define the type of clusters that an algorithm is designed to detect before investigating the weaknesses of the algorithm. This approach enables us to determine two additional weaknesses of DP; and we show that the use of LC does not overcome these weaknesses. This paper proposes a new clustering method which takes advantage of the individual strengths of DBSCAN and DP to yield a density-based hierarchical clustering algorithm that produces a better and richer clustering result. It makes the following contributions:
(i) Formalising a new type of clusters called η-linked clusters; and providing a necessary condition for a clustering algorithm to correctly detect all η-linked clusters in a dataset.
(ii) Uncovering that DP is a clustering algorithm which is designed to detect η-linked clusters; and identifying two weaknesses of DP, i.e., the conditions under which DP cannot correctly detect all clusters in a dataset.
(iii) Introducing a different view of DP as a hierarchical procedure. Instead of producing flat clusters, this procedure generates a dendrogram, enabling a user to identify clusters in a hierarchical way.
(iv) Formalising the second new type of clusters called η-density-connected clusters which encompass all η-linked clusters and the kind of non-η-linked clusters that DP fails to detect.
(v) Proposing a density-connected hierarchical DP to overcome the identified weaknesses of DP. The new algorithm DC-HDP merges two cluster modes only if they are density-connected in the hierarchy.
(vi) Completing an empirical evaluation by comparing with 8 state-of-the-art clustering algorithms: 4 density-based clustering algorithms (DBSCAN [7], Mean shift clustering [16], DP [14] and LC-DP [15]), 3 hierarchical clustering algorithms (OPTICS [9], PHA [17] and HDBSCAN [18]) and 1 graph-based spectral clustering algorithm [19].
The formal analysis of DP provides an insight into its key weaknesses, and this has enabled a simple and effective method (DC-HDP) to overcome them. The proposed method takes advantage of the individual strengths of DBSCAN and DP: DC-HDP has an enhanced ability to identify all clusters of arbitrary shapes and varied densities, which neither DBSCAN nor DP has. In addition, the dendrogram generated by DC-HDP gives richer information about the hierarchical components of clusters in a dataset than the flat partitioning provided by DBSCAN and DP. This is achieved with the same computational time complexity as DP, with only one additional parameter, which can usually be set to a default value in practice.
Since hierarchical clustering algorithms allow a user to choose a particular clustering granularity, hierarchical clustering is very popular and has been used far more than non-hierarchical clustering [20]. Thus, DC-HDP provides a new perspective which can be widely used in various applications.
The rest of the paper is organised as follows: we provide an overview of density-based clustering algorithms and related work in Section 2. Section 3 formalises η-linked clusters. Section 4 uncovers that DP is an algorithm which detects η-linked clusters, and reveals two weaknesses of DP. Section 5 reiterates the definition of density-connected clusters used by DBSCAN, and states the known weakness of DBSCAN. Section 6 presents the definition of the second new type of clusters called η-density-connected clusters. The new density-connected hierarchical clustering algorithm is proposed in Section 7. In Section 8, we empirically evaluate the performance of the proposed algorithm by comparing it with 8 other state-of-the-art clustering algorithms. Discussion and the conclusions are provided in the last two sections.
DP is an algorithm which identifies η-linked clusters
Density Peak or DP [14] has two main procedures as follows:
1. Identifying cluster modes via a ranking scheme which aims to rank all points. The top k points are selected as the modes of k clusters.
2. Linking each non-mode data point to its nearest neighbour with higher density. The points directly linked or transitively linked to the same cluster mode are assigned to the same cluster.
This produces k clusters at the end of the process.
Therefore, DP is an algorithm implementing Definition 3 to identify η-linked clusters in a dataset (in step 2 above).
The first step is critical, as specified in Theorem 1. To effectively identify cluster modes, DP assumes that different cluster modes should have a relatively large distance between them in order to detect well-separated clusters [14]. To prevent a cluster from breaking into multiple clusters when it has a slow rate of density decrease from the cluster mode, it applies a ranking scheme on all points, and then selects the top k points as cluster modes. This is done as follows.
DP defines a distance function δ(x) as follows:

δ(x) = (i) dis(x, η_x), ∀x ∈ D \ {m̂}; (ii) max_{y∈D} dis(x, y), if x = m̂,    (4)

where m̂ denotes the point with the highest density in D (the global mode, which has no η).
DP selects the top k points with the highest γ(x) = ρ(x) × δ(x) as cluster modes. This means that each cluster mode should have high density and be far away from other cluster modes.
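To make the two-step procedure and the γ ranking concrete, the following is a minimal sketch of a flat DP implementation; the ε-neighbourhood density estimator and the assumption of unique density values are simplifications for illustration, not requirements of the original algorithm.

```python
import numpy as np
from scipy.spatial.distance import cdist

def density_peak(X, k, eps):
    """Minimal sketch of flat DP; assumes density values are unique and that
    the global mode is selected as a cluster mode (it has the largest delta)."""
    dist = cdist(X, X)
    rho = (dist < eps).sum(axis=1)              # epsilon-neighbourhood density

    n = len(X)
    delta = np.empty(n)
    eta = np.full(n, -1)                        # nearest higher-density neighbour
    for i in range(n):
        higher = np.flatnonzero(rho > rho[i])
        if higher.size == 0:                    # global mode: case (ii) of Eq. (4)
            delta[i] = dist[i].max()
        else:                                   # case (i) of Eq. (4)
            eta[i] = higher[np.argmin(dist[i, higher])]
            delta[i] = dist[i, eta[i]]

    gamma = rho * delta
    modes = np.argsort(-gamma)[:k]              # step 1: top-k gamma as modes

    labels = np.full(n, -1)
    labels[modes] = np.arange(k)
    for i in np.argsort(-rho):                  # step 2: follow eta links,
        if labels[i] < 0 and eta[i] >= 0:       # processing by decreasing density
            labels[i] = labels[eta[i]]
    return labels
```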
Weaknesses of DP
While DP generally produces a better clustering result than DBSCAN (see the evaluation conducted by Chen et al. [15] reported in Appendix C of the paper), we identify two fundamental weaknesses of DP:
(i) Given a dataset of k η-linked clusters, if the data distribution is such that the k cluster modes are not ranked as the top k points with the highest γ values, then DP cannot correctly identify these cluster modes, as stated in Theorem 1. The source of this weakness is the ranking scheme in step 1 of the DP procedure.
An example is a dataset having two Gaussian clusters and an elongated cluster with two local peaks (the left one being the cluster mode), as shown in Figure 3. DP with k = 3 splits the elongated cluster into two sub-clusters, because the two local peaks are ranked among the top 3 points with the highest γ values, and it merges the bottom two clusters into one, as shown in Figure 3a. A better clustering result can be obtained using k = 4, which results in a correct identification of the two bottom clusters, as shown in Figure 3b; but this still splits the single top cluster into two. Note that all three clusters would be correctly identified by DP if the three true cluster modes were pre-selected for DP using a different process. This data distribution is similar to that shown in Figure 2c, which has valid η-linked clusters.
An existing improvement of DP
Chen et al. [15] provide a method called Local Contrast (LC-DP), which aims to improve the ranking of cluster modes so as to detect all clusters in a dataset with clusters of varied densities, i.e., the condition they discovered under which DP fails to correctly detect all clusters.
LC-DP uses local contrast LC(x), instead of density ρ(x), in Equation 2 to determine η_x. LC(x) is defined as the number of times that x has a higher density than its K nearest neighbours, and thus takes values between 0 and K. The ranking is then based on LC(x) × δ_LC(x), where δ_LC(x) is the version of δ(x) based on the η_x determined by LC(x). Chen et al. [15] show that the use of LC makes DP more robust to clusters with varied densities; a sketch of the LC computation is given below.
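A minimal sketch of LC, given a pairwise distance matrix and precomputed densities; the function and argument names are illustrative.

```python
import numpy as np

def local_contrast(dist, rho, K):
    """LC(x): number of x's K nearest neighbours with strictly lower density."""
    nn = np.argsort(dist, axis=1)[:, 1:K + 1]      # exclude self at column 0
    return (rho[:, None] > rho[nn]).sum(axis=1)    # values in {0, ..., K}
```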
LC-DP is able to enhance DP's clustering performance on clusters with varied densities [15], e.g., on the 2Q and 3C datasets. However, LC does not overcome the two weaknesses of DP mentioned above; for example, LC-DP still fails to identify all clusters on the 2O dataset, which does not have η-linked clusters. It is therefore important to design a method that overcomes DP's two weaknesses.
Here we propose a hierarchical method based on density-connectivity with this aim in mind.
We first reiterate the currently known definitions of density connectivity and density-connected clusters in Section 5. Then, we define a new type of clusters based on density connectivity in the following section.
Density-connected clusters
Classic density-based clustering algorithms, such as DBSCAN [7], define a cluster based on density connectivity as follows:
Definition 4. Using an ε-neighbourhood density estimator ρ_ε(·) with density threshold τ, a point x_1 is density-connected with another point x_p via a sequence of p unique points {x_1, x_2, x_3, ..., x_p} from D; Connect_τ(x_1, x_p) is defined as:

Connect_τ(x_1, x_p) ↔ (i) if p > 2: ∃{x_1, x_2, ..., x_p} such that x_1 ⇀ x_2 ⇀ ... ⇀ x_p; (ii) if p = 2: x_1 ⇀ x_p ∨ x_p ⇀ x_1,    (5)

where x ⇀ y iff dis(x, y) ≤ ε and ρ_ε(x) ≥ τ.
Definition 5. A density-connected cluster C, which has a mode m = argmax_{x∈C} ρ(x), is a maximal set of points that are density-connected with its mode, i.e., C = {x ∈ D | Connect_τ(x, m)}.

Based on this density connectivity, we have the property that points in a density-connected cluster C are density-connected to each other via the mode m, i.e., ∀x ∈ C: Connect_τ(x, m).
Note that a set of points having multiple modes (of the same peak density) must be density connected together in order to form a density-connected cluster.
The key characteristic of a density-connected cluster is that the cluster can have an arbitrary shape and size [7]. Although an η-linked cluster can have arbitrary shape and size as well, DP, which detects η-linked clusters, has issues with the two types of data distributions mentioned in Section 4: the η-linked path may link points from different clusters which are separated by low-density regions, e.g., the two circle-clusters in Figure 4a.

On the other hand, although a clustering algorithm such as DBSCAN, which detects density-connected clusters, does not have the above issues, it has issues in identifying all clusters of varying densities [13], as shown in Figure 1b in Section 2.

In a nutshell, the clustering algorithms designed to detect η-linked clusters and those designed to detect density-connected clusters have different limitations.
η-density-connected clusters
To overcome the limitations of (i) η-linked clusters, stated in Sections 3 and 4, and (ii) density-connected clusters, stated in Section 5, we strengthen the η-linked path based on density connectivity as follows:
Definition 6. An η-density-connected path linking points x_1 and x_p, DCpath_τ(x_1, x_p) = {x_1, x_2, x_3, ..., x_p}, is defined as a sequence of the smallest number of p unique points starting with x_1 and ending with x_p such that ∀i ∈ {1, ..., p−1}: x_{i+1} = η_{x_i}, where η_x is x's nearest density-connected neighbour which has a higher density than x, i.e., η_x = argmin_{y∈D, ρ(y)>ρ(x), Connect_τ(x,y)} dis(x, y).
Definition 7. The length of DCpath_τ(x, y) is defined as LDCpath_τ(x, y) = |DCpath_τ(x, y)| if there exists a DCpath_τ linking x and y, and ∞ otherwise. Note that |DCpath_τ(x, y)| = 1 if x = y, and |DCpath_τ(x, y)| > 1 if x ≠ y.
Definition 8. An η-density-connected cluster C_i, which has only one mode m_i = argmax_{x∈C_i} ρ(x), is a maximal set of density-connected points having the shortest density-connected path to its cluster mode m_i with respect to the other cluster modes in terms of the path length, i.e., C_i = {x ∈ D | Connect_τ(x, m_i) ∧ ∀m_j ≠ m_i: LDCpath_τ(x, m_j) > LDCpath_τ(x, m_i)}.
Based on these definitions, we have that an η-linked cluster becomes an η-density-connected cluster if all points in the η-linked cluster are density-connected. In addition, if a dataset has only density-connected clusters and each cluster has only one mode, then all clusters are η-density-connected clusters.
It is worth mentioning that an η-linked cluster can be a density-connected cluster, provided that all η-linked paths in the cluster are η-density-connected paths, i.e., all points are density-connected to the cluster mode. A density-connected cluster can be an η-linked cluster, provided that each point in the cluster has an η-linked path to the cluster mode.
With a proper neighbourhood threshold ε and density threshold τ, well-separated clusters cannot be linked together as an η-density-connected cluster. This enables us to identify clusters which are density-connected but not η-linked in a dataset. Figure 5 illustrates the cluster boundaries of two clusters after selecting Peak 1 and Peak 2 as cluster modes and assigning the rest of the points based on Definition 8. It can be seen that all these clusters can now be identified as η-density-connected clusters, although the clusters in Figures 5c and 5d are not η-linked clusters.
An η-density-connected hierarchical clustering algorithm
Here we propose an η-density-connected hierarchical clustering algorithm. It is described in two subsections. In Section 7.1, we introduce a different view of the (flat) DP as a hierarchical procedure.
Instead of employing the decision graph proposed in the original DP paper [14] to rank points, the proposed hierarchical procedure merges clusters bottom-up to produce a dendrogram. The dendrogram enables a user to identify clusters in a hierarchical way, which cannot be done with the current flat DP procedure.
In Section 7.2, we describe how the hierarchical DP procedure is modified to identify η-density-connected clusters based on Definition 8.
A different view of DP: a hierarchical procedure
We show that the DP clustering [14] can be accomplished as a hierarchical clustering, and that the two clustering procedures produce exactly the same flat clustering result when the same k is used. If we ran DP n times, setting k = n, n − 1, ..., 1, we would obtain a bottom-up clustering result. To avoid running DP n times, which has a time complexity of O(dn^3), we propose the following hierarchical procedure.
The initialisation step of the hierarchical DP is as follows. After calculating γ for all points, let every point x ∈ D be a cluster mode (which is equivalent to running DP with k = n), with each cluster mode tagged with its γ value. Let D′ = D \ {m̂} be the set used for merging in the next step.

The first merge of two clusters (which is equivalent to running DP with k = n − 1) is conducted as follows: select the cluster whose mode point z has the smallest γ value in D′, and merge this cluster with the cluster containing η_z; z is then removed from D′. This merging process is repeated iteratively, merging two clusters at each iteration, until D′ = ∅; a sketch of this loop is given below. Figure 6 illustrates the clustering results produced by the hierarchical DP as dendrograms on the three datasets used in Figures 3 and 4. Figure 6a shows that the elongated cluster is split at the top level of the dendrogram. The dendrogram in Figure 6c shows that points from the two circles are (incorrectly) merged at low levels of the hierarchical structure. Figure 6e illustrates that many points from the sparse-and-large cluster are linked to the dense-and-small cluster when γ ≈ 0. A global γ threshold can then be applied to the dendrogram to extract a flat clustering result such that the number of clusters below the threshold is k. Since both the hierarchical DP and the flat (original) DP produce the same flat clustering result, the name DP is used hereafter to denote both versions, as far as the flat clustering result is concerned.
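A minimal sketch of this merging loop, reusing the gamma and eta arrays from the flat DP sketch above; the union-find bookkeeping is an implementation choice, not part of the paper's pseudocode.

```python
import numpy as np

def hierarchical_dp(gamma, eta):
    """Bottom-up DP: one merge per point, in increasing gamma order.
    gamma, eta come from the flat DP sketch; eta == -1 marks the global mode."""
    parent = np.arange(len(gamma))              # union-find over points

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path halving
            i = parent[i]
        return i

    merges = []                                 # dendrogram: (point, eta, gamma)
    for z in np.argsort(gamma):
        if eta[z] == -1:                        # the global mode m-hat is excluded (D')
            continue
        parent[find(z)] = find(eta[z])          # merge z's cluster into eta_z's
        merges.append((z, eta[z], gamma[z]))
    return merges                               # cut at a gamma threshold for k clusters
```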
Here we provide a lemma on the hierarchical view of DP as follows:
Lemma 3.
If there exists exactly one η_x ∈ D for every x ∈ D, then the hierarchical view of the DP dendrogram is unique, and the clustering result of DP using a γ threshold is unique.

Proof. Since the dendrogram is built by gradually linking each point x to η_x until reaching m̂, when there exists exactly one η_x ∈ D for every x ∈ D, the path(x, m̂) is unique. Therefore, the hierarchical view of the dendrogram is unique.

When a γ threshold is set, the points with γ values higher than the threshold become cluster modes m. Since every other point has exactly one path linking it to one of the cluster modes, and points linked to the same mode form the same cluster, the clustering result is unique.
Advantages of the hierarchical DP: There are two advantages of the hierarchical DP over the flat DP. First, the former avoids the need to select k cluster modes in the first step of the clustering process; instead, after the dendrogram is produced at the end of the hierarchical clustering process, k is required only if a flat clustering is to be extracted from the dendrogram. Second, the dendrogram produced by the hierarchical DP provides richer information about the hierarchical structure of clusters in a dataset than the flat partitioning provided by the flat DP.

The hierarchical DP has the same time complexity as the flat DP, i.e., O(dn^2), since γ and η are calculated for all points only once.
A density-connected hierarchical DP
In order to enhance DP to detect clusters from a larger set of data distributions than that covered by density-connected clusters or η-linked clusters, clusters based on Definition 8 are used.
Using the hierarchical DP, it turns out that only a simple rule needs to be incorporated, i.e., checking whether two cluster modes at the current level are density-connected before merging them at the next level of the hierarchical structure: two clusters C_i and C_j can only be merged if there is an η-density-connected path between their cluster modes. This is checked at each level of the hierarchy, where the procedure selects the cluster having the mode x with the smallest γ′(x) = ρ(x) × δ′(x) to merge with the cluster containing η_x, where

δ′(x) = (i) dis(x, η_x), if ∃y ∈ D such that y = η_x; (ii) max_{y∈D} dis(x, y), otherwise.    (8)
Algorithm 1 DC-HDP(D, ε, τ)
Input: D - input data (n × d matrix); ε - radius of the neighbourhood; τ - density threshold.
Output: T - a dendrogram (an agglomerative hierarchical cluster tree).

Once the dendrogram is obtained from Algorithm 1, a global γ threshold can be used to select the clusters as a flat clustering result; a sketch of the density-connectivity check used in the merge rule is given below.
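The density-connectivity check can be implemented once, up front, by computing the connected components of the ε-neighbourhood graph restricted by the density threshold; the sketch below makes the simplifying assumption that two modes sharing a component suffices for Connect_τ.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def density_connected_components(X, eps, tau):
    """Component id per point; in the DC-HDP merge rule, two cluster modes
    may be merged only if they share a component (i.e., are density-connected)."""
    dist = cdist(X, X)
    rho = (dist < eps).sum(axis=1)
    # Edge x -> y when dis(x, y) <= eps and rho(x) >= tau (Definition 4),
    # treated as undirected for the component computation.
    adj = (dist <= eps) & (rho >= tau)[:, None]
    _, comp = connected_components(csr_matrix(adj), directed=False)
    return comp
```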
Note that the algorithm for the hierarchical DP is the same as the DC-HDP algorithm, except that the former uses (i) γ(·) instead of γ′(·), and (ii) D′ (the whole dataset without the global peak point) instead of ModeList.
Compared with DP, DC-HDP has one more parameter, τ, used for the density-connectivity check; the same ε is used for both density estimation and the density-connectivity check. DC-HDP maintains the same time complexity as DP, i.e., O(dn^2). Similarly to the hierarchical DP, if there exists exactly one η_x ∈ D for every x, the hierarchical view of the DC-HDP dendrogram is unique, and the clustering result of DC-HDP using a γ threshold is unique.
DC-HDP has the ability to enhance the clustering performance of DP on a dataset having η-density-connected clusters, which encompass the two kinds of clusters DP is weak at, mentioned in Section 3.
This is because DC-HDP does not establish any DCpath between points from different clusters which are not density-connected. Since clusters which are not density-connected are only merged at the top of the dendrogram with the highest γ value, a global γ threshold can separate all these clusters.
Unlike DBSCAN, DC-HDP does not rely on a global density threshold to link points; thus, DC-HDP has the ability to detect clusters with varied densities.
In a nutshell, DC-HDP takes advantage of the individual strengths of DBSCAN and DP, i.e., it has an enhanced ability to identify all clusters of arbitrary shapes and varied densities, which neither DBSCAN nor DP has. Figure 7 illustrates a clustering result from DC-HDP as a dendrogram on each of the three datasets used in Figures 3 and 4; it shows that all clusters can be detected perfectly by DC-HDP when an appropriate γ threshold (the blue horizontal line) is used on the dendrogram.
Furthermore, DC-HDP has an additional advantage in comparison with DP and DBSCAN, i.e., the dendrogram produced has a rich structure of clusters at different levels. This is the advantage of a hierarchical clustering over a flat clustering.
Empirical evaluation
This section presents experiments designed to evaluate the effectiveness of DC-HDP. We compare DC-HDP with 4 density-based clustering algorithms (DBSCAN [7], Mean shift clustering [16], DP [14] and LC-DP [15]), 3 hierarchical clustering algorithms (OPTICS [9], PHA [17] and HDBSCAN [18]) and 1 graph-based spectral clustering algorithm [19]. Because LC-DP is an improvement over DP, to conduct a head-to-head comparison with LC-DP, we used the same local contrast LC(x) of LC-DP [15] for DC-HDP to determine η_x in density-connected clusters in the following experiments.
Since clustering is an unsupervised learning task, we used a standard external evaluation method: first running each clustering algorithm on the whole dataset with particular parameter settings and then comparing the clustering result with the ground truth [1,6]. Furthermore, density-based clustering algorithms are normally sensitive to parameter settings, because the nonparametric density estimators used in these algorithms suffer from boundary bias without specific knowledge about the domain of the data [29]. To obtain a fair comparison, we report the best clustering performance over a range of searched parameter settings for each algorithm.
The clustering performance is measured in terms of the macro-average F-measure: given a clustering result, we calculate the precision score P_i and the recall score R_i for each cluster C_i based on the confusion matrix, and the F-measure score of C_i is the harmonic mean of P_i and R_i. After computing the pairwise F-measure scores, we use the Hungarian algorithm [31] to search for the optimal match between the clustering results and the true clusters. The overall F-measure score is the unweighted average over all matched clusters: F-measure = (1/k) Σ_{i=1}^{k} 2 P_i R_i / (P_i + R_i); a sketch of this computation is given below. It is worth noting that other evaluation measures, such as purity and Normalized Mutual Information (NMI) [30], only take into account the points assigned to clusters and do not account for noise; a clustering algorithm which assigns the majority of the points to noise may therefore obtain a high score. Thus the F-measure is more suitable than purity or NMI for assessing the clustering performance of density-based algorithms such as DBSCAN and OPTICS.

We used 6 artificial datasets (Pathbased, Compound, 2O, 3C, 3G and 2Q) and 11 real-world datasets with different data sizes and dimensions. Pathbased is from Chang et al. [32], Compound is from Zahn [33], Shape is from Müller et al. [34], COIL20 is from Li et al. [35], and all other real-world datasets are from the UCI Machine Learning Repository [36]. Table 2 presents the data properties of the datasets, and Figure 8 shows the scatter plots of the Pathbased and Compound datasets.

All algorithms used in our experiments were implemented in Matlab; the source code of DBSCAN, DC-HDP, LC-DP and DP can be obtained at https://sourceforge.net/projects/hierarchical-dp/. The hierarchical versions are required to extract k clusters from the dendrogram (at the end of Algorithm 1) by setting a corresponding γ threshold. For a fair comparison with DP, we fix the additional parameter K = √n for LC-DP (as suggested in Chen et al. [15]), and τ is set to 1 for DC-HDP as the minimum density threshold. [...] only on this dataset, since it assigned many high-density points to noise.
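The following is a minimal sketch of this evaluation pipeline; scoring unmatched clusters as zero is an assumption about the protocol rather than a detail stated in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def macro_f_measure(true_labels, pred_labels):
    """Pairwise F-measures between predicted and true clusters, optimally
    matched with the Hungarian algorithm, then averaged (unweighted) over
    the k true clusters."""
    true_ids, pred_ids = np.unique(true_labels), np.unique(pred_labels)
    F = np.zeros((len(pred_ids), len(true_ids)))
    for i, c in enumerate(pred_ids):
        for j, t in enumerate(true_ids):
            tp = np.sum((pred_labels == c) & (true_labels == t))
            if tp == 0:
                continue
            p = tp / np.sum(pred_labels == c)   # precision of cluster c wrt t
            r = tp / np.sum(true_labels == t)   # recall of cluster c wrt t
            F[i, j] = 2 * p * r / (p + r)
    row, col = linear_sum_assignment(-F)        # maximise the total F-measure
    return F[row, col].sum() / len(true_ids)    # unmatched clusters score zero
```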
It is interesting to mention that both LC-DP and DC-HDP use a local density estimator which enhances the mode selection of DP. The key difference between LC-DP and DC-HDP is that DC-HDP has the density-connectivity check when linking points. Thus, DC-HDP overcomes the drawbacks of both DP and LC-DP in detecting η-density-connected clusters on the 2O dataset.
To evaluate whether the performance difference among the algorithms is significant, we conduct the Friedman test with the post-hoc Nemenyi test [37]. Figure 9 shows the significance test result on the 5 algorithms with average F-measure higher than 0.70. It shows that DC-HDP is significantly better than Spectral clustering, DP and OPTICS. Table 5 presents the runtimes of the 9 algorithms on 4 datasets with different sizes. For a fair comparison, we converted DP and LC-DP to the same hierarchical version as DC-HDP. It shows that DC-HDP is only a bit slower than DP in practice due to the additional density connectivity check.
Note that Spectral clustering has O(n^3) time complexity while the others have O(n^2). First, traditional agglomerative methods determine the two clusters to merge at each level using a dissimilarity measure, such as the single-linkage, complete-linkage, all-pairs-linkage or centroid-linkage measures [6]. In contrast, DC-HDP and hierarchical DP do not simply employ a dissimilarity measure to determine the two clusters to merge at each level. Instead, they first identify the cluster having the smallest γ′ (or γ), and then select another cluster which has the shortest path length to it. While the path length may be considered a kind of dissimilarity measure, it is a supporting measure; the key determinant is γ′.
Second, different from traditional methods, DC-HDP and hierarchical DP are a new agglomerative approach, detecting η-density-connected clusters and η-linked clusters respectively. Therefore, they can detect arbitrarily shaped clusters, while existing agglomerative methods generally detect clusters with specific shapes: the single-linkage measure tends to output elongated clusters, the complete-linkage measure tends to detect compact clusters, and the all-pairs-linkage and centroid-linkage measures tend to find globular clusters [6].
The standard algorithm for hierarchical agglomerative clustering normally has a time complexity of O(n^3) [38]. However, many efficient hierarchical agglomerative clustering approaches have the same quadratic time and space complexity as DC-HDP when the pairwise distance matrix is required as input, e.g., SLINK [39] for single-linkage and CLINK [40] for complete-linkage clustering.
PHA measures the similarity between two clusters based on a hypothetical potential field that relies on both local and global data distribution information. It can detect slightly overlapping clusters with non-spherical shapes in noisy data. However, compared with density-based methods (e.g., DBSCAN, HDBSCAN, OPTICS and DC-HDP), PHA performed much worse in detecting arbitrarily shaped clusters, e.g., on the 2Q and 2O datasets.
There is another class of algorithms which employs a method to produce an initial set of subclusters from data, before applying a hierarchical clustering. For example, CHAMELEON [41] produces a K-nearest-neighbour graph from data and then breaks the graph into many small subgraphs (as subclusters). An agglomerative method is finally used to merge subclusters iteratively based on a similarity measure. The same general approach is used in two more recent methods, i.e., HDBSCAN [18] and OPTICS [9]; though different methods are used to produce subclusters in the preprocessing before building a hierarchical structure on them.
We show that DC-HDP is a simpler yet more effective approach than this class of algorithms, because DC-HDP applies agglomerative clustering directly to the individual points in the given dataset, without any preprocessing to create subclusters. Section 8 shows that DC-HDP produces a significantly better clustering result than the most recent representative of this class of algorithms, i.e., HDBSCAN, as well as OPTICS.
Parameter settings
DC-HDP requires two parameters, ε and τ, to build a dendrogram from a dataset, as shown in Table 3; ε is the more important of the two, as it is used in both density estimation and the density-connectivity check.

In our experiments, we found that τ can be set to 1 (i.e., at least 1 point in the ε-neighbourhood) on most datasets in terms of obtaining the best clustering results.

In all empirical evaluations reported in Section 8, we used the same parameter ε for both the density estimation and the density-connectivity check in DC-HDP. However, ε can be split into two different parameters for the two processes individually; by doing so, we found that DC-HDP can perform even better than the results shown in Table 4 on some datasets.
Ability to detect noise
It is worth mentioning that density-based clustering has the ability to identify noise and filter it out during clustering. For example, DBSCAN uses a global density threshold to identify noise as points with a density lower than the threshold, in the first step of the algorithm [7]. DP [14] employs a different method, identifying noise as points with low densities in the border regions of clusters (see footnote 4 in Section 2 for details); this is done at the end of the clustering process. The same method can be used by DC-HDP to identify noise.
Conclusions
The lack of a formal definition of the type of clusters that the state-of-the-art density-based algorithm Density Peak (DP) can detect has motivated the work in this paper.
We formally defined two new kinds of clusters: η-linked clusters and η-density-connected clusters.
A further analysis revealed that DP is a clustering algorithm detecting η-linked clusters; and it has weaknesses in data distributions which contain a special kind of η-linked clusters or some non-η-linked clusters. We show that η-density-connected clusters encompass all η-linked clusters and the kind of non-η-linked clusters that DP fails to detect.
After showing that DP clustering can be accomplished as a hierarchical clustering, we proposed a density-connected hierarchical DP clustering algorithm called DC-HDP, which is designed to detect η-density-connected clusters.
By taking advantage of the individual strengths of DBSCAN and DP, DC-HDP produces clustering outputs which are superior in two key aspects. First, DC-HDP has an enhanced ability to identify clusters of arbitrary shapes and varied densities, which neither DBSCAN nor DP has. Second, the dendrogram generated by DC-HDP gives richer information about the hierarchical structure of clusters in a dataset than the flat partitioning provided by DBSCAN and DP. DC-HDP achieves this enhanced ability with the same time complexity as DP, and its additional parameters can be set to default values in practice.
We confirm the finding of the previous study that LC-DP is an improvement over DP, and show that the proposed DC-HDP is a further improvement over both LC-DP and DP. Our contribution is not merely an algorithmic improvement, but formal cluster definitions which were non-existent in previous studies; these formal definitions are the foundation of DC-HDP.
Our empirical evaluation validates this superiority by showing that DC-HDP produces the best clustering results on 28 datasets in comparison with 8 state-of-the-art clustering algorithms, including density-based clustering (DBSCAN, Mean Shift clustering, DP and LC-DP), hierarchical clustering (HDBSCAN, PHA and OPTICS), and a graph-based spectral clustering algorithm.
| 5,510 |
1810.03545
|
2895590610
|
We propose two novel samplers to produce high-quality samples from a given (un-normalized) probability density. The sampling is achieved by transforming a reference distribution to the target distribution with neural networks, which are trained separately by minimizing two kinds of Stein discrepancies; hence our method is named the Stein neural sampler. Theoretical and empirical results suggest that, compared with traditional sampling schemes, our samplers share the following three advantages: (1) they are asymptotically correct; (2) they experience fewer convergence issues in practice; and (3) they generate samples instantaneously.
|
The fusion of deep learning and sampling is not new. @cite_0 proposed A-NICE-MC, where the proposal distribution in MCMC is, instead of being domain-agnostic, adversarially trained using neural networks. Stein GAN also proposes to train a neural network to draw samples from given target distributions for probabilistic inference; their method iteratively adjusts the network weights according to the SVGD updates. From a GAN perspective, in each iteration of Stein GAN the discriminator performs a two-sample test between the currently generated samples and the samples updated one step by SVGD. This method generalizes SVGD to training neural networks, and it minimizes the Kullback-Leibler divergence between the sampling distribution and the target inside an RKHS.
|
{
"abstract": [
"Existing Markov Chain Monte Carlo (MCMC) methods are either based on general-purpose and domain-agnostic schemes which can lead to slow convergence, or hand-crafting of problem-specific proposals by an expert. We propose A-NICE-MC, a novel method to train flexible parametric Markov chain kernels to produce samples with desired properties. First, we propose an efficient likelihood-free adversarial training method to train a Markov chain and mimic a given data distribution. Then, we leverage flexible volume preserving flows to obtain parametric kernels for MCMC. Using a bootstrap approach, we show how to train efficient Markov chains to sample from a prescribed posterior distribution by iteratively improving the quality of both the model and the samples. A-NICE-MC provides the first framework to automatically design efficient domain-specific MCMC proposals. Empirical results demonstrate that A-NICE-MC combines the strong guarantees of MCMC with the expressiveness of deep neural networks, and is able to significantly outperform competing methods such as Hamiltonian Monte Carlo."
],
"cite_N": [
"@cite_0"
],
"mid": [
"2674709473"
]
}
|
STEIN NEURAL SAMPLER
|
A core problem in machine learning and Bayesian statistics is to approximate a complex target distribution given its probability density function up to an (unknown) normalizing constant. Discrete approximation of a given distribution has been studied extensively using Markov Chain Monte Carlo (MCMC) (Gamerman and Lopes, 2006). Despite being asymptotically correct, MCMC suffers from several practical issues, such as the local-trap problem, which is commonly seen in practice and often due to multimodality of the target. Another classical approach is Variational Bayes (VB) (Kingma and Welling, 2013; Blei et al., 2017), where the target density is approximated using a tractable parametric family. Although the optimization in VB tends to have no convergence issue, asymptotic correctness is usually not guaranteed. Another alternative is the recently proposed Stein Variational Gradient Descent (SVGD) (Liu and Wang, 2016), a particle-based sampling framework that sequentially updates a set of particles. SVGD is able to approximate the target distribution using a small number of particles (samples), but the local trap remains an issue. Besides, SVGD cannot generate samples instantaneously given a trained SVGD chain: if additional samples are required, a new SVGD chain must be started from scratch.
Motivated by the weaknesses and advantages of MCMC, VB and SVGD, it is desirable to construct a sampler that is asymptotically correct, has fewer convergence issues, and is able to generate samples instantaneously once well trained. Villani (2008) showed that between any two non-atomic distributions there always exists a measurable transform T. Therefore, a natural route to achieve our goals is to learn a preservable transformation T that transforms an easy-to-sample distribution p_z(z) to the target distribution q(x). One way to learn such a transformation is to model it within a sufficiently rich family of functions such as neural networks (Raghu et al., 2016). The success of Generative Adversarial Networks (GANs) indicates that neural networks have very strong expressive power in modeling complex distributions (Goodfellow et al., 2014; Radford et al., 2015; Brock et al., 2018). We propose two such samplers:

1. KSD-NS: the sampler network is trained by directly minimizing the Kernelized Stein Discrepancy (KSD) between the generated samples and the target density, eliminating the need for a separate discriminator network (see Section 4).

2. Fisher-NS: KSD distinguishes two distributions based on a discriminator function in a Reproducing Kernel Hilbert Space (RKHS), which is a relatively small function space. To enhance the power of the sampler, we expand the space of discriminator functions from an RKHS to L^2, where functions are represented by L^2-regularized deep neural networks. This training scheme optimally corresponds to minimizing the Fisher divergence, which is stronger than KSD. This extension also leads to better empirical performance.
The proposed schemes aim to train a transformation that is asymptotically correct and enables instantaneous sampling. Extensive empirical studies conducted in Section 6 demonstrate that our neural samplers suffer less from convergence issues such as the local trap.
The Stein discrepancy between two distributions p and q over a class of test functions F is

D(p, q, F) = sup_{f∈F} E_{x∼p}[tr(S_q(x) f(x)^⊤ + ∇_x f(x))],    (2.2)

where S_q(x) = ∇_x log q(x) is the score function of q. If F is large enough, D(p, q, F) = 0 if and only if p = q. However, F cannot be too large; otherwise, D(p, q, F) = ∞ for any p ≠ q.
Kernelized Stein Discrepancy. If the function space F = {f ∈ H^d : ||f||_{H^d} = 1} is a unit ball in an RKHS H^d, with k(·, ·) being the associated positive definite kernel function, then the supremum in (2.2) has a closed-form solution,

KSD(p, q) = E_{x,x′∼p}[u_q(x, x′)],    (2.3)

where u_q(x, x′) = S_q(x)^⊤ k(x, x′) S_q(x′) + S_q(x)^⊤ ∇_{x′} k(x, x′) + ∇_x k(x, x′)^⊤ S_q(x′) + tr(∇_{x,x′} k(x, x′)).

The corresponding optimal discriminative function f* satisfies ||f*||_{H^d} = 1 and

f*(·) ∝ E_{x∼p}[S_q(x) k(x, ·) + ∇_x k(x, ·)].

The empirical KSD measures the goodness-of-fit of a given sample X = {x_1, ..., x_n} to the target density q(x). The minimum-variance unbiased estimator can be written as

KSD̂(p, q) = (1/(n(n − 1))) Σ_{i=1}^{n} Σ_{j≠i} u_q(x_i, x_j).    (2.4)
Despite the ease of computation, RKHS is relatively small and may fail to detect non-convergence in higher dimensions (Gorham and Mackey, 2017).
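To make the estimator (2.4) concrete, below is a minimal sketch with an RBF kernel k(x, x′) = exp(−||x − x′||²/(2h²)); the kernel choice and the bandwidth h are illustrative assumptions, and score_q stands in for the score function of the target.

```python
import numpy as np

def empirical_ksd(X, score_q, h=1.0):
    """Empirical KSD (2.4) with an RBF kernel. score_q(X) must return
    S_q(x) = grad_x log q(x) row-wise; h would typically be chosen by,
    e.g., the median heuristic."""
    n, d = X.shape
    S = score_q(X)                                   # (n, d) score matrix
    diff = X[:, None, :] - X[None, :, :]             # x_i - x_j, shape (n, n, d)
    sq = (diff ** 2).sum(-1)                         # squared distances
    K = np.exp(-sq / (2 * h ** 2))

    U = ((S @ S.T) * K                               # S_q(x)^T k S_q(x')
         + (S[:, None, :] * diff).sum(-1) * K / h ** 2   # S_q(x)^T grad_{x'} k
         - (S[None, :, :] * diff).sum(-1) * K / h ** 2   # grad_x k^T S_q(x')
         + (d / h ** 2 - sq / h ** 4) * K)           # tr(grad_{x,x'} k)
    np.fill_diagonal(U, 0.0)                         # exclude i == j terms
    return U.sum() / (n * (n - 1))
```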
Integral Probability Metrics (IPM). IPM measures the distance between two distributions p and q via the largest discrepancy in expectation over a class of well-behaved witness functions F:

IPM(p, q, F) = sup_{f∈F} {E_{x∼p}[f(x)] − E_{x∼q}[f(x)]}.    (2.5)
Note that the function space F needs some constraint; otherwise the corresponding IPM is trivial, with value 0 if p = q and ∞ if p ≠ q. A broad class of distances can be viewed as special cases of IPM (Müller, 1997). For instance, if we choose all the functions whose integral under q is zero, then we obtain the Stein discrepancy (Gorham and Mackey, 2017).
Related Work
SVGD. Given an initial distribution p_0(x), the idea of SVGD is to learn a nonlinear transformation T such that the distribution of T(x) approximates the target distribution q in the sense of the Kullback-Leibler (KL) divergence. However, directly learning the nonlinear transformation T is difficult, and SVGD circumvents this difficulty by constructing T from incremental linear updates x′ = x + εf(x). The key observation is that

∇_ε KL(p′ ∥ q)|_{ε=0} = −E_{x∼p}[tr(S_q(x) f(x)^⊤ + ∇_x f(x))],    (3.1)

where x ∼ p and x′ ∼ p′. Confining f inside a unit ball of an RKHS gives the optimal f* in closed form, in a similar spirit to KSD.
However, one weakness of SVGD is that the nonlinear transformation T cannot be stored in any form after an SVGD chain finishes. As a consequence, we have to run extra SVGD chains whenever extra samples are needed. In comparison, our proposed neural sampler learns a preservable nonlinear transformation T trained by neural networks; in this way, our framework is able to generate new samples without additional effort once the transformation is learned.
Generative Adversarial Network GAN also learns to transform random noise into high-quality samples. Its objective is to implicitly capture the underlying distribution of given samples by building a generator to sample from it. In GANs, the generator is constructed using deep neural networks and trained adversarially against another discriminator network. The discriminator takes both true samples and generated samples as input and essentially conducts a two-sample test between them. The min-max game between the two networks corresponds to minimizing the Jensen-Shannon divergence in the vanilla GAN (Goodfellow et al., 2014). Other choices of divergence lead to variants of GAN, such as those based on Maximum Mean Discrepancy (MMD) (Li et al., 2015), the Wasserstein distance, the Chi-squared distance, etc. The aforementioned distances can all be seen as special cases of IPM.
Fusion of GANs and Sampling
The fusion of deep learning and sampling is not new. Song et al. (2017) proposed A-NICE-MC, where the proposal distribution in MCMC is adversarially trained using neural networks instead of being domain-agnostic. Stein GAN (Wang and Liu, 2016) also trains a neural network to draw samples from a given target distribution for probabilistic inference, by iteratively adjusting the network weights according to the SVGD updates. From a GAN perspective, in each iteration of Stein GAN the discriminator performs a two-sample test between the currently generated samples and the samples updated by one step of SVGD. This method generalizes SVGD to training neural networks, and it minimizes the Kullback-Leibler divergence between the sampling distribution and the target inside an RKHS.

Although Stein GAN shares a similar objective with our proposed neural samplers, the two approaches are fundamentally different. Instead of the KL divergence, we incorporate the Stein discrepancy, a special case of the Integral Probability Metrics, which serves as a bridge between true samples and the true density. This enables various frameworks in IPM-based GANs to be directly developed in parallel (Figure 1).
KSD Neural Sampler
Let q(x) denote the un-normalized target density with support on X ⊂ R d and Q(x) be the corresponding distribution function. Denote the noise by z with density p z (z) supported on R d0 . Let G θ denote our sampler, which is a multilayer neural network parametrized by θ. Let x = G θ (z) be our generated samples and denote its underlying density by p θ (x). In summary, our setup is as follows:
z ∼ p z (z), G θ (z) = x ∼ p θ (x)
We want to train the network parameters θ so that p θ (x) is a good approximation to the target q(x).
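As a concrete illustration of this setup, here is a minimal PyTorch sketch (ours; the paper's experiments use TensorFlow) of a sampler G_θ, mirroring the architecture reported in the simulation details: two hidden layers of width 200 with tanh activations, fed with uniform(−10, 10) noise.

```python
import torch
import torch.nn as nn

class Sampler(nn.Module):
    """G_theta: maps noise z in R^{d0} to samples x in R^d."""
    def __init__(self, d0, d, width=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d0, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, d),
        )

    def forward(self, z):
        return self.net(z)

# x = G_theta(z) with z ~ p_z; uniform(-10, 10) noise as in the experiments.
G = Sampler(d0=2, d=2)
z = torch.rand(128, 2) * 20 - 10
x = G(z)
```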
Methodology
Evaluating how close the generated samples X = {G_θ(z_i)}_{i=1}^n are to the target q(x) is equivalent to conducting a one-sample goodness-of-fit test. When q(x) is un-normalized, one well-defined testing framework is based on the kernelized Stein discrepancy.

KSD is the counterpart of the maximum mean discrepancy (MMD) in two-sample testing (Gretton et al., 2012). By choosing F in the IPM (2.5) to be a unit ball in an RKHS, Li et al. (2015) proposed MMD-GAN, which simplifies the GAN framework by eliminating the need to train a discriminator network. As a result, MMD-GAN is more stable and easier to train.

Motivated by MMD-GAN, we propose to train our neural sampler G_θ by directly minimizing KSD with respect to θ using gradient-based optimization. At each iteration, we sample a batch of noise {z_1, · · · , z_n} ∼ p_z(z) and compute the corresponding samples {G_θ(z_1), · · · , G_θ(z_n)}. Plugging the samples into formula (2.4) gives the empirical KSD estimator, which serves as an indicator of how well the current samples approximate q(x). We iteratively update θ to minimize the empirical KSD until convergence. Algorithm 1 summarizes our training procedure.
Algorithm 1 KSD-NS
1: Input: un-normalized density q(x), noise density p_z(z), number of iterations T, learning rate α, mini-batch size n.
2: Initialize parameter θ for the generator network.
3: For iteration t = 1, . . . , T, Do
4:   Generate i.i.d. noise inputs z_1, . . . , z_n from N(0, I_{d_0})
5:   Obtain fake samples G_θ(z_1), · · · , G_θ(z_n)
6:   Compute the empirical KSD̂(p_θ, q)
7:   Compute the gradient ∇_θ KSD̂(p_θ, q)
8:   Update θ ← θ − α ∇_θ KSD̂(p_θ, q)
9: End For
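Below is a hedged PyTorch sketch of Algorithm 1 (our own illustration; the paper implements it in TensorFlow). `ksd_loss` re-derives the RBF-kernel terms of u_q in (2.3) with differentiable tensor ops, and `score_q` is an assumed user-supplied score function of the un-normalized target.

```python
import torch

def ksd_loss(x, score_q, h=1.0):
    """Differentiable empirical KSD (2.4) with an RBF kernel."""
    n, d = x.shape
    S = score_q(x)
    diff = x.unsqueeze(1) - x.unsqueeze(0)          # (n, n, d)
    sq = (diff ** 2).sum(-1)                        # (n, n)
    K = torch.exp(-sq / (2 * h ** 2))
    U = (S @ S.t()) * K \
        + torch.einsum('id,ijd->ij', S, diff) / h ** 2 * K \
        - torch.einsum('jd,ijd->ij', S, diff) / h ** 2 * K \
        + (d / h ** 2 - sq / h ** 4) * K
    U = U - torch.diag(torch.diag(U))               # drop i == j terms
    return U.sum() / (n * (n - 1))

def train_ksd_ns(G, score_q, d0, T=5000, n=128, lr=1e-3):
    """Sketch of Algorithm 1; RMSProp as in the reported experiment settings."""
    opt = torch.optim.RMSprop(G.parameters(), lr=lr)
    for _ in range(T):
        z = torch.randn(n, d0)         # step 4: i.i.d. noise
        x = G(z)                       # step 5: fake samples
        loss = ksd_loss(x, score_q)    # step 6: empirical KSD
        opt.zero_grad()
        loss.backward()                # step 7: gradient w.r.t. theta
        opt.step()                     # step 8: update
    return G
```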
Compared to Stein GAN, KSD-NS has the following advantages:

1. While the loss of Stein GAN is not interpretable, the loss of KSD-NS directly reflects sample quality: KSD is always non-negative, and a smaller KSD indicates better sample quality.
2. KSD-NS is theoretically sound, as shown in Section 4.2: with a sufficient batch size, the empirical KSD loss converging to zero implies weak convergence of the sampling distribution.
3. Even though both methods use an RKHS and the kernel trick, empirical results show that KSD-NS tends to capture global structure better and is less likely to drop modes. Stein GAN is more sensitive to initialization and suffers more severely from local traps.
Mini-Batch Error Bound
The optimization described above involves evaluating an expectation under p_θ, which is approximated by the mini-batch sample mean. A natural question to ask is: when the empirical KSD is minimized, what can we say about the population KSD? In the following, we demonstrate that the generalization error is bounded when the mini-batch sample size is sufficiently large.
Let X_θ = {x_1, · · · , x_n} be the generated samples from our generator G_θ(·), with Θ being the parameter space. Denote by θ̂ and θ* the values minimizing the empirical KSD and the population KSD, respectively:

θ̂ = argmin_{θ∈Θ} KSD̂(X_θ, q),  θ* = argmin_{θ∈Θ} KSD(p_θ, q).

We are interested in bounding the difference

KSD(p_θ̂, q) − KSD(p_θ*, q),
whose upper bound is given in the following theorem.

Theorem 4.1. Assume q and k(·, ·) satisfy some smoothness conditions so that the newly defined kernel u_q in (2.3) is L_1-Lipschitz with one of the arguments fixed. Under some norm constraints on the weight matrix of each layer of the generator G_θ, for any ε > 0 the following bound holds with probability at least 1 − 2 exp(−ε²n/2):

KSD(p_θ*, q) ≤ KSD̂(p_θ̂, q) + O(C_d/√n) + ε,

where C_d is a function of the dimension d.
Remark The norm constraints on the neural network in the above theorem require the norm of each weight matrix to be bounded. The usual norms used are the Frobenius norm, the W_{p,q} norm, and other matrix norms. More details about the conditions and the results are in the supplementary material.

Theorem 4.1 implies that in practice, with a large enough batch size, the generator G_θ̂ can be trusted if we observe a small KSD loss. But we want to raise the following two points.

1. When the KSD is small, what we can tell is that on the support of the samples, the score function of p_θ matches the target score S_q well. An almost-zero KSD does not necessarily imply that p_θ captures all the modes or recovers the entire support of the true density. Admittedly, local traps are a common problem across various sampling methods, but our KSD-NS demonstrates strong resistance to this issue in simulations.
2. KSDs based on the commonly used kernels, such as the Gaussian kernel and the Matern kernel, fail to detect non-convergence when d ≥ 3 (Gorham and Mackey, 2017). However, the KSD used in our neural sampler can be made exempt from this curse of dimensionality: we show that, under some mild constraints, convergence of the KSD to zero does imply weak convergence of p_θ to q.
Metrization of Weak Convergence
The failure of the KSD with a Gaussian kernel in higher dimensions can be traced back to the fast decay of the kernel function. If we choose a heavy-tailed kernel, such as the Inverse Multi-Quadric (IMQ) kernel, the corresponding KSD can detect non-convergence. The following theorem is from Gorham and Mackey (2017).

Theorem 4.2. Under the IMQ kernel k(x, y) = (c² + ||x − y||_2²)^β, where c > 0 and β ∈ (−1, 0), KSD(p_θ, q) → 0 implies p_θ →_d q.

The above theorem shows that the IMQ KSD detects non-convergence. Since the IMQ kernel is bounded, the corresponding KSD is well-defined as long as F(p_θ, q) < ∞. If we use the Gaussian kernel or other popular kernels, we can still ensure weak convergence by enforcing uniform tightness (Merolla et al., 2016). One simple approach is through weight clipping of the generator; see the appendix for details.
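For instance, the uniform-tightness constraint can be enforced with a one-line clipping step after each generator update; this is a sketch under our own assumptions (the clipping range c is a tuning choice, taken large so the restriction is practically negligible):

```python
# After each optimizer step on the generator G:
c = 10.0  # clipping range; large enough that the restriction is negligible
with torch.no_grad():
    for p in G.parameters():
        p.clamp_(-c, c)
```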
If we choose an appropriate kernel and prevent the generated samples from escaping to infinity, KSD-NS is theoretically sound. However, in practice, the performance of our model usually deteriorates as the dimension grows. In the next section, we introduce the Fisher divergence neural sampler, which expands the RKHS to the L_2 space to better cope with the curse of dimensionality.
Fisher Neural Sampler
The ease of computation of kernel methods does not come for free. An RKHS is a relatively small function space, and the expressive power of kernel functions in an RKHS decays as the dimension grows. In generating images, the empirical performance of MMD-GAN (Li et al., 2015) is usually not comparable to that of more computationally intensive GANs such as Wasserstein GAN (Gulrajani et al., 2017).
Methodology
Instead of a unit ball in an RKHS, we choose the function space F in the Stein discrepancy (2.2) to be L_2, and approximate L_2 functions by another multi-layer neural network f_η(x):

D_η(p_θ, q) = sup_η E_{x∼p_θ}[tr(S_q(x)^T f_η(x) + ∇_x f_η(x))].

Neural networks as functions are not square-integrable by nature, since they do not vanish at infinity by default. To impose the L_2 constraint, we add an L_2 penalty term, and our loss function becomes

L_{η,λ}(p_θ, q) = D_η(p_θ, q) − λ E_{x∼p_θ}[f_η^T f_η],
where λ is a tuning parameter. Our training objective is
min θ max η L η,λ (p θ , q)
The ideal training scheme is:

Step 1. Initialize the generator network G_θ and the discriminator network f_η.
Step 2. Fix θ, train η to optimality.
Step 3. Fix η, train θ for one step.
Step 4. Repeat Steps 2 and 3 until convergence.

Here "ideal" mainly refers to training the discriminator to optimality, with the discriminator itself having large enough capacity. The proposed training scheme is similar to those of Wasserstein GAN and Fisher GAN. Under the optimality assumption, we now show that the extension from the RKHS to L_2 indeed induces a stronger mode of convergence.
Optimal Discriminator
The Fisher divergence between two densities p and q is defined as
F(p || q) = E_{x∼p}[||∇_x log p(x) − ∇_x log q(x)||_2²].

We now show that the Fisher divergence is the loss corresponding to our ideal training scheme, provided that the discriminator network has enough capacity.

Theorem 5.1. The optimal discriminator function is

(1/(2λ)) (S_q(x) − S_p(x)).

Training the generator with the optimal discriminator corresponds to minimizing the Fisher divergence between p_θ and q. The corresponding optimal loss for training θ is

L(θ) = (1/(4λ)) E_{x∼p_θ}[||S_q(x) − S_{p_θ}(x)||_2²].
One observation is that when our sampling distribution p_θ is close to the target q, the discriminator function f_η tends to zero. Naturally, f_η can be used as a diagnostic tool to evaluate how well our neural sampler is working.
Fisher Divergence vs. KSD The Fisher divergence dominates KSD in the following sense:

KSD(p, q) ≤ √(E_{x,x′∼p}[k(x, x′)²]) · F(p||q).

The Fisher divergence is stronger than KSD, as well as many other distances between distributions, such as the total variation, the Hellinger distance, the Wasserstein distance, etc. (Ley et al., 2013).
Fisher Divergence vs. KL Divergence The KL divergence is not symmetric and is usually unstable for optimization due to the density ratio it involves, whereas KSD and the Fisher divergence are more robust. Under mild conditions, by the Sobolev inequality, the Fisher divergence is a stronger distance than the KL divergence, which serves as the objective in SVGD. This implies that, in theory, our framework has higher potential than SVGD.

In SVGD or Stein GAN, the normalizing constant is unknown, so it is hard to quantify how well the KL divergence is being minimized. In comparison, both KSD and the Fisher divergence rely only on the score function, and hence their values are directly interpretable as goodness-of-fit test statistics.
Remark The optimality assumption on the discriminator may seem unrealistic. However:

1. Optimality of the discriminator is a standard assumption for all GAN models mentioned in this paper. Optimization in deep neural networks is highly non-convex, and the min-max game in GAN models is extremely hard to characterize; removing this assumption requires a tremendous amount of work (Arora et al., 2017).
2. Many results suggest that deep neural networks with large capacity usually generalize well. Bad local minima are scarce, and more efficient optimization tools for escaping saddle points are being developed (Kawaguchi, 2016; LeCun et al., 2015; Jin et al., 2017).

In practice, we suggest choosing a large enough discriminator network and, after each iteration on θ, training η for 5 steps, as suggested in Wasserstein GAN. Algorithm 2 summarizes our training procedure.
Algorithm 2 Fisher-NS
1: Input: un-normalized density q(x), noise density p_z(z), number of inner (Step 2) iterations m, number of outer (Step 4) iterations T, tuning parameter λ, learning rates α_1, α_2, mini-batch size n.
2: Initialize parameters θ for the generator and η for the discriminator.
3: For iteration t = 1, . . . , T, Do
4:   Generate i.i.d. noise inputs z_1, . . . , z_n from p_z(z)
5:   Obtain fake samples G_θ(z_1), · · · , G_θ(z_n)
6:   For h = 1, . . . , m, Do
7:     Compute the empirical loss L̂_{η,λ}(p_θ, q)
8:     Compute the gradient ∇_η L̂_{η,λ}(p_θ, q)
9:     η ← η + α_1 ∇_η L̂_{η,λ}(p_θ, q)
10:  End For
11:  Compute the empirical loss L̂_{η,λ}(p_θ, q)
12:  Compute the gradient ∇_θ L̂_{η,λ}(p_θ, q)
13:  θ ← θ − α_2 ∇_θ L̂_{η,λ}(p_θ, q)
14: End For
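A minimal PyTorch sketch of Algorithm 2 (ours, not the paper's TensorFlow code). The divergence term tr(∇_x f_η(x)) is computed by double backpropagation; `score_q`, the network classes, and all hyperparameter defaults are assumptions.

```python
import torch

def fisher_loss(x, f, score_q, lam):
    """Empirical L_{eta,lambda}(p_theta, q): Stein term minus the L2 penalty."""
    if not x.requires_grad:
        x = x.requires_grad_(True)            # needed for the divergence term
    fx = f(x)                                 # (n, d) discriminator output
    stein = (score_q(x) * fx).sum(-1)         # S_q(x)^T f(x)
    div = sum(torch.autograd.grad(fx[:, i].sum(), x, create_graph=True)[0][:, i]
              for i in range(fx.shape[1]))    # tr(grad_x f(x)) via double backprop
    return (stein + div - lam * (fx ** 2).sum(-1)).mean()

def train_fisher_ns(G, f, score_q, d0, T=2000, m=5, n=128,
                    lam=0.5, lr_f=1e-3, lr_g=1e-3):
    opt_f = torch.optim.RMSprop(f.parameters(), lr=lr_f)
    opt_g = torch.optim.RMSprop(G.parameters(), lr=lr_g)
    for _ in range(T):
        z = torch.rand(n, d0) * 20 - 10                   # uniform(-10, 10) noise
        for _ in range(m):                                # inner loop: raise L over eta
            loss_f = -fisher_loss(G(z).detach(), f, score_q, lam)
            opt_f.zero_grad(); loss_f.backward(); opt_f.step()
        loss_g = fisher_loss(G(z), f, score_q, lam)       # outer step: lower L over theta
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return G
```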
Training the Generator
After the training cycle for the discriminator, we fix η and train the generator G_θ. Denote the loss function by L(θ); ideally, we want L(θ) to be continuous with respect to θ. Wasserstein GAN gives a very intuitive explanation of the importance of this continuity. We now give some sufficient conditions under which our training scheme satisfies the continuity condition with respect to θ for any discriminator function f_η.

Theorem 5.2. If the following conditions are satisfied: 1) both the generator's weights and the noise are bounded; 2) the discriminator uses smooth activation functions, e.g., tanh, sigmoid, etc.; 3) the target score function S_q is continuously differentiable; then L(θ) is continuous everywhere and differentiable almost everywhere w.r.t. θ.

Remark These conditions impose a form of Lipschitz continuity (details in the appendix). The first condition is trivially satisfied if we choose uniform random noise and apply weight clipping to the generator. Except for θ being bounded, the other conditions are mild. It is true that procedures like weight clipping make the function space smaller, but we can make the clipping range large enough to reach any fixed accuracy (Merolla et al., 2016); the empirical difference is negligible if the range is sufficiently large.
Experiments
We test our neural samplers on both toy examples and real-world problems, comparing them to Stein GAN, SVGD, and other commonly used sampling methods such as Langevin dynamics (LD) and variational inference. In the simulations on Gaussian mixtures, our methods demonstrate a superior ability to handle multimodality and to avoid local traps compared to the benchmark methods. When applied to real-world, high-dimensional data, our methods show comparable performance. All the experimental details are given in the appendix.
Gaussian Mixtures
We start with a toy example to illustrate how our sampler transforms a reference distribution to match the target. Let the target distribution be the 2-dimensional Gaussian mixture

q(x) = 0.5 · N(x; 0, I_2(0.8)) + 0.5 · N(x; 0, I_2(−0.8)),

where N(x; μ, Σ) denotes the density function of N(μ, Σ) and I_2(ρ) denotes the 2 × 2 matrix with unit diagonal and off-diagonal elements ρ. Figure 2 shows how the sampling distribution evolves during training.
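For reference, a small sketch (ours) of this target, with the score S_q obtained by autograd so that it can be plugged into the training loops above:

```python
import torch

def log_q(x):
    """Un-normalized log density of the two-component mixture target
    (the constant log(0.5) mixture weight is dropped; it does not affect the score)."""
    rho = 0.8
    cov1 = torch.tensor([[1.0, rho], [rho, 1.0]])
    cov2 = torch.tensor([[1.0, -rho], [-rho, 1.0]])
    def comp(c):
        sol = torch.linalg.solve(c, x.t()).t()        # c^{-1} x, row-wise
        return -0.5 * (x * sol).sum(-1) - 0.5 * torch.logdet(c)
    return torch.logsumexp(torch.stack([comp(cov1), comp(cov2)]), dim=0)

def score_q(x):
    """S_q(x) = grad_x log q(x), via autograd."""
    if not x.requires_grad:
        x = x.detach().requires_grad_(True)
    return torch.autograd.grad(log_q(x).sum(), x, create_graph=True)[0]
```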
The above is a simple case where the local-trap phenomenon does not occur, and our method successfully captures the detailed shape of the distribution. In practice, however, the target distribution is often multimodal, which can cause a sampling method to be trapped in a subset of the modes.

In order to demonstrate the capability of our method in escaping local modes and exploring the global space, in the next example we consider a 2-dimensional Gaussian mixture with modes far from each other. Specifically, the target is a mixture of 8 standard Gaussian components equally spaced on a circle of radius 15, with equal mixing weights. To make the task more difficult, we set the initial particles far away from the true modes. For a fair comparison, the network configurations of Stein GAN, KSD-NS and Fisher-NS are exactly the same. Figure 3 shows the contour of the target distribution and the evolution of the particles of each compared method. The results suggest that the proposed methods are far more powerful in exploring the global structure.
Consider X = (X_1, X_2)^T ∼ q with E(X) = μ = (μ_1, μ_2)^T. To measure the quality of the samples quantitatively, we estimate the following statistics from the particles generated by each method:

h_1 = E(X_1) + E(X_2) and h_2 = E(X_1 − μ_1)² + E(X_2 − μ_2)².

We perform 30 independent runs of each method and compare the mean squared errors in estimating the two quantities. We also report the average estimated Maximum Mean Discrepancy (MMD) (Gretton et al., 2012) over the 30 runs to measure sample quality.
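A small sketch (ours) of how these two statistics and their squared errors can be computed from the generated particles for the 8-mode circle target; the closed-form true values below follow from the stated mixture (μ = 0 and per-coordinate variance 1 + 15²/2) and should be treated as our derivation, not a quoted result:

```python
import numpy as np

def squared_errors(particles):
    """Squared errors of h1 and h2 on one run, for the 8-mode circle target.

    particles : (n, 2) sample from the fitted sampler.
    For this target mu = (0, 0); each coordinate has variance 1 + 15**2 / 2
    (unit component variance plus the spread of the 8 centers), so the true
    values are h1 = 0 and h2 = 2 * (1 + 112.5) = 227.
    """
    h1_hat = particles[:, 0].mean() + particles[:, 1].mean()
    h2_hat = (particles ** 2).sum(axis=1).mean()
    return (h1_hat - 0.0) ** 2, (h2_hat - 227.0) ** 2
```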
Conclusion
In this paper, we propose two novel frameworks that directly learn preservable transformations from random noise to target distributions. KSD-NS enjoys theoretical guarantees and demonstrates a strong ability to capture multi-modal distributions. Fisher-NS extends KSD-NS and potentially achieves convergence with respect to the Fisher divergence.

The introduction of GANs to sampling is exciting. A neural network generator has great capacity to model the transformation, and adversarial training can optimally correspond to minimizing a wide range of distances between distributions. Using the Stein discrepancy as a bridge, numerous variants of GAN and their related techniques can potentially be applied to sampling in parallel.
Radford, A., Metz, L., and Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.

Raghu, M., Poole, B., Kleinberg, J., Ganguli, S., and Sohl-Dickstein, J. (2016). On the expressive power of deep neural networks. arXiv preprint arXiv:1606.05336.

Lemma A.1. The kernel u_q(x, x′) defined in (2.3) is also a positive definite kernel, and can be rewritten into
u_q(x, x′) = Σ_j λ_j [A_q e_j(x)]^T [A_q e_j(x′)],  (A.1)

where A_q is the Stein operator acting on e_j, namely

A_q(f) = ∇ log q(x) · f + ∇f.  (A.2)
Gaussian Kernel (Fasshauer, 2011) The Gaussian kernel is a popular characteristic kernel, written as

k(x, x′) = exp(−||x − x′||²/(2σ²)).

Its eigenexpansion is

λ_j ∝ b^j, b < 1,  (A.3)

e_j(x) ∝ exp(−a||x||²) / √(2^j j!) · Π_{i=1}^d H_j(x_i √(2c)),  (A.4)

where a, b, c > 0 are constants depending on σ, and H_k is the k-th order Hermite polynomial. The eigenfunctions are L_2-orthonormal; for details, please refer to Section 6.2 of Fasshauer (2011).

Lemma A.2 (McDiarmid's inequality). Let X_1, · · · , X_n ∈ X be independent random variables and let f : X^n → R be a function of X_1, · · · , X_n. Assume there exist c_1, · · · , c_n ≥ 0 such that for all i and all x_1, · · · , x_n, x_i′ ∈ X,

|f(x_1, · · · , x_i, · · · , x_n) − f(x_1, · · · , x_i′, · · · , x_n)| ≤ c_i.

Then, for all ε > 0,

P(f − E(f) ≥ ε) ≤ exp(−2ε² / Σ_{i=1}^n c_i²).

Lemma A.3 (Norm-based sample complexity control (Golowich et al., 2017)). Let H_d be the class of real-valued neural networks of depth D over domain Z, where each weight matrix W_j has Frobenius norm at most M_F(j). Let the activation function be 1-Lipschitz and positive-homogeneous (such as the ReLU). Denote by R_n(H) the empirical Rademacher complexity of H. Then,

R_n(H_d) ≤ B (√(2D log 2) + 1) Π_{j=1}^D M_F(j) / √n,
where B > 0 is the range of the input distribution such that ||z|| ≤ B almost surely.
Lemma A.4 (Extension of the Ledoux-Talagrand contraction inequality (van de Geer, 2016)). Let u : R^d → R be an L-Lipschitz function w.r.t. the L_1 norm, i.e., ∀x, y ∈ R^d, |u(x) − u(y)| ≤ L||x − y||_1. For a function space F = {f = (g_1(x), · · · , g_d(x)) : R^{d_0} → R^d}, denote F_i = {g_i(x) : R^{d_0} → R} for i = 1, 2, · · · , d accordingly. Then,

R_n(u ∘ F) ≤ 2^{d−1} L Σ_{i=1}^d R_n(F_i),

where ∘ denotes composition and u ∘ F = {u ∘ f : f ∈ F}.
Theorem 4.1. Assume q and k(·, ·) satisfy some smoothness conditions so that the newly defined kernel u_q in Lemma A.1 is L_1-Lipschitz with one of the arguments fixed, and that the generator G_θ satisfies the conditions of Lemma A.3. Then, for any ε > 0, the following bound holds with probability at least 1 − 2 exp(−ε²n/2):

KSD(p_θ*, q) ≤ KSD̂(p_θ̂, q) + O(2^d d B √D Π_{j=1}^D M_F(j) / √n) + ε.  (A.5)
Proof. For ease of notation, let

E(θ) = KSD̂(X_θ, q),  T(θ) = KSD(p_θ, q).

By applying the large-deviation bound for U-statistics of Hoeffding (1963), we have that for any fixed θ ∈ Θ,

P(|E(θ) − E[E(θ)]| > ε) ≤ 2 exp(−ε²n/16).  (A.6)
Note that (A.6) holds for any fixed θ. Since θ* is the population KSD minimizer, which does not depend on the samples, we have E[E(θ*)] = T(θ*), which yields

P(|E(θ*) − T(θ*)| > ε) ≤ 2 exp(−ε²n/16).

On the other hand, θ̂ is the empirical KSD minimizer, and to bound it we want to show that for some β,

P(sup_θ |E(θ) − T(θ)| > ε) < β.  (A.7)
Applying (2.4), we can write

sup_θ |E(θ) − T(θ)| ≤ sup_θ |(1/(n(n−1))) Σ_{i=1}^n Σ_{j≠i} [u(X_i, X_j) − E(u(X_i, X_j))]|  (A.8)
:= sup_θ I(X_θ).  (A.9)

For I(X_θ), notice that the bounded-difference condition of McDiarmid's inequality still holds:

|sup_θ I(X_θ) − sup_θ I(X_θ′)| ≤ sup_θ |I(X_θ) − I(X_θ′)| ≤ 2/n,

where X_θ and X_θ′ differ in only one element. Then McDiarmid's inequality gives us

P(sup_θ I(X_θ) − E[sup_θ I(X_θ)] > ε) ≤ exp(−ε²n/2).  (A.10)
With high probability, sup_θ I(X_θ) can be bounded by E[sup_θ I(X_θ)] + ε. Now we bound E[sup_θ I(X_θ)]:

E[sup_θ I(X_θ)]
= E_{p_θ} sup_θ |(1/(n(n−1))) Σ_{i=1}^n Σ_{j≠i} [u(X_i, X_j) − E_{p_θ}(u(X_i, X_j))]|
= E_{p_θ} sup_θ |(1/n) Σ_{i=1}^n (1/(n−1)) Σ_{j≠i} [u(X_i, X_j) − E_{p_θ}(u(X_i, X_j))]|
≤ (1/n) Σ_{i=1}^n E_{p_θ} sup_θ |(1/(n−1)) Σ_{j≠i} [u(X_i, X_j) − E_{p_θ}(u(X_i, X_j))]|
= E_{p_θ} sup_θ |(1/(n−1)) Σ_{j=1}^{n−1} [u(X_n, X_j) − E_{p_θ}(u(X_n, X_j))]|
= E_{X_n} E_{p_θ} [sup_θ |(1/(n−1)) Σ_{j=1}^{n−1} [u(X_n, X_j) − E_{p_θ}(u(X_n, X_j))]| given X_n].
Once X_n is fixed, the u(X_n, X_j) for different j are independent. By a standard Rademacher complexity argument, we have

E_{p_θ} [sup_θ |(1/(n−1)) Σ_{j=1}^{n−1} [u(X_n, X_j) − E_{p_θ}(u(X_n, X_j))]| given X_n] ≤ 2 R_{n−1}(F_{θ,X_n}),  (A.11)

where F_{θ,X_n} = {u(X_n, G_θ(z)) : z ∼ p_z and independent of X_n}. Combining the assumption that u_q is Lipschitz with Lemma A.4, we have

R_n(F_{θ,X_n}) ≤ 2^{d−1} Σ_{i=1}^d R_n(F_{θ,X_n,i}).  (A.12)

Applying Lemma A.3 yields

2 R_n(F_{θ,X_n}) ≤ 2^d d L B (√(2D log 2) + 1) Π_{j=1}^D M_F(j) / √n := η(n, d)  (A.13)
= O(2^d d B √D Π_{j=1}^D M_F(j) / √n).  (A.14)
Together with (A.10) and (A.11), we further get

P(sup_θ I(X_θ) > η(n, d) + ε)
≤ P(sup_θ I(X_θ) > 2 R_{n−1}(F_{θ,X_n}) + ε)
≤ P(sup_θ I(X_θ) > E[sup_θ I(X_θ)] + ε)
≤ exp(−ε²n/2).

Now we can get

P(T(θ̂) − T(θ*) > 4ε + η)
= P(T(θ̂) − E(θ̂) + E(θ̂) − T(θ*) > 4ε + η)
≤ P(T(θ̂) − E(θ̂) + E(θ*) − T(θ*) > 4ε + η)
≤ P(|E(θ̂) − T(θ̂)| > ε + η) + P(|E(θ*) − T(θ*)| > 2√2 ε)
≤ 2 exp(−ε²n/2).
Together with (A.7), the theorem is proved, and the bound goes to zero as ε²n goes to infinity.

Remark The Lipschitz condition on the kernel u_q is not hard to satisfy. From Lemma A.1, if we use a Gaussian kernel, the condition holds as long as S_q does not have exponential tails.
In our application, we can choose among a wide range of noise distributions, as long as the noise is easy to sample and regular enough. If we choose the uniform distribution, then B = O(√d). If we assume M_F(j) ≤ M < ∞ for every j = 1, 2, · · · , D, then (A.5) becomes

KSD(p_θ*, q) ≤ KSD̂(p_θ̂, q) + O(2^d d^{3/2} / √n) + ε.

A.2 Proof of Theorem 5.1

Lemma A.5.

E_{x∼q}[tr(∇_x log p(x) f(x)^T + ∇_x f(x))] = E_{x∼q}[tr((∇_x log p(x) − ∇_x log q(x)) f(x)^T)].
Proof.

E_{x∼q}[tr(∇_x log q(x) f(x)^T + ∇_x f(x))] = 0 ⇒ E_{x∼q}[tr(∇_x f(x))] = −E_{x∼q}[tr(∇_x log q(x) f(x)^T)].

Lemma A.6.

|E_{x∼q}[tr(g(x)^T f(x))]| ≤ √(E_{x∼q}[tr(g(x)^T g(x))]) · √(E_{x∼q}[tr(f(x)^T f(x))]).

The equality holds iff f ∝ g, a.s. x ∼ q.
Proof. Firstly, we have tr(f(x) f(x)^T) = tr(f(x)^T f(x)) = f(x)^T f(x) = ||f(x)||_2². For any t,

E_{x∼q}[tr((f − t·g)(f − t·g)^T)] ≥ 0 ⇒ E_{x∼q}[tr(f^T f)] + t² E_{x∼q}[tr(g^T g)] ≥ 2t E_{x∼q}[tr(g^T f)].

Because the inequality holds for all t, the lemma is proved.
Theorem 5.1. The optimal discriminator is (1/(2λ)) (S_q − S_p). Training the generator with it amounts to minimizing the Fisher divergence between p and q,

(1/(4λ)) E_{x∼p}[tr((S_q − S_p)^T (S_q − S_p))].
Proof. Let our loss function be L. Because tr(f(x) f(x)^T) = tr(f(x)^T f(x)) = f(x)^T f(x) = ||f(x)||_2², we have

L = E_{x∼p}[tr(S_q(x) f(x)^T + ∇_x f(x))] − λ E_{x∼p}[tr(f(x)^T f(x))]  (A.15)
= E_{x∼p}[tr((S_q(x) − S_p(x)) f(x)^T)] − λ E_{x∼p}[tr(f(x)^T f(x))]  (A.16)
≤ √(E_{x∼p}[tr((S_q − S_p)^T (S_q − S_p))]) · √(E_{x∼p}[tr(f(x)^T f(x))]) − λ E_{x∼p}[tr(f(x)^T f(x))]  (A.17)
≤ (1/(4λ)) E_{x∼p}[tr((S_q − S_p)^T (S_q − S_p))].  (A.18)

Equality in (A.17) holds iff f ∝ S_q − S_p, a.s. x ∼ p. Equality in (A.18) holds iff

E_{x∼p}[tr(f(x)^T f(x))] = (1/(4λ²)) E_{x∼p}[tr((S_q − S_p)^T (S_q − S_p))].

So the argmax of L is (1/(2λ)) (∇ log q − ∇ log p), a.s. x ∼ p.
Theorem 5.2. If the following conditions are satisfied: 1) both the generator's weights and the noise are bounded; 2) the discriminator uses smooth activation functions, e.g., tanh, sigmoid, etc.; 3) the target score function S_q is continuously differentiable; then L(θ) is continuous everywhere and differentiable almost everywhere w.r.t. θ.
Proof. Using a generator with weight clipping and uniform noise, we have a transform function G_θ which is Lipschitz. As shown later in Theorem A.10, there exists a compact set Ω such that P(G_θ(z) ∈ Ω) = 1 for all θ. From the condition on the discriminator, we know that f is smooth. Hence A_q f and f^T f are continuously differentiable on Ω, so ||A_q f||_{Lip,Ω} + ||f^T f||_{Lip,Ω} < ∞, and

|E_{p_θ}(A_q f(x) − λ f(x)^T f(x)) − E_{p_θ′}(A_q f(x) − λ f(x)^T f(x))|
= |E_z(A_q f(G_θ(z)) − λ f(G_θ(z))^T f(G_θ(z))) − E_z(A_q f(G_θ′(z)) − λ f(G_θ′(z))^T f(G_θ′(z)))|
≤ (||A_q f||_{Lip,Ω} + λ ||f^T f||_{Lip,Ω}) E_z(||G_θ − G_θ′||).

Because z and θ are both bounded, G is locally Lipschitz; that is, for a given pair (θ, z) there are a constant C(θ, z) and an open set U_θ such that for every (θ′, z) ∈ U_θ we have

||G_θ(z) − G_θ′(z)|| ≤ C(θ, z) ||θ − θ′||.

Under the conditions mentioned before, E_z|C(θ, z)| < ∞, so we obtain

|L(θ) − L(θ′)| ≤ (||A_q f||_{Lip,Ω} + λ ||f^T f||_{Lip,Ω}) E_z|C(θ, z)| ||θ − θ′||.

Therefore, L(θ) is locally Lipschitz and continuous everywhere. Finally, applying Rademacher's theorem shows that L(θ) is differentiable almost everywhere, which completes the proof.
A.3 Relationship to Wasserstein GAN
Denote φ = tr(A_q f); then φ = div(q f)/q, and our loss function without the penalty can be rewritten as E_p(φ) − E_q(φ).

Lemma A.7 (Durán and López García, 2010). If Ω is a John domain, then for any v ∈ L^l_0(Ω), l > 1, there exists u ∈ W^{1,l}_0 such that div u = v in Ω.
In Wasserstein GAN, if we constrain the functions to be compactly supported and the expectation under the target distribution E q (f ) to be zero, the result doesn't change.
Theorem A.8. Suppose there exists l > 1 such that xq, q ∈ L^l_0. If we constrain φ = tr(A_q f) to be 1-Lipschitz and compactly supported, then the optimal loss function is the Wasserstein-1 distance.

Proof. For every function φ that is 1-Lipschitz and has compact support, ||φq|| ≤ ||xq|| + c · ||q||, where c > 0 is some constant. So the equation has a solution: there exists f with compact support such that φ = tr(A_q f).

Remark Firstly, the condition xq, q ∈ L^l_0 is extremely weak; even for the Cauchy distribution it holds. Secondly, we can apply weight clipping to f to ensure that f has compact support and tr(A_q f) is 1-Lipschitz.
A.4 Weak convergence
Theorem A.9. If the kernel k(x, y) is bounded by a constant c, then S(p_θ, q) ≤ c · F(p_θ, q).

Proof.

S(p_θ, q)² = |E_{x,x′}[(S_{p_θ}(x) − S_q(x))^T k(x, x′) (S_{p_θ}(x′) − S_q(x′))]|²
≤ E_{x,x′}[k(x, x′)²] · E_{x,x′}[|(S_{p_θ}(x) − S_q(x))^T (S_{p_θ}(x′) − S_q(x′))|²]
≤ E_{x,x′}[k(x, x′)²] · E_{x,x′}[||S_{p_θ}(x) − S_q(x)||_2² ||S_{p_θ}(x′) − S_q(x′)||_2²]
= E_{x,x′}[k(x, x′)²] · F(p_θ, q)²
≤ c² · F(p_θ, q)².
Theorem A.10. Suppose we use uniform or Gaussian noise and tanh or ReLU activation functions for the generator. Then {p_θ} is uniformly tight if we clip the weights to (−c, c), for any c > 0.

Proof. Denote the transform function of the generator by G_θ. Fix a point z_0 in the noise space. Then there exists R such that ||G_θ(z_0)|| < R for all θ. In addition, G_θ is a Lipschitz function because the weights are clipped to (−c, c), so there exists k such that ||G_θ(x) − G_θ(y)|| ≤ k ||x − y||. Hence

P(||G_θ(z)|| > A) ≤ P(||G_θ(z) − G_θ(z_0)|| > A − R) ≤ P(||z − z_0|| > (A − R)/k).

Notice that z is normal or uniform. For every ε > 0, there exists A such that P(||z − z_0|| > A/k) < ε. Therefore P(||G_θ(z)|| > A + R) < ε holds for all θ, which means the family {p_θ} is uniformly tight. Moreover, if the noise is uniform, there exists A such that P(||G_θ(z)|| > A + R) = 0 for all θ.
A.5 Simulation Details
Experiment Setting for Gaussian Mixtures To ensure a fair comparison, the neural sampler structures of Stein GAN, KSD-NS and Fisher-NS are identical in all the 2-dimensional Gaussian mixture experiments shown in the simulation section. The generator/sampler is a plain network with two hidden layers of width 200 and tanh(·) activations. The discriminator network in Fisher-NS has the same structure as the generator. The noise is chosen to be uniform(−10, 10). The optimization is done in TensorFlow via RMSProp. The learning rate is 0.001 for Stein GAN, KSD-NS and Fisher-NS. The step size for SVGD is 0.3.
| 7,179 |
1810.01049
|
2951285883
|
In this paper, we consider a class of constrained clustering problems of points in @math , where @math could be rather high. A common feature of these problems is that their optimal clusterings no longer have the locality property (due to the additional constraints), which is a key property required by many algorithms for their unconstrained counterparts. To overcome the difficulty caused by the loss of locality, we present in this paper a unified framework, called Peeling-and-Enclosing (PnE) , to iteratively solve two variants of the constrained clustering problems, constrained @math -means clustering ( @math -CMeans) and constrained @math -median clustering ( @math -CMedian). Our framework is based on two standalone geometric techniques, called Simplex Lemma and Weaker Simplex Lemma , for @math -CMeans and @math -CMedian, respectively. The simplex lemma (or weaker simplex lemma) enables us to efficiently approximate the mean (or median) point of an unknown set of points by searching a small-size grid, independent of the dimensionality of the space, in a simplex (or the surrounding region of a simplex), and thus can be used to handle high dimensional data. If @math and @math are fixed numbers, our framework generates, in nearly linear time ( i.e., @math ), @math @math -tuple candidates for the @math mean or median points, and one of them induces a @math -approximation for @math -CMeans or @math -CMedian, where @math is the number of points. Combining this unified framework with a problem-specific selection algorithm (which determines the best @math -tuple candidate), we obtain a @math -approximation for each of the constrained clustering problems. We expect that our technique will be applicable to other constrained clustering problems without locality.
|
As for the hardness of the problem, Dasgupta @cite_3 showed that it is NP-hard for @math -means clustering in high dimensional space even if @math ; @cite_45 proved that there is no PTAS for @math -means clustering if both @math and @math are large, unless @math . Guruswami and Indyk @cite_16 showed that it is NP-hard to obtain any PTAS for @math -median clustering if @math is not a constant and @math is @math .
|
{
"abstract": [
"The Euclidean @math -means problem is a classical problem that has been extensively studied in the theoretical computer science, machine learning and the computational geometry communities. In this problem, we are given a set of @math points in Euclidean space @math , and the goal is to choose @math centers in @math so that the sum of squared distances of each point to its nearest center is minimized. The best approximation algorithms for this problem include a polynomial time constant factor approximation for general @math and a @math -approximation which runs in time @math . At the other extreme, the only known computational complexity result for this problem is NP-hardness [ADHP'09]. The main difficulty in obtaining hardness results stems from the Euclidean nature of the problem, and the fact that any point in @math can be a potential center. This gap in understanding left open the intriguing possibility that the problem might admit a PTAS for all @math . In this paper we provide the first hardness of approximation for the Euclidean @math -means problem. Concretely, we show that there exists a constant @math such that it is NP-hard to approximate the @math -means objective to within a factor of @math . We show this via an efficient reduction from the vertex cover problem on triangle-free graphs: given a triangle-free graph, the goal is to choose the fewest number of vertices which are incident on all the edges. Additionally, we give a proof that the current best hardness results for vertex cover can be carried over to triangle-free graphs. To show this we transform @math , a known hard vertex cover instance, by taking a graph product with a suitably chosen graph @math , and showing that the size of the (normalized) maximum independent set is almost exactly preserved in the product graph using a spectral analysis, which might be of independent interest.",
"We show that k-means clustering is an NP-hard optimization problem, even if k is fixed to 2.",
""
],
"cite_N": [
"@cite_45",
"@cite_3",
"@cite_16"
],
"mid": [
"1956536100",
"2187617524",
"2010480966"
]
}
|
A Unified Framework for Clustering Constrained Data without Locality Property
|
be used to handle high dimensional data. If k and 1/ε are fixed numbers, our framework generates, in nearly linear time (i.e., O(n(log n)^{k+1} d)), O((log n)^k) k-tuple candidates for the k mean or median points, and one of them induces a (1 + ε)-approximation for k-CMeans or k-CMedian, where n is the number of points. Combining this unified framework with a problem-specific selection algorithm (which determines the best k-tuple candidate), we obtain a (1 + ε)-approximation for each of the constrained clustering problems. Our framework considerably improves the best known results for these problems. We expect that our technique will be applicable to other constrained clustering problems without the locality property.
Keywords constrained clustering · k-means/median · approximation algorithms · high dimensions
Introduction
Clustering is one of the most fundamental problems in computer science, and finds numerous applications in many different areas, such as data management, machine learning, bioinformatics, networking, etc. [45]. The common goal of many clustering problems is to partition a set of given data items into a number of clusters so as to minimize the total cost measured by a certain objective function. For example, the popular k-means (or k-median) clustering seeks k mean (or median) points to induce a partition of the given data items so that the average squared distance (or the average distance) from each data item to its closest mean (or median) point is minimized. Most existing clustering techniques assume that the data items are independent from each other and therefore can "freely" determine their memberships in the resulting clusters (i.e., a data item does not need to pay attention to the clustering of others). However, in many real-world applications, data items are often constrained or correlated, which require a great deal of effort to handle such additional constraints. In recent years, considerable attention has been paid to various types of constrained clustering problems and a number of techniques, such as l-diversity clustering [55], r-gather clustering [3,33,43], capacitated clustering [6,25,48], chromatic clustering [8,28], and probabilistic clustering [24,40,53], have been obtained. In this paper, we study a class of constrained clustering problems of points in Euclidean space.
Given a set of points P in R d , a positive integer k, and a constraint C, the constrained k-means (or k-median) clustering problem is to partition P into k clusters so as to minimize the objective function of the ordinary k-means (or k-median) clustering and satisfy the constraint C. In general, the problems are denoted by k-CMeans and k-CMedian, respectively.
The detailed definition for each individual problem is given in Section 4. Roughly speaking, data constraints can be imposed at either cluster or item level. Cluster level constraints are restrictions on the resulting clusters, such as the size of the clusters [3] or their mutual differences [72], while item level constraints are mainly on data items inside each cluster, such as the coloring constraint which prohibits items of the same color being clustered into one cluster [8,28,55]. The additional constraints could considerably change the nature of the clustering problems. For instance, one key property exhibited in many unconstrained clustering problems is the so called locality property, which indicates that each cluster is located entirely inside the Voronoi cell of its center (e.g., the mean, median, or center point) in the Voronoi diagram of all the centers [44] (see Figure 1a). Existing algorithms for these clustering problems often rely on such a property [10,12,22,37,44,51,57,60]. However, due to the additional constraints, the locality property may no longer exist (see Figure 1b). Therefore, we need new techniques to overcome this challenge.
Our Main Results
In this paper we present a unified framework called Peeling-and-Enclosing (PnE), based on two standalone geometric techniques called the Simplex Lemma and the Weaker Simplex Lemma, to solve a class of constrained clustering problems without the locality property in Euclidean space, where the dimensionality of the space could be rather high and the number k of clusters is assumed to be some fixed number. Particularly, we investigate the constrained k-means (k-CMeans) and k-median (k-CMedian) versions of these problems. For the k-CMeans problem, our unified framework generates in O(n(log n)^{k+1} d) time a set of k-tuple candidates of cardinality O((log n)^k) for the to-be-determined k mean points. We show that among the set of candidates, one of them induces a (1 + ε)-approximation for k-CMeans. To find the best k-tuple candidate, a problem-specific selection algorithm is needed for each individual constrained clustering problem (note that due to the additional constraints, the selection problems may not be trivial). Combining the unified framework with the selection algorithms, we obtain a (1 + ε)-approximation for each constrained clustering problem in the considered class. Our results considerably improve (in various ways) the best known algorithms for all these problems (see the table in Section 1.2). Our techniques can also be extended to k-CMedian to achieve (1 + ε)-approximations for these problems with the same time complexities. Below is a list of the constrained clustering problems considered in this paper. We expect that our technique will be applicable to other clustering problems without the locality property, as long as the corresponding selection problems can be solved.
1. l-Diversity Clustering. In this problem, each input point is associated with a color, and each cluster has no more than a fraction 1/l (for some constant l > 1) of its points sharing the same color. The problem is motivated by a widely-used privacy preserving model called l-diversity [55,56] in data management, which requires that each block contains no more than a fraction 1/l of items sharing the same sensitive attribute.

2. Chromatic Clustering. In [28], Ding and Xu introduced a new clustering problem called chromatic clustering, which requires that points with the same color be clustered in different clusters. It is motivated by a biological application for clustering chromosome homologs in a population of cells, where homologs from the same cell should be clustered into different clusters. A similar problem also appears in applications related to transportation system design [8].

3. Fault Tolerant Clustering. The problem of fault tolerant clustering assigns each point p to its l nearest cluster centers for some l ≥ 1, and counts all the l distances as its cost. The problem has been extensively studied in various applications for achieving better fault tolerance [21,42,47,52,64].

4. r-Gather Clustering. This clustering problem requires each of the clusters to contain at least r points for some r > 1. It is motivated by the k-anonymity model for privacy preserving [3,65], where each block contains at least k items.

5. Capacitated Clustering. This clustering problem has an upper bound on the size of each cluster, and finds various applications in data mining and resource assignment [25,48].

6. Semi-Supervised Clustering. Many existing clustering techniques, such as ensemble clustering [62,63] and consensus clustering [4,23], make use of a priori knowledge. Since such clusterings are not always based on the geometric cost (e.g., the k-means cost) of the input, a more accurate way of clustering is to consider both the a priori knowledge and the geometric cost. We consider the following semi-supervised clustering problem: given a set P of points and a clustering S of P (based on the a priori knowledge), partition P into k clusters so as to minimize the sum (or some function) of the geometric cost and the difference with the given clustering S. Another related problem is evolutionary clustering [20], where the clustering at each time point needs to minimize not only the geometric cost, but also the total shift from the clustering at the previous time point (which can be viewed as S).

7. Uncertain Data Clustering. Due to unavoidable errors, data for clustering are not always precise. This motivates us to consider the following probabilistic clustering problem [24,40,53]: given a set of "nodes", each represented as a probabilistic distribution over a point set in R^d, group the nodes into k clusters so as to minimize the expected cost with respect to the probabilistic distributions.
Note: Following our work published in [30], Bhattacharya et al. [17] improved the running time for finding the candidates for the k cluster centers from nearly linear to linear, based on elegant D²-sampling. Their work also follows the framework for clustering constrained data presented in this paper, i.e., generating the candidates and selecting the best one by a problem-specific selection algorithm. Our paper represents the first systematic theoretical study of the constrained clustering problems. Some of the underlying techniques, such as the Simplex Lemma and the Weaker Simplex Lemma, are interesting in their own right and have already been used to solve other problems [31] (e.g., the popular "truth discovery" problem in data mining).
Our Main Ideas
Most existing k-means or k-median clustering algorithms in Euclidean space consist of two main steps: (1) identify the set of k mean or median points and (2) partition the input points into k clusters based on these mean or median points (we call this step Partition). Note that for some constrained clustering problems, the Partition step may not be trivial. More formally, we have the following definition.
Definition 1 (Partition Step) Given an instance P of k-CMeans (or k-CMedian) and k cluster centers (i.e., the mean or median points), the Partition step is to form k clusters of P , where the clusters should satisfy the constraint and each cluster is assigned to an individual cluster center, such that the objective function of the ordinary k-means (or k-median) clustering is minimized.
To determine the set of k mean or median points in step (1), most existing algorithms (either explicitly or implicitly) rely on the locality property. To shed some light on this, consider a representative and elegant approach by Kumar et al. [51] for k-means clustering. Let {Opt 1 , · · · , Opt k } be the set of k unknown optimal clusters in non-increasing order of their sizes. Their approach uses random sampling and sphere peeling to iteratively find k mean points. At the j-th iterative step, it draws j-1 peeling spheres centered at the j-1 already obtained mean points, and takes a random sample on the points outside the peeling spheres to find the j-th mean point. Due to the locality property, the points belonging to the first j-1 clusters lie inside their corresponding j-1 Voronoi cells; that is, for each peeling sphere, most of the covered points belong to their corresponding cluster, and thus ensures the correctness of the peeling step.
However, when the additional constraint (such as coloring or size) is imposed on the points, the locality property may no longer exist (see Figure 1b), and thus the correctness of the peeling step cannot always be guaranteed. In this scenario, the core-set technique [36] is also unlikely to be able to resolve the issue. The main reason is that although the core-set can greatly reduce the size of the input points, it is quite challenging to impose the constraint through the core-set.
To overcome this challenge, we present a unified framework, called Peeling-and-Enclosing (PnE), in this paper, based on a standalone new geometric technique called the Simplex Lemma. The simplex lemma aims to address the major obstacle encountered by the peeling strategy in [51] for constrained clustering problems. More specifically, due to the loss of locality, at the j-th peeling step the points of the j-th cluster Opt_j could be scattered over all the Voronoi cells of the first j − 1 mean points, and therefore their mean point can no longer be determined simply by the sample outside the j − 1 peeling spheres. To resolve this issue, our main idea is to view Opt_j as the union of j unknown subsets, Q_1, · · · , Q_j, with each Q_l, 1 ≤ l ≤ j − 1, being the set of points inside the Voronoi cell (or peeling sphere) of the obtained l-th mean point and Q_j being the set of remaining points of Opt_j. After approximating the mean point of each unknown subset by random sampling, we build a simplex to enclose a region containing the mean point of Opt_j, and then search the simplex region for a good approximation of the j-th mean point. To make this approach work, we need to overcome two difficulties: (a) how to generate the desired simplex containing the j-th mean point, and (b) how to efficiently search for the (approximate) j-th mean point inside the simplex.

For difficulty (a), our idea is to use the already determined j − 1 mean points (which can be shown to also be approximate mean points of Q_1, · · · , Q_{j−1}, respectively) and another point, the mean of the points of Opt_j outside the peeling spheres (or Voronoi cells) of the first j − 1 mean points (i.e., Q_j), to build a (j − 1)-dimensional simplex containing the j-th mean point. Since we do not know how Opt_j is partitioned (i.e., how Opt_j intersects the j − 1 peeling spheres), we vary the radii of the peeling spheres O(log n) times to guess the partition and generate a set of simplexes, where the radius candidates are based on an upper bound of the optimal value determined by a novel estimation algorithm (in Section 3.4). We show that among the set of simplexes, one of them contains the j-th (approximate) mean point.

For difficulty (b), our simplex lemma (in Section 2) shows that if each vertex v_l of the simplex V is the (approximate) mean point of Q_l, then we can find a good approximation of the mean point of Opt_j by searching a small-size grid inside V. A nice feature of the simplex lemma is that the grid size is independent of the dimensionality of the space and thus can be used to handle high dimensional data. In some sense, our simplex lemma can be viewed as a considerable generalization of the well-known sampling lemma (i.e., Lemma 4 in this paper) in [44], which has been widely used for estimating the mean of a point set through random sampling [35,44,51]. Different from Lemma 4, which requires a global view of the point set (meaning that the sample needs to be taken from the point set), our simplex lemma only requires some partial views (e.g., sample sets taken from unknown subsets whose sizes might be quite small). If Opt_j is the point set, our simplex lemma enables us to bound the error by the variance of Opt_j (i.e., a local measure) and the optimal value of the clustering problem on the whole instance P (i.e., a global measure), and thus helps us to ensure the quality of our solution.
For the k-CMedian problem, we show that although the simplex lemma no longer holds since the median point may lie outside the simplex, a weaker version (in Section 5.1) exists, which searches a surrounding region of the simplex. Thus our Peeling-and-Enclosing framework works for both k-CMeans and k-CMedian. It generates in total O((log n) k ) k-tuple candidates for the constrained k mean or median points. To determine the best k mean or median points, we need to use the property of each individual problem to design a selection algorithm. The selection algorithm takes each k-tuple candidate, computes a clustering (i.e., completing the Partition step) satisfying the additional constraint, and outputs the k-tuple with the minimum cost. We present a selection algorithm for each considered problem in Sections 4 and 5.4.
Simplex Lemma
In this section, we present the Simplex Lemma for approximating the mean point of an unknown point set Q, where the only known information is a set of (approximate) mean points of the subsets in some partition of Q.

Lemma 1 (Simplex Lemma I) Let Q be a set of points in R^d with a partition Q = ∪_{l=1}^j Q_l and Q_{l_1} ∩ Q_{l_2} = ∅ for any l_1 ≠ l_2. Let o be the mean point of Q, and o_l be the mean point of Q_l for 1 ≤ l ≤ j. Let the variance of Q be δ² = (1/|Q|) Σ_{q∈Q} ||q − o||², and V be the simplex determined by {o_1, · · · , o_j}. Then for any 0 < ε ≤ 1, it is possible to construct a grid of size O((8j/ε)^j) inside V such that at least one grid point τ satisfies the inequality ||τ − o|| ≤ √ε δ.

Lemma 2 Let Q be a set of points in R^d with mean point o and variance δ² = (1/|Q|) Σ_{q∈Q} ||q − o||², and let Q_1 ⊆ Q with |Q_1|/|Q| = α for some 0 < α < 1 and with mean point o_1. Then ||o_1 − o|| ≤ √((1 − α)/α) δ.

Proof The following claim has been proved in [51].

(Figure 2: (a) the simplex determined by the exact mean points o_1, · · · , o_4, which contains the mean point o; (b) the simplex determined by the approximate mean points o′_1, · · · , o′_4.)

Claim 1 Let Q be a set of points in R^d, and o be the mean point of Q. For any point õ ∈ R^d, Σ_{q∈Q} ||q − õ||² = Σ_{q∈Q} ||q − o||² + |Q| × ||o − õ||².
Let Q 2 = Q \ Q 1 , and o 2 be its mean point. By Claim 1, we have the following two equalities.
Σ_{q∈Q_1} ||q − o||² = Σ_{q∈Q_1} ||q − o_1||² + |Q_1| × ||o_1 − o||²,  (1)
Σ_{q∈Q_2} ||q − o||² = Σ_{q∈Q_2} ||q − o_2||² + |Q_2| × ||o_2 − o||².  (2)
Let L = ||o 1 − o 2 ||. By the definition of mean point, we have
o = (1/|Q|) Σ_{q∈Q} q = (1/|Q|) (Σ_{q∈Q_1} q + Σ_{q∈Q_2} q) = (1/|Q|) (|Q_1| o_1 + |Q_2| o_2).  (3)
Thus the three points {o, o 1 , o 2 } are collinear, while ||o 1 − o|| = (1 − α)L and ||o 2 − o|| = αL. Meanwhile, by the definition of δ, we have
δ² = (1/|Q|) (Σ_{q∈Q_1} ||q − o||² + Σ_{q∈Q_2} ||q − o||²).  (4)
Combining (1) and (2), we have
δ² = (1/|Q|) (Σ_{q∈Q_1} ||q − o_1||² + |Q_1| × ||o_1 − o||² + Σ_{q∈Q_2} ||q − o_2||² + |Q_2| × ||o_2 − o||²)
≥ (1/|Q|) (|Q_1| × ||o_1 − o||² + |Q_2| × ||o_2 − o||²)
= α((1 − α)L)² + (1 − α)(αL)²
= α(1 − α)L².  (5)

Thus, we have L ≤ δ/√(α(1 − α)), which means that ||o_1 − o|| = (1 − α)L ≤ √((1 − α)/α) δ.
Proof (of Lemma 1) We prove this lemma by induction on j.
Base case: For j = 1, since Q_1 = Q, o_1 = o. Thus, the simplex V and the grid are both simply the point o_1. Clearly τ = o_1 satisfies the inequality.

Induction step: Assume that the lemma holds for any j ≤ j_0 for some j_0 ≥ 1 (the induction hypothesis). Now we consider the case j = j_0 + 1. First, we assume that |Q_l|/|Q| ≥ ε/(4j) for each 1 ≤ l ≤ j. Otherwise, we can reduce the problem to the case of a smaller j in the following way. Let I = {l | 1 ≤ l ≤ j, |Q_l|/|Q| < ε/(4j)} be the index set of small subsets. Then Σ_{l∈I} |Q_l|/|Q| < ε/4, and Σ_{l∉I} |Q_l|/|Q| ≥ 1 − ε/4. By Lemma 2, we know that ||o′ − o|| ≤ √((ε/4)/(1 − ε/4)) δ, where o′ is the mean point of ∪_{l∉I} Q_l. Let (δ′)² be the variance of ∪_{l∉I} Q_l. Then we have

(δ′)² ≤ (|Q| / |∪_{l∉I} Q_l|) δ² ≤ (1/(1 − ε/4)) δ².

Thus, if we replace Q and ε by ∪_{l∉I} Q_l and ε/16, respectively, and find a point τ such that

||τ − o′||² ≤ (ε/16) (δ′)² ≤ ((ε/16)/(1 − ε/4)) δ²,

then we have

||τ − o||² ≤ (||τ − o′|| + ||o′ − o||)² ≤ ((9ε/16)/(1 − ε/4)) δ² ≤ ε δ²,  (6)

where the last inequality is due to the fact that ε < 1. This means that we can reduce the problem to one with the point set ∪_{l∉I} Q_l and a smaller j (i.e., j − |I|). By the induction hypothesis, the reduced problem can be solved, where the new simplex is the subset of V determined by {o_l | 1 ≤ l ≤ j, l ∉ I}, and therefore the induction step holds for this case. Note that in general we do not know I, but we can enumerate all 2^j possible index sets to guess I when j is a fixed number, as is the case in the algorithm in Section 3.2. Thus, in the following discussion, we can assume that |Q_l|/|Q| ≥ ε/(4j) for each 1 ≤ l ≤ j.
For each 1 ≤ l ≤ j, since |Q_l|/|Q| ≥ ε/(4j), by Lemma 2 we know that

||o_l − o|| ≤ √((1 − ε/(4j))/(ε/(4j))) δ ≤ 2√(j/ε) δ.

This, together with the triangle inequality, implies that for any 1 ≤ l, l′ ≤ j,

||o_l − o_{l′}|| ≤ ||o_l − o|| + ||o_{l′} − o|| ≤ 4√(j/ε) δ.  (7)

Thus, if we pick any index l_0 and draw a ball B centered at o_{l_0} with radius r = max_{1≤l≤j} {||o_l − o_{l_0}||} ≤ 4√(j/ε) δ (by (7)), the whole simplex V will be inside B.
Note that o = Σ_{l=1}^j (|Q_l|/|Q|) o_l, so o lies inside the simplex V.

To guarantee that o is contained in the ball B, we can construct B only in the (j − 1)-dimensional space spanned by {o_1, · · · , o_j}, rather than in the whole R^d space. Also, if we build a grid inside B with grid length εr/(4j), i.e., generate a uniform mesh with each cell being a (j − 1)-dimensional hypercube of edge length εr/(4j), the total number of grid points is no more than O((8j/ε)^j). With this grid, for any point p inside V there exists a grid point g such that

||g − p|| ≤ √(j (εr/(4j))²) = (ε/(4√j)) r ≤ √ε δ.

This means that we can find a grid point τ inside V such that ||τ − o|| ≤ √ε δ. Thus, the induction step holds, and the lemma is true for any j ≥ 1.
In the above lemma, we assumed that the exact positions of {o_1, · · · , o_j} are known (see Figure 2a). However, in some scenarios (e.g., in the algorithm in Section 3.2), we only know an approximate position of each mean point o_l (see Figure 2b). The following lemma shows that an approximate position of o can still be determined similarly (see Section 7.1 for the proof).

Lemma 3 (Simplex Lemma II) Let Q, o, Q_l, o_l, 1 ≤ l ≤ j, and δ be defined as in Lemma 1. Let {o′_1, · · · , o′_j} be j points in R^d such that ||o′_l − o_l|| ≤ L for 1 ≤ l ≤ j and some L > 0, and let V′ be the simplex determined by {o′_1, · · · , o′_j}. Then for any 0 < ε ≤ 1, it is possible to construct a grid of size O((8j/ε)^j) inside V′ such that at least one grid point τ satisfies the inequality ||τ − o|| ≤ √ε δ + (1 + ε)L.
Peeling-and-Enclosing Algorithm for k-CMeans
In this section, we present a new Peeling-and-Enclosing (PnE) algorithm for generating a set of candidates for the mean points of k-CMeans. Our algorithm uses peeling spheres and the simplex lemma to iteratively find a good candidate for each unknown cluster. An overview of the algorithm is given in Section 3.1.
Some notation: Let P = {p_1, · · · , p_n} be the set of points in R^d in k-CMeans, and OPT = {Opt_1, · · · , Opt_k} be the k unknown optimal constrained clusters, with m_j being the mean point of Opt_j for 1 ≤ j ≤ k. Without loss of generality, we assume that |Opt_1| ≥ |Opt_2| ≥ · · · ≥ |Opt_k|. Denote by δ²_opt the optimal objective value, i.e., δ²_opt = (1/n) Σ_{j=1}^k Σ_{p∈Opt_j} ||p − m_j||². We also set ε > 0 as the parameter related to the quality of the approximate clustering result. Our Peeling-and-Enclosing algorithm needs an upper bound ∆ on the optimal value δ²_opt; specifically, δ²_opt satisfies the condition ∆/c ≤ δ²_opt ≤ ∆ for some constant c ≥ 1. In Section 3.4, we present a novel algorithm to determine such an upper bound for general constrained k-means clustering problems. The algorithm then searches for a (1 + ε)-approximation δ² of δ²_opt in the set
H = {∆/c, (1 + ε)∆/c, (1 + ε)²∆/c, · · · , (1 + ε)^⌈log_{1+ε} c⌉ ∆/c ≥ ∆}.  (8)

(Figure 3: one iteration of Peeling-and-Enclosing for Opt_4: (a) the approximate mean points p_{v_1}, p_{v_2}, p_{v_3}; (b) the peeling spheres around them; (c) the simplex determined by π and p_{v_1}, p_{v_2}, p_{v_3}; (d) the new approximate mean point p_{v_4}.)
Obviously, there exists one element of H lying inside the interval [δ 2 opt , (1 + )δ 2 opt ], and the size of H is O( 1 log c).
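Computationally, H is just a geometric sequence; a one-function sketch (ours):

```python
import math

def candidate_values(delta_upper, c, eps):
    """The search set H of eq. (8): given delta_upper / c <= delta_opt^2 <= delta_upper,
    one returned element falls inside [delta_opt^2, (1 + eps) * delta_opt^2]."""
    size = math.ceil(math.log(c) / math.log(1 + eps)) + 1
    return [delta_upper / c * (1 + eps) ** i for i in range(size)]
```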
At each searching step, our algorithm performs a sphere-peeling and simplex-enclosing procedure to iteratively generate k approximate mean points for the constrained clusters. Initially, our algorithm uses Lemmas 4 and 5 to find an approximate mean point p_{v_1} for Opt_1 (note that since Opt_1 is the largest cluster, |Opt_1|/n ≥ 1/k, and the sampling lemma applies). At the (j + 1)-th iteration, it already has the approximate mean points p_{v_1}, · · · , p_{v_j} for Opt_1, · · · , Opt_j, respectively (see Figure 3(a)). Due to the lack of locality, some points of Opt_{j+1} could be scattered over the regions (e.g., Voronoi cells or peeling spheres) of Opt_1, · · · , Opt_j and are difficult to distinguish from the points in these clusters. Since the number of such points could be small (compared with that of the first j clusters), they need to be handled differently from the remaining points. Our idea is to separate them using j peeling spheres, B_{j+1,1}, · · · , B_{j+1,j}, centered at the j approximate mean points, respectively, and with some properly guessed radius (see Figure 3(b)). Let A be the set of unknown points in Opt_{j+1} \ (∪_{l=1}^j B_{j+1,l}). Our algorithm considers two cases: (a) |A| is large enough, and (b) |A| is small. For case (a), since |A| is large enough, we can use Lemma 4 and Lemma 5 to find an approximate mean point π of A, and then construct a simplex determined by π and p_{v_1}, · · · , p_{v_j} to contain the (j + 1)-th mean point (see Figure 3(c)). Note that A and Opt_{j+1} ∩ B_{j+1,l}, 1 ≤ l ≤ j, can be viewed as a partition of Opt_{j+1}, where the points covered by multiple peeling spheres can be assigned to any one of them, and p_{v_l} can be shown to be an approximate mean point of Opt_{j+1} ∩ B_{j+1,l}; thus the simplex lemma applies. For case (b), it directly constructs a simplex determined just by p_{v_1}, · · · , p_{v_j}. In either case, our algorithm builds a grid inside the simplex and uses Lemma 3 to find an approximate mean point of Opt_{j+1} (i.e., p_{v_{j+1}}; see Figure 3(d)). The algorithm repeats this Peeling-and-Enclosing procedure k times to generate the k approximate mean points.
Peeling-and-Enclosing Algorithm
Before presenting our algorithm, we first introduce two basic lemmas from [29, 44] for random sampling. Let S be a set of n points in R^d, and T be a randomly selected subset of size t from S. Denote by m(S) and m(T) the mean points of S and T, respectively.

Lemma 4 ([44]) With probability 1 − η, ||m(S) − m(T)||² < (1/(ηt)) δ², where δ² = (1/n) Σ_{s∈S} ||s − m(S)||² and 0 < η < 1.

Lemma 5 ([29]) Let Ω be a set of elements, and S be a subset of Ω with |S|/|Ω| = α for some α ∈ (0, 1). If we randomly select (t ln(t/η))/ln(1 + α) = O((t/α) ln(t/η)) elements from Ω, then with probability at least 1 − η, the sample contains at least t elements from S, for 0 < η < 1 and t ∈ Z⁺.
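Both lemmas are easy to validate numerically; a minimal sketch of Lemma 4's bound follows (the test parameters are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(100_000, 5))                 # a point set S in R^5
delta2 = np.mean(np.sum((S - S.mean(axis=0)) ** 2, axis=1))
t, eta = 200, 0.1
T = S[rng.choice(len(S), size=t, replace=False)]  # random subset of size t
err2 = np.sum((S.mean(axis=0) - T.mean(axis=0)) ** 2)
# Lemma 4: err2 < delta2 / (eta * t) with probability >= 1 - eta
print(err2, delta2 / (eta * t))
```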
Our Peeling-and-Enclosing algorithm is shown in Algorithm 1.
Algorithm 1 Peeling-and-Enclosing for k-CMeans
Input: P = {p_1, · · · , p_n} in R^d, k ≥ 2, a constant ε ∈ (0, 1/(4k²)), and an upper bound ∆ with ∆/c ≤ δ²_opt ≤ ∆ for some c ≥ 1.
Output: A set of k-tuple candidates for the k constrained mean points.
1. For i = 0 to ⌈log_{1+ε} c⌉ do
(a) Set δ = √((1 + ε)^i ∆/c), and run Algorithm 2.
(b) Let T_i be the output tree.
2. For each root-to-leaf path of every T_i, build a k-tuple candidate using the k points associated with the path.
Algorithm 2 Peeling-and-Enclosing-Tree
Input: P = {p_1, · · · , p_n} in R^d, k ≥ 2, a constant ε ∈ (0, 1/(4k²)), and δ > 0.
1. Initialize T as a single root node v associated with no point.
2. Recursively grow each node v in the following way:
(a) If the height of v is already k, then it is a leaf.
(b) Otherwise, let j be the height of v. Build the radius candidate set R = ∪_{t=0}^{⌈log n⌉} { ((1 + l ε²)/(2(1 + ε))) j 2^{t/2} √ε δ | 0 ≤ l ≤ ⌈4/ε² + 2/ε⌉ }. For each r ∈ R, do
i. Let {p_{v_1}, · · · , p_{v_j}} be the j points associated with the nodes on the root-to-v path.
ii. For each p_{v_l}, 1 ≤ l ≤ j, construct a ball B_{j+1,l} centered at p_{v_l} and with radius r.
iii. Take a random sample from P \ ∪_{l=1}^j B_{j+1,l} of size s = (8k³/ε⁹) ln(k²/ε⁶). Compute the mean points of all the subsets of the sample, and denote them by Π = {π_1, · · · , π_{2^s−1}}.
iv. For each π_i ∈ Π, construct a simplex using {p_{v_1}, · · · , p_{v_j}, π_i} as its vertices. Also construct another simplex using {p_{v_1}, · · · , p_{v_j}} as its vertices. For each simplex, build a grid of size O((32j/ε²)^j) inside it and each of its 2^j possible degenerated sub-simplices.
v. In total, there are 2^{s+j}(32j/ε²)^j grid points inside the 2^s simplices. For each grid point, add one child to v, and associate it with the grid point.
3. Output T.
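A minimal sketch of the radius candidate generation in step 2(b), following the reconstruction of R above (the helper name is ours):

```python
import math

def radius_candidates(n, j, eps, delta):
    """Build R as the union over scales t of the arithmetic grids
    ((1 + l*eps^2) / (2*(1 + eps))) * j * 2^(t/2) * sqrt(eps) * delta."""
    l_max = math.ceil(4 / eps ** 2 + 2 / eps)
    R = []
    for t in range(math.ceil(math.log2(n)) + 1):
        base = j * 2 ** (t / 2) * math.sqrt(eps) * delta
        R.extend((1 + l * eps ** 2) / (2 * (1 + eps)) * base
                 for l in range(l_max + 1))
    return R
```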
Theorem 1 Let P be a set of n points in R^d and k ∈ Z⁺ be a fixed constant.
In O(2^{poly(k/ε)} n (log n)^{k+1} d) time, Algorithm 1 outputs O(2^{poly(k/ε)} (log n)^k) k-tuple candidate mean points. With constant probability, there exists one k-tuple candidate in the output which is able to induce a (1 + O(ε))-approximation of k-CMeans (together with the solution for the corresponding Partition step).
Remark 1 (1) To increase the success probability to be close to 1, e.g., 1 − 1/n, one only needs to repeatedly run the algorithm O(log n) times; both the time complexity and the number of k-tuple candidates increase by a factor of O(log n).
(2) In general, the Partition step may be challenging to solve. As shown in Section 4, the constrained clustering problems considered in this paper admit efficient selection algorithms for their Partition steps.
Proof of Theorem 1
Let β_j = |Opt_j|/n, and δ_j² = (1/|Opt_j|) Σ_{p∈Opt_j} ||p − m_j||²,
where m_j is the mean point of Opt_j. By our assumption at the beginning of Section 3, we know that β_1 ≥ · · · ≥ β_k. Clearly, Σ_{j=1}^k β_j = 1, and the optimal objective value δ²_opt = Σ_{j=1}^k β_j δ_j². Proof Synopsis: Instead of directly proving Theorem 1, we consider the following Lemma 6 and Lemma 7, which jointly ensure the correctness of Theorem 1. In Lemma 6, we show that there exists a root-to-leaf path in one of the returned trees whose associated k points along the path, denoted by {p_{v_1}, · · · , p_{v_k}}, are close enough to the mean points m_1, · · · , m_k of the k optimal clusters, respectively. The proof is based on mathematical induction; each step needs to build a simplex and applies Simplex Lemma II to bound the error, i.e., ||p_{v_j} − m_j|| in (9). The error is estimated by considering both the local (i.e., the variance of cluster Opt_j) and global (i.e., the optimal value δ_opt) measurements. This is a more accurate estimation, compared with the widely used Lemma 4 which considers only the local measurement. Such an improvement is due to the increased flexibility of Simplex Lemma II, and is a key to our proof. In Lemma 7, we further show that the k points {p_{v_1}, · · · , p_{v_k}} in Lemma 6 induce a (1 + O(ε))-approximation of k-CMeans.
Lemma 6 Among all the trees generated by Algorithm 1, with constant probability, there exists at least one tree, T_i, which has a root-to-leaf path with each of its nodes v_j at level j (1 ≤ j ≤ k) associating with a point p_{v_j} and satisfying the inequality

||p_{v_j} − m_j|| ≤ ε δ_j + (1 + ε) j √(ε/β_j) δ_opt. (9)
Before proving this lemma, we first show its implication.
Lemma 7 If Lemma 6 is true, then {p_{v_1}, · · · , p_{v_k}} is able to induce a (1 + O(ε))-approximation of k-CMeans (together with the solution for the corresponding Partition step).
Proof We assume that Lemma 6 is true. Then for each 1 ≤ j ≤ k, we have

Σ_{p∈Opt_j} ||p − p_{v_j}||² = Σ_{p∈Opt_j} ||p − m_j||² + |Opt_j| · ||m_j − p_{v_j}||²
 ≤ Σ_{p∈Opt_j} ||p − m_j||² + |Opt_j| · 2(ε² δ_j² + (1 + ε)² j² (ε/β_j) δ²_opt)
 = (1 + 2ε²)|Opt_j| δ_j² + 2(1 + ε)² j² ε n δ²_opt, (10)

where the first equality follows from Claim 1 in the proof of Lemma 2 (note that m_j is the mean point of Opt_j), the inequality follows from Lemma 6 and the fact that (a + b)² ≤ 2(a² + b²) for any two real numbers a and b, and the last equality follows from the fact that |Opt_j|/β_j = n. Summing both sides of (10) over j, we have

Σ_{j=1}^k Σ_{p∈Opt_j} ||p − p_{v_j}||² ≤ Σ_{j=1}^k ((1 + 2ε²)|Opt_j| δ_j² + 2(1 + ε)² j² ε n δ²_opt)
 ≤ (1 + 2ε²) Σ_{j=1}^k |Opt_j| δ_j² + 2(1 + ε)² k³ ε n δ²_opt = (1 + O(k³) ε) n δ²_opt, (11)

where the last equality follows from the fact that Σ_{j=1}^k |Opt_j| δ_j² = n δ²_opt. By (11), we know that {p_{v_1}, · · · , p_{v_k}} will induce a (1 + O(k³) ε)-approximation for k-CMeans (together with the solution for the corresponding Partition step). Note that k is assumed to be a fixed number. Thus the lemma is true.
Lemma 7 implies that Lemma 6 is indeed sufficient to ensure the correctness of Theorem 1 (except for the number of candidates and the time complexity). Now we prove Lemma 6.
Proof (of Lemma 6) Let T_i be the tree generated by Algorithm 2 when δ falls in the interval [δ_opt, (1 + ε)δ_opt]. We will focus our discussion on T_i, and prove the lemma by mathematical induction on j. Base case: For j = 1, since β_1 = max{β_j | 1 ≤ j ≤ k}, we have β_1 ≥ 1/k. By Lemma 4 and Lemma 5, we can find the approximate mean point through random sampling. Let Ω and S (in Lemma 5) be P and Opt_1, respectively. Also, p_{v_1} is the mean point of the random sample from P. Lemma 5 ensures that the sample contains a sufficient number of points from Opt_1, and Lemma 4 implies that

||p_{v_1} − m_1|| ≤ ε δ_1 ≤ ε δ_1 + (1 + ε) √(ε/β_1) δ_opt.
Induction step: Suppose j > 1. We assume that there is a path in T_i from the root to the (j − 1)-th level, such that for each 1 ≤ l ≤ j − 1, the level-l node v_l on the path is associated with a point p_{v_l} satisfying the inequality ||p_{v_l} − m_l|| ≤ ε δ_l + (1 + ε) l √(ε/β_l) δ_opt (i.e., the induction hypothesis). Now we consider the case of j. Below we will show that there is one child of v_{j−1}, i.e., v_j, such that its associated point p_{v_j} satisfies the inequality ||p_{v_j} − m_j|| ≤ ε δ_j + (1 + ε) j √(ε/β_j) δ_opt. First, we have the following claim (see Section 7.2 for the proof).
Claim 2 In the set of radius candidates in Algorithm 2, there exists one value r_j ∈ R such that

r_j ∈ [j √(ε/β_j) δ_opt, (1 + ε²) j √(ε/β_j) δ_opt]. (12)
Now, we construct the j − 1 peeling spheres {B_{j,1}, · · · , B_{j,j−1}} as in Algorithm 2. For each 1 ≤ l ≤ j − 1, B_{j,l} is centered at p_{v_l} and with radius r_j. By Markov's inequality and the induction hypothesis, we have the following claim (see Section 7.3 for the proof).

Claim 3 For each 1 ≤ l ≤ j − 1, |Opt_l \ (∪_{w=1}^{j−1} B_{j,w})| ≤ ε⁴ β_j n.

Claim 3 shows that |Opt_l \ (∪_{w=1}^{j−1} B_{j,w})| is bounded for 1 ≤ l ≤ j − 1, which helps us to find the approximate mean point of Opt_j. Induced by the j − 1 peeling spheres {B_{j,1}, · · · , B_{j,j−1}}, Opt_j is divided into j subsets, Opt_j ∩ B_{j,1}, · · · , Opt_j ∩ B_{j,j−1}, and Opt_j \ (∪_{w=1}^{j−1} B_{j,w}). For ease of discussion, let P_l denote Opt_j ∩ B_{j,l} for 1 ≤ l ≤ j − 1, P_j denote Opt_j \ (∪_{w=1}^{j−1} B_{j,w}), and τ_l denote the mean point of P_l for 1 ≤ l ≤ j. Note that the peeling spheres may intersect with each other. For any two intersecting spheres B_{j,l_1} and B_{j,l_2}, we arbitrarily assign the points in Opt_j ∩ (B_{j,l_1} ∩ B_{j,l_2}) to either P_{l_1} or P_{l_2}. Thus, we can assume that {P_l | 1 ≤ l ≤ j} are pairwise disjoint. Now consider the size of P_j. We have the following two cases: (a) |P_j| ≥ (ε³/j) β_j n and (b) |P_j| < (ε³/j) β_j n. We show how, in each case, Algorithm 2 can obtain an approximate mean point for Opt_j by using the simplex lemma (i.e., Lemma 3).

For case (a), by Claim 3, together with the fact that β_l ≤ β_j for l > j, we know that

Σ_{l=1}^k |Opt_l \ (∪_{w=1}^{j−1} B_{j,w})| ≤ Σ_{l=1}^{j−1} |Opt_l \ (∪_{w=1}^{j−1} B_{j,w})| + |P_j| + Σ_{l=j+1}^k |Opt_l| ≤ ε⁴ (j − 1) β_j n + |P_j| + (k − j) β_j n, (13)

where the second inequality follows from Claim 3. So we have

|P_j| / Σ_{l=1}^k |Opt_l \ (∪_{w=1}^{j−1} B_{j,w})| ≥ |P_j| / (ε⁴ (j − 1) β_j n + |P_j| + (k − j) β_j n). (14)

We view the right-hand side as a function of |P_j|. Given any h > 0, the function f(x) = x/(x + h) is increasing on x ∈ [0, +∞). Note that we assume |P_j| ≥ (ε³/j) β_j n. Thus

|P_j| / Σ_{l=1}^k |Opt_l \ (∪_{w=1}^{j−1} B_{j,w})| ≥ (ε³/j) β_j n / (ε⁴ (j − 1) β_j n + (ε³/j) β_j n + (k − j) β_j n) > ε⁴/(8kj) ≥ ε⁴/(8k²). (15)
(15) implies that P_j is large enough compared with the set of points outside the peeling spheres. Hence, we can obtain an approximate mean point π for P_j in the following way. First, we set t = k/ε⁵ and η = ε/k, and take a sample of size (t ln(t/η))/(ε⁴/(8k²)) = (8k³/ε⁹) ln(k²/ε⁶). By Lemma 5, we know that with probability 1 − ε/k, the sample contains at least k/ε⁵ points from P_j. Then we let π be the mean point of the k/ε⁵ points from P_j, and a² be the variance of P_j. By Lemma 4, we know that with probability 1 − ε/k, ||π − τ_j||² ≤ ε⁴ a². Also, since |P_j|/|Opt_j| = |P_j|/(β_j n) ≥ ε³/j (because |P_j| ≥ (ε³/j) β_j n in case (a)), we have a² ≤ (|Opt_j|/|P_j|) δ_j² ≤ (j/ε³) δ_j². Thus,

||π − τ_j||² ≤ ε j δ_j². (16)

Fig. 4: The simplex V(a) determined by {p_{v_1}, p_{v_2}, p_{v_3}, π} (a) and the simplex V(b) determined by {p_{v_1}, p_{v_2}, p_{v_3}} (b), together with the peeling spheres B_{4,1}, B_{4,2}, B_{4,3}.

Once π is obtained, we can apply Lemma 3 to find a point p_{v_j} satisfying the condition ||p_{v_j} − m_j|| ≤ ε δ_j + (1 + ε) j √(ε/β_j) δ_opt. We construct a simplex V(a) with vertices {p_{v_1}, · · · , p_{v_{j−1}}} and π (see Figure 4a). Note that Opt_j is partitioned by the peeling spheres into j disjoint subsets, P_1, · · · , P_j. Each P_l (1 ≤ l ≤ j − 1) lies inside B_{j,l}, which implies that τ_l, i.e., the mean point of P_l, is also inside B_{j,l}. Further, by Claim 2, for 1 ≤ l ≤ j − 1, we have

||p_{v_l} − τ_l|| ≤ r_j ≤ (1 + ε²) j √(ε/β_j) δ_opt. (17)

Recall that β_j δ_j² ≤ δ²_opt. Thus, together with (16), we have

||π − τ_j|| ≤ √(εj) δ_j ≤ √(εj/β_j) δ_opt. (18)

By (17) and (18), if we set the value of L (in Lemma 3) to be

max{r_j, ||π − τ_j||} ≤ max{(1 + ε²) j √(ε/β_j) δ_opt, √(εj/β_j) δ_opt} = (1 + ε²) j √(ε/β_j) δ_opt, (19)

and the value of ε (in Lemma 3) to be ε₀ = ε²/4, then by Lemma 3 we can construct a grid inside the simplex V(a) with size O((8j/ε₀)^j) = O((32j/ε²)^j) to ensure the existence of a grid point τ satisfying the inequality

||τ − m_j|| ≤ √ε₀ δ_j + (1 + ε₀) L ≤ ε δ_j + (1 + ε) j √(ε/β_j) δ_opt. (20)
Hence, let p_{v_j} be the grid point τ, and the induction step holds for this case. For case (b), we can also apply Lemma 3 to find an approximate mean point in a way similar to case (a); the difference is that we construct a simplex V(b) with vertices {p_{v_1}, · · · , p_{v_{j−1}}} (see Figure 4b). Roughly speaking, since |P_j| is small, the mean points of Opt_j \ P_j and Opt_j are very close to each other (by Lemma 2). Thus, we can ignore P_j and just consider Opt_j \ P_j.
Let a² and m'_j denote the variance and mean point of Opt_j \ P_j, respectively. We know that {P_1, P_2, · · · , P_{j−1}} is a partition of Opt_j \ P_j. Thus, similarly to case (a), we construct a simplex V(b) determined by {p_{v_1}, · · · , p_{v_{j−1}}} (see Figure 4b), set the value of L to be r_j ≤ (1 + ε²) j √(ε/β_j) δ_opt, and then build a grid inside V(b) with size O((8j/ε₀)^j), where ε₀ = ε²/4. By Lemma 3, we know that there exists one grid point τ satisfying the condition

||τ − m'_j|| ≤ √ε₀ a + (1 + ε₀) L ≤ (ε/2) a + (1 + ε) j √(ε/β_j) δ_opt. (21)

Meanwhile, we know that |Opt_j \ P_j| ≥ (1 − ε³/j)|Opt_j|, since |P_j| ≤ (ε³/j)|Opt_j|. Thus, we have a² ≤ (|Opt_j|/|Opt_j \ P_j|) δ_j² ≤ (1/(1 − ε³/j)) δ_j², and ||m'_j − m_j|| ≤ √((ε³/j)/(1 − ε³/j)) δ_j (by Lemma 2). Together with (21), we have

||τ − m_j|| ≤ ||τ − m'_j|| + ||m'_j − m_j||
 ≤ (ε/2) a + (1 + ε) j √(ε/β_j) δ_opt + √((ε³/j)/(1 − ε³/j)) δ_j
 ≤ (ε/2) √(1/(1 − ε³/j)) δ_j + (1 + ε) j √(ε/β_j) δ_opt + √((ε³/j)/(1 − ε³/j)) δ_j
 ≤ ((ε/2) √(1/(1 − ε³/j)) + √((ε³/j)/(1 − ε³/j))) δ_j + (1 + ε) j √(ε/β_j) δ_opt
 ≤ ε δ_j + (1 + ε) j √(ε/β_j) δ_opt. (22)
Hence, let p vj be the grid point τ , and the induction step holds for this case. Since Algorithm 2 executes every step in our above discussion, the induction step, as well as the lemma, is true.
Success probability: From the above analysis, we know that in the j-th iteration, only case (a) (i.e., |P_j| ≥ (ε³/j) β_j n) needs to consider the success probability of random sampling. Recall that in case (a), we take a sample of size (8k³/ε⁹) ln(k²/ε⁶). Thus, with probability 1 − ε/k, it contains at least k/ε⁵ points from P_j. Meanwhile, with probability 1 − ε/k, ||π − τ_j||² ≤ ε⁴ a². Hence, the success probability in the j-th iteration is (1 − ε/k)². By taking the union bound, the success probability over all k iterations is (1 − ε/k)^{2k} ≥ 1 − 2ε.
Number of Candidates and Running time: Algorithm 1 calls Algorithm 2 O((1/ε) log c) times (in Section 3.4, we will show that c can be a constant). It is easy to see that each node in the returned tree has |R| 2^{s+j} (32j/ε²)^j children, where |R| = O((log n)/ε²) and s = (8k³/ε⁹) ln(k²/ε⁶). Since the tree has height k, the complexity of the tree is O(2^{poly(k/ε)} (log n)^k). Consequently, the number of candidates is O(2^{poly(k/ε)} (log n)^k). Further, since each node takes O(|R| 2^{s+j} (32j/ε²)^j n d) time, the total time complexity of the algorithm is O(2^{poly(k/ε)} n (log n)^{k+1} d).
Upper Bound Estimation
As mentioned in Section 3.1, our Peeling-and-Enclosing algorithm needs an upper bound ∆ on the optimal value δ²_opt. To compute this, our main idea is to run some unconstrained k-means clustering algorithm A* (e.g., the linear-time (1 + ε)-approximation algorithm in [51]) on the input points P without considering the constraint, so as to obtain a λ-approximation of the k-means clustering for some constant λ > 1. Let C = {c_1, · · · , c_k} be the set of mean points returned by algorithm A*. Below, we show that the Cartesian product [C]^k = C × · · · × C (k times) contains one k-tuple which is an (18λ + 16)-approximation of k-CMeans on the same input P. Clearly, to select the k-tuple from [C]^k with the smallest objective value, we still need to solve the Partition step on each k-tuple to form the desired clusters. Similar to Remark 1, we refer the reader to Section 4 for the selection algorithms for the considered constrained clustering problems.
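In code, this selection is a plain enumeration over [C]^k; a minimal sketch, where `partition_cost` stands for the problem-specific Partition solver of Section 4 (the helper names are ours):

```python
import itertools

def best_candidate_tuple(C, k, partition_cost):
    """Scan the |C|^k tuples of [C]^k and keep the one whose Partition
    step (passed in as `partition_cost`, returning the clustering cost
    for a fixed k-tuple of means) is cheapest."""
    best_tuple, best_cost = None, float('inf')
    for tup in itertools.product(C, repeat=k):
        cost = partition_cost(tup)   # solve the Partition step for this tuple
        if cost < best_cost:
            best_tuple, best_cost = tup, cost
    return best_tuple, best_cost
```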
Theorem 2 Let P = {p_1, · · · , p_n} be the input points of k-CMeans, and C = {c_1, · · · , c_k} be the mean points of a λ-approximation of the k-means clustering on P (without considering the constraint) for some constant λ ≥ 1.
Then [C]^k contains at least one k-tuple which is able to induce an (18λ + 16)-approximation of k-CMeans (together with the solution for the corresponding Partition step). Proof Synopsis: Let ω be the objective value of the k-means clustering on P corresponding to the k mean points in C. To prove Theorem 2, we create a new instance of k-CMeans: for each point p_i ∈ P, move it to its nearest point, say c_t, in {c_1, · · · , c_k}; let p̄_i denote the new p_i (note that c_t and p̄_i coincide with each other; see Figure 5a). The set P̄ = {p̄_1, · · · , p̄_n} forms a new instance of k-CMeans. Let δ̄²_opt be the optimal value of k-CMeans on P̄, and δ²_opt([C]^k) be the minimum cost of k-CMeans on P obtained by restricting its mean points to a k-tuple in [C]^k. We show that δ̄²_opt is bounded by a combination of ω and δ²_opt, and δ²_opt([C]^k) is bounded by a combination of ω and δ̄²_opt (see Lemma 8). Together with the fact that ω is no more than λ δ²_opt, we consequently obtain that δ²_opt([C]^k) ≤ (18λ + 16) δ²_opt, which implies Theorem 2.
Lemma 8 δ̄²_opt ≤ 2ω + 2δ²_opt, and δ²_opt([C]^k) ≤ 2ω + 8δ̄²_opt.
Proof We first prove the inequality δ̄²_opt ≤ 2ω + 2δ²_opt. Consider any point p_i ∈ P. Let Opt_l be the optimal cluster containing p_i. Then, we have

||p̄_i − m_l||² ≤ (||p̄_i − p_i|| + ||p_i − m_l||)² ≤ 2||p̄_i − p_i||² + 2||p_i − m_l||², (23)

where the first inequality follows from the triangle inequality, and the second inequality follows from the fact that (a + b)² ≤ 2a² + 2b² for any two real numbers a and b. For both sides of (23), we take the averages over all the points in P, and obtain

(1/n) Σ_{l=1}^k Σ_{p_i∈Opt_l} ||p̄_i − m_l||² ≤ (2/n) Σ_{i=1}^n ||p̄_i − p_i||² + (2/n) Σ_{l=1}^k Σ_{p_i∈Opt_l} ||p_i − m_l||². (24)
Note that the left-hand side of (24) is not smaller than δ̄²_opt, since δ̄²_opt is the optimal objective value of k-CMeans on P̄. For the right-hand side of (24), the first term (2/n) Σ_{i=1}^n ||p̄_i − p_i||² = 2ω (by the construction of P̄), and the second term (2/n) Σ_{l=1}^k Σ_{p_i∈Opt_l} ||p_i − m_l||² = 2δ²_opt. Thus, we have δ̄²_opt ≤ 2ω + 2δ²_opt. Next, we show the inequality δ²_opt([C]^k) ≤ 2ω + 8δ̄²_opt. Consider k-CMeans clustering on P̄. Let {m̃_1, · · · , m̃_k} be the optimal constrained mean points of P̄, and {Õ_1, · · · , Õ_k} be the corresponding optimal clusters. Let {c̃_1, · · · , c̃_k} be the k-tuple in [C]^k with c̃_l being the nearest point in C to m̃_l. Thus, by an argument similar to the one for (23), we have

||p̄_i − c̃_l||² ≤ 2||p̄_i − m̃_l||² + 2||m̃_l − c̃_l||² ≤ 4||p̄_i − m̃_l||² (25)

for each p̄_i ∈ Õ_l. In (25), the last inequality follows from the facts that c̃_l is the nearest point in C to m̃_l and p̄_i ∈ C, which imply that ||m̃_l − c̃_l|| ≤ ||m̃_l − p̄_i|| (see Figure 5b). Summing both sides of (25) over all the points in P̄, we have

Σ_{l=1}^k Σ_{p̄_i∈Õ_l} ||p̄_i − c̃_l||² ≤ 4 Σ_{l=1}^k Σ_{p̄_i∈Õ_l} ||p̄_i − m̃_l||². (26)
Now, consider the following clustering on P. For each p_i, if p̄_i ∈ Õ_l, we cluster it to the corresponding c̃_l. Then the objective value of the clustering is

(1/n) Σ_{l=1}^k Σ_{p̄_i∈Õ_l} ||p_i − c̃_l||² ≤ (1/n) Σ_{l=1}^k Σ_{p̄_i∈Õ_l} (2||p_i − p̄_i||² + 2||p̄_i − c̃_l||²)
 ≤ (2/n) Σ_{i=1}^n ||p_i − p̄_i||² + (8/n) Σ_{l=1}^k Σ_{p̄_i∈Õ_l} ||p̄_i − m̃_l||². (27)

The left-hand side of (27), (1/n) Σ_{l=1}^k Σ_{p̄_i∈Õ_l} ||p_i − c̃_l||², is no smaller than δ²_opt([C]^k) (by definition), and the right-hand side of (27) is equal to 2ω + 8δ̄²_opt. Thus, we have δ²_opt([C]^k) ≤ 2ω + 8δ̄²_opt.
Proof (of Theorem 2) By the two inequalities in Lemma 8, we know that δ²_opt([C]^k) ≤ 2ω + 8δ̄²_opt ≤ 18ω + 16δ²_opt. It is obvious that the optimal objective value of the k-means clustering is no larger than that of k-CMeans on the same set of input points P. This implies that ω ≤ λ δ²_opt. Thus, we have

δ²_opt([C]^k) ≤ (18λ + 16) δ²_opt. (28)

So there exists one k-tuple in [C]^k which is able to induce an (18λ + 16)-approximation.
Selection Algorithms for k-CMeans
As shown in Section 3, a set of k-tuple candidates for the mean points of k-CMeans can be obtained by our Peeling-and-Enclosing algorithm. To determine the best candidate, we need a selection algorithm to compute the clustering for each k-tuple candidate and select the one with the smallest objective value. Clearly, the key to designing a selection algorithm is to solve the Partition step (i.e., generating the clustering) for each k-tuple candidate. We need to design a problem-specific algorithm for the Partition step, to satisfy the constraint of each individual problem. We consider all the constrained k-means clustering problems mentioned in Section 1.1, except for uncertain data clustering, since Cormode and McGregor [24] showed that it can be reduced to an ordinary k-means clustering problem. However, the k-median version of the uncertain data clustering does not have such a property. In Section 5.4, we will discuss how to obtain the (1 + ε)-approximation by applying our Peeling-and-Enclosing framework.
r-Gather k-means Clustering
Let P be a set of n points in R d . r-Gather k-means clustering (denoted by (r, k)-GMeans) on P is the problem of clustering P into k clusters with size at least r, such that the average squared Euclidean distance from each point in P to the mean point of its cluster is minimized [3].
To solve the Partition problem of (r, k)-GMeans, we adopt the following strategy. For each k-tuple candidate P_v = {p_{v_1}, · · · , p_{v_k}} returned by Algorithm 1, build a complete bipartite graph G (see Figure 6a): each vertex in the left column corresponds to a point in P, and each vertex in the right column represents a candidate mean point in P_v; each pair of vertices from different partite sets is connected by an edge whose weight equals their squared Euclidean distance. We can solve the Partition problem by finding the minimum cost matching in G: each vertex on the left has supply 1, and each vertex on the right has demand r and capacity n. After adding a source node s connected to all the vertices on the left and a sink node t connected to all the vertices on the right, we can reduce the Partition problem to a minimum cost circulation problem and solve it using the algorithm in [34]. Denote by V and E the sets of vertices and edges of G. The running time for solving the minimum cost circulation problem is O(|E|² log |V| + |E| · |V| log² |V|) [59]. In our case, |E| = O(n) and |V| = O(n) if k is a fixed constant. Also, the time complexity for building G is O(nd). Thus, the total time for solving the Partition problem is O(n(n(log n)² + d)).
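To make the reduction concrete, here is a minimal sketch using networkx's min-cost-flow solver in place of the circulation algorithms of [34, 59]; the helper name, the folding of the source/sink into node demands, and the integer scaling of the squared distances are our own conventions, not part of the original construction:

```python
import networkx as nx
import numpy as np

def r_gather_partition(points, means, r):
    """Partition step of (r,k)-GMeans as a minimum cost flow: every point
    supplies one unit, every candidate mean must keep at least r units
    (modeled as a node demand), and the surplus drains to a sink node."""
    points, means = np.asarray(points), np.asarray(means)
    n, k = len(points), len(means)
    G = nx.DiGraph()
    G.add_node('t', demand=n - k * r)                  # sink absorbs the surplus
    for i in range(n):
        G.add_node(('p', i), demand=-1)                # one unit of supply per point
        for j in range(k):
            w = int(1e6 * np.sum((points[i] - means[j]) ** 2))
            G.add_edge(('p', i), ('c', j), weight=w, capacity=1)
    for j in range(k):
        G.add_node(('c', j), demand=r)                 # cluster size >= r
        G.add_edge(('c', j), 't', weight=0, capacity=n - r)
    flow = nx.min_cost_flow(G)
    clusters = [[] for _ in range(k)]
    for i in range(n):
        for j in range(k):
            if flow[('p', i)].get(('c', j), 0) == 1:
                clusters[j].append(i)
    return clusters
```

This assumes n ≥ kr; the capacity version of the next subsection is obtained by swapping the demand r for a capacity r.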
Theorem 3 There exists an algorithm yielding a (1 + ε)-approximation for (r, k)-GMeans with constant probability, in O(2^{poly(k/ε)} (log n)^{k+1} n (n log n + d)) time.
r-Capacity k-means Clustering
r-Capacity k-means clustering (denoted by (r, k)-CaMeans) [48] on a set P of n points in R d is the problem of clustering P into k clusters with size at most r, such that the average squared Euclidean distance from each point in P to the mean point of its cluster is minimized.
We can solve the Partition problem of (r, k)-CaMeans in a way similar to that of (r, k)-GMeans; the only difference is that the demand r is replaced by the capacity r.
Theorem 4 There exists an algorithm yielding a (1 + ε)-approximation for (r, k)-CaMeans with constant probability, in O(2^{poly(k/ε)} (log n)^{k+1} n (n log n + d)) time.
l-Diversity k-means Clustering
Let P = ∪_{i=1}^ñ P_i be a set of colored points in R^d with Σ_{i=1}^ñ |P_i| = n, where the points in each P_i share the same color. l-Diversity k-means clustering (denoted by (l, k)-DMeans) on P is the problem of clustering P into k clusters such that, inside each cluster, the points sharing the same color constitute a fraction of no more than 1/l for some l > 1, and the average squared Euclidean distance from each point in P to the mean point of its cluster is minimized.
Similar to (r, k)-GMeans, we reduce the Partition problem of (l, k)-DMeans to a minimum cost circulation problem for each k-tuple candidate P_v = {p_{v_1}, · · · , p_{v_k}}. The challenge is that we do not know the size of each resulting cluster, and therefore it is difficult to control the flow on each edge if we directly use the bipartite graph built in Figure 6a. Instead, we add a set of "gates" between the input P and the k-tuple P_v (see Figure 6b). First, following the definition of (l, k)-DMeans, we partition the vertices of P into ñ groups {P_1, · · · , P_ñ}. For each P_i, we generate a new set of vertices (i.e., the gates) P'_i = {c^i_1, · · · , c^i_k}, and connect each pair of p ∈ P_i and c^i_j ∈ P'_i by an edge with weight ||p − p_{v_j}||². We also connect each pair of c^i_j and p_{v_j} by an edge with weight 0. In Figure 6b, the number of vertices is |V| = n + kñ + k + 2 = O(kn), and the number of edges is |E| = n + kn + kñ + k = O(kn). Below we show that we can use c^i_j to control the flow from P_i to p_{v_j} by setting appropriate capacities and demands.
Let t = max_{1≤i≤ñ} |P_i|. We consider the value |Opt_j|/l, which is the upper bound on the number of points with the same color in Opt_j (recall that Opt_j is the j-th optimal cluster defined in Section 3). The upper bound |Opt_j|/l can be either between 1 and t or larger than t. Clearly, if the upper bound is larger than t, there is no need to consider it anymore. Thus, we can enumerate all the (t + 1)^k cases to guess the upper bound |Opt_j|/l for 1 ≤ j ≤ k. Let u_j be the guessed upper bound for Opt_j. If u_j is no more than t, then each c^i_j, 1 ≤ i ≤ ñ, has capacity u_j, and p_{v_j} has demand l·u_j and capacity l·(u_j + 1) − 1. Otherwise (i.e., u_j > t), we set the capacity of each c^i_j, 1 ≤ i ≤ ñ, to be n, and the demand and capacity of p_{v_j} to be l·(t + 1) and n, respectively. By using the algorithm in [59], we solve the minimum cost circulation problem for each of the (t + 1)^k guesses.
Theorem 5 For any colored point set P = ∪_{i=1}^ñ P_i in R^d with n = |P| and t = max_{1≤i≤ñ} |P_i|, there exists an algorithm yielding, in O(2^{poly(k/ε)} (log n)^{k+1} (t + 1)^k n (n log n + d)) time, a (1 + ε)-approximation for (l, k)-DMeans with constant probability.
Note: We can solve the problem in [55] by slightly changing the above Partition algorithm. In [55], it is required that the size of each cluster be at least l and that the points inside each cluster have distinct colors, which means that the upper bound u_j is always equal to 1 for each 1 ≤ j ≤ k. Thus, there is no need to guess the upper bounds in our Partition algorithm. We can simply set the capacity of each c^i_j to be 1, and the demand of each p_{v_j} to be l. With this change, our algorithm yields a (1 + ε)-approximation with constant probability in O(2^{poly(k/ε)} (log n)^{k+1} n (n log n + d)) time.
Chromatic k-means Clustering
Let P = ∪_{i=1}^ñ P_i be a set of colored points in R^d with Σ_{i=1}^ñ |P_i| = n, where the points in each P_i share the same color. Chromatic k-means clustering (denoted by k-ChMeans) [8, 28] on P is the problem of clustering P into k clusters such that no pair of points with the same color is clustered into the same cluster, and the average squared Euclidean distance from each point in P to the mean point of its cluster is minimized.
To satisfy the chromatic requirement, each P i should have a size no more than k. Given a k-tuple candidate P v = {p v1 , · · · , p v k }, we can consider the partition problem for each P i independently, since there is no mutual constraint among them. It is easy to see that finding a partition of P i is equivalent to computing a minimum cost one-to-one matching between P i and P v , where the cost of the matching between any p ∈ P i and p vj ∈ P v is their squared Euclidean distance. We can build this bipartite graph in O(k 2 d) time, and solve this matching problem by using Hungarian algorithm in O(k 3 ) time. Thus, the running time of the Partition step for each P v is O(k 2 (k + d)n).
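A minimal sketch of this matching step, assuming scipy's Hungarian solver is available (the helper name `chromatic_partition` and the input layout are ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def chromatic_partition(groups, means):
    """Partition step of k-ChMeans: match each color class P_i (|P_i| <= k)
    one-to-one to the k candidate means by a minimum cost assignment on
    squared Euclidean distances; returns, per class, the map
    point-index -> mean-index."""
    means = np.asarray(means)
    matchings = []
    for P_i in groups:
        P_i = np.asarray(P_i)
        # cost[a][b] = ||p_a - c_b||^2
        cost = ((P_i[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
        matchings.append(dict(zip(rows.tolist(), cols.tolist())))
    return matchings
```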
Theorem 6 There exists an algorithm yielding a (1 + ε)-approximation for k-ChMeans with constant probability, in O(2^{poly(k/ε)} (log n)^{k+1} n d) time.
Fault Tolerant k-means Clustering
Fault Tolerant k-means clustering (denoted by (l, k)-FMeans) [64] on a set P of n points in R d and a given integer 1 ≤ l ≤ k is the problem of finding k points C = {c 1 , · · · , c k } ⊂ R d , such that the average of the total squared distances from each point in P to its l nearest points in C is minimized.
To solve the Partition problem of (l, k)-FMeans, our idea is to reduce (l, k)-FMeans to k-ChMeans, and use the Partition algorithm for k-ChMeans to generate the desired clusters. The reduction simply makes l monochromatic copies {p_i^1, · · · , p_i^l} of each p_i ∈ P. The following lemma shows the relation between the two problems.
Lemma 9 For any constant λ ≥ 1, a λ-approximation of (l, k)-FMeans on P is equivalent to a λ-approximation of k-ChMeans on ∪_{i=1}^n {p_i^1, · · · , p_i^l}.
Proof We build a bijection between the solutions of (l, k)-FMeans and k-ChMeans. First, we consider the mapping from (l, k)-FMeans to k-ChMeans. Let C = {c_1, · · · , c_k} be the k mean points of (l, k)-FMeans, and {c_{i(1)}, · · · , c_{i(l)}} ⊂ C be the l nearest mean points to each p_i ∈ P. If we use C as the k mean points of k-ChMeans on ∪_{i=1}^n {p_i^1, · · · , p_i^l}, the l copies {p_i^1, · · · , p_i^l} of p_i will be clustered to the l clusters of {c_{i(1)}, · · · , c_{i(l)}}, respectively, to minimize the cost. Now consider the mapping from k-ChMeans to (l, k)-FMeans. Let C = {c_1, · · · , c_k} be the k mean points of k-ChMeans. For each i, let {c_{i(1)}, · · · , c_{i(l)}} be the mean points of the l clusters that {p_i^1, · · · , p_i^l} are clustered to. It is easy to see that the l nearest mean points of p_i are {c_{i(1)}, · · · , c_{i(l)}} if we use C as the k mean points of (l, k)-FMeans.
With this bijection, we can pair up the solutions to the two problems. Clearly, each pair of solutions to (l, k)-FMeans and k-ChMeans formed by the bijection have the equal objective value. Further, their optimal objective values are equal to each other, and for any pair of solutions, their approximation ratios are the same. Thus, Lemma 9 is true.
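The reduction itself is a one-line copy operation; a minimal sketch, reusing `chromatic_partition` from the sketch above:

```python
def fault_tolerant_partition(points, means, l):
    """Partition step of (l,k)-FMeans via the reduction of Lemma 9: the
    l same-colored copies of p_i are forced into l distinct clusters,
    i.e., p_i is charged to l distinct candidate means."""
    groups = [[p] * l for p in points]   # color class i = l copies of p_i
    return chromatic_partition(groups, means)
```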
With Lemma 9, we immediately have the following theorem.
Theorem 7 There exists an algorithm yielding a (1 + ε)-approximation for (l, k)-FMeans with constant probability, in O(2^{poly(k/ε)} (log n)^{k+1} n d) time.
Note: As mentioned in [42], a more general version of fault tolerant clustering problem is to allow each point p i ∈ P to have an individual l-value l i . From the above discussion, it is easy to see that this general version can also be solved in the same way (i.e., through reduction to k-ChMeans) and achieve the same approximation result.
Semi-Supervised k-means Clustering
As shown in Section 1.1, semi-supervised clustering has various forms. In this paper, we consider the semi-supervised k-means clustering problem which takes into account both the geometric cost and the priori knowledge. Let P be a set of n points in R^d, and S' = {S'_1, · · · , S'_k} be a given clustering of P. Semi-supervised k-means clustering (denoted by k-SMeans) on P and S' is the problem of finding a clustering S = {S_1, · · · , S_k} of P such that the following objective function is minimized:

α · Cost(S)/E_1 + (1 − α) · dist{S, S'}/E_2, (29)

where α ∈ [0, 1] is a given constant, E_1 and E_2 are two given scalars to normalize the two terms, Cost(S) is the k-means clustering cost of S, and dist{S, S'} is the distance between S and S' defined in the same way as in [13]. For any pair of S_j and S'_i, 1 ≤ j, i ≤ k, their difference is |S_j \ S'_i|. Given a bipartite matching σ between S and S', dist{S, S'} is defined as Σ_{j=1}^k |S_j \ S'_{σ(j)}|.
The challenge is that the bipartite matching σ is unknown in advance. Fix a k-tuple candidate P_v = {p_{v_1}, · · · , p_{v_k}}. To find the desired σ minimizing the objective function (29), we build a bipartite graph, where the left (resp., right) column contains k vertices corresponding to p_{v_1}, · · · , p_{v_k} (resp., S'_1, · · · , S'_k). For each pair (p_{v_j}, S'_i), we connect them by an edge and calculate its weight w_{(i,j)} in the following way. Each p ∈ S'_i could potentially be assigned to any of the k clusters in S; if i = σ(j), the induced k costs of p are {c_p^1, c_p^2, · · · , c_p^k}, where c_p^l = α ||p − p_{v_l}||²/E_1 if l = j, and c_p^l = α ||p − p_{v_l}||²/E_1 + (1 − α)/E_2 otherwise. Thus, we set

w_{(i,j)} = Σ_{p∈S'_i} min_{1≤l≤k} c_p^l. (30)

We solve the minimum cost bipartite matching problem to determine σ. To build such a bipartite graph, we first compute all the kn distances from the points in P to the k-tuple P_v; then, we calculate the k² edge weights via (30). The bipartite graph can be built in O(knd + k²n) time in total, and the optimal matching can be obtained via the Hungarian algorithm in O(k³) time.
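A minimal sketch of this computation, assuming scipy is available (the helper name `semi_supervised_sigma` and the input layout are ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def semi_supervised_sigma(S_given, means, alpha, E1, E2):
    """Compute the k x k weights w_(i,j) of (30) and the optimal bipartite
    matching sigma via the Hungarian algorithm; S_given[i] is the set S'_i
    of the given clustering. Returns sigma as a map j -> i."""
    means = np.asarray(means)
    k = len(means)
    W = np.zeros((k, k))
    for i, S_i in enumerate(S_given):
        S_i = np.asarray(S_i)
        # d2[p][l] = ||p - p_{v_l}||^2
        d2 = ((S_i[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        for j in range(k):                      # hypothesis: sigma(j) = i
            c = alpha * d2 / E1 + (1 - alpha) / E2
            c[:, j] -= (1 - alpha) / E2         # staying in cluster j has no penalty
            W[i, j] = c.min(axis=1).sum()       # each point takes its cheapest cluster
    rows, cols = linear_sum_assignment(W)
    return {int(j): int(i) for i, j in zip(rows, cols)}
```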
Theorem 8 There exists an algorithm yielding a (1 + ε)-approximation for k-SMeans with constant probability, in O(2^{poly(k/ε)} (log n)^{k+1} n d) time.
Constrained k-Median Clustering (k-CMedian)
In this section, we extend our approach for k-CMeans to the constrained k-median clustering problem (k-CMedian). Similar to k-CMeans, we show that the Peeling-and-Enclosing framework can be used to construct a set of candidates for the constrained median points. Combining this with the selection algorithms (with trivial modifications) in Section 4, we achieve (1 + ε)-approximations for a class of k-CMedian problems.
To solve k-CMedian, a straightforward idea is to extend the simplex lemma to median points and combine it with the Peeling-and-Enclosing framework to achieve an approximate solution. However, due to the essential difference between mean and median points, such an extension for the simplex lemma is not always possible. The main reason is that the median point (i.e., Fermat point) does not necessarily lie inside the simplex, and thus there is no guarantee to find the median point by searching inside the simplex. Below is an example showing that the median point actually can lie outside the simplex.
Let P = {p_1, p_2, · · · , p_9} be a set of points in R^d. We consider the following partition of P: P_1 = {p_i | 1 ≤ i ≤ 5} and P_2 = {p_i | 6 ≤ i ≤ 9}. Assume that all the points of P are located at the three vertices of a triangle ∆abc. In particular, {p_1, p_2, p_6} coincide with vertex a, {p_3, p_4, p_5} with vertex b, and {p_7, p_8, p_9} with vertex c (see Figure 7). It is easy to see that the median points of P_1 and P_2 are b and c, respectively. If the angle ∠bac ≥ 2π/3, the median point of P is vertex a (note that the median point can be viewed as the Fermat point of ∆abc with each vertex associated with weight 3). This means that the median point of P is outside the simplex formed by the median points of P_1 and P_2 (i.e., the segment bc). Thus, a good approximation of the median point cannot be obtained by searching a grid inside bc.

Fig. 7: The points {p_1, p_2, p_6} coincide with vertex a, {p_3, p_4, p_5} with vertex b, and {p_7, p_8, p_9} with vertex c.
To overcome this difficulty, we show that a weaker version of the simplex lemma exists for median, which enables us to achieve similar results for k-CMedian.
Weaker Simplex Lemma for Median Point
Compared with the simplex lemma in Section 2, the following Lemma 10 differs in two respects. One is that the lemma requires only a partial partition covering a significantly large subset of P, rather than a complete partition of P. The other is that the grid is built in the flat spanned by {o_1, · · · , o_j}, instead of inside the simplex. Later, we will show that the grid is actually built in a surrounding region of the simplex, and thus the lemma is called the "weaker simplex lemma".
Lemma 10 (Weaker Simplex Lemma) Let P be a set of n points in R^d, and let ∪_{l=1}^j P_l ⊆ P be a partial partition of P with P_{l_1} ∩ P_{l_2} = ∅ for any l_1 ≠ l_2. Let o_l be the median point of P_l for 1 ≤ l ≤ j, and F be the flat spanned by {o_1, · · · , o_j}. If |P \ (∪_{l=1}^j P_l)| ≤ ε|P| for some constant ε ∈ (0, 1/5) and each P_l is contained inside a ball B(o_l, L) centered at o_l and with radius L ≥ 0, then it is possible to build a grid in F of size O(j²(j√j/ε)^j) such that at least one grid point τ satisfies the following inequality, where o is the median point of P (see Figure 8):

(1/|P|) Σ_{p∈P} ||τ − p|| ≤ (1 + (9/4)ε) (1/|P|) Σ_{p∈P} ||p − o|| + (1 + ε)L. (31)
Proof Synopsis: To prove Lemma 10, we let õ be the orthogonal projection of o onto F (see Figure 8). In Claim 4, we show that the distance between o and õ is bounded, and consequently, the induced cost of õ, i.e., (1/|P|) Σ_{p∈P} ||p − õ||, is also bounded according to Claim 5. Thus, õ is a good approximation of o, and we can focus on building a grid inside F to approximate õ. Since F is unbounded, we need to determine a range for the grid; Claim 6 resolves this issue. It considers two cases: either at least two subsets of the partial partition {P_1, · · · , P_j} contain large enough fractions of P, or only one subset does. In either case, Claim 6 shows that we can determine the range of the grid using the location information of {o_1, · · · , o_j}. Finally, we obtain the desired grid point τ in the following way: draw a set of balls centered at {o_1, · · · , o_j} with proper radii, build grids inside each of the balls, and find the desired grid point τ in one of these balls. Note that since all the balls are inside F, the complexity of the union of the grids is independent of the dimensionality d.
Claim 4
||o − õ|| ≤ L + (1/(1 − ε)) (1/|P|) Σ_{p∈P} ||o − p||. (32)

Proof Lemma 10 assumes that |∪_{l=1}^j P_l| ≥ (1 − ε)|P|. By Markov's inequality, we know that there exists one point q ∈ ∪_{l=1}^j P_l such that

||q − o|| ≤ (1/(1 − ε)) (1/|P|) Σ_{p∈P} ||o − p||. (33)

Let P_{l_q} be the subset containing q. Then from (33), we immediately have

||o − õ|| ≤ ||o_{l_q} − o|| ≤ ||o_{l_q} − q|| + ||q − o|| ≤ L + (1/(1 − ε)) (1/|P|) Σ_{p∈P} ||o − p||. (34)

This implies Claim 4 (see Figure 9).

Claim 5
(1/|P|) Σ_{p∈P} ||p − õ|| ≤ (1/(1 − ε)) (1/|P|) Σ_{p∈P} ||p − o|| + L. (35)
Proof For any point p ∈ P_l, let dist{oõ, p} (resp., dist{F, p}) denote its distance to the line oõ (resp., the flat F). See Figure 10. Then we have

||p − õ|| = √(dist²{oõ, p} + dist²{F, p}), (36)
||p − o|| ≥ dist{oõ, p}. (37)

Combining (36) and (37), we have

||p − õ|| − ||p − o|| ≤ √(dist²{oõ, p} + dist²{F, p}) − dist{oõ, p} ≤ dist{F, p} ≤ ||p − o_l|| ≤ L. (38)

For any point p ∈ P \ (∪_{l=1}^j P_l), we have

||p − õ|| ≤ ||p − o|| + ||o − õ||. (39)

Combining (38), (39), and (32), we have

(1/|P|) Σ_{p∈P} ||p − õ|| = (1/|P|) (Σ_{p∈∪_{l=1}^j P_l} ||p − õ|| + Σ_{p∈P\(∪_{l=1}^j P_l)} ||p − õ||)
 ≤ (1/|P|) (Σ_{p∈∪_{l=1}^j P_l} (L + ||p − o||) + Σ_{p∈P\(∪_{l=1}^j P_l)} (||p − o|| + ||o − õ||))
 ≤ (1 − ε)L + (1/|P|) Σ_{p∈P} ||p − o|| + εL + (ε/(1 − ε)) (1/|P|) Σ_{p∈P} ||p − o||
 = (1/(1 − ε)) (1/|P|) Σ_{p∈P} ||p − o|| + L. (40)
Thus the claim is true.

Claim 6 At least one of the following two statements is true:
1. There exist at least two points in {o_1, · · · , o_j} whose distances to õ are no more than L + (3j/(1 − ε)) (1/|P|) Σ_{p∈P} ||p − o||.
2. There exists one point in {o_1, · · · , o_j}, say o_{l_0}, whose distance to õ is no more than (1 + (1 + 2ε)/√(3 − 12ε)) L. (Note that we assume ε < 1/5 in Lemma 10, so this is a finite real number.)
Proof We consider two cases: (i) there are two subsets P_{l_1} and P_{l_2} of P with size at least ((1 − ε)/(3j))|P|, and (ii) no such pair of subsets exists. For case (i), by Markov's inequality, we know that there exist two points q ∈ P_{l_1} and q' ∈ P_{l_2} such that

||q − o|| ≤ (3j/(1 − ε)) (1/|P|) Σ_{p∈P} ||p − o||; (41)
||q' − o|| ≤ (3j/(1 − ε)) (1/|P|) Σ_{p∈P} ||p − o||. (42)

This, together with the triangle inequality, indicates that both ||o_{l_1} − o|| and ||o_{l_2} − o|| are no more than L + (3j/(1 − ε)) (1/|P|) Σ_{p∈P} ||p − o||.
Since õ is the orthogonal projection of o onto F, we have ||o_{l_1} − õ|| ≤ ||o_{l_1} − o|| and ||o_{l_2} − õ|| ≤ ||o_{l_2} − o||. Thus, the first statement is true in this case.
For case (ii), i.e., no two subsets have size at least ((1 − ε)/(3j))|P|, since Σ_{l=1}^j |P_l| ≥ (1 − ε)|P|, by the pigeonhole principle we know that there must exist one P_{l_0}, 1 ≤ l_0 ≤ j, with size

|P_{l_0}| ≥ (1 − (j − 1)/(3j))(1 − ε)|P| ≥ (2/3)(1 − ε)|P|. (43)
Let x = ||o − o_{l_0}||. We assume that x > L, since otherwise the second statement is automatically true. Now imagine moving o slightly toward o_{l_0} by a small distance δ. See Figure 11. For any point p ∈ P_{l_0}, let p̂ be its orthogonal projection to the line o o_{l_0}, and let a and b be the distances ||o − p̂|| and ||p − p̂||, respectively. Then, the distance between p and o is decreased by √(a² + b²) − √((a − δ)² + b²). Also, we have

lim_{δ→0} (√(a² + b²) − √((a − δ)² + b²))/δ = lim_{δ→0} (2a − δ)/(√(a² + b²) + √((a − δ)² + b²)) = (a/b)/√((a/b)² + 1). (44)

Since p is inside the ball B(o_{l_0}, L), we have a/b ≥ (x − L)/L. For any point p ∈ P \ P_{l_0}, the distance to o is non-increased or increased by at most δ. Thus, the average distance from the points in P to o is decreased by at least

(2/3)(1 − ε) ((x − L)/L) δ / √(((x − L)/L)² + 1) − (1 − (2/3)(1 − ε)) δ. (45)

Since the original position of o is the median point of P, the value of (45) must be non-positive. With a simple calculation, we have

(x − L)/L ≤ (1 + 2ε)/√(3 − 12ε) ⟹ x ≤ (1 + (1 + 2ε)/√(3 − 12ε)) L. (46)

By the same argument as in case (i), we know that ||o_{l_0} − õ|| ≤ ||o_{l_0} − o||. This, together with (46), implies that the second statement is true for case (ii). This completes the proof of the claim.
With the above claims, we now prove Lemma 10.
Proof (of Lemma 10) We build a grid in F as follows. First, draw a set of balls:
- For each o_l, 1 ≤ l ≤ j, draw a ball (called a type-1 ball) centered at o_l and with radius (1 + (1 + 2ε)/√(3 − 12ε)) L.
- For each pair o_l and o_{l'}, 1 ≤ l, l' ≤ j, draw a ball (called a type-2 ball) centered at o_l and with radius r_{l,l'} = (1 + (1 + 2ε)/√(3 − 12ε)) (||o_l − o_{l'}|| + L).

We claim that among the above balls, there must exist one ball that contains õ. If there is only one subset in {P_1, · · · , P_j} with size no smaller than ((1 − ε)/(3j))|P|, it corresponds to the second case in Claim 6, and thus there exists a type-1 ball containing õ. Now consider the case where there are multiple subsets, say {P_{l_1}, · · · , P_{l_t}} for some t ≥ 2, all with size no smaller than ((1 − ε)/(3j))|P|. Without loss of generality, assume that ||o_{l_1} − o_{l_2}|| = max{||o_{l_1} − o_{l_s}|| | 1 ≤ s ≤ t}. Then, we can view ∪_{s=1}^t P_{l_s} as a big subset of P bounded by a ball centered at o_{l_1} and with radius ||o_{l_1} − o_{l_2}|| + L. By the same argument given in the proof of Claim 6 for (43), we know that |∪_{s=1}^t P_{l_s}| ≥ (2/3)(1 − ε)|P|. This means that we can reduce this case to the second case in Claim 6, i.e., replacing P_{l_0}, o_{l_0}, and L by ∪_{s=1}^t P_{l_s}, o_{l_1}, and ||o_{l_1} − o_{l_2}|| + L, respectively. Thus, there is a type-2 ball containing õ.
Next, we discuss how to build the grids inside these balls. For type-1 balls with radius (1 + (1 + 2ε)/√(3 − 12ε)) L, we build the grids inside them with grid length (ε/√j) L. For type-2 balls with radius r_{l,l'} = (1 + (1 + 2ε)/√(3 − 12ε)) (||o_l − o_{l'}|| + L) for some l and l', we build the grids inside them with grid length

(1/(1 + (1 + 2ε)/√(3 − 12ε))) · ((1 − ε)ε/(6j√j)) r_{l,l'}. (47)
If õ is contained in a type-1 ball, then there exists one grid point τ whose distance to õ is no more than εL. If õ is contained in a type-2 ball, such a distance is no more than

((1 − ε)ε/(6j)) (||o_l − o_{l'}|| + L) (48)

by (47). By the first statement in Claim 6 and the triangle inequality, we know that

||o_l − o_{l'}|| ≤ ||o_l − õ|| + ||õ − o_{l'}|| ≤ 2(L + (3j/(1 − ε)) (1/|P|) Σ_{p∈P} ||p − o||). (49)

Together, (48) and (49) imply that there exists one grid point τ whose distance to õ is no more than

ε (1/|P|) Σ_{p∈P} ||p − o|| + ((1 − ε)ε/(2j)) L ≤ ε (1/|P|) Σ_{p∈P} ||p − o|| + εL. (50)

Thus, for both types of ball-containing, by the triangle inequality and Claim 5, we have

(1/|P|) Σ_{p∈P} ||p − τ|| ≤ (1/|P|) Σ_{p∈P} (||p − õ|| + ||õ − τ||)
 ≤ (1/(1 − ε) + ε) (1/|P|) Σ_{p∈P} ||p − o|| + (1 + ε)L
 ≤ (1 + (9/4)ε) (1/|P|) Σ_{p∈P} ||p − o|| + (1 + ε)L, (51)

where the last inequality follows from the assumption that ε ≤ 1/5. As for the grid size, since we build the grids inside the balls in the (j − 1)-dimensional flat F, a simple calculation shows that the grid size is O(j²(j√j/ε)^j). This completes the proof.
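For illustration, a minimal sketch of such a flat-restricted grid (the helper name and parameter layout are ours); the point is that the enumeration runs over the intrinsic coordinates of F, so the grid size is independent of d:

```python
import numpy as np

def grid_in_flat_ball(anchors, center, radius, step):
    """Enumerate grid points of spacing `step` inside the ball
    B(center, radius) intersected with the flat F spanned by `anchors`
    (the median points o_1, ..., o_j); `center` is assumed to lie on F."""
    anchors = np.asarray(anchors, dtype=float)
    center = np.asarray(center, dtype=float)
    dirs = anchors[1:] - anchors[0]          # directions spanning F
    if len(dirs) == 0:
        return [center]
    basis = np.linalg.qr(dirs.T)[0].T        # orthonormal basis of F's directions
    ticks = np.arange(-radius, radius + step, step)
    pts = []
    for coord in np.ndindex(*([len(ticks)] * len(basis))):
        offset = sum(ticks[c] * basis[a] for a, c in enumerate(coord))
        if np.linalg.norm(offset) <= radius:
            pts.append(center + offset)
    return pts
```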
Peeling-and-Enclosing Algorithm for k-CMedian Using Weaker Simplex Lemma
In this section, we present a unified Peeling-and-Enclosing algorithm for generating a set of candidate median points for k-CMedian. Similar to the algorithm for k-CMeans, our algorithm iteratively determines the k median points. At each iteration, it uses a set of peeling spheres and a simplex to search for an approximate median point. Since the simplex lemma no longer holds for k-CMedian, we use the weaker simplex lemma as a replacement. Thus, a number of changes are needed to accommodate the differences. Before presenting our algorithm, we first introduce the following result (Theorem 9), proved by Bădoiu et al. in [12], for finding an approximate median point of a given point set.

Fig. 12: The gray area is U.

Sketch of the proof of Theorem 9. Since our algorithm uses some ideas in Theorem 9, we sketch its proof for completeness. First, by Markov's inequality, we know that there exists one point, say s_1, in the random sample R whose distance to o is no more than 2(1/|P|) Σ_{p∈P} ||o − p|| with a certain probability. Then the sampling procedure can be viewed as an incremental process starting with s_1: a flat F spanned by all previously obtained sample points is maintained, and each time a new sample point is added, F is updated. Let õ be the projection of o on F, and
U = {p ∈ R^d | π/2 − ε/16 ≤ ∠oõp ≤ π/2 + ε/16}. (52)
See Figure 12. It has been shown that this incremental sampling process stops after at most O((1/ε³) log(1/ε)) points are taken, and one of the following two events happens with constant probability: (1) F is close enough to o, or (2) |P \ U| is small enough. In either event, a grid can be built inside F, and one of the grid points τ is the desired approximate median point. Below we give an overview of our Peeling-and-Enclosing algorithm for k-CMedian. Let P = {p_1, · · · , p_n} be the set of points of k-CMedian in R^d, and OPT = {Opt_1, · · · , Opt_k} be the k (unknown) optimal clusters with m_j being the median point of cluster Opt_j for 1 ≤ j ≤ k. Without loss of generality, we assume that |Opt_1| ≥ |Opt_2| ≥ · · · ≥ |Opt_k|. Denote by µ_opt the optimal objective value, i.e., µ_opt = (1/n) Σ_{j=1}^k Σ_{p∈Opt_j} ||p − m_j||.
Algorithm overview: We mainly focus on the differences from the k-CMeans algorithm. First, our algorithm uses Theorem 9 (instead of Lemma 4) to find an approximation p_{v_1} of m_1. Then, it iteratively finds the approximate median points for {m_2, · · · , m_k} using the Peeling-and-Enclosing strategy. At the (j + 1)-th iteration, it has already obtained the approximate median points p_{v_1}, · · · , p_{v_j} for clusters Opt_1, · · · , Opt_j, respectively. To find the approximate median point p_{v_{j+1}} for Opt_{j+1}, the algorithm draws j peeling spheres B_{j+1,1}, · · · , B_{j+1,j} centered at {p_{v_1}, · · · , p_{v_j}}, respectively, and considers the size of A = Opt_{j+1} \ (∪_{l=1}^j B_{j+1,l}). If |A| is small, it builds a flat (instead of a simplex) spanned by {p_{v_1}, · · · , p_{v_j}}, and finds p_{v_{j+1}} by using the weaker simplex lemma, where the j peeling spheres can be viewed as a partial partition of Opt_{j+1}. If |A| is large, it adopts a strategy similar to that in Theorem 9 to find p_{v_{j+1}}: start with the flat F spanned by {p_{v_1}, · · · , p_{v_j}}, and grow F by repeatedly adding a sample point of A to it. As will be shown in Lemma 11, F becomes close enough to m_{j+1}, and p_{v_{j+1}} can be obtained by searching a grid (built in a way similar to Lemma 10) in F. By choosing a proper value (i.e., O(ε)µ_opt) for L in Lemma 10 and Lemma 11, we can achieve the desired (1 + ε)-approximation. As for the running time, although Theorem 9 introduces an extra factor of log n for estimating the optimal cost of each Opt_{j+1}, our algorithm actually does not need it, as such estimates have already been obtained during the Peeling-and-Enclosing step (see Claim 2 in the proof of Lemma 6). Thus, the running time is still O(2^{poly(k/ε)} n (log n)^{k+1} d), the same as for k-CMeans.
The algorithm is shown in Algorithm 3. The following lemma is needed to ensure the correctness of our algorithm.
Lemma 11
Let F be a flat in R^d containing {p_{v_1}, · · · , p_{v_j}} and having a distance to m_{j+1} of no more than (ε/2) (1/|Opt_{j+1}|) Σ_{p∈Opt_{j+1}} ||p − m_{j+1}||. Assume that all the peeling spheres B_{j+1,1}, · · · , B_{j+1,j} are centered at {p_{v_1}, · · · , p_{v_j}}, respectively, and have radius L ≥ 0. Then if |Opt_{j+1} \ ((∪_{w=1}^j B_{j+1,w}) ∪ U)| ≤ ε|Opt_{j+1}|, we have

(1/|Opt_{j+1}|) Σ_{p∈Opt_{j+1}} ||p − m̃_{j+1}|| ≤ (1 + 2ε) (1/|Opt_{j+1}|) Σ_{p∈Opt_{j+1}} ||p − m_{j+1}|| + L (53)

for any 0 ≤ ε ≤ 1, where m̃_{j+1} is the projection of m_{j+1} on F and U is defined as in (52).

Proof For each point p ∈ (∪_{w=1}^j B_{j+1,w}) ∩ Opt_{j+1}, similar to (38) in Claim 5, we know that the cost increases by at most L if the median point moves from m_{j+1} to m̃_{j+1}. Thus, we have

Σ_{p∈Opt_{j+1}∩(∪_{w=1}^j B_{j+1,w})} ||p − m̃_{j+1}|| ≤ Σ_{p∈Opt_{j+1}∩(∪_{w=1}^j B_{j+1,w})} (||p − m_{j+1}|| + L). (54)
For the part Opt_{j+1} \ ((∪_{w=1}^j B_{j+1,w}) ∪ U), by the triangle inequality we have

Σ_{p∈Opt_{j+1}\((∪_{w=1}^j B_{j+1,w})∪U)} ||p − m̃_{j+1}|| ≤ Σ_{p∈Opt_{j+1}\((∪_{w=1}^j B_{j+1,w})∪U)} (||p − m_{j+1}|| + ||m_{j+1} − m̃_{j+1}||)
 ≤ Σ_{p∈Opt_{j+1}\((∪_{w=1}^j B_{j+1,w})∪U)} ||p − m_{j+1}|| + (ε²/2) Σ_{p∈Opt_{j+1}} ||p − m_{j+1}||, (55)

where the second inequality follows from the assumption that the distance from F to m_{j+1} is no more than (ε/2) (1/|Opt_{j+1}|) Σ_{p∈Opt_{j+1}} ||p − m_{j+1}|| and |Opt_{j+1} \ ((∪_{w=1}^j B_{j+1,w}) ∪ U)| ≤ ε|Opt_{j+1}|.
For each point p ∈ Opt_{j+1} ∩ U, recall that the angle ∠m_{j+1} m̃_{j+1} p ∈ [π/2 − ε/16, π/2 + ε/16] by (52). In Theorem 3.2 of [12], it is shown that in this case ||p − m̃_{j+1}|| ≤ (1 + ε)||p − m_{j+1}||. Therefore,

Σ_{p∈Opt_{j+1}∩U} ||p − m̃_{j+1}|| ≤ (1 + ε) Σ_{p∈Opt_{j+1}∩U} ||p − m_{j+1}||. (56)

Combining (54), (55), and (56), we obtain (53).
To complete the Peeling-and-Enclosing algorithm for k-CMedian, we also need an upper bound for the optimal objective value. In Section 5.3, we will show how to obtain such an estimation. For this moment, we assume that the upper bound is available.
Using the same idea as for proving Theorem 1, we obtain the following theorem for k-CMedian.
Theorem 10 Let P be a set of n points in R^d and k ∈ Z⁺ be a fixed constant. In O(2^{poly(k/ε)} n (log n)^{k+1} d) time, Algorithm 3 outputs O(2^{poly(k/ε)} (log n)^k) k-tuple candidate median points. With constant probability, there exists one k-tuple candidate in the output which is able to induce a (1 + O(ε))-approximation of k-CMedian (together with the solution for the corresponding Partition step).
Algorithm 3 Peeling-and-Enclosing for k-CMedian
Input: P = {p_1, · · · , p_n} in R^d, k ≥ 2, a constant ε ∈ (0, 1/(4k²)), and an upper bound ∆ ∈ [µ_opt, cµ_opt] with c ≥ 1. Output: A set of k-tuple candidates for the k constrained median points.
1. For i = 0 to ⌈log_{1+ε} c⌉ do (a) Set µ = (1 + ε)^i ∆/c, and run Algorithm 4. (b) Let T_i be the output tree.
2. For each root-to-leaf path of every T i , build a k-tuple candidate using the k points associated with the path.
Algorithm 4 Peeling-and-Enclosing-Tree II Input: P = {p_1, · · · , p_n} in R^d, k ≥ 2, a constant ε ∈ (0, 1/(4k²)), and µ > 0. 1. Initialize T with a single root node v associated with no point. 2. Recursively grow each node v in the following way: (a) If the height of v is already k, then it is a leaf. (b) Otherwise, let j be the height of v. Build the radius candidate set R =
∪_{t=0}^{⌈log n⌉} { ((1 + l ε²)/(2(1 + ε))) j 2^t ε µ | 0 ≤ l ≤ ⌈4/ε² + 2/ε⌉ }.
For each r ∈ R, do i. Let {p_{v_1}, · · · , p_{v_j}} be the j points associated with the nodes on the root-to-v path. ii. For each p_{v_l}, 1 ≤ l ≤ j, construct a ball B_{j+1,l} centered at p_{v_l} and with radius r. iii. Compute the flat spanned by {p_{v_1}, · · · , p_{v_j}}, and build a grid inside it by Lemma 10. iv. Take a random sample from P \ ∪_{l=1}^j B_{j+1,l} of size s = (k³/ε¹¹) ln(k²/ε⁶), and compute the flat determined by these sample points and {p_{v_1}, · · · , p_{v_j}}. Build a grid inside the flat by Theorem 9. v. In total, there are O(2^{poly(k/ε)}) grid points inside these two flats. For each grid point, add one child to v, and associate it with the grid point.
3. Output T .
Upper Bound Estimation for k-CMedian
In this section, we show how to obtain an upper bound of the optimal objective value of k-CMedian.
Theorem 11 Let P = {p_1, · · · , p_n} be the input points of k-CMedian, and C be the set of k median points of a λ-approximation of k-median on P (without considering the constraint) for some constant λ ≥ 1. Then the Cartesian product [C]^k contains at least one k-tuple which is able to induce a (3λ + 2)-approximation of k-CMedian (together with the solution for the corresponding Partition step).
Let {c_1, · · · , c_k} be the k median points in C, and ω be the corresponding objective value of the k-median approximate solution on P. Recall that {m_1, · · · , m_k} are the k unknown optimal constrained median points of P, and OPT = {Opt_1, · · · , Opt_k} are the corresponding k optimal constrained clusters. To prove Theorem 11, we create a new instance of k-CMedian in the following way: for each point p_i ∈ P, move it to its nearest point, say c_t, in {c_1, · · · , c_k}; let p̄_i denote the new p_i (note that c_t and p̄_i overlap with each other). The set P̄ = {p̄_1, · · · , p̄_n} then forms a new instance of k-CMedian. Let µ_opt and µ̄_opt be the optimal costs of P and P̄, respectively, and µ_opt([C]^k) be the minimum cost of P obtained by restricting its k constrained median points to a k-tuple in [C]^k. The following two lemmas are keys to proving Theorem 11.
Lemma 12 µ̄_opt ≤ ω + µ_opt.

Proof For each p_i ∈ Opt_l, by the triangle inequality we have

||p̄_i − m_l|| ≤ ||p̄_i − p_i|| + ||p_i − m_l||. (57)

For both sides of (57), taking the averages over i and l, we get

(1/n) Σ_{l=1}^k Σ_{p_i∈Opt_l} ||p̄_i − m_l|| ≤ (1/n) Σ_{i=1}^n ||p̄_i − p_i|| + (1/n) Σ_{l=1}^k Σ_{p_i∈Opt_l} ||p_i − m_l||. (58)

Note that the left-hand side of (58) is not smaller than µ̄_opt, since µ̄_opt is the optimal objective value of k-CMedian on P̄. For the right-hand side of (58), the first term (1/n) Σ_{i=1}^n ||p̄_i − p_i|| = ω (by the construction of P̄), and the second term (1/n) Σ_{l=1}^k Σ_{p_i∈Opt_l} ||p_i − m_l|| = µ_opt. Thus, we have µ̄_opt ≤ ω + µ_opt.
Lemma 13 µ_opt([C]^k) ≤ ω + 2µ̄_opt.

Proof Consider k-CMedian on P̄. Let {m̃_1, · · · , m̃_k} be the optimal constrained median points, and {Õ_1, · · · , Õ_k} be the corresponding optimal constrained clusters of P̄. Let {c̃_1, · · · , c̃_k} be the k-tuple in [C]^k with c̃_l being the nearest point in C to m̃_l. Thus, by an argument similar to the one for (57), we have the following inequality, where p̄_i is assumed to be clustered in Õ_l:

||p̄_i − c̃_l|| ≤ ||p̄_i − m̃_l|| + ||m̃_l − c̃_l|| ≤ 2||p̄_i − m̃_l||. (59)

In (59), the last inequality follows from the facts that c̃_l is the nearest point in C to m̃_l and p̄_i ∈ C, which imply ||m̃_l − c̃_l|| ≤ ||m̃_l − p̄_i||. For both sides of (59), taking the averages over i and l, we have

(1/n) Σ_{l=1}^k Σ_{p̄_i∈Õ_l} ||p̄_i − c̃_l|| ≤ (2/n) Σ_{l=1}^k Σ_{p̄_i∈Õ_l} ||p̄_i − m̃_l|| = 2µ̄_opt. (60)
Now, consider the following k-CMedian clustering on P. For each p_i, if p̄_i ∈ Õ_l, we cluster it to the corresponding median point c̃_l. Then, since ||p_i − c̃_l|| ≤ ||p_i − p̄_i|| + ||p̄_i − c̃_l||, the objective value of this clustering is at most (1/n) Σ_{i=1}^n ||p_i − p̄_i|| + (2/n) Σ_{l=1}^k Σ_{p̄_i∈Õ_l} ||p̄_i − m̃_l|| = ω + 2µ̄_opt, which is no smaller than µ_opt([C]^k). Thus the lemma is true.

As for the running time of the Peeling-and-Enclosing algorithm, it still builds trees of height k, but the number of children of each node is different. Recall that in the proof of Claim 2, in order to obtain an estimate of β_j = |Opt_j|/n, we need to try O(log n) values, since 1/n ≤ β_j ≤ 1; for k-PMedian, the range of β_j becomes [w_min/W, 1], where w_min = min_{1≤i≤n} w_i (note that W = Σ_{i=1}^n w_i ≤ n). Thus, the running time of the Peeling-and-Enclosing algorithm becomes O(nh (log(n/w_min))^{k+1} d). Furthermore, for each k-tuple candidate, we perform the Partition step by assigning each D_i to the m_j with the smallest dist{v_i, m_j}. Obviously, the Partition step can be finished in linear time. Thus we have the following theorem.
Theorem 12 There exists an algorithm yielding a $(1+\epsilon)$-approximation for k-PMedian with constant probability, in $O(2^{poly(\frac{k}{\epsilon})} nh(\log \frac{n}{w_{min}})^{k+1} d)$ time, where $w_{min} = \min_{1\le i \le n} w_i$.
Future Work
Following this work, several interesting problems deserve further study. For example, in Section 4 we reduce the Partition step to the minimum cost circulation problem for several constrained clustering problems; however, since the goal is to find an approximate solution, one may consider using geometric information to solve the Partition step approximately. In Euclidean space, several techniques have been developed for solving approximate matching problems efficiently [7,61]. But it is still not clear whether such techniques can be extended to solve the constrained matching problems (such as those arising from r-gather or l-diversity) considered in this paper, especially in high dimensional space. We leave this as an open problem for future work.
Thus, the base case holds.

Induction step: Assume that the lemma holds for any $j \le j_0$ for some $j_0 \ge 1$ (i.e., the induction hypothesis). Now we consider the case of $j = j_0 + 1$. Similar to the proof of Lemma 1, we assume that $\frac{|Q_l|}{|Q|} \ge \frac{\epsilon}{4j}$ for each $1 \le l \le j$. Otherwise, through an idea similar to that of Lemma 1, the problem can be reduced to the case with smaller $j$, and solved by the induction hypothesis. Hence, in the following discussion, we assume that $\frac{|Q_l|}{|Q|} \ge \frac{\epsilon}{4j}$ for each $1 \le l \le j$. To find such a $\tau$, we consider the distance from $o'_l$ to $o'$ for any $1 \le l \le j$. We have

$$\|o'_l - o'\| \le \|o'_l - o_l\| + \|o_l - o\| + \|o - o'\| \le 2\sqrt{j/\epsilon}\,\delta + 2L, \quad (65)$$

where the first inequality follows from the triangle inequality, and the second inequality follows from the facts that $\|o'_l - o_l\|$ and $\|o - o'\|$ are both bounded by $L$, and $\|o_l - o\| \le 2\sqrt{j/\epsilon}\,\delta$ (by Lemma 2). This implies that we can use an idea similar to that of Lemma 1 to construct a ball $B$ centered at any $o'_{l_0}$ and with radius $r = \max_{1\le l\le j}\{\|o'_l - o'_{l_0}\|\}$. Also, the simplex $V'$ is inside $B$. Note that

$$\|o'_l - o'_{l_0}\| \le \|o'_l - o'\| + \|o' - o'_{l_0}\| \le 4\sqrt{j/\epsilon}\,\delta + 4L \quad (66)$$

by (65), which implies $r \le 4\sqrt{j/\epsilon}\,\delta + 4L$. Similar to Lemma 1, we can build a grid inside $B$ with grid length $\frac{\epsilon r}{4j}$, and the number of grid points is $O((8j/\epsilon)^j)$. Moreover, $o'$ must lie inside $V'$ by definition. In this grid, we can find a grid point $\tau$ such that $\|\tau - o'\| \le \frac{\epsilon}{4\sqrt{j}}r \le \sqrt{\epsilon}\,\delta + \epsilon L$. Thus, $\|\tau - o\| \le \|\tau - o'\| + \|o' - o\| \le \sqrt{\epsilon}\,\delta + (1+\epsilon)L$, and the induction step, as well as the lemma, holds.
Proof of Claim 2 for Lemma 6
Since $1 \ge \beta_j \ge \frac{1}{n}$, there is one integer $t$ between 0 and $\log n$ such that $2^{t-1} \le \frac{1}{\beta_j} \le 2^t$. Thus $2^{t/2-1}\delta_{opt} \le \sqrt{1/\beta_j}\,\delta_{opt} \le 2^{t/2}\delta_{opt}$. Together with $\delta \in [\delta_{opt}^2, (1+\epsilon)\delta_{opt}^2]$, we have

$$\frac{2^{t/2-1}\sqrt{\delta}}{\sqrt{1+\epsilon}} \le \sqrt{1/\beta_j}\,\delta_{opt} \le 2^{t/2}\sqrt{\delta}. \quad (67)$$

Thus, if setting $\tilde{r}_j = 2^{t/2}\sqrt{\delta}$, we have

$$\sqrt{1/\beta_j}\,\delta_{opt} \le \tilde{r}_j \le 2\sqrt{1+\epsilon}\sqrt{1/\beta_j}\,\delta_{opt}. \quad (68)$$

We consider the interval $I = [\frac{\sqrt{\epsilon}\,j\tilde{r}_j}{2\sqrt{1+\epsilon}}, \sqrt{\epsilon}\,j\tilde{r}_j]$. (68) ensures that $j\sqrt{\epsilon/\beta_j}\,\delta_{opt} \in I$. Also, we build a grid in the interval with grid length $\frac{\epsilon}{2}\cdot\frac{\sqrt{\epsilon}\,j\tilde{r}_j}{2\sqrt{1+\epsilon}}$, i.e., $R_j = \{\frac{1+l\epsilon/2}{2\sqrt{1+\epsilon}}\sqrt{\epsilon}\,j\tilde{r}_j \mid 0 \le l \le 4/\epsilon + 2\}$. Moreover, the grid length $\frac{\epsilon}{2}\cdot\frac{\sqrt{\epsilon}\,j\tilde{r}_j}{2\sqrt{1+\epsilon}} \le \frac{\epsilon}{2}j\sqrt{\epsilon/\beta_j}\,\delta_{opt}$, which implies that there exists $r_j \in R_j$ such that $j\sqrt{\epsilon/\beta_j}\,\delta_{opt} \le r_j \le (1+\frac{\epsilon}{2})j\sqrt{\epsilon/\beta_j}\,\delta_{opt}$.

Note that $R_j \subset R$, where $R = \bigcup_{t=0}^{\log n}\{\frac{1+l\epsilon/2}{2\sqrt{1+\epsilon}}\sqrt{\epsilon}\,j2^{t/2}\sqrt{\delta} \mid 0 \le l \le 4/\epsilon + 2\}$. Thus, the Claim is true.
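Under the reconstruction above, the candidate set is cheap to materialize. The following is a minimal Python sketch (our illustration, not part of the paper; all concrete parameter values are assumptions) that builds $R$ and checks the bracketing property of Claim 2 numerically.

```python
import math

# A sketch of the radius candidate set R under the reconstruction above;
# the concrete values of n, j, eps, beta_j, delta_opt below are illustrative.
def radius_candidates(delta, j, eps, n):
    """R = union over t of {(1+l*eps/2)/(2*sqrt(1+eps)) * sqrt(eps)*j*2^(t/2)*sqrt(delta)}."""
    R = []
    for t in range(int(math.log2(n)) + 1):
        base = math.sqrt(eps) * j * 2 ** (t / 2) * math.sqrt(delta)
        for l in range(int(4 / eps) + 3):
            R.append((1 + l * eps / 2) / (2 * math.sqrt(1 + eps)) * base)
    return R

# Numerical check of Claim 2: some r in R lands within a (1+eps/2) factor
# above j*sqrt(eps/beta_j)*delta_opt when delta is in [delta_opt^2, (1+eps)*delta_opt^2].
n, j, eps, beta_j, delta_opt = 1024, 3, 0.05, 0.01, 2.0
delta = 1.02 * delta_opt ** 2
target = j * math.sqrt(eps / beta_j) * delta_opt
assert any(target <= r <= (1 + eps / 2) * target for r in radius_candidates(delta, j, eps, n))
```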
Proof of Claim 3 for Lemma 6
Note that $\delta_{opt}^2 = \sum_{j=1}^{k}\beta_j\delta_j^2$, and $\beta_j \le \beta_l$ for each $1 \le l \le j-1$. Thus, we have $\delta_l \le \sqrt{1/\beta_l}\,\delta_{opt} \le \sqrt{1/\beta_j}\,\delta_{opt}$. Together with $j\sqrt{\epsilon/\beta_j}\,\delta_{opt} \le r_j$ (Claim 2) and $\|p_{v_l} - m_l\| \le \epsilon\delta_l + (1+\epsilon)l\sqrt{\epsilon/\beta_l}\,\delta_{opt}$ (by the induction hypothesis), we have

$$r_j - \|p_{v_l} - m_l\| \ge j\sqrt{\epsilon/\beta_j}\,\delta_{opt} - \big(\epsilon\delta_l + (1+\epsilon)(j-1)\sqrt{\epsilon/\beta_l}\,\delta_{opt}\big) \ge (1 - (j-1)\epsilon)\sqrt{\epsilon/\beta_j}\,\delta_{opt} - \epsilon\delta_l \ge (1 - (j-1)\epsilon - \sqrt{\epsilon})\sqrt{\epsilon/\beta_j}\,\delta_{opt}. \quad (70)$$

Since $\epsilon \in (0, \frac{1}{4k^2})$ in the input of Algorithm 1, we know $r_j - \|p_{v_l} - m_l\| > 0$. That is, $m_l$ is covered by the ball $B_{j,l}$.
| 18,549 |
1810.01049
|
2951285883
|
In this paper, we consider a class of constrained clustering problems of points in @math , where @math could be rather high. A common feature of these problems is that their optimal clusterings no longer have the locality property (due to the additional constraints), which is a key property required by many algorithms for their unconstrained counterparts. To overcome the difficulty caused by the loss of locality, we present in this paper a unified framework, called Peeling-and-Enclosing (PnE), to iteratively solve two variants of the constrained clustering problems, constrained @math -means clustering ( @math -CMeans) and constrained @math -median clustering ( @math -CMedian). Our framework is based on two standalone geometric techniques, called Simplex Lemma and Weaker Simplex Lemma, for @math -CMeans and @math -CMedian, respectively. The simplex lemma (or weaker simplex lemma) enables us to efficiently approximate the mean (or median) point of an unknown set of points by searching a small-size grid, independent of the dimensionality of the space, in a simplex (or the surrounding region of a simplex), and thus can be used to handle high dimensional data. If @math and @math are fixed numbers, our framework generates, in nearly linear time (i.e., @math ), @math @math -tuple candidates for the @math mean or median points, and one of them induces a @math -approximation for @math -CMeans or @math -CMedian, where @math is the number of points. Combining this unified framework with a problem-specific selection algorithm (which determines the best @math -tuple candidate), we obtain a @math -approximation for each of the constrained clustering problems. We expect that our technique will be applicable to other constrained clustering problems without locality.
|
Besides the traditional clustering models, Balcan et al. considered the problem of finding a clustering with small difference from the unknown ground truth @cite_29 @cite_33 .
|
{
"abstract": [
"A common approach to clustering data is to view data objects as points in a metric space, and then to optimize a natural distance-based objective such as the k-median, k-means, or min-sum score. For applications such as clustering proteins by function or clustering images by subject, the implicit hope in taking this approach is that the optimal solution for the chosen objective will closely match the desired “target” clustering (e.g., a correct clustering of proteins by function or of images by who is in them). However, most distance-based objectives, including those mentioned here, are NP-hard to optimize. So, this assumption by itself is not sufficient, assuming P ≠ NP, to achieve clusterings of low-error via polynomial time algorithms. In this article, we show that we can bypass this barrier if we slightly extend this assumption to ask that for some small constant c, not only the optimal solution, but also all c-approximations to the optimal solution, differ from the target on at most some e fraction of points—we call this (c,e)-approximation-stability. We show that under this condition, it is possible to efficiently obtain low-error clusterings even if the property holds only for values c for which the objective is known to be NP-hard to approximate. Specifically, for any constant c > 1, (c,e)-approximation-stability of k-median or k-means objectives can be used to efficiently produce a clustering of error O(e) with respect to the target clustering, as can stability of the min-sum objective if the target clusters are sufficiently large. Thus, we can perform nearly as well in terms of agreement with the target clustering as if we could approximate these objectives to this NP-hard value.",
"A common approach for solving clustering problems is to design algorithms to approximately optimize various objective functions (e.g., k-means or min-sum) defined in terms of some given pairwise distance or similarity information. However, in many learning motivated clustering applications (such as clustering proteins by function) there is some unknown target clustering; in such cases the pairwise information is merely based on heuristics and the real goal is to achieve low error on the data. In these settings, an arbitrary c-approximation algorithm for some objective would work well only if any c-approximation to that objective is close to the target clustering. In recent work, Balcan et. al [7] have shown how both for the k-means and k-median objectives this property allows one to produce clusterings of low error, even for values c such that getting a c-approximation to these objective functions is provably NP-hard. In this paper we analyze the min-sum objective from this perspective. While [7] also considered the min-sum problem, the results they derived for this objective were substantially weaker. In this work we derive new and more subtle structural properties for min-sum in this context and use these to design efficient algorithms for producing accurate clusterings, both in the transductive and in the inductive case. We also analyze the correlation clustering problem from this perspective, and point out interesting differences between this objective and k-median, k-means, or min-sum objectives."
],
"cite_N": [
"@cite_29",
"@cite_33"
],
"mid": [
"2041504706",
"15368807"
]
}
|
A Unified Framework for Clustering Constrained Data without Locality Property
|
be used to handle high dimensional data. If $k$ and $\frac{1}{\epsilon}$ are fixed numbers, our framework generates, in nearly linear time (i.e., $O(n(\log n)^{k+1}d)$), $O((\log n)^k)$ $k$-tuple candidates for the $k$ mean or median points, and one of them induces a $(1+\epsilon)$-approximation for k-CMeans or k-CMedian, where $n$ is the number of points. Combining this unified framework with a problem-specific selection algorithm (which determines the best $k$-tuple candidate), we obtain a $(1+\epsilon)$-approximation for each of the constrained clustering problems. Our framework improves considerably the best known results for these problems. We expect that our technique will be applicable to other constrained clustering problems without locality.
Keywords constrained clustering · k-means/median · approximation algorithms · high dimensions
Introduction
Clustering is one of the most fundamental problems in computer science, and finds numerous applications in many different areas, such as data management, machine learning, bioinformatics, networking, etc. [45]. The common goal of many clustering problems is to partition a set of given data items into a number of clusters so as to minimize the total cost measured by a certain objective function. For example, the popular k-means (or k-median) clustering seeks $k$ mean (or median) points to induce a partition of the given data items so that the average squared distance (or the average distance) from each data item to its closest mean (or median) point is minimized. Most existing clustering techniques assume that the data items are independent of each other and therefore can "freely" determine their memberships in the resulting clusters (i.e., a data item does not need to pay attention to the clustering of others). However, in many real-world applications, data items are often constrained or correlated, and handling such additional constraints requires a great deal of effort. In recent years, considerable attention has been paid to various types of constrained clustering problems, and a number of techniques, such as l-diversity clustering [55], r-gather clustering [3,33,43], capacitated clustering [6,25,48], chromatic clustering [8,28], and probabilistic clustering [24,40,53], have been developed. In this paper, we study a class of constrained clustering problems of points in Euclidean space.
Given a set of points $P$ in $R^d$, a positive integer $k$, and a constraint $C$, the constrained k-means (or k-median) clustering problem is to partition $P$ into $k$ clusters so as to minimize the objective function of the ordinary k-means (or k-median) clustering and satisfy the constraint $C$. In general, the problems are denoted by k-CMeans and k-CMedian, respectively.
The detailed definition for each individual problem is given in Section 4. Roughly speaking, data constraints can be imposed at either cluster or item level. Cluster level constraints are restrictions on the resulting clusters, such as the size of the clusters [3] or their mutual differences [72], while item level constraints are mainly on data items inside each cluster, such as the coloring constraint which prohibits items of the same color from being clustered into one cluster [8,28,55]. The additional constraints could considerably change the nature of the clustering problems. For instance, one key property exhibited in many unconstrained clustering problems is the so-called locality property, which indicates that each cluster is located entirely inside the Voronoi cell of its center (e.g., the mean, median, or center point) in the Voronoi diagram of all the centers [44] (see Figure 1a). Existing algorithms for these clustering problems often rely on such a property [10,12,22,37,44,51,57,60]. However, due to the additional constraints, the locality property may no longer exist (see Figure 1b). Therefore, we need new techniques to overcome this challenge.
Our Main Results
In this paper we present a unified framework called Peeling-and-Enclosing (PnE), based on two standalone geometric techniques called Simplex Lemma and Weaker Simplex Lemma, to solve a class of constrained clustering problems without the locality property in Euclidean space, where the dimensionality of the space could be rather high and the number $k$ of clusters is assumed to be some fixed number. Particularly, we investigate the constrained $k$-means (k-CMeans) and $k$-median (k-CMedian) versions of these problems. For the k-CMeans problem, our unified framework generates in $O(n(\log n)^{k+1}d)$ time a set of $k$-tuple candidates of cardinality $O((\log n)^k)$ for the to-be-determined $k$ mean points. We show that among the set of candidates, one of them induces a $(1+\epsilon)$-approximation for k-CMeans. To find the best $k$-tuple candidate, a problem-specific selection algorithm is needed for each individual constrained clustering problem (note that due to the additional constraints, the selection problems may not be trivial). Combining the unified framework with the selection algorithms, we obtain a $(1+\epsilon)$-approximation for each constrained clustering problem in the considered class. Our results considerably improve (in various ways) the best known algorithms for all these problems (see the table in Section 1.2). Our techniques can also be extended to k-CMedian to achieve $(1+\epsilon)$-approximations for these problems with the same time complexities. Below is a list of the constrained clustering problems considered in this paper. We expect that our technique will be applicable to other clustering problems without the locality property, as long as the corresponding selection problems can be solved.
1. l-Diversity Clustering. In this problem, each input point is associated with a color, and each cluster has no more than a fraction $\frac{1}{l}$ (for some constant $l > 1$) of its points sharing the same color. The problem is motivated by a widely-used privacy preserving model called l-diversity [55,56] in data management, which requires that each block contains no more than a fraction $\frac{1}{l}$ of items sharing the same sensitive attribute.
2. Chromatic Clustering. In [28], Ding and Xu introduced a new clustering problem called chromatic clustering, which requires that points with the same color be clustered in different clusters. It is motivated by a biological application for clustering chromosome homologs in a population of cells, where homologs from the same cell should be clustered into different clusters. A similar problem also appears in applications related to transportation system design [8].
3. Fault Tolerant Clustering. The problem of fault tolerant clustering assigns each point $p$ to its $l$ nearest cluster centers for some $l \ge 1$, and counts all the $l$ distances as its cost. The problem has been extensively studied in various applications for achieving better fault tolerance [21,42,47,52,64].
4. r-Gather Clustering. This clustering problem requires each of the clusters to contain at least $r$ points for some $r > 1$. It is motivated by the k-anonymity model for privacy preserving [3,65], where each block contains at least $k$ items.
5. Capacitated Clustering. This clustering problem has an upper bound on the size of each cluster, and finds various applications in data mining and resource assignment [25,48].
6. Semi-Supervised Clustering. Many existing clustering techniques, such as ensemble clustering [62,63] and consensus clustering [4,23], make use of a priori knowledge. Since such clusterings are not always based on the geometric cost (e.g., k-means cost) of the input, a more accurate way of clustering is to consider both the a priori knowledge and the geometric cost. We consider the following semi-supervised clustering problem: given a set $P$ of points and a clustering $S$ of $P$ (based on the a priori knowledge), partition $P$ into $k$ clusters so as to minimize the sum (or some function) of the geometric cost and the difference with the given clustering $S$. Another related problem is evolutionary clustering [20], where the clustering at each time point needs to minimize not only the geometric cost, but also the total shift from the clustering at the previous time point (which can be viewed as $S$).
7. Uncertain Data Clustering. Due to unavoidable error, data for clustering are not always precise. This motivates us to consider the following probabilistic clustering problem [24,40,53]: given a set of "nodes" with each represented as a probabilistic distribution over a point set in $R^d$, group the nodes into $k$ clusters so as to minimize the expected cost with respect to the probabilistic distributions.
Note: Following our work published in [30], Bhattacharya et al. [17] improved the running time for finding the candidates of $k$ cluster centers from nearly linear to linear, based on the elegant $D^2$-sampling. Their work also follows the framework for clustering constrained data presented in this paper, i.e., generating the candidates and selecting the best one by a problem-specific selection algorithm. Our paper represents the first systematic theoretical study of the constrained clustering problems. Some of the underlying techniques, such as the Simplex Lemma and Weaker Simplex Lemma, are interesting in their own right, and have already been used to solve other problems [31] (e.g., the popular "truth discovery" problem in data mining).
Our Main Ideas
Most existing k-means or k-median clustering algorithms in Euclidean space consist of two main steps: (1) identify the set of k mean or median points and (2) partition the input points into k clusters based on these mean or median points (we call this step Partition). Note that for some constrained clustering problems, the Partition step may not be trivial. More formally, we have the following definition.
Definition 1 (Partition Step) Given an instance P of k-CMeans (or k-CMedian) and k cluster centers (i.e., the mean or median points), the Partition step is to form k clusters of P , where the clusters should satisfy the constraint and each cluster is assigned to an individual cluster center, such that the objective function of the ordinary k-means (or k-median) clustering is minimized.
To determine the set of $k$ mean or median points in step (1), most existing algorithms (either explicitly or implicitly) rely on the locality property. To shed some light on this, consider a representative and elegant approach by Kumar et al. [51] for k-means clustering. Let $\{Opt_1, \cdots, Opt_k\}$ be the set of $k$ unknown optimal clusters in non-increasing order of their sizes. Their approach uses random sampling and sphere peeling to iteratively find the $k$ mean points. At the $j$-th iterative step, it draws $j-1$ peeling spheres centered at the $j-1$ already obtained mean points, and takes a random sample of the points outside the peeling spheres to find the $j$-th mean point. Due to the locality property, the points belonging to the first $j-1$ clusters lie inside their corresponding $j-1$ Voronoi cells; that is, for each peeling sphere, most of the covered points belong to its corresponding cluster, which ensures the correctness of the peeling step.
However, when the additional constraint (such as coloring or size) is imposed on the points, the locality property may no longer exist (see Figure 1b), and thus the correctness of the peeling step cannot always be guaranteed. In this scenario, the core-set technique [36] is also unlikely to be able to resolve the issue. The main reason is that although the core-set can greatly reduce the size of the input points, it is quite challenging to impose the constraint through the core-set.
To overcome this challenge, we present a unified framework, called Peeling-and-Enclosing (PnE), in this paper, based on a standalone new geometric technique called the Simplex Lemma. The simplex lemma aims to address the major obstacle encountered by the peeling strategy in [51] for constrained clustering problems. More specifically, due to the loss of locality, at the $j$-th peeling step, the points of the $j$-th cluster $Opt_j$ could be scattered over all the Voronoi cells of the first $j-1$ mean points, and therefore their mean point can no longer be simply determined by the sample outside the $j-1$ peeling spheres. To resolve this issue, our main idea is to view $Opt_j$ as the union of $j$ unknown subsets, $Q_1, \cdots, Q_j$, with each $Q_l$, $1 \le l \le j-1$, being the set of points inside the Voronoi cell (or peeling sphere) of the obtained $l$-th mean point and $Q_j$ being the set of remaining points of $Opt_j$. After approximating the mean point of each unknown subset by using random sampling, we build a simplex to enclose a region which contains the mean point of $Opt_j$, and then search the simplex region for a good approximation of the $j$-th mean point. To make this approach work, we need to overcome two difficulties: (a) how to generate the desired simplex containing the $j$-th mean point, and (b) how to efficiently search for the (approximate) $j$-th mean point inside the simplex.
For difficulty (a), our idea is to use the already determined $j-1$ mean points (which can be shown to also be the approximate mean points of $Q_1, \cdots, Q_{j-1}$, respectively) and another point, which is the mean of those points in $Opt_j$ outside the peeling spheres (or Voronoi cells) of the first $j-1$ mean points (i.e., $Q_j$), to build a $(j-1)$-dimensional simplex containing the $j$-th mean point. Since we do not know how $Opt_j$ is partitioned (i.e., how $Opt_j$ intersects the $j-1$ peeling spheres), we vary the radii of the peeling spheres $O(\log n)$ times to guess the partition and generate a set of simplexes, where the radius candidates are based on an upper bound of the optimal value determined by a novel estimation algorithm (in Section 3.4). We show that among the set of simplexes, one of them contains the $j$-th (approximate) mean point.
For difficulty (b), our simplex lemma (in Section 2) shows that if each vertex $v_l$ of the simplex $V$ is the (approximate) mean point of $Q_l$, then we can find a good approximation of the mean point of $Opt_j$ by searching a small-size grid inside $V$. A nice feature of the simplex lemma is that the grid size is independent of the dimensionality of the space and thus can be used to handle high dimensional data. In some sense, our simplex lemma can be viewed as a considerable generalization of the well-known sampling lemma (i.e., Lemma 4 in this paper) in [44], which has been widely used for estimating the mean of a point set through random sampling [35,44,51]. Different from Lemma 4, which requires a global view of the point set (meaning that the sample needs to be taken from the point set), our simplex lemma only requires some partial views (e.g., sample sets are taken from those unknown subsets, whose sizes might be quite small). If $Opt_j$ is the point set, our simplex lemma enables us to bound the error by the variance of $Opt_j$ (i.e., a local measure) and the optimal value of the clustering problem on the whole instance $P$ (i.e., a global measure), and thus helps us to ensure the quality of our solution.
For the k-CMedian problem, we show that although the simplex lemma no longer holds, since the median point may lie outside the simplex, a weaker version (in Section 5.1) exists, which searches a surrounding region of the simplex. Thus our Peeling-and-Enclosing framework works for both k-CMeans and k-CMedian. It generates in total $O((\log n)^k)$ $k$-tuple candidates for the constrained $k$ mean or median points. To determine the best $k$ mean or median points, we need to use the property of each individual problem to design a selection algorithm. The selection algorithm takes each $k$-tuple candidate, computes a clustering (i.e., completing the Partition step) satisfying the additional constraint, and outputs the $k$-tuple with the minimum cost. We present a selection algorithm for each considered problem in Sections 4 and 5.4.
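The overall workflow can be summarized by the following Python sketch (our illustration, not part of the paper). The helpers `generate_candidates` and `partition` are hypothetical placeholders: the first stands for the Peeling-and-Enclosing candidate generation (Algorithm 1), and the second for a problem-specific Partition-step solver, e.g., one of the min-cost-flow routines of Section 4, returning a feasible clustering and its cost.

```python
# A high-level sketch of the framework: generate candidate k-tuples, solve
# the Partition step for each, and keep the cheapest feasible clustering.
# `generate_candidates` and `partition` are hypothetical placeholders.
def constrained_clustering(points, k, eps, generate_candidates, partition):
    best_cost, best_clusters = float("inf"), None
    for centers in generate_candidates(points, k, eps):   # O((log n)^k) k-tuples
        clusters, cost = partition(points, centers)       # best feasible clustering
        if cost < best_cost:
            best_cost, best_clusters = cost, clusters
    return best_clusters, best_cost
```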
Simplex Lemma
In this section, we present the Simplex Lemma for approximating the mean point of an unknown point set $Q$, where the only known information is a set of (approximate) mean points of the subsets in a partition of $Q$.

Lemma 1 (Simplex Lemma I) Let $Q$ be a set of points in $R^d$ with a partition $Q = \bigcup_{l=1}^{j} Q_l$ and $Q_{l_1} \cap Q_{l_2} = \emptyset$ for any $l_1 \ne l_2$. Let $o$ be the mean point of $Q$, and $o_l$ be the mean point of $Q_l$ for $1 \le l \le j$. Let the variance of $Q$ be $\delta^2 = \frac{1}{|Q|}\sum_{q\in Q}\|q - o\|^2$, and $V$ be the simplex determined by $\{o_1, \cdots, o_j\}$. Then for any $0 < \epsilon \le 1$, it is possible to construct a grid of size $O((8j/\epsilon)^j)$ inside $V$ such that at least one grid point $\tau$ satisfies the inequality $\|\tau - o\| \le \sqrt{\epsilon}\,\delta$.

Proof The following claim has been proved in [51].
[Figure 2: (a) the simplex determined by the exact mean points $o_1, \cdots, o_4$, which contains $o$; (b) the simplex determined by the approximate mean points $o'_1, \cdots, o'_4$.]
Claim 1 Let $Q$ be a set of points in $R^d$, and $o$ be the mean point of $Q$. For any point $\tilde{o} \in R^d$, $\sum_{q\in Q}\|q - \tilde{o}\|^2 = \sum_{q\in Q}\|q - o\|^2 + |Q| \cdot \|o - \tilde{o}\|^2$.
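Claim 1 is easy to verify numerically; the following snippet (our illustration, assuming numpy) checks the identity on random data, with `o_tilde` an arbitrary reference point.

```python
import numpy as np

# A quick numeric check of Claim 1 (the bias-variance identity) on random data.
rng = np.random.default_rng(0)
Q = rng.normal(size=(100, 5))
o = Q.mean(axis=0)
o_tilde = rng.normal(size=5)

lhs = ((Q - o_tilde) ** 2).sum()
rhs = ((Q - o) ** 2).sum() + len(Q) * ((o - o_tilde) ** 2).sum()
assert np.isclose(lhs, rhs)
```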
Claim 1 yields the following lemma, which is used throughout the paper (its statement, reconstructed from its uses below, bounds how far the mean of a large subset can drift from the mean of the whole set).

Lemma 2 Let $Q$, $o$, and $\delta^2$ be as in Lemma 1, and let $Q_1 \subseteq Q$ with $|Q_1| = \alpha|Q|$ for some $0 < \alpha < 1$ and mean point $o_1$. Then $\|o_1 - o\| \le \sqrt{\frac{1-\alpha}{\alpha}}\,\delta$.

Proof Let $Q_2 = Q \setminus Q_1$, and $o_2$ be its mean point. By Claim 1, we have the following two equalities:

$$\sum_{q\in Q_1}\|q-o\|^2 = \sum_{q\in Q_1}\|q-o_1\|^2 + |Q_1|\cdot\|o_1-o\|^2, \quad (1)$$
$$\sum_{q\in Q_2}\|q-o\|^2 = \sum_{q\in Q_2}\|q-o_2\|^2 + |Q_2|\cdot\|o_2-o\|^2. \quad (2)$$

Let $L = \|o_1 - o_2\|$. By the definition of the mean point, we have

$$o = \frac{1}{|Q|}\sum_{q\in Q} q = \frac{1}{|Q|}\Big(\sum_{q\in Q_1} q + \sum_{q\in Q_2} q\Big) = \frac{1}{|Q|}\big(|Q_1|o_1 + |Q_2|o_2\big). \quad (3)$$

Thus the three points $\{o, o_1, o_2\}$ are collinear, with $\|o_1 - o\| = (1-\alpha)L$ and $\|o_2 - o\| = \alpha L$. Meanwhile, by the definition of $\delta$, we have

$$\delta^2 = \frac{1}{|Q|}\Big(\sum_{q\in Q_1}\|q-o\|^2 + \sum_{q\in Q_2}\|q-o\|^2\Big). \quad (4)$$

Combining (1) and (2), we have

$$\delta^2 = \frac{1}{|Q|}\Big(\sum_{q\in Q_1}\|q-o_1\|^2 + |Q_1|\cdot\|o_1-o\|^2 + \sum_{q\in Q_2}\|q-o_2\|^2 + |Q_2|\cdot\|o_2-o\|^2\Big) \ge \frac{1}{|Q|}\big(|Q_1|\cdot\|o_1-o\|^2 + |Q_2|\cdot\|o_2-o\|^2\big) = \alpha\big((1-\alpha)L\big)^2 + (1-\alpha)(\alpha L)^2 = \alpha(1-\alpha)L^2. \quad (5)$$

Thus, we have $L \le \frac{\delta}{\sqrt{\alpha(1-\alpha)}}$, which means that $\|o_1 - o\| = (1-\alpha)L \le \sqrt{\frac{1-\alpha}{\alpha}}\,\delta$.
Proof (of Lemma 1) We prove this lemma by induction on j.
Base case: For $j = 1$, since $Q_1 = Q$, $o_1 = o$. Thus, the simplex $V$ and the grid are simply the point $o_1$. Clearly $\tau = o_1$ satisfies the inequality.

Induction step: Assume that the lemma holds for any $j \le j_0$ for some $j_0 \ge 1$ (i.e., the induction hypothesis). Now we consider the case of $j = j_0 + 1$. First, we assume that $\frac{|Q_l|}{|Q|} \ge \frac{\epsilon}{4j}$ for each $1 \le l \le j$. Otherwise, we can reduce the problem to the case of a smaller $j$ in the following way. Let $I = \{l \mid 1 \le l \le j, \frac{|Q_l|}{|Q|} < \frac{\epsilon}{4j}\}$ be the index set of small subsets. Then $\sum_{l\in I}\frac{|Q_l|}{|Q|} < \frac{\epsilon}{4}$, and $\sum_{l\notin I}\frac{|Q_l|}{|Q|} \ge 1 - \frac{\epsilon}{4}$. By Lemma 2, we know that $\|o' - o\| \le \sqrt{\frac{\epsilon/4}{1-\epsilon/4}}\,\delta$, where $o'$ is the mean point of $\bigcup_{l\notin I} Q_l$. Let $(\delta')^2$ be the variance of $\bigcup_{l\notin I} Q_l$. Then, we have $(\delta')^2 \le \frac{|Q|}{|\bigcup_{l\notin I} Q_l|}\delta^2 \le \frac{1}{1-\epsilon/4}\delta^2$.

Thus, if we replace $Q$ and $\epsilon$ by $\bigcup_{l\notin I} Q_l$ and $\frac{\epsilon}{16}$, respectively, and find a point $\tau$ such that $\|\tau - o'\|^2 \le \frac{\epsilon}{16}(\delta')^2 \le \frac{\epsilon/16}{1-\epsilon/4}\delta^2$, then we have

$$\|\tau - o\|^2 \le \big(\|\tau - o'\| + \|o' - o\|\big)^2 \le \frac{9\epsilon/16}{1-\epsilon/4}\delta^2 \le \epsilon\delta^2, \quad (6)$$

where the last inequality is due to the fact $\epsilon < 1$. This means that we can reduce the problem to a problem with the point set $\bigcup_{l\notin I} Q_l$ and a smaller $j$ (i.e., $j - |I|$). By the induction hypothesis, we know that the reduced problem can be solved, where the new simplex would be a subset of $V$ determined by $\{o_l \mid 1 \le l \le j, l \notin I\}$, and therefore the induction step holds for this case. Note that in general we do not know $I$, but we can enumerate all the $2^j$ possible combinations to guess $I$ if $j$ is a fixed number, as is the case in the algorithm in Section 3.2. Thus, in the following discussion, we can assume that $\frac{|Q_l|}{|Q|} \ge \frac{\epsilon}{4j}$ for each $1 \le l \le j$.

For each $1 \le l \le j$, since $\frac{|Q_l|}{|Q|} \ge \frac{\epsilon}{4j}$, by Lemma 2 we know that $\|o_l - o\| \le \sqrt{\frac{1-\epsilon/4j}{\epsilon/4j}}\,\delta \le 2\sqrt{j/\epsilon}\,\delta$. This, together with the triangle inequality, implies that for any $1 \le l, l' \le j$,

$$\|o_l - o_{l'}\| \le \|o_l - o\| + \|o_{l'} - o\| \le 4\sqrt{j/\epsilon}\,\delta. \quad (7)$$

Thus, if we pick any index $l_0$ and draw a ball $B$ centered at $o_{l_0}$ and with radius $r = \max_{1\le l\le j}\{\|o_l - o_{l_0}\|\} \le 4\sqrt{j/\epsilon}\,\delta$ (by (7)), the whole simplex $V$ will be inside $B$. Note that $o = \sum_{l=1}^{j}\frac{|Q_l|}{|Q|}o_l$, so $o$ lies inside the simplex $V$. To guarantee that $o$ is contained in the ball $B$, we can construct $B$ only in the $(j-1)$-dimensional space spanned by $\{o_1, \cdots, o_j\}$, rather than the whole $R^d$ space. Also, if we build a grid inside $B$ with grid length $\frac{\epsilon r}{4j}$, i.e., generating a uniform mesh with each cell being a $(j-1)$-dimensional hypercube of edge length $\frac{\epsilon r}{4j}$, the total number of grid points is no more than $O((\frac{8j}{\epsilon})^j)$. With this grid, we know that for any point $p$ inside $V$, there exists a grid point $g$ such that $\|g - p\| \le \sqrt{j(\frac{\epsilon r}{4j})^2} = \frac{\epsilon}{4\sqrt{j}}r \le \sqrt{\epsilon}\,\delta$. This means that we can find a grid point $\tau$ inside $V$ such that $\|\tau - o\| \le \sqrt{\epsilon}\,\delta$. Thus, the induction step holds, and the lemma is true for any $j \ge 1$.
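Discretizing a simplex is straightforward in code. The proof builds an axis-aligned grid in the $(j-1)$-dimensional affine hull of $\{o_1, \cdots, o_j\}$; the following sketch (our illustration, assuming numpy) uses the simpler barycentric grid, i.e., convex combinations with weights that are multiples of $1/m$, which has the same $O((cj/\epsilon)^j)$ flavor of size.

```python
import itertools
import numpy as np

# A sketch of discretizing a simplex via a barycentric grid (a stand-in for
# the axis-aligned grid used in the proof): all convex combinations of the
# vertices with weights in {0, 1/m, 2/m, ..., 1}.
def simplex_grid(vertices, m):
    """vertices: (j, d) array; yield all convex combinations with weights l_i/m."""
    j = len(vertices)
    for comp in itertools.product(range(m + 1), repeat=j - 1):
        if sum(comp) <= m:
            weights = np.array(list(comp) + [m - sum(comp)]) / m
            yield weights @ vertices

# Example: a granularity-1/8 grid on a triangle in R^5 (45 points, all inside V).
V = np.random.default_rng(1).normal(size=(3, 5))
grid = list(simplex_grid(V, 8))
```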
In the above lemma, we assume that the exact positions of $\{o_1, \cdots, o_j\}$ are known (see Figure 2a). However, in some scenarios (e.g., in the algorithm in Section 3.2), we only know an approximate position of each mean point $o_i$ (see Figure 2b). The following lemma shows that an approximate position of $o$ can still be similarly determined (see Section 7.1 for the proof).

Lemma 3 (Simplex Lemma II) Let $Q$, $o$, $Q_l$, $o_l$, $1 \le l \le j$, and $\delta$ be defined as in Lemma 1. Let $\{o'_1, \cdots, o'_j\}$ be $j$ points in $R^d$ such that $\|o'_l - o_l\| \le L$ for $1 \le l \le j$ and some $L > 0$, and let $V'$ be the simplex determined by $\{o'_1, \cdots, o'_j\}$. Then for any $0 < \epsilon \le 1$, it is possible to construct a grid of size $O((8j/\epsilon)^j)$ inside $V'$ such that at least one grid point $\tau$ satisfies the inequality $\|\tau - o\| \le \sqrt{\epsilon}\,\delta + (1+\epsilon)L$.
Peeling-and-Enclosing Algorithm for k-CMeans
In this section, we present a new Peeling-and-Enclosing (PnE) algorithm for generating a set of candidates for the mean points of k-CMeans. Our algorithm uses peeling spheres and the simplex lemma to iteratively find a good candidate for each unknown cluster. An overview of the algorithm is given in Section 3.1.
Some notations: Let $P = \{p_1, \cdots, p_n\}$ be the set of $R^d$ points in k-CMeans, and $\mathcal{OPT} = \{Opt_1, \cdots, Opt_k\}$ be the $k$ unknown optimal constrained clusters with $m_j$ being the mean point of $Opt_j$ for $1 \le j \le k$. Without loss of generality, we assume that $|Opt_1| \ge |Opt_2| \ge \cdots \ge |Opt_k|$. Denote by $\delta_{opt}^2$ the optimal objective value, i.e., $\delta_{opt}^2 = \frac{1}{n}\sum_{j=1}^{k}\sum_{p\in Opt_j}\|p - m_j\|^2$. We also set $\epsilon > 0$ as the parameter related to the quality of the approximate clustering result. Our Peeling-and-Enclosing algorithm needs an upper bound $\Delta$ on the optimal value $\delta_{opt}^2$. Specifically, $\delta_{opt}^2$ satisfies the condition $\Delta/c \le \delta_{opt}^2 \le \Delta$ for some constant $c \ge 1$. In Section 3.4, we will present a novel algorithm to determine such an upper bound for general constrained k-means clustering problems. Then, the algorithm searches for a $(1+\epsilon)$-approximation $\delta^2$ of $\delta_{opt}^2$ in the set

$$H = \{\Delta/c, (1+\epsilon)\Delta/c, (1+\epsilon)^2\Delta/c, \cdots, (1+\epsilon)^{\lceil\log_{1+\epsilon} c\rceil}\Delta/c \ge \Delta\}. \quad (8)$$

Obviously, there exists one element of $H$ lying inside the interval $[\delta_{opt}^2, (1+\epsilon)\delta_{opt}^2]$, and the size of $H$ is $O(\frac{1}{\epsilon}\log c)$.
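The candidate set $H$ in (8) is just a geometric sequence; a minimal Python sketch follows (our illustration, with made-up values; the choice $c = 18\lambda + 16$ anticipates Theorem 2 in Section 3.4).

```python
import math

# A minimal sketch of the candidate set H in (8): a geometric sequence covering
# [Delta/c, Delta]; some element then lands in [delta_opt^2, (1+eps)*delta_opt^2].
def build_H(Delta, c, eps):
    size = math.ceil(math.log(c, 1 + eps)) + 1     # |H| = O((1/eps) * log c)
    return [(Delta / c) * (1 + eps) ** i for i in range(size)]

H = build_H(Delta=100.0, c=52.0, eps=0.1)          # e.g., c = 18*lambda + 16 with lambda = 2
assert H[0] == 100.0 / 52.0 and H[-1] >= 100.0     # H spans [Delta/c, Delta]
```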
Overview of the Peeling-and-Enclosing Algorithm
[Figure 3: an illustration of one Peeling-and-Enclosing iteration on $Opt_4$: (a) the already obtained points $p_{v_1}, p_{v_2}, p_{v_3}$ and the unknown mean $m_4$; (b) the three peeling spheres; (c) the sample mean $\pi$ of the points of $Opt_4$ outside the spheres; (d) the simplex determined by $\{p_{v_1}, p_{v_2}, p_{v_3}, \pi\}$, inside which $p_{v_4}$ is found.]
At each searching step, our algorithm performs a sphere-peeling and simplex-enclosing procedure to iteratively generate $k$ approximate mean points for the constrained clusters. Initially, our algorithm uses Lemmas 4 and 5 to find an approximate mean point $p_{v_1}$ for $Opt_1$ (note that since $Opt_1$ is the largest cluster, $|Opt_1|/n \ge 1/k$ and the sampling lemma applies). At the $(j+1)$-th iteration, it already has the approximate mean points $p_{v_1}, \cdots, p_{v_j}$ for $Opt_1, \cdots, Opt_j$, respectively (see Figure 3(a)). Due to the lack of locality, some points of $Opt_{j+1}$ could be scattered over the regions (e.g., Voronoi cells or peeling spheres) of $Opt_1, \cdots, Opt_j$ and are difficult to distinguish from the points in these clusters. Since the number of such points could be small (compared to that of the first $j$ clusters), they need to be handled differently from the remaining points. Our idea is to separate them using $j$ peeling spheres, $B_{j+1,1}, \cdots, B_{j+1,j}$, centered at the $j$ approximate mean points respectively and with some properly guessed radius (see Figure 3(b)). Let $A$ be the set of unknown points in $Opt_{j+1} \setminus (\bigcup_{l=1}^{j} B_{j+1,l})$. Our algorithm considers two cases: (a) $|A|$ is large enough and (b) $|A|$ is small. For case (a), since $|A|$ is large enough, we can use Lemma 4 and Lemma 5 to find an approximate mean point $\pi$ of $A$, and then construct a simplex determined by $\pi$ and $p_{v_1}, \cdots, p_{v_j}$ to contain the $(j+1)$-th mean point (see Figure 3(c)). Note that $A$ and $Opt_{j+1} \cap B_{j+1,l}$, $1 \le l \le j$, can be viewed as a partition of $Opt_{j+1}$, where the points covered by multiple peeling spheres can be assigned to any one of them, and $p_{v_l}$ can be shown to be an approximate mean point of $Opt_{j+1} \cap B_{j+1,l}$; thus the simplex lemma applies. For case (b), it directly constructs a simplex determined just by $p_{v_1}, \cdots, p_{v_j}$. For either case, our algorithm builds a grid inside the simplex and uses Lemma 3 to find an approximate mean point for $Opt_{j+1}$ (i.e., $p_{v_{j+1}}$, see Figure 3(d)). The algorithm repeats the Peeling-and-Enclosing procedure $k$ times to generate the $k$ approximate mean points.
Peeling-and-Enclosing Algorithm
Before presenting our algorithm, we first introduce two basic lemmas from [29,44] for random sampling. Let $S$ be a set of $n$ points in $R^d$, and $T$ be a randomly selected subset of size $t$ from $S$. Denote by $m(S)$ and $m(T)$ the mean points of $S$ and $T$ respectively.

Lemma 4 ([44]) With probability $1-\eta$, $\|m(S) - m(T)\|^2 < \frac{1}{\eta t}\delta^2$, where $\delta^2 = \frac{1}{n}\sum_{s\in S}\|s - m(S)\|^2$ and $0 < \eta < 1$.

Lemma 5 ([29]) Let $\Omega$ be a set of elements, and $S$ be a subset of $\Omega$ with $\frac{|S|}{|\Omega|} = \alpha$ for some $\alpha \in (0,1)$. If we randomly select $\frac{t\ln\frac{t}{\eta}}{\ln(1+\alpha)} = O(\frac{t}{\alpha}\ln\frac{t}{\eta})$ elements from $\Omega$, then with probability at least $1-\eta$, the sample contains at least $t$ elements from $S$, for $0 < \eta < 1$ and $t \in \mathbb{Z}^+$.
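Lemma 4 is easy to see in action; the following snippet (our illustration, assuming numpy) compares the sample-mean error against the bound on random data.

```python
import numpy as np

# An empirical illustration of Lemma 4: the sample mean is within squared
# distance delta^2/(eta*t) of the true mean with probability >= 1 - eta.
rng = np.random.default_rng(2)
S = rng.normal(size=(100000, 10))
delta2 = ((S - S.mean(axis=0)) ** 2).sum(axis=1).mean()   # variance delta^2

t, eta = 400, 0.1
T = S[rng.choice(len(S), size=t, replace=False)]
err2 = ((S.mean(axis=0) - T.mean(axis=0)) ** 2).sum()
print(err2 <= delta2 / (eta * t))    # True with probability at least 1 - eta
```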
Our Peeling-and-Enclosing algorithm is shown in Algorithm 1.

Algorithm 1 Peeling-and-Enclosing for k-CMeans
Input: $P = \{p_1, \cdots, p_n\}$ in $R^d$, $k \ge 2$, a constant $\epsilon \in (0, \frac{1}{4k^2})$, and an upper bound $\Delta$ with $\Delta/c \le \delta_{opt}^2 \le \Delta$.
1. For each value $\delta \in H$ (see (8)), run Algorithm 2 on $(P, k, \epsilon, \delta)$ to obtain a tree $T_i$.
2. For each root-to-leaf path of every $T_i$, build a $k$-tuple candidate using the $k$ points associated with the path.

Algorithm 2 Peeling-and-Enclosing-Tree
Input: $P = \{p_1, \cdots, p_n\}$ in $R^d$, $k \ge 2$, a constant $\epsilon \in (0, \frac{1}{4k^2})$, and $\delta > 0$.
1. Initialize $T$ as a single root node $v$ associated with no point.
2. Recursively grow each node $v$ in the following way:
(a) If the height of $v$ is already $k$, then it is a leaf.
(b) Otherwise, let $j$ be the height of $v$. Build the radius candidate set $R = \bigcup_{t=0}^{\log n}\{\frac{1+l\epsilon/2}{2\sqrt{1+\epsilon}}\sqrt{\epsilon}\,j2^{t/2}\sqrt{\delta} \mid 0 \le l \le 4/\epsilon + 2\}$. For each $r \in R$, do
i. Let $\{p_{v_1}, \cdots, p_{v_j}\}$ be the $j$ points associated with the nodes on the root-to-$v$ path.
ii. For each $p_{v_l}$, $1 \le l \le j$, construct a ball $B_{j+1,l}$ centered at $p_{v_l}$ and with radius $r$.
iii. Take a random sample from $P \setminus \bigcup_{l=1}^{j} B_{j+1,l}$ of size $s = \frac{8k^3}{\epsilon^9}\ln\frac{k^2}{\epsilon^6}$. Compute the mean points of all the subsets of the sample, and denote them by $\Pi = \{\pi_1, \cdots, \pi_{2^s-1}\}$.
iv. For each $\pi_i \in \Pi$, construct a simplex using $\{p_{v_1}, \cdots, p_{v_j}, \pi_i\}$ as its vertices. Also construct another simplex using $\{p_{v_1}, \cdots, p_{v_j}\}$ as its vertices. For each simplex, build a grid of size $O((32j/\epsilon^2)^j)$ inside itself and each of its $2^j$ possible degenerated sub-simplices.
v. In total, there are $2^{s+j}(32j/\epsilon^2)^j$ grid points inside the $2^s$ simplices. For each grid point, add one child to $v$, and associate it with the grid point.
3. Output $T$.
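The following condensed Python sketch mirrors the control flow of Algorithm 2 (peel with each candidate radius, sample outside the spheres, enclose with a grid). It is our illustration only: the sample size, the radius set, and the one-dimensional "grid" are deliberately simplified stand-ins for the exact quantities above.

```python
import numpy as np

# A condensed sketch of Algorithm 2's recursion; constants are simplified.
def grow(P, k, eps, delta, path, rng):
    """Yield k-tuples of candidate mean points, one per root-to-leaf path."""
    if len(path) == k:
        yield tuple(path)
        return
    if path:
        radii = [np.sqrt(eps) * (len(path) + 1) * 2 ** (t / 2) * np.sqrt(delta)
                 for t in range(int(np.log2(len(P))) + 1)]
    else:
        radii = [0.0]                       # no peeling at the root
    for r in radii:
        # Peeling: keep only the points outside spheres around the current path.
        if path:
            dists = np.stack([np.linalg.norm(P - c, axis=1) for c in path])
            outside = P[(dists > r).all(axis=0)]
        else:
            outside = P
        if len(outside) == 0:
            continue
        # Enclosing: the sample mean plays the role of pi; a coarse segment
        # between pi and the path's centroid stands in for the simplex grid.
        pi = outside[rng.choice(len(outside), size=min(32, len(outside)))].mean(axis=0)
        if path:
            anchor = np.mean(path, axis=0)
            children = [(1 - w) * pi + w * anchor for w in np.linspace(0.0, 1.0, 8)]
        else:
            children = [pi]
        for child in children:
            yield from grow(P, k, eps, delta, path + [child], rng)

# Usage: candidates = list(grow(P, 2, 0.25, 1.0, [], np.random.default_rng(0)))
```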
Theorem 1 Let $P$ be a set of $n$ points in $R^d$ and $k \in \mathbb{Z}^+$ be a fixed constant. In $O(2^{poly(\frac{k}{\epsilon})} n(\log n)^{k+1} d)$ time, Algorithm 1 outputs $O(2^{poly(\frac{k}{\epsilon})}(\log n)^k)$ $k$-tuple candidate mean points. With constant probability, there exists one $k$-tuple candidate in the output which is able to induce a $(1+O(\epsilon))$-approximation of k-CMeans (together with the solution for the corresponding Partition step).
Remark 1 (1) To increase the success probability to be close to 1, e.g., $1 - \frac{1}{n}$, one only needs to repeatedly run the algorithm $O(\log n)$ times; both the time complexity and the number of $k$-tuple candidates increase by a factor of $O(\log n)$.
(2) In general, the Partition step may be challenging to solve. As shown in Section 4, the constrained clustering problems considered in this paper admit efficient selection algorithms for their Partition steps.
Proof of Theorem 1
Let $\beta_j = |Opt_j|/n$, and $\delta_j^2 = \frac{1}{|Opt_j|}\sum_{p\in Opt_j}\|p - m_j\|^2$, where $m_j$ is the mean point of $Opt_j$. By our assumption in the beginning of Section 3, we know that $\beta_1 \ge \cdots \ge \beta_k$. Clearly, $\sum_{j=1}^{k}\beta_j = 1$ and the optimal objective value $\delta_{opt}^2 = \sum_{j=1}^{k}\beta_j\delta_j^2$.

Proof Synopsis: Instead of directly proving Theorem 1, we consider the following Lemma 6 and Lemma 7, which jointly ensure the correctness of Theorem 1. In Lemma 6, we show that there exists a root-to-leaf path in one of the returned trees whose associated $k$ points, denoted by $\{p_{v_1}, \cdots, p_{v_k}\}$, are close enough to the mean points $m_1, \cdots, m_k$ of the $k$ optimal clusters, respectively. The proof is based on mathematical induction; each step builds a simplex and applies Simplex Lemma II to bound the error $\|p_{v_j} - m_j\|$ in (9). The error is estimated by considering both the local (i.e., the variance of cluster $Opt_j$) and global (i.e., the optimal value $\delta_{opt}$) measurements. This is a more accurate estimation than the widely used Lemma 4, which considers only the local measurement. Such an improvement is due to the increased flexibility of Simplex Lemma II, and is a key to our proof. In Lemma 7, we further show that the $k$ points $\{p_{v_1}, \cdots, p_{v_k}\}$ in Lemma 6 induce a $(1+O(\epsilon))$-approximation of k-CMeans.
Lemma 6 Among all the trees generated by Algorithm 1, with constant probability, there exists at least one tree $T_i$ which has a root-to-leaf path with each of its nodes $v_j$ at level $j$ ($1 \le j \le k$) associating with a point $p_{v_j}$ satisfying the inequality

$$\|p_{v_j} - m_j\| \le \epsilon\delta_j + (1+\epsilon)j\sqrt{\epsilon/\beta_j}\,\delta_{opt}. \quad (9)$$
Before proving this lemma, we first show its implication.
Lemma 7 If Lemma 6 is true, then $\{p_{v_1}, \cdots, p_{v_k}\}$ is able to induce a $(1+O(\epsilon))$-approximation of k-CMeans (together with the solution for the corresponding Partition step).

Proof We assume that Lemma 6 is true. Then for each $1 \le j \le k$, we have

$$\sum_{p\in Opt_j}\|p - p_{v_j}\|^2 = \sum_{p\in Opt_j}\|p - m_j\|^2 + |Opt_j|\cdot\|m_j - p_{v_j}\|^2 \le \sum_{p\in Opt_j}\|p - m_j\|^2 + |Opt_j|\cdot 2\Big(\epsilon^2\delta_j^2 + (1+\epsilon)^2 j^2\frac{\epsilon}{\beta_j}\delta_{opt}^2\Big) = (1+2\epsilon^2)|Opt_j|\delta_j^2 + 2(1+\epsilon)^2 j^2\epsilon n\delta_{opt}^2, \quad (10)$$

where the first equality follows from Claim 1 in the proof of Lemma 2 (note that $m_j$ is the mean point of $Opt_j$), the inequality follows from Lemma 6 and the fact that $(a+b)^2 \le 2(a^2+b^2)$ for any two real numbers $a$ and $b$, and the last equality follows from the fact that $\frac{|Opt_j|}{\beta_j} = n$. Summing both sides of (10) over $j$, we have

$$\sum_{j=1}^{k}\sum_{p\in Opt_j}\|p - p_{v_j}\|^2 \le \sum_{j=1}^{k}\big((1+2\epsilon^2)|Opt_j|\delta_j^2 + 2(1+\epsilon)^2 j^2\epsilon n\delta_{opt}^2\big) \le (1+2\epsilon^2)\sum_{j=1}^{k}|Opt_j|\delta_j^2 + 2(1+\epsilon)^2 k^3\epsilon n\delta_{opt}^2 = (1+O(k^3)\epsilon)n\delta_{opt}^2, \quad (11)$$

where the last equality follows from the fact that $\sum_{j=1}^{k}|Opt_j|\delta_j^2 = n\delta_{opt}^2$. By (11), we know that $\{p_{v_1}, \cdots, p_{v_k}\}$ will induce a $(1+O(k^3)\epsilon)$-approximation for k-CMeans (together with the solution for the corresponding Partition step). Note that $k$ is assumed to be a fixed number. Thus the lemma is true.
Lemma 7 implies that Lemma 6 is indeed sufficient to ensure the correctness of Theorem 1 (except for the number of candidates and the time complexity). Now we prove Lemma 6.
Proof (of Lemma 6) Let $T_i$ be the tree generated by Algorithm 2 when $\delta$ falls in the interval $[\delta_{opt}^2, (1+\epsilon)\delta_{opt}^2]$. We will focus our discussion on $T_i$, and prove the lemma by mathematical induction on $j$.

Base case: For $j = 1$, since $\beta_1 = \max\{\beta_j \mid 1 \le j \le k\}$, we have $\beta_1 \ge \frac{1}{k}$. By Lemma 4 and Lemma 5, we can find the approximate mean point through random sampling. Let $\Omega$ and $S$ (in Lemma 5) be $P$ and $Opt_1$, respectively. Also, $p_{v_1}$ is the mean point of the random sample from $P$. Lemma 5 ensures that the sample contains enough points from $Opt_1$, and Lemma 4 implies that $\|p_{v_1} - m_1\| \le \epsilon\delta_1 \le \epsilon\delta_1 + (1+\epsilon)\sqrt{\epsilon/\beta_1}\,\delta_{opt}$.

Induction step: Suppose $j > 1$. We assume that there is a path in $T_i$ from the root to the $(j-1)$-th level, such that for each $1 \le l \le j-1$, the level-$l$ node $v_l$ on the path is associated with a point $p_{v_l}$ satisfying the inequality $\|p_{v_l} - m_l\| \le \epsilon\delta_l + (1+\epsilon)l\sqrt{\epsilon/\beta_l}\,\delta_{opt}$ (i.e., the induction hypothesis). Now we consider the case of $j$. Below we will show that there is one child of $v_{j-1}$, i.e., $v_j$, such that its associated point $p_{v_j}$ satisfies the inequality $\|p_{v_j} - m_j\| \le \epsilon\delta_j + (1+\epsilon)j\sqrt{\epsilon/\beta_j}\,\delta_{opt}$. First, we have the following claim (see Section 7.2 for the proof).
Claim 2 In the set of radius candidates in Algorithm 2, there exists one value $r_j \in R$ such that

$$r_j \in \big[\,j\sqrt{\epsilon/\beta_j}\,\delta_{opt},\ (1+\tfrac{\epsilon}{2})j\sqrt{\epsilon/\beta_j}\,\delta_{opt}\,\big]. \quad (12)$$
Now, we construct the $j-1$ peeling spheres $\{B_{j,1}, \cdots, B_{j,j-1}\}$ as in Algorithm 2. For each $1 \le l \le j-1$, $B_{j,l}$ is centered at $p_{v_l}$ and has radius $r_j$. By Markov's inequality and the induction hypothesis, we have the following claim (see Section 7.3 for the proof).

Claim 3 For each $1 \le l \le j-1$, $|Opt_l \setminus (\bigcup_{w=1}^{j-1} B_{j,w})| \le \frac{4}{\epsilon}\beta_j n$.

Claim 3 shows that $|Opt_l \setminus (\bigcup_{w=1}^{j-1} B_{j,w})|$ is bounded for $1 \le l \le j-1$, which helps us to find the approximate mean point of $Opt_j$. Induced by the $j-1$ peeling spheres $\{B_{j,1}, \cdots, B_{j,j-1}\}$, $Opt_j$ is divided into $j$ subsets: $Opt_j \cap B_{j,1}, \cdots, Opt_j \cap B_{j,j-1}$ and $Opt_j \setminus (\bigcup_{w=1}^{j-1} B_{j,w})$. For ease of discussion, let $P_l$ denote $Opt_j \cap B_{j,l}$ for $1 \le l \le j-1$, $P_j$ denote $Opt_j \setminus (\bigcup_{w=1}^{j-1} B_{j,w})$, and $\tau_l$ denote the mean point of $P_l$ for $1 \le l \le j$. Note that the peeling spheres may intersect with each other. For any two intersecting spheres $B_{j,l_1}$ and $B_{j,l_2}$, we arbitrarily assign the points in $Opt_j \cap (B_{j,l_1} \cap B_{j,l_2})$ to either $P_{l_1}$ or $P_{l_2}$. Thus, we can assume that $\{P_l \mid 1 \le l \le j\}$ are pairwise disjoint. Now consider the size of $P_j$. We have the following two cases: (a) $|P_j| \ge \frac{\epsilon^3}{j}\beta_j n$ and (b) $|P_j| < \frac{\epsilon^3}{j}\beta_j n$. We show how, in each case, Algorithm 2 can obtain an approximate mean point for $Opt_j$ by using the simplex lemma (i.e., Lemma 3).
For case (a), by Claim 3, together with the fact that $\beta_l \le \beta_j$ for $l > j$, we know that

$$\sum_{l=1}^{k}\Big|Opt_l \setminus \big(\bigcup_{w=1}^{j-1} B_{j,w}\big)\Big| \le \sum_{l=1}^{j-1}\Big|Opt_l \setminus \big(\bigcup_{w=1}^{j-1} B_{j,w}\big)\Big| + |P_j| + \sum_{l=j+1}^{k}|Opt_l| \le \frac{4}{\epsilon}(j-1)\beta_j n + |P_j| + (k-j)\beta_j n, \quad (13)$$

where the second inequality follows from Claim 3. So we have

$$\frac{|P_j|}{\sum_{l=1}^{k}|Opt_l \setminus (\bigcup_{w=1}^{j-1} B_{j,w})|} \ge \frac{|P_j|}{\frac{4}{\epsilon}(j-1)\beta_j n + |P_j| + (k-j)\beta_j n}. \quad (14)$$

We view the right-hand side as a function of $|P_j|$. Given any $h > 0$, the function $f(x) = \frac{x}{x+h}$ is increasing in $x \in [0, +\infty)$. Note that we assume $|P_j| \ge \frac{\epsilon^3}{j}\beta_j n$. Thus

$$\frac{|P_j|}{\sum_{l=1}^{k}|Opt_l \setminus (\bigcup_{w=1}^{j-1} B_{j,w})|} \ge \frac{\frac{\epsilon^3}{j}\beta_j n}{\frac{4}{\epsilon}(j-1)\beta_j n + \frac{\epsilon^3}{j}\beta_j n + (k-j)\beta_j n} > \frac{\epsilon^4}{8kj} \ge \frac{\epsilon^4}{8k^2}. \quad (15)$$

(15) implies that $P_j$ is large enough compared to the set of points outside the peeling spheres. Hence, we can obtain an approximate mean point $\pi$ for $P_j$ in the following way. First, we set $t = \frac{k}{\epsilon^5}$, $\eta = \frac{\epsilon}{k}$, and take a sample of size $\frac{t\ln(t/\eta)}{\epsilon^4/8k^2} = \frac{8k^3}{\epsilon^9}\ln\frac{k^2}{\epsilon^6}$. By Lemma 5, we know that with probability $1-\frac{\epsilon}{k}$, the sample contains at least $\frac{k}{\epsilon^5}$ points from $P_j$. Then we let $\pi$ be the mean point of the $\frac{k}{\epsilon^5}$ points from $P_j$, and $a^2$ be the variance of $P_j$. By Lemma 4, we know that with probability $1-\frac{\epsilon}{k}$, $\|\pi - \tau_j\|^2 \le \epsilon^4 a^2$. Also, since $\frac{|P_j|}{|Opt_j|} = \frac{|P_j|}{\beta_j n} \ge \frac{\epsilon^3}{j}$ (because $|P_j| \ge \frac{\epsilon^3}{j}\beta_j n$ for case (a)), we have $a^2 \le \frac{|Opt_j|}{|P_j|}\delta_j^2 \le \frac{j}{\epsilon^3}\delta_j^2$. Thus,

$$\|\pi - \tau_j\|^2 \le \epsilon j\delta_j^2. \quad (16)$$

[Figure 4: (a) the simplex $V^{(a)}$ determined by $\{p_{v_1}, p_{v_2}, p_{v_3}\}$ and $\pi$ for case (a); (b) the simplex $V^{(b)}$ determined by $\{p_{v_1}, p_{v_2}, p_{v_3}\}$ for case (b); $B_{4,1}, B_{4,2}, B_{4,3}$ are the peeling spheres.]

Once obtaining $\pi$, we can apply Lemma 3 to find a point $p_{v_j}$ satisfying the condition $\|p_{v_j} - m_j\| \le \epsilon\delta_j + (1+\epsilon)j\sqrt{\epsilon/\beta_j}\,\delta_{opt}$. We construct a simplex $V^{(a)}$ with vertices $\{p_{v_1}, \cdots, p_{v_{j-1}}\}$ and $\pi$ (see Figure 4a). Note that $Opt_j$ is partitioned by the peeling spheres into $j$ disjoint subsets, $P_1, \cdots, P_j$. Each $P_l$ ($1 \le l \le j-1$) lies inside $B_{j,l}$, which implies that $\tau_l$, i.e., the mean point of $P_l$, is also inside $B_{j,l}$. Further, by Claim 2, for $1 \le l \le j-1$ we have

$$\|p_{v_l} - \tau_l\| \le r_j \le (1+\tfrac{\epsilon}{2})j\sqrt{\epsilon/\beta_j}\,\delta_{opt}. \quad (17)$$
Recall that $\beta_j\delta_j^2 \le \delta_{opt}^2$. Thus, together with (16), we have

$$\|\pi - \tau_j\| \le \sqrt{\epsilon j}\,\delta_j \le \sqrt{\epsilon j/\beta_j}\,\delta_{opt}. \quad (18)$$

By (17) and (18), if we set the value of $L$ (in Lemma 3) to be

$$\max\{r_j, \|\pi - \tau_j\|\} \le \max\{(1+\tfrac{\epsilon}{2})j\sqrt{\epsilon/\beta_j}\,\delta_{opt},\ \sqrt{\epsilon j/\beta_j}\,\delta_{opt}\} = (1+\tfrac{\epsilon}{2})j\sqrt{\epsilon/\beta_j}\,\delta_{opt}, \quad (19)$$

and the value of $\epsilon$ (in Lemma 3) to be $\epsilon_0 = \epsilon^2/4$, then by Lemma 3 we can construct a grid inside the simplex $V^{(a)}$ of size $O((8j/\epsilon_0)^j)$ to ensure the existence of a grid point $\tau$ satisfying

$$\|\tau - m_j\| \le \sqrt{\epsilon_0}\,\delta_j + (1+\epsilon_0)L \le \epsilon\delta_j + (1+\epsilon)j\sqrt{\epsilon/\beta_j}\,\delta_{opt}. \quad (20)$$

Hence, let $p_{v_j}$ be the grid point $\tau$, and the induction step holds for this case. For case (b), we can also apply Lemma 3 to find an approximate mean point in a way similar to case (a); the difference is that we construct a simplex $V^{(b)}$ with vertices $\{p_{v_1}, \cdots, p_{v_{j-1}}\}$ (see Figure 4b). Roughly speaking, since $|P_j|$ is small, the mean points of $Opt_j \setminus P_j$ and $Opt_j$ are very close to each other (by Lemma 2). Thus, we can ignore $P_j$ and just consider $Opt_j \setminus P_j$.
Let $a'^2$ and $m'_j$ denote the variance and mean point of $Opt_j \setminus P_j$ respectively. We know that $\{P_1, P_2, \cdots, P_{j-1}\}$ is a partition of $Opt_j \setminus P_j$. Thus, similar to case (a), we construct a simplex $V^{(b)}$ determined by $\{p_{v_1}, \cdots, p_{v_{j-1}}\}$ (see Figure 4b), set the value of $L$ to be $r_j \le (1+\frac{\epsilon}{2})j\sqrt{\epsilon/\beta_j}\,\delta_{opt}$, and then build a grid inside $V^{(b)}$ of size $O((\frac{8j}{\epsilon_0})^j)$, where $\epsilon_0 = \epsilon^2/4$. By Lemma 3, we know that there exists one grid point $\tau$ satisfying

$$\|\tau - m'_j\| \le \sqrt{\epsilon_0}\,a' + (1+\epsilon_0)L \le \frac{\epsilon}{2}a' + (1+\epsilon)j\sqrt{\epsilon/\beta_j}\,\delta_{opt}. \quad (21)$$

Meanwhile, we know that $|Opt_j \setminus P_j| \ge (1-\epsilon^3/j)|Opt_j|$, since $|P_j| \le \frac{\epsilon^3}{j}|Opt_j|$. Thus, we have $a'^2 \le \frac{|Opt_j|}{|Opt_j\setminus P_j|}\delta_j^2 \le \frac{1}{1-\epsilon^3/j}\delta_j^2$, and $\|m'_j - m_j\| \le \sqrt{\frac{\epsilon^3/j}{1-\epsilon^3/j}}\,\delta_j$ (by Lemma 2). Together with (21), we have

$$\|\tau - m_j\| \le \|\tau - m'_j\| + \|m'_j - m_j\| \le \frac{\epsilon}{2}a' + (1+\epsilon)j\sqrt{\epsilon/\beta_j}\,\delta_{opt} + \sqrt{\frac{\epsilon^3/j}{1-\epsilon^3/j}}\,\delta_j \le \Big(\frac{\epsilon}{2}\sqrt{\frac{1}{1-\epsilon^3/j}} + \sqrt{\frac{\epsilon^3/j}{1-\epsilon^3/j}}\Big)\delta_j + (1+\epsilon)j\sqrt{\epsilon/\beta_j}\,\delta_{opt} \le \epsilon\delta_j + (1+\epsilon)j\sqrt{\epsilon/\beta_j}\,\delta_{opt}. \quad (22)$$

Hence, let $p_{v_j}$ be the grid point $\tau$, and the induction step holds for this case. Since Algorithm 2 executes every step in the above discussion, the induction step, as well as the lemma, is true.
Success probability: From the above analysis, we know that in the $j$-th iteration, only case (a) (i.e., $|P_j| \ge \frac{\epsilon^3}{j}\beta_j n$) needs to consider the success probability of random sampling. Recall that in case (a), we take a sample of size $\frac{8k^3}{\epsilon^9}\ln\frac{k^2}{\epsilon^6}$. Thus with probability $1-\frac{\epsilon}{k}$, it contains at least $\frac{k}{\epsilon^5}$ points from $P_j$. Meanwhile, with probability $1-\frac{\epsilon}{k}$, $\|\pi - \tau_j\|^2 \le \epsilon^4 a^2$. Hence, the success probability in the $j$-th iteration is $(1-\frac{\epsilon}{k})^2$. By taking the union bound, the success probability over all $k$ iterations is $(1-\frac{\epsilon}{k})^{2k} \ge 1-2\epsilon$.

Number of Candidates and Running time: Algorithm 1 calls Algorithm 2 $O(\frac{1}{\epsilon}\log c)$ times (in Section 3.4, we will show that $c$ can be a constant). It is easy to see that each node in the returned tree has $|R|\,2^{s+j}(\frac{32j}{\epsilon^2})^j$ children, where $|R| = O(\frac{\log n}{\epsilon})$ and $s = \frac{8k^3}{\epsilon^9}\ln\frac{k^2}{\epsilon^6}$. Since the tree has height $k$, the complexity of the tree is $O(2^{poly(\frac{k}{\epsilon})}(\log n)^k)$. Consequently, the number of candidates is $O(2^{poly(\frac{k}{\epsilon})}(\log n)^k)$. Further, since each node takes $O(|R|\,2^{s+j}(\frac{32j}{\epsilon^2})^j nd)$ time, the total time complexity of the algorithm is $O(2^{poly(\frac{k}{\epsilon})} n(\log n)^{k+1} d)$.
Upper Bound Estimation
As mentioned in Section 3.1, our Peeling-and-Enclosing algorithm needs an upper bound $\Delta$ on the optimal value $\delta_{opt}^2$. To compute this, our main idea is to use some unconstrained k-means clustering algorithm $A^*$ (e.g., the linear time $(1+\epsilon)$-approximation algorithm in [51]) on the input points $P$, without considering the constraint, to obtain a $\lambda$-approximation of the k-means clustering for some constant $\lambda > 1$. Let $C = \{c_1, \cdots, c_k\}$ be the set of mean points returned by algorithm $A^*$. Below, we show that the Cartesian product $[C]^k = C \times \cdots \times C$ ($k$ times) contains one $k$-tuple which is an $(18\lambda+16)$-approximation of k-CMeans on the same input $P$. Clearly, to select the $k$-tuple from $[C]^k$ with the smallest objective value, we still need to solve the Partition step on each $k$-tuple to form the desired clusters. Similar to Remark 1, we refer the reader to Section 4 for the selection algorithms for the considered constrained clustering problems.

Theorem 2 Let $P = \{p_1, \cdots, p_n\}$ be the input points of k-CMeans, and $C = \{c_1, \cdots, c_k\}$ be the mean points of a $\lambda$-approximation of the k-means clustering on $P$ (without considering the constraint) for some constant $\lambda \ge 1$. Then $[C]^k$ contains at least one $k$-tuple which is able to induce an $(18\lambda+16)$-approximation of k-CMeans (together with the solution for the corresponding Partition step).

Proof Synopsis: Let $\omega$ be the objective value of the k-means clustering on $P$ corresponding to the $k$ mean points in $C$. To prove Theorem 2, we create a new instance of k-CMeans: for each point $p_i \in P$, move it to its nearest point, say $c_t$, in $\{c_1, \cdots, c_k\}$; let $\tilde{p}_i$ denote the new $p_i$ (note that $c_t$ and $\tilde{p}_i$ coincide with each other; see Figure 5a). The set $\tilde{P} = \{\tilde{p}_1, \cdots, \tilde{p}_n\}$ forms a new instance of k-CMeans. Let $\tilde{\delta}_{opt}^2$ be the optimal value of k-CMeans on $\tilde{P}$, and $\delta_{opt}^2([C]^k)$ be the minimum cost of k-CMeans on $P$ by restricting its mean points to be one $k$-tuple in $[C]^k$. We show that $\tilde{\delta}_{opt}^2$ is bounded by a combination of $\omega$ and $\delta_{opt}^2$, and $\delta_{opt}^2([C]^k)$ is bounded by a combination of $\omega$ and $\tilde{\delta}_{opt}^2$ (see Lemma 8). Together with the fact that $\omega$ is no more than $\lambda\delta_{opt}^2$, we consequently obtain that $\delta_{opt}^2([C]^k) \le (18\lambda+16)\delta_{opt}^2$, which implies Theorem 2.
Lemma 8 $\tilde{\delta}_{opt}^2 \le 2\omega + 2\delta_{opt}^2$, and $\delta_{opt}^2([C]^k) \le 2\omega + 8\tilde{\delta}_{opt}^2$.

Proof We first prove the inequality $\tilde{\delta}_{opt}^2 \le 2\omega + 2\delta_{opt}^2$. Consider any point $p_i \in P$. Let $Opt_l$ be the optimal cluster containing $p_i$. Then, we have

$$\|\tilde{p}_i - m_l\|^2 \le \big(\|\tilde{p}_i - p_i\| + \|p_i - m_l\|\big)^2 \le 2\|\tilde{p}_i - p_i\|^2 + 2\|p_i - m_l\|^2, \quad (23)$$

where the first inequality follows from the triangle inequality, and the second from the fact that $(a+b)^2 \le 2a^2 + 2b^2$ for any two real numbers $a$ and $b$. For both sides of (23), we take the averages over all the points in $P$, and obtain

$$\frac{1}{n}\sum_{l=1}^{k}\sum_{p_i\in Opt_l}\|\tilde{p}_i - m_l\|^2 \le \frac{2}{n}\sum_{i=1}^{n}\|\tilde{p}_i - p_i\|^2 + \frac{2}{n}\sum_{l=1}^{k}\sum_{p_i\in Opt_l}\|p_i - m_l\|^2. \quad (24)$$

Note that the left-hand side of (24) is not smaller than $\tilde{\delta}_{opt}^2$, since $\tilde{\delta}_{opt}^2$ is the optimal objective value of k-CMeans on $\tilde{P}$. For the right-hand side of (24), the first term $\frac{2}{n}\sum_{i=1}^{n}\|\tilde{p}_i - p_i\|^2 = 2\omega$ (by the construction of $\tilde{P}$), and the second term $\frac{2}{n}\sum_{l=1}^{k}\sum_{p_i\in Opt_l}\|p_i - m_l\|^2 = 2\delta_{opt}^2$. Thus, we have $\tilde{\delta}_{opt}^2 \le 2\omega + 2\delta_{opt}^2$.

Next, we show the inequality $\delta_{opt}^2([C]^k) \le 2\omega + 8\tilde{\delta}_{opt}^2$. Consider k-CMeans clustering on $\tilde{P}$. Let $\{\tilde{m}_1, \cdots, \tilde{m}_k\}$ be the optimal constrained mean points of $\tilde{P}$, and $\{\tilde{O}_1, \cdots, \tilde{O}_k\}$ be the corresponding optimal clusters. Let $\{\tilde{c}_1, \cdots, \tilde{c}_k\}$ be the $k$-tuple in $[C]^k$ with $\tilde{c}_l$ being the nearest point in $C$ to $\tilde{m}_l$. Thus, by an argument similar to the one for (23), we have

$$\|\tilde{p}_i - \tilde{c}_l\|^2 \le 2\|\tilde{p}_i - \tilde{m}_l\|^2 + 2\|\tilde{m}_l - \tilde{c}_l\|^2 \le 4\|\tilde{p}_i - \tilde{m}_l\|^2 \quad (25)$$

for each $\tilde{p}_i \in \tilde{O}_l$. In (25), the last inequality follows from the facts that $\tilde{c}_l$ is the nearest point in $C$ to $\tilde{m}_l$ and $\tilde{p}_i \in C$, which implies that $\|\tilde{m}_l - \tilde{c}_l\| \le \|\tilde{m}_l - \tilde{p}_i\|$ (see Figure 5b). Summing both sides of (25) over all the points in $\tilde{P}$, we have

$$\sum_{l=1}^{k}\sum_{\tilde{p}_i\in\tilde{O}_l}\|\tilde{p}_i - \tilde{c}_l\|^2 \le 4\sum_{l=1}^{k}\sum_{\tilde{p}_i\in\tilde{O}_l}\|\tilde{p}_i - \tilde{m}_l\|^2. \quad (26)$$

Now, consider the following clustering on $P$. For each $p_i$, if $\tilde{p}_i \in \tilde{O}_l$, we cluster it to the corresponding $\tilde{c}_l$. Then the objective value of the clustering is

$$\frac{1}{n}\sum_{l=1}^{k}\sum_{\tilde{p}_i\in\tilde{O}_l}\|p_i - \tilde{c}_l\|^2 \le \frac{1}{n}\sum_{l=1}^{k}\sum_{\tilde{p}_i\in\tilde{O}_l}\big(2\|p_i - \tilde{p}_i\|^2 + 2\|\tilde{p}_i - \tilde{c}_l\|^2\big) \le \frac{2}{n}\sum_{i=1}^{n}\|p_i - \tilde{p}_i\|^2 + \frac{8}{n}\sum_{l=1}^{k}\sum_{\tilde{p}_i\in\tilde{O}_l}\|\tilde{p}_i - \tilde{m}_l\|^2. \quad (27)$$

The left-hand side of (27), $\frac{1}{n}\sum_{l=1}^{k}\sum_{\tilde{p}_i\in\tilde{O}_l}\|p_i - \tilde{c}_l\|^2$, is no smaller than $\delta_{opt}^2([C]^k)$ (by definition), and the right-hand side of (27) is equal to $2\omega + 8\tilde{\delta}_{opt}^2$. Thus, we have $\delta_{opt}^2([C]^k) \le 2\omega + 8\tilde{\delta}_{opt}^2$.
Proof (of Theorem 2) By the two inequalities in Lemma 8, we know that $\delta_{opt}^2([C]^k) \le 18\omega + 16\delta_{opt}^2$. It is obvious that the optimal objective value of the k-means clustering is no larger than that of k-CMeans on the same set of input points $P$. This implies that $\omega \le \lambda\delta_{opt}^2$. Thus, we have

$$\delta_{opt}^2([C]^k) \le (18\lambda+16)\delta_{opt}^2. \quad (28)$$

So there exists one $k$-tuple in $[C]^k$ which is able to induce an $(18\lambda+16)$-approximation.
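The estimation is simple to realize in code; the following sketch is our illustration only. It assumes scikit-learn's k-means++ as the unconstrained $\lambda$-approximation (the paper uses the algorithm of [51]), and `partition_cost` is a hypothetical problem-specific Partition-step solver returning the best feasible cost for fixed centers.

```python
import itertools
import numpy as np
from sklearn.cluster import KMeans   # stand-in for any lambda-approximate k-means

# A sketch of the upper-bound estimation: run unconstrained k-means, then try
# every k-tuple over its centers C and keep the cheapest feasible Partition.
def upper_bound(P, k, partition_cost):
    C = KMeans(n_clusters=k, n_init=10).fit(P).cluster_centers_
    return min(partition_cost(P, np.array(tup))          # k^k candidate tuples
               for tup in itertools.product(C, repeat=k))
```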
Selection Algorithms for k-CMeans
As shown in Section 3, a set of $k$-tuple candidates for the mean points of k-CMeans can be obtained by our Peeling-and-Enclosing algorithm. To determine the best candidate, we need a selection algorithm to compute the clustering for each $k$-tuple candidate, and select the one with the smallest objective value. Clearly, the key to designing a selection algorithm is to solve the Partition step (i.e., generating the clustering) for each $k$-tuple candidate. We need to design a problem-specific algorithm for the Partition step, to satisfy the constraint of each individual problem. We consider all the constrained k-means clustering problems mentioned in Section 1.1, except for uncertain data clustering, since Cormode and McGregor [24] have shown that it can be reduced to an ordinary k-means clustering problem. However, the k-median version of uncertain data clustering does not have such a property. In Section 5.4, we will discuss how to obtain a $(1+\epsilon)$-approximation by applying our Peeling-and-Enclosing framework.
r-Gather k-means Clustering
Let $P$ be a set of $n$ points in $R^d$. r-Gather k-means clustering (denoted by (r,k)-GMeans) on $P$ is the problem of clustering $P$ into $k$ clusters with size at least $r$, such that the average squared Euclidean distance from each point in $P$ to the mean point of its cluster is minimized [3].
To solve the Partition problem of (r,k)-GMeans, we adopt the following strategy. For each $k$-tuple candidate $P_v = \{p_{v_1}, \cdots, p_{v_k}\}$ returned by Algorithm 1, build a complete bipartite graph $G$ (see Figure 6a): each vertex in the left column corresponds to a point in $P$, and each vertex in the right column represents a candidate mean point in $P_v$; each pair of vertices in different partite sets is connected by an edge with weight equal to their squared Euclidean distance. We can solve the Partition problem by finding the minimum cost matching in $G$: each vertex on the left has supply 1, and each vertex on the right has demand $r$ and capacity $n$. After adding a source node $s$ connecting to all the vertices on the left and a sink node $t$ connecting to all the vertices on the right, we can reduce the Partition problem to a minimum cost circulation problem, and solve it by using the algorithm in [34]. Denote by $V$ and $E$ the sets of vertices and edges of $G$. The running time for solving the minimum cost circulation problem is $O(|E|^2\log|V| + |E|\cdot|V|\log^2|V|)$ [59]. In our case, $|E| = O(n)$ and $|V| = O(n)$ if $k$ is a fixed constant. Also, the time complexity for building $G$ is $O(nd)$. Thus, the total time for solving the Partition problem is $O(n(n(\log n)^2 + d))$. Together with the time complexity in Theorem 1, we have the following theorem.
Theorem 3 There exists an algorithm yielding a $(1+\epsilon)$-approximation for (r,k)-GMeans with constant probability, in $O(2^{poly(\frac{k}{\epsilon})}(\log n)^{k+1} n(n\log n + d))$ time.
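The flow construction above can be sketched as follows (our illustration, assuming networkx; the paper itself uses the algorithms of [34,59]). The lower bound "each cluster receives at least r points" is encoded as node demands, and squared distances are scaled to integers since the solver prefers integral costs.

```python
import numpy as np
import networkx as nx

# A sketch of the (r,k)-GMeans Partition step as a min-cost flow. Center j
# absorbs exactly r units via its node demand; any surplus flows to the sink.
def r_gather_partition(P, centers, r):
    n, k = len(P), len(centers)
    G = nx.DiGraph()
    G.add_node("s", demand=-n)
    G.add_node("t", demand=n - k * r)
    for j in range(k):
        G.add_node(("c", j), demand=r)               # lower bound r on cluster j
        G.add_edge(("c", j), "t", capacity=n - r, weight=0)
    for i, p in enumerate(P):
        G.add_edge("s", ("p", i), capacity=1, weight=0)
        for j, c in enumerate(centers):
            cost = int(1e6 * np.sum((p - c) ** 2))   # scaled squared distance
            G.add_edge(("p", i), ("c", j), capacity=1, weight=cost)
    flow = nx.min_cost_flow(G)
    return {i: j for i in range(n)
            for (_, j), f in flow[("p", i)].items() if f > 0}
```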
r-Capacity k-means Clustering
r-Capacity k-means clustering (denoted by (r,k)-CaMeans) [48] on a set $P$ of $n$ points in $R^d$ is the problem of clustering $P$ into $k$ clusters with size at most $r$, such that the average squared Euclidean distance from each point in $P$ to the mean point of its cluster is minimized.
We can solve the Partition problem of (r, k)-CaMeans in a way similar to that of (r, k)-GMeans; the only difference is that the demand r is replaced by the capacity r.
Theorem 4 There exists an algorithm yielding a (1 + ε)-approximation for (r, k)-CaMeans with constant probability, in O(2^{poly(k/ε)} (log n)^{k+1} n(n log n + d)) time.
l-Diversity k-means Clustering
Let P = ∪_{i=1}^{ñ} P_i be a set of colored points in R^d with Σ_{i=1}^{ñ} |P_i| = n, where the points in each P_i share the same color. l-Diversity k-means clustering (denoted by (l, k)-DMeans) on P is the problem of clustering P into k clusters such that, inside each cluster, the points sharing the same color make up a fraction of no more than 1/l for some l > 1, and the average squared Euclidean distance from each point in P to the mean point of its cluster is minimized.
Similar to (r, k)-GMeans, we reduce the Partition problem of (l, k)-DMeans to a minimum cost circulation problem for each k-tuple candidate P_v = {p_{v_1}, ..., p_{v_k}}. The challenge is that we do not know the size of each resulting cluster, and therefore it is difficult to control the flow on each edge if we directly use the bipartite graph built in Figure 6a. Instead, we add a set of "gates" between the input P and the k-tuple P_v (see Figure 6b). First, following the definition of (l, k)-DMeans, we partition the vertices of P into ñ groups {P_1, ..., P_ñ}. For each P_i, we generate a new set of vertices (the gates) P̄_i = {c_1^i, ..., c_k^i}, and connect each pair of p ∈ P_i and c_j^i ∈ P̄_i by an edge with weight ‖p − p_{v_j}‖². We also connect each pair of c_j^i and p_{v_j} by an edge with weight 0. In Figure 6b, the number of vertices is |V| = n + kñ + k + 2 = O(kn), and the number of edges is |E| = n + kn + kñ + k = O(kn). Below we show that we can use c_j^i to control the flow from P_i to p_{v_j} by setting appropriate capacities and demands.
Let t = max_{1≤i≤ñ} |P_i|. We consider the value |Opt_j|/l, which is the upper bound on the number of points with the same color in Opt_j (recall that Opt_j is the j-th optimal cluster defined in Section 3). The upper bound |Opt_j|/l is either between 1 and t or larger than t; clearly, if it is larger than t, there is no need to consider the upper bound at all. Thus, we can enumerate all (t + 1)^k cases to guess the upper bound |Opt_j|/l for 1 ≤ j ≤ k. Let u_j be the guessed upper bound for Opt_j. If u_j is no more than t, then each c_j^i, 1 ≤ i ≤ ñ, has capacity u_j, and p_{v_j} has demand l·u_j and capacity l·(u_j + 1) − 1. Otherwise (i.e., u_j > t), set the capacity of each c_j^i, 1 ≤ i ≤ ñ, to be n, and the demand and capacity of p_{v_j} to be l·(t + 1) and n, respectively. By using the algorithm in [59], we solve the minimum cost circulation problem for each of the (t + 1)^k guesses.
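The following is a rough sketch, for intuition only, of how the gate network of Figure 6b can be assembled for a single guess (u_1, ..., u_k) in the case where every u_j ≤ t; the function name, the input encoding (color groups as index lists, a precomputed integer distance table), and the emulation of node capacities through an extra edge to the sink are our own assumptions, and an infeasible guess simply admits no feasible flow.

import networkx as nx

def l_diversity_gate_network(groups, dists, u, l):
    # groups[i]: point indices of color i; dists[p][j]: squared distance from
    # point p to the j-th candidate mean; u[j]: guessed same-color upper bound.
    n = sum(len(g) for g in groups)
    k = len(u)
    G = nx.DiGraph()
    for i, grp in enumerate(groups):
        for p in grp:
            G.add_node(('p', p), demand=-1)      # each point supplies one unit
            for j in range(k):
                G.add_edge(('p', p), ('gate', i, j), capacity=1, weight=dists[p][j])
    for j in range(k):
        for i in range(len(groups)):
            # gate (i, j) caps the flow of color i into cluster j at u_j
            G.add_edge(('gate', i, j), ('c', j), capacity=u[j], weight=0)
        G.add_node(('c', j), demand=l * u[j])    # demand of the j-th mean point
        # the node capacity l*(u_j + 1) - 1 is emulated by letting at most
        # l - 1 extra units drain to the sink at zero cost
        G.add_edge(('c', j), 't', capacity=l - 1, weight=0)
    G.add_node('t', demand=n - l * sum(u))
    return G   # solve with nx.min_cost_flow(G)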
Theorem 5 For any colored point set P = ∪_{i=1}^{ñ} P_i in R^d with n = |P| and t = max_{1≤i≤ñ} |P_i|, there exists an algorithm yielding, in O(2^{poly(k/ε)} (log n)^{k+1} (t + 1)^k n(n log n + d)) time, a (1 + ε)-approximation for (l, k)-DMeans with constant probability.
Note: We can solve the problem in [55] by slightly changing the above Partition algorithm. In [55], it is required that each cluster has size at least l and that the points inside each cluster have distinct colors, which means that the upper bound u_j is always equal to 1 for each 1 ≤ j ≤ k. Thus, there is no need to guess the upper bounds in our Partition algorithm: we can simply set the capacity of each c_j^i to be 1 and the demand of each p_{v_j} to be l. With this change, our algorithm yields a (1 + ε)-approximation with constant probability in O(2^{poly(k/ε)} (log n)^{k+1} n(n log n + d)) time.
Chromatic k-means Clustering
Let P = ∪_{i=1}^{ñ} P_i be a set of colored points in R^d with Σ_{i=1}^{ñ} |P_i| = n, where the points in each P_i share the same color. Chromatic k-means clustering (denoted by k-ChMeans) [8, 28] on P is the problem of clustering P into k clusters such that no two points with the same color are clustered into the same cluster, and the average squared Euclidean distance from each point in P to the mean point of its cluster is minimized.
To satisfy the chromatic requirement, each P_i must have size no more than k. Given a k-tuple candidate P_v = {p_{v_1}, ..., p_{v_k}}, we can consider the partition problem for each P_i independently, since there is no mutual constraint among them. Finding a partition of P_i is equivalent to computing a minimum cost one-to-one matching between P_i and P_v, where the cost of matching any p ∈ P_i to p_{v_j} ∈ P_v is their squared Euclidean distance. We can build this bipartite graph in O(k²d) time and solve the matching problem with the Hungarian algorithm in O(k³) time. Thus, the running time of the Partition step for each P_v is O(k²(k + d)n).
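A minimal sketch of this Partition step, assuming scipy's linear_sum_assignment as the Hungarian-algorithm solver; the function name chromatic_partition and the input encoding (each color group as an array of points) are our own.

import numpy as np
from scipy.optimize import linear_sum_assignment

def chromatic_partition(groups, centers):
    # Partition step for k-ChMeans: each same-colored group (size <= k) is
    # matched one-to-one to distinct candidate mean points at minimum cost.
    C = np.asarray(centers)                       # shape (k, d)
    assignments, total = [], 0.0
    for g in groups:
        g = np.asarray(g)                         # shape (|g|, d), |g| <= k
        cost = ((g[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
        assignments.append(dict(zip(rows.tolist(), cols.tolist())))
        total += float(cost[rows, cols].sum())
    return assignments, total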
Theorem 6 There exists an algorithm yielding a (1 + ε)-approximation for k-ChMeans with constant probability, in O(2^{poly(k/ε)} (log n)^{k+1} nd) time.
Fault Tolerant k-means Clustering
Fault tolerant k-means clustering (denoted by (l, k)-FMeans) [64] on a set P of n points in R^d, with a given integer 1 ≤ l ≤ k, is the problem of finding k points C = {c_1, ..., c_k} ⊂ R^d such that the average of the total squared distances from each point in P to its l nearest points in C is minimized.
To solve the Partition problem of (l, k)-FMeans, our idea is to reduce (l, k)-FMeans to k-ChMeans and use the Partition algorithm for k-ChMeans to generate the desired clusters. The reduction simply makes l monochromatic copies {p_i^1, ..., p_i^l} of each p_i ∈ P. The following lemma shows the relation between the two problems.
Lemma 9 For any constant λ ≥ 1, a λ-approximation of (l, k)-FMeans on P is equivalent to a λ-approximation of k-ChMeans on ∪_{i=1}^n {p_i^1, ..., p_i^l}.
Proof We build a bijection between the solutions of (l, k)-FMeans and k-ChMeans. First, consider the mapping from (l, k)-FMeans to k-ChMeans. Let C = {c_1, ..., c_k} be the k mean points of (l, k)-FMeans, and {c_{i(1)}, ..., c_{i(l)}} ⊂ C be the l nearest mean points to each p_i ∈ P. If C is used as the k mean points of k-ChMeans on ∪_{i=1}^n {p_i^1, ..., p_i^l}, the l copies {p_i^1, ..., p_i^l} of p_i will be clustered to the l clusters of {c_{i(1)}, ..., c_{i(l)}}, respectively, to minimize the cost. Now consider the mapping from k-ChMeans to (l, k)-FMeans. Let C = {c_1, ..., c_k} be the k mean points of k-ChMeans. For each i, let {c_{i(1)}, ..., c_{i(l)}} be the mean points of the l clusters that {p_i^1, ..., p_i^l} are clustered to. It is easy to see that the l nearest mean points of p_i are {c_{i(1)}, ..., c_{i(l)}} if we use C as the k mean points of (l, k)-FMeans.
With this bijection, we can pair up the solutions of the two problems. Clearly, each pair of solutions to (l, k)-FMeans and k-ChMeans formed by the bijection has the same objective value. Consequently, their optimal objective values are equal to each other and, for any pair of solutions, their approximation ratios are the same. Thus, Lemma 9 is true.
With Lemma 9, we immediately have the following theorem.
Theorem 7 There exists an algorithm yielding a (1 + ε)-approximation for (l, k)-FMeans with constant probability, in O(2^{poly(k/ε)} (log n)^{k+1} nd) time.
Note: As mentioned in [42], a more general version of the fault tolerant clustering problem allows each point p_i ∈ P to have an individual l-value l_i. From the above discussion, it is easy to see that this general version can be solved in the same way (i.e., through the reduction to k-ChMeans) and achieves the same approximation result.
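As a sketch of how little code the reduction of Lemma 9 requires, the following reuses the chromatic_partition helper from the k-ChMeans sketch above; materializing the copies explicitly is wasteful in practice and is shown here only for clarity.

def fault_tolerant_partition(P, centers, l):
    # Partition step for (l, k)-FMeans via Lemma 9: l monochromatic copies of
    # each point, matched to distinct means, recover the l nearest means.
    groups = [[p] * l for p in P]                 # l same-colored copies
    return chromatic_partition(groups, centers)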
Semi-Supervised k-means Clustering
As shown in Section 1.1, semi-supervised clustering takes various forms. In this paper, we consider the semi-supervised k-means clustering problem that accounts for both the geometric cost and prior knowledge. Let P be a set of n points in R^d, and S̄ = {S̄_1, ..., S̄_k} be a given clustering of P. Semi-supervised k-means clustering (denoted by k-SMeans) on P and S̄ is the problem of finding a clustering S = {S_1, ..., S_k} of P such that the following objective function is minimized,
α · Cost(S)/E_1 + (1 − α) · dist{S, S̄}/E_2,  (29)
where α ∈ [0, 1] is a given constant, E_1 and E_2 are two given scalars that normalize the two terms, Cost(S) is the k-means clustering cost of S, and dist{S, S̄} is the distance between S and S̄, defined in the same way as in [13]: for any pair S_j and S̄_i, 1 ≤ j, i ≤ k, their difference is |S_j \ S̄_i|, and given a bipartite matching σ between S and S̄, dist{S, S̄} = Σ_{j=1}^k |S_j \ S̄_{σ(j)}|.
The challenge is that the bipartite matching σ is unknown in advance. We fix the k-tuple candidate P_v = {p_{v_1}, ..., p_{v_k}}. To find the σ minimizing the objective function (29), we build a bipartite graph, where the left (resp., right) column contains k vertices corresponding to p_{v_1}, ..., p_{v_k} (resp., S̄_1, ..., S̄_k). We connect each pair (p_{v_j}, S̄_i) by an edge and calculate its weight w_{(i,j)} as follows. Each p ∈ S̄_i could potentially be assigned to any of the k clusters in S; if i = σ(j), the induced k costs of p are {c_p^1, c_p^2, ..., c_p^k}, where c_p^l = α‖p − p_{v_l}‖²/E_1 if l = j, and c_p^l = α‖p − p_{v_l}‖²/E_1 + (1 − α)/E_2 otherwise. Thus, we set

w_{(i,j)} = Σ_{p∈S̄_i} min_{1≤l≤k} c_p^l.  (30)
We solve the minimum cost bipartite matching problem to determine σ. To build the bipartite graph, we first compute all kn distances from the points in P to the k-tuple P_v, and then calculate the k² edge weights via (30). The bipartite graph can thus be built in O(knd + k²n) total time, and the optimal matching can be obtained via the Hungarian algorithm in O(k³) time.
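A minimal sketch of this matching step, again assuming scipy's linear_sum_assignment; the function name and input encoding are our own, and the given clustering S̄ is passed as a list of k point arrays.

import numpy as np
from scipy.optimize import linear_sum_assignment

def semi_supervised_matching(S_bar, centers, alpha, E1, E2):
    # Builds the k x k weight matrix of Eq. (30) for a fixed k-tuple of
    # candidate means and returns the matching sigma minimizing Eq. (29).
    C = np.asarray(centers)
    k = len(C)
    W = np.zeros((k, k))                          # W[i, j] = w_(i, j)
    for i, Si in enumerate(S_bar):
        Si = np.asarray(Si)
        geo = alpha * ((Si[:, None, :] - C[None, :, :]) ** 2).sum(axis=2) / E1
        for j in range(k):
            cost = geo + (1 - alpha) / E2         # penalty for every cluster...
            cost[:, j] = geo[:, j]                # ...except the matched one (l = j)
            W[i, j] = cost.min(axis=1).sum()      # each point takes its cheapest cluster
    rows, cols = linear_sum_assignment(W)
    sigma = {int(j): int(i) for i, j in zip(rows, cols)}   # sigma(j) = i
    return sigma, float(W[rows, cols].sum())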
Theorem 8 There exists an algorithm yielding a (1 + ε)-approximation for k-SMeans with constant probability, in O(2^{poly(k/ε)} (log n)^{k+1} nd) time.
Constrained k-Median Clustering (k-CMedian)
In this section, we extend our approach for k-CMeans to the constrained k-median clustering problem (k-CMedian). Similar to k-CMeans, we show that the Peeling-and-Enclosing framework can be used to construct a set of candidates for the constrained median points. Combining this with the selection algorithms (with trivial modifications) in Section 4, we achieve (1 + ε)-approximations for a class of k-CMedian problems.
To solve k-CMedian, a straightforward idea is to extend the simplex lemma to median points and combine it with the Peeling-and-Enclosing framework to achieve an approximate solution. However, due to the essential difference between mean and median points, such an extension for the simplex lemma is not always possible. The main reason is that the median point (i.e., Fermat point) does not necessarily lie inside the simplex, and thus there is no guarantee to find the median point by searching inside the simplex. Below is an example showing that the median point actually can lie outside the simplex.
Let P = {p_1, p_2, ..., p_9} be a set of points in R^d. Consider the following partition of P: P_1 = {p_i | 1 ≤ i ≤ 5} and P_2 = {p_i | 6 ≤ i ≤ 9}. Assume that all the points of P lie at the three vertices of a triangle ∆abc; in particular, {p_1, p_2, p_6} coincide with vertex a, {p_3, p_4, p_5} with vertex b, and {p_7, p_8, p_9} with vertex c (see Figure 7). It is easy to see that the median points of P_1 and P_2 are b and c, respectively. If the angle ∠bac ≥ 2π/3, the median point of P is vertex a (note that the median point can be viewed as the Fermat point of ∆abc with each vertex associated with weight 3). This means that the median point of P lies outside the simplex formed by the median points of P_1 and P_2 (i.e., segment bc). Thus, a good approximation of the median point cannot be obtained by searching a grid inside bc.

Fig. 7: Vertex a holds {p_1, p_2, p_6}, vertex b holds {p_3, p_4, p_5}, and vertex c holds {p_7, p_8, p_9}.
To overcome this difficulty, we show that a weaker version of the simplex lemma exists for median, which enables us to achieve similar results for k-CMedian.
Weaker Simplex Lemma for Median Point
Compared to the simplex lemma in Section 2, the following Lemma 10 differs in two respects. First, the lemma requires only a partial partition covering a significantly large subset of P, rather than a complete partition of P. Second, the grid is built in the flat spanned by {o_1, ..., o_j}, instead of inside the simplex. Later, we will show that the grid is actually built in a surrounding region of the simplex; hence we call the lemma the "weaker simplex lemma".
Lemma 10 (Weaker Simplex Lemma) Let P be a set of n points in R^d, and ∪_{l=1}^j P_l ⊂ P be a partial partition of P with P_{l_1} ∩ P_{l_2} = ∅ for any l_1 ≠ l_2. Let o_l be the median point of P_l for 1 ≤ l ≤ j, and F be the flat spanned by {o_1, ..., o_j}. If |P \ (∪_{l=1}^j P_l)| ≤ ε|P| for some constant ε ∈ (0, 1/5) and each P_l is contained inside a ball B(o_l, L) centered at o_l and with radius L ≥ 0, then it is possible to build a grid in F of size O(j²(j√j/ε)^j) such that at least one grid point τ satisfies the following inequality, where o is the median point of P (see Figure 8):

(1/|P|) Σ_{p∈P} ‖τ − p‖ ≤ (1 + 9ε/4) · (1/|P|) Σ_{p∈P} ‖p − o‖ + (1 + ε)L.  (31)
Proof Synopsis: To prove Lemma 10, we let õ be the orthogonal projection of o onto F (see Figure 8). In Claim 4, we show that the distance between o and õ is bounded; consequently, the induced cost of õ, i.e., (1/|P|) Σ_{p∈P} ‖p − õ‖, is also bounded, by Claim 5. Thus õ is a good approximation of o, and we can focus on building a grid inside F to approximate õ. Since F is unbounded, we need to determine a range for the grid, which Claim 6 resolves. It considers two cases: either at least two of the subsets in the partial partition {P_1, ..., P_j} constitute large enough fractions of P, or only one subset does. In either case, Claim 6 shows that we can determine the range of the grid using the locations of {o_1, ..., o_j}. Finally, we obtain the desired grid point τ in the following way: draw a set of balls centered at {o_1, ..., o_j} with proper radii, build grids inside each of the balls, and find the desired grid point τ in one of these balls. Note that since all the balls lie inside F, the complexity of the union of the grids is independent of the dimensionality d.
Claim 4  ‖o − õ‖ ≤ L + (1/(1 − ε)) · (1/|P|) Σ_{p∈P} ‖o − p‖.  (32)
Proof Lemma 10 assumes that |∪_{l=1}^j P_l| ≥ (1 − ε)|P|. By Markov's inequality, we know that there exists one point q ∈ ∪_{l=1}^j P_l such that

‖q − o‖ ≤ (1/(1 − ε)) · (1/|P|) Σ_{p∈P} ‖o − p‖.  (33)
Let P_{l_q} be the subset containing q. Then from (33), we immediately have

‖o − õ‖ ≤ ‖o_{l_q} − o‖ ≤ ‖o_{l_q} − q‖ + ‖q − o‖ ≤ L + (1/(1 − ε)) · (1/|P|) Σ_{p∈P} ‖o − p‖.  (34)
This implies Claim 4 (see Figure 9).

Claim 5  (1/|P|) Σ_{p∈P} ‖p − õ‖ ≤ (1/(1 − ε)) · (1/|P|) Σ_{p∈P} ‖p − o‖ + L.  (35)
Proof For any point p ∈ P_l, let dist{oõ, p} (resp., dist{F, p}) denote its distance to the line oõ (resp., to the flat F). See Figure 10. Then we have

‖p − õ‖ = √(dist²{oõ, p} + dist²{F, p}),  (36)
‖p − o‖ ≥ dist{oõ, p}.  (37)

Combining (36) and (37), we have

‖p − õ‖ − ‖p − o‖ ≤ √(dist²{oõ, p} + dist²{F, p}) − dist{oõ, p} ≤ dist{F, p} ≤ ‖p − o_l‖ ≤ L.  (38)

For any point p ∈ P \ (∪_{l=1}^j P_l), we have

‖p − õ‖ ≤ ‖p − o‖ + ‖o − õ‖.  (39)

Combining (38), (39), and (32), we have

(1/|P|) Σ_{p∈P} ‖p − õ‖ = (1/|P|) (Σ_{p∈∪_{l=1}^j P_l} ‖p − õ‖ + Σ_{p∈P\(∪_{l=1}^j P_l)} ‖p − õ‖)
 ≤ (1/|P|) (Σ_{p∈∪_{l=1}^j P_l} (L + ‖p − o‖) + Σ_{p∈P\(∪_{l=1}^j P_l)} (‖p − o‖ + ‖o − õ‖))
 ≤ (1 − ε)L + (1/|P|) Σ_{p∈P} ‖p − o‖ + εL + (ε/(1 − ε)) · (1/|P|) Σ_{p∈P} ‖p − o‖
 = (1/(1 − ε)) · (1/|P|) Σ_{p∈P} ‖p − o‖ + L.  (40)
Thus the claim is true.

Claim 6 At least one of the following two statements is true.
1. There exist at least two points in {o_1, ..., o_j} whose distances to õ are no more than L + (3j/(1 − ε)) · (1/|P|) Σ_{p∈P} ‖p − o‖.
2. There exists one point in {o_1, ..., o_j}, say o_{l_0}, whose distance to õ is no more than (1 + (1 + 2ε)/√(3 − 12ε))L. (Note that we assume ε < 1/5 in Lemma 10, so this is a finite real number.)
Proof We consider two cases: (i) there are two subsets P_{l_1} and P_{l_2} of P each with size at least ((1 − ε)/(3j))|P|, and (ii) no such pair of subsets exists. For case (i), by Markov's inequality, we know that there exist two points q ∈ P_{l_1} and q′ ∈ P_{l_2} such that

‖q − o‖ ≤ (3j/(1 − ε)) · (1/|P|) Σ_{p∈P} ‖p − o‖;  (41)
‖q′ − o‖ ≤ (3j/(1 − ε)) · (1/|P|) Σ_{p∈P} ‖p − o‖.  (42)

This, together with the triangle inequality, implies that both ‖o_{l_1} − o‖ and ‖o_{l_2} − o‖ are no more than L + (3j/(1 − ε)) · (1/|P|) Σ_{p∈P} ‖p − o‖.
Since õ is the orthogonal projection of o onto F, we have ‖o_{l_1} − õ‖ ≤ ‖o_{l_1} − o‖ and ‖o_{l_2} − õ‖ ≤ ‖o_{l_2} − o‖. Thus, the first statement is true in this case.
For case (ii), i.e., when no two subsets have size at least ((1 − ε)/(3j))|P|, since Σ_{l=1}^j |P_l| ≥ (1 − ε)|P|, the pigeonhole principle implies that there must exist one P_{l_0}, 1 ≤ l_0 ≤ j, with size

|P_{l_0}| ≥ (1 − (j − 1)/(3j))(1 − ε)|P| ≥ (2/3)(1 − ε)|P|.  (43)
Let x = ‖o − o_{l_0}‖. We assume that x > L, since otherwise the second statement is automatically true. Now imagine moving o slightly toward o_{l_0} by a small distance δ. See Figure 11. For any point p ∈ P_{l_0}, let p̄ be its orthogonal projection onto the line oo_{l_0}, and let a and b be the distances ‖o − p̄‖ and ‖p − p̄‖, respectively. Then the distance between p and o decreases by √(a² + b²) − √((a − δ)² + b²). Also, we have

lim_{δ→0} (√(a² + b²) − √((a − δ)² + b²))/δ = lim_{δ→0} (2a − δ)/(√(a² + b²) + √((a − δ)² + b²)) = (a/b)/√((a/b)² + 1).  (44)
Since p is inside the ball B(o_{l_0}, L), we have a/b ≥ (x − L)/L. For any point p ∈ P \ P_{l_0}, the distance to o either does not increase or increases by at most δ. Thus, the average distance from the points in P to o decreases by at least

(2/3)(1 − ε) · ((x − L)/L) · δ/√(((x − L)/L)² + 1) − (1 − (2/3)(1 − ε))δ.  (45)

Since the original position of o is the median point of P, the value of (45) must be non-positive. A simple calculation then gives

(x − L)/L ≤ (1 + 2ε)/√(3 − 12ε)  ⟹  x ≤ (1 + (1 + 2ε)/√(3 − 12ε))L.  (46)
By the same argument as in case (i), we know that ‖o_{l_0} − õ‖ ≤ ‖o_{l_0} − o‖. This, together with (46), implies that the second statement is true for case (ii), which completes the proof of the claim.
With the above claims, we now prove Lemma 10.
Proof (of Lemma 10) We build a grid in F as follows. First, draw a set of balls:
- For each o_l, 1 ≤ l ≤ j, draw a ball (called a type-1 ball) centered at o_l and with radius (1 + (1 + 2ε)/√(3 − 12ε))L.
- For each pair o_l and o_{l′}, 1 ≤ l, l′ ≤ j, draw a ball (called a type-2 ball) centered at o_l and with radius r_{l,l′} = (1 + (1 + 2ε)/√(3 − 12ε))(‖o_l − o_{l′}‖ + L).

We claim that among the above balls, there must exist one that contains õ. If there is only one subset in {P_1, ..., P_j} with size no smaller than ((1 − ε)/(3j))|P|, this corresponds to the second case in Claim 6, and thus some type-1 ball contains õ. Now consider the case where there are multiple such subsets, say {P_{l_1}, ..., P_{l_t}} for some t ≥ 2, all with size no smaller than ((1 − ε)/(3j))|P|. Without loss of generality, assume that ‖o_{l_1} − o_{l_2}‖ = max{‖o_{l_1} − o_{l_s}‖ | 1 ≤ s ≤ t}. Then we can view ∪_{s=1}^t P_{l_s} as one big subset of P bounded by a ball centered at o_{l_1} with radius ‖o_{l_1} − o_{l_2}‖ + L. By the same argument given in the proof of Claim 6 for (43), we know that |∪_{s=1}^t P_{l_s}| ≥ (2/3)(1 − ε)|P|. This reduces the present case to the second case in Claim 6, with P_{l_0}, o_{l_0}, and L replaced by ∪_{s=1}^t P_{l_s}, o_{l_1}, and ‖o_{l_1} − o_{l_2}‖ + L, respectively. Thus, some type-2 ball contains õ.
Next, we discuss how to build the grids inside these balls. For the type-1 balls, of radius (1 + (1 + 2ε)/√(3 − 12ε))L, we build the grids with grid length (ε/√j)L. For the type-2 balls, of radius r_{l,l′} = (1 + (1 + 2ε)/√(3 − 12ε))(‖o_l − o_{l′}‖ + L) for some l and l′, we build the grids with grid length

(1/(1 + (1 + 2ε)/√(3 − 12ε))) · (ε(1 − ε)/(6j√j)) · r_{l,l′}.  (47)
If õ is contained in a type-1 ball, then there exists one grid point τ whose distance to õ is no more than εL. If õ is contained in a type-2 ball, this distance is no more than

(ε(1 − ε)/(6j)) (‖o_l − o_{l′}‖ + L)  (48)
by (47). By the first statement in Claim 6 and the triangle inequality, we know that

‖o_l − o_{l′}‖ ≤ ‖o_l − õ‖ + ‖õ − o_{l′}‖ ≤ 2(L + (3j/(1 − ε)) · (1/|P|) Σ_{p∈P} ‖p − o‖).  (49)
Then (48) and (49) imply that there exists one grid point τ whose distance to õ is no more than

ε · (1/|P|) Σ_{p∈P} ‖p − o‖ + (ε(1 − ε)/(2j))L ≤ ε · (1/|P|) Σ_{p∈P} ‖p − o‖ + εL.  (50)
Thus, whichever type of ball contains õ, by the triangle inequality and Claim 5 we have

(1/|P|) Σ_{p∈P} ‖p − τ‖ ≤ (1/|P|) Σ_{p∈P} (‖p − õ‖ + ‖õ − τ‖)
 ≤ (1/(1 − ε) + ε) · (1/|P|) Σ_{p∈P} ‖p − o‖ + (1 + ε)L
 ≤ (1 + 9ε/4) · (1/|P|) Σ_{p∈P} ‖p − o‖ + (1 + ε)L,  (51)
where the second inequality follows from the assumption that ε ≤ 1/5. As for the grid size: since we build the grids inside balls lying in the (j − 1)-dimensional flat F, a simple calculation shows that the total grid size is O(j²(j√j/ε)^j). This completes the proof.
Peeling-and-Enclosing Algorithm for k-CMedian Using Weaker Simplex Lemma
In this section, we present a unified Peeling-and-Enclosing algorithm for generating a set of candidate median points for k-CMedian. Similar to the algorithm for k-CMeans, our algorithm iteratively determines the k median points. At each iteration, it uses a set of peeling spheres and a simplex to search for an approximate median point. Since the simplex lemma no longer holds for k-CMedian, we use the weaker simplex lemma as a replacement; a number of changes are needed to accommodate the differences. Before presenting our algorithm, we first introduce the following lemma, proved by Bădoiu et al. in [12], for finding an approximate median point of a given point set.

Fig. 12: The gray area is U.

Sketch of the proof of Theorem 9. Since our algorithm uses some ideas from Theorem 9, we sketch its proof for completeness. First, by Markov's inequality, there exists one point, say s_1, from R whose distance to o is no more than 2 · (1/|P|) Σ_{p∈P} ‖o − p‖ with certain probability. The sampling procedure can then be viewed as an incremental process starting with s_1: a flat F spanned by all previously obtained sample points is maintained, and each time a new sample point is added, F is updated. Let õ be the projection of o onto F, and
U = {p ∈ R^d | π/2 − ε/16 ≤ ∠oõp ≤ π/2 + ε/16}.  (52)
See Figure 12. It has been shown that this incremental sampling process stops after at most O((1/ε³) log(1/ε)) points are taken, and one of the following two events happens with constant probability: (1) F is close enough to o, or (2) |P \ U| is small enough. In either event, a grid can be built inside F, and one of the grid points τ is the desired approximate median point. Below we give an overview of our Peeling-and-Enclosing algorithm for k-CMedian. Let P = {p_1, ..., p_n} be the set of points in R^d given to k-CMedian, and OPT = {Opt_1, ..., Opt_k} be the k (unknown) optimal clusters, with m_j the median point of cluster Opt_j for 1 ≤ j ≤ k. Without loss of generality, we assume that |Opt_1| ≥ |Opt_2| ≥ ... ≥ |Opt_k|. Denote by µ_opt the optimal objective value, i.e., µ_opt = (1/n) Σ_{j=1}^k Σ_{p∈Opt_j} ‖p − m_j‖.
Algorithm overview: We mainly focus on the differences from the k-CMeans algorithm. First, our algorithm uses Theorem 9 (instead of Lemma 4) to find an approximation p_{v_1} of m_1. Then, it iteratively finds approximate median points for {m_2, ..., m_k} using the Peeling-and-Enclosing strategy. At the (j + 1)-th iteration, it has already obtained approximate median points p_{v_1}, ..., p_{v_j} for clusters Opt_1, ..., Opt_j, respectively. To find the approximate median point p_{v_{j+1}} for Opt_{j+1}, the algorithm draws j peeling spheres B_{j+1,1}, ..., B_{j+1,j} centered at {p_{v_1}, ..., p_{v_j}}, respectively, and considers the size of A = Opt_{j+1} \ (∪_{l=1}^j B_{j+1,l}). If |A| is small, it builds a flat (instead of a simplex) spanned by {p_{v_1}, ..., p_{v_j}} and finds p_{v_{j+1}} using the weaker simplex lemma, where the j peeling spheres can be viewed as a partial partition of Opt_{j+1}. If |A| is large, it adopts a strategy similar to that of Theorem 9 to find p_{v_{j+1}}: start with the flat F spanned by {p_{v_1}, ..., p_{v_j}}, and grow F by repeatedly adding a sample point from A to it. As shown in Lemma 11, F eventually becomes close enough to m_{j+1}, and p_{v_{j+1}} can be obtained by searching a grid (built in a way similar to Lemma 10) in F. By choosing a proper value (i.e., O(ε)µ_opt) for L in Lemma 10 and Lemma 11, we achieve the desired (1 + ε)-approximation. As for the running time, although Theorem 9 introduces an extra factor of log n for estimating the optimal cost of each Opt_{j+1}, our algorithm does not actually need it, since such estimations have already been obtained during the Peeling-and-Enclosing step (see Claim 2 in the proof of Lemma 6). Thus, the running time is still O(n(log n)^{k+1} d), the same as for k-CMeans.
The algorithm is shown in Algorithm 3. The following lemma is needed to ensure the correctness of our algorithm.
Lemma 11 Let F be a flat in R^d containing {p_{v_1}, ..., p_{v_j}} and having distance to m_{j+1} no more than (ε/2) · (1/|Opt_{j+1}|) Σ_{p∈Opt_{j+1}} ‖p − m_{j+1}‖. Assume that the peeling spheres B_{j+1,1}, ..., B_{j+1,j} are centered at {p_{v_1}, ..., p_{v_j}}, respectively, and have radius L ≥ 0. Then if |Opt_{j+1} \ ((∪_{w=1}^j B_{j+1,w}) ∪ U)| ≤ ε|Opt_{j+1}|, we have

(1/|Opt_{j+1}|) Σ_{p∈Opt_{j+1}} ‖p − m̃_{j+1}‖ ≤ (1 + 2ε) · (1/|Opt_{j+1}|) Σ_{p∈Opt_{j+1}} ‖p − m_{j+1}‖ + L  (53)

for any 0 ≤ ε ≤ 1, where m̃_{j+1} is the projection of m_{j+1} onto F and U is defined as in (52).

Proof For each point p ∈ (∪_{w=1}^j B_{j+1,w}) ∩ Opt_{j+1}, similar to (38) in Claim 5, the cost increases by at most L if the median point moves from m_{j+1} to m̃_{j+1}. Thus we have
Σ_{p∈Opt_{j+1}∩(∪_{w=1}^j B_{j+1,w})} ‖p − m̃_{j+1}‖ ≤ Σ_{p∈Opt_{j+1}∩(∪_{w=1}^j B_{j+1,w})} (‖p − m_{j+1}‖ + L).  (54)
For the part Opt_{j+1} \ ((∪_{w=1}^j B_{j+1,w}) ∪ U), by the triangle inequality we have

Σ_{p∈Opt_{j+1}\((∪_w B_{j+1,w})∪U)} ‖p − m̃_{j+1}‖ ≤ Σ_{p∈Opt_{j+1}\((∪_w B_{j+1,w})∪U)} (‖p − m_{j+1}‖ + ‖m_{j+1} − m̃_{j+1}‖)
 ≤ Σ_{p∈Opt_{j+1}\((∪_w B_{j+1,w})∪U)} ‖p − m_{j+1}‖ + (ε/2) Σ_{p∈Opt_{j+1}} ‖p − m_{j+1}‖,  (55)

where the second inequality follows from the assumption that the distance from F to m_{j+1} is no more than (ε/2) · (1/|Opt_{j+1}|) Σ_{p∈Opt_{j+1}} ‖p − m_{j+1}‖ and that |Opt_{j+1} \ ((∪_{w=1}^j B_{j+1,w}) ∪ U)| ≤ ε|Opt_{j+1}|.
For each point p ∈ Opt_{j+1} ∩ U, recall that the angle ∠m_{j+1}m̃_{j+1}p ∈ [π/2 − ε/16, π/2 + ε/16] by (52). Theorem 3.2 of [12] shows that ‖p − m̃_{j+1}‖ ≤ (1 + ε)‖p − m_{j+1}‖. Therefore,

Σ_{p∈Opt_{j+1}∩U} ‖p − m̃_{j+1}‖ ≤ (1 + ε) Σ_{p∈Opt_{j+1}∩U} ‖p − m_{j+1}‖.  (56)
Combining (54), (55) and (56), we obtain (53).
To complete the Peeling-and-Enclosing algorithm for k-CMedian, we also need an upper bound on the optimal objective value. In Section 5.3, we will show how to obtain such an estimate; for the moment, we assume that the upper bound is available.
Using the same idea for proving Theorem 1, we obtain the following theorem for k-CMedian.
Theorem 10 Let P be a set of n points in R^d and k ∈ Z^+ be a fixed constant. In O(2^{poly(k/ε)} n(log n)^{k+1} d) time, Algorithm 3 outputs O(2^{poly(k/ε)} (log n)^k) k-tuple candidate median points. With constant probability, there exists one k-tuple candidate in the output that induces a (1 + O(ε))-approximation of k-CMedian (together with the solution for the corresponding Partition step).
Algorithm 3 Peeling-and-Enclosing for k-CMedian
Input: P = {p_1, ..., p_n} in R^d, k ≥ 2, a constant ε ∈ (0, 1/(4k²)), and an upper bound Δ ∈ [µ_opt, cµ_opt] with c ≥ 1. Output: A set of k-tuple candidates for the k constrained median points.
1. For i = 0 to ⌈log_{1+ε} c⌉ do: (a) Set µ = (1 + ε)^i Δ/c, and run Algorithm 4. (b) Let T_i be the output tree.
2. For each root-to-leaf path of every T_i, build a k-tuple candidate using the k points associated with the path.
Algorithm 4 Peeling-and-Enclosing-Tree II
Input: P = {p_1, ..., p_n} in R^d, k ≥ 2, a constant ε ∈ (0, 1/(4k²)), and µ > 0.
1. Initialize T with a single root node v associated with no point.
2. Recursively grow each node v in the following way:
(a) If the height of v is already k, then it is a leaf.
(b) Otherwise, let j be the height of v. Build the set of radius candidates

R = ∪_{t=0}^{log n} { ((1 + l·ε²/2)/(2(1 + ε))) · ε j 2^t µ | 0 ≤ l ≤ 4/ε + 2/ε² }.

For each r ∈ R, do:
i. Let {p_{v_1}, ..., p_{v_j}} be the j points associated with the nodes on the root-to-v path.
ii. For each p_{v_l}, 1 ≤ l ≤ j, construct a ball B_{j+1,l} centered at p_{v_l} and with radius r.
iii. Compute the flat spanned by {p_{v_1}, ..., p_{v_j}}, and build a grid inside it by Lemma 10.
iv. Take a random sample of size s = (k³/ε¹¹) ln(k²/ε⁶) from P \ (∪_{l=1}^j B_{j+1,l}), and compute the flat determined by these sample points and {p_{v_1}, ..., p_{v_j}}. Build a grid inside this flat by Theorem 9.
v. In total, there are O(2^{poly(k/ε)}) grid points inside these two flats. For each grid point, add one child to v, and associate the child with the grid point.
3. Output T .
Upper Bound Estimation for k-CMedian
In this section, we show how to obtain an upper bound of the optimal objective value of k-CMedian.
Theorem 11 Let P = {p_1, ..., p_n} be the input points of k-CMedian, and C be the set of k median points of a λ-approximation of k-median on P (without considering the constraint) for some constant λ ≥ 1. Then the Cartesian product [C]^k contains at least one k-tuple that induces a (3λ + 2)-approximation of k-CMedian (together with the solution for the corresponding Partition step).
Let {c_1, ..., c_k} be the k median points in C, and ω the corresponding objective value of the approximate k-median solution on P. Recall that {m_1, ..., m_k} are the k unknown optimal constrained median points of P, and OPT = {Opt_1, ..., Opt_k} are the corresponding k optimal constrained clusters. To prove Theorem 11, we create a new instance of k-CMedian in the following way: for each point p_i ∈ P, move it to its nearest point, say c_t, in {c_1, ..., c_k}; let p̃_i denote the new p_i (note that c_t and p̃_i coincide). The set P̃ = {p̃_1, ..., p̃_n} then forms a new instance of k-CMedian. Let µ_opt and µ̃_opt be the optimal costs of P and P̃, respectively, and µ_opt([C]^k) be the minimum cost of P obtained by restricting its k constrained median points to k-tuples in [C]^k. The following two lemmas are key to proving Theorem 11.
Lemma 12  µ̃_opt ≤ ω + µ_opt.
Proof For each p_i ∈ Opt_l, by the triangle inequality we have

‖p̃_i − m_l‖ ≤ ‖p̃_i − p_i‖ + ‖p_i − m_l‖.  (57)

Taking the average of both sides of (57) over i and l, we get

(1/n) Σ_{l=1}^k Σ_{p_i∈Opt_l} ‖p̃_i − m_l‖ ≤ (1/n) Σ_{i=1}^n ‖p̃_i − p_i‖ + (1/n) Σ_{l=1}^k Σ_{p_i∈Opt_l} ‖p_i − m_l‖.  (58)

Note that the left-hand side of (58) is no smaller than µ̃_opt, since µ̃_opt is the optimal objective value of k-CMedian on P̃. On the right-hand side of (58), the first term (1/n) Σ_{i=1}^n ‖p̃_i − p_i‖ = ω (by the construction of P̃), and the second term (1/n) Σ_{l=1}^k Σ_{p_i∈Opt_l} ‖p_i − m_l‖ = µ_opt. Thus, we have µ̃_opt ≤ ω + µ_opt.
Lemma 13  µ_opt([C]^k) ≤ ω + 2µ̃_opt.
Proof Consider k-CMedian on P̃. Let {m̃_1, ..., m̃_k} be the optimal constrained median points, and {Õ_1, ..., Õ_k} the corresponding optimal constrained clusters of P̃. Let {c̃_1, ..., c̃_k} be the k-tuple in [C]^k with c̃_l being the nearest point in C to m̃_l. By an argument similar to the one for (57), we have the following inequality, where p̃_i is assumed to be clustered in Õ_l:

‖p̃_i − c̃_l‖ ≤ ‖p̃_i − m̃_l‖ + ‖m̃_l − c̃_l‖ ≤ 2‖p̃_i − m̃_l‖.  (59)

In (59), the last inequality follows from the facts that c̃_l is the nearest point in C to m̃_l and that p̃_i ∈ C, which imply ‖m̃_l − c̃_l‖ ≤ ‖m̃_l − p̃_i‖. Taking the average of both sides of (59) over i and l, we have

(1/n) Σ_{l=1}^k Σ_{p̃_i∈Õ_l} ‖p̃_i − c̃_l‖ ≤ 2µ̃_opt.  (60)

Now, consider the following clustering of P: for each p_i, if p̃_i ∈ Õ_l, we cluster p_i to the corresponding median point c̃_l. Then the objective value of this clustering is at most (1/n) Σ_{i=1}^n ‖p_i − p̃_i‖ + (1/n) Σ_{l=1}^k Σ_{p̃_i∈Õ_l} ‖p̃_i − c̃_l‖ ≤ ω + 2µ̃_opt, which implies µ_opt([C]^k) ≤ ω + 2µ̃_opt.

As for the running time of the Peeling-and-Enclosing algorithm for k-PMedian, it still builds trees of height k, but the number of children of each node is different. Recall that in the proof of Claim 2, in order to obtain an estimate for β_j = |Opt_j|/n, we need to try O(log n) values since 1/n ≤ β_j ≤ 1; for k-PMedian, however, the range of β_j becomes [w_min/W, 1], where w_min = min_{1≤i≤n} w_i (note that W = Σ_{i=1}^n w_i ≤ n). Thus, the running time of the Peeling-and-Enclosing algorithm becomes O(nh(log(n/w_min))^{k+1} d). Furthermore, for each k-tuple candidate, we perform the Partition step by assigning each D_i to the m_j with the smallest dist{v_i, m_j}. Obviously, the Partition step can be finished in linear time. Thus we have the following theorem.
Theorem 12 There exists an algorithm yielding a (1 + ε)-approximation for k-PMedian with constant probability, in O(2^{poly(k/ε)} nh(log(n/w_min))^{k+1} d) time, where w_min = min_{1≤i≤n} w_i.
Future Work
Following this work, several interesting problems deserve further study. For example, in Section 4 we reduce the Partition step to a minimum cost circulation problem for several constrained clustering problems; however, since the goal is to find an approximate solution, one may consider using geometric information to solve the Partition step approximately. In Euclidean space, several techniques have been developed for solving approximate matching problems efficiently [7, 61], but it is still unclear whether such techniques can be extended to the constrained matching problems (such as r-gather or l-diversity) considered in this paper, especially in high dimensional space. We leave this as an open problem for future work.
Thus, the base case holds. Induction step: Assume that the lemma holds for any j ≤ j_0 for some j_0 ≥ 1 (the induction hypothesis). Now consider the case j = j_0 + 1. As in the proof of Lemma 1, we assume that |Q_l|/|Q| ≥ ε/(4j) for each 1 ≤ l ≤ j; otherwise, by an idea similar to that of Lemma 1, the problem can be reduced to the case of smaller j and solved by the induction hypothesis. Hence, in the following discussion, we assume that |Q_l|/|Q| ≥ ε/(4j) for each 1 ≤ l ≤ j. To find such a τ, we consider the distance from o_l′ to o′ for any 1 ≤ l ≤ j. We have

‖o_l′ − o′‖ ≤ ‖o_l′ − o_l‖ + ‖o_l − o‖ + ‖o − o′‖ ≤ 2√(j/ε) δ + 2L,  (65)

where the first inequality follows from the triangle inequality, and the second from the facts that ‖o_l′ − o_l‖ and ‖o − o′‖ are both bounded by L, and ‖o_l − o‖ ≤ 2√(j/ε) δ (by Lemma 2). This implies that we can use an idea similar to that of Lemma 1 and construct a ball B centered at some o_{l_0}′ with radius r = max_{1≤l≤j} {‖o_l′ − o_{l_0}′‖}. The simplex V′ is then contained inside B. Note that

‖o_l′ − o_{l_0}′‖ ≤ ‖o_l′ − o′‖ + ‖o′ − o_{l_0}′‖ ≤ 4√(j/ε) δ + 4L  (66)

by (65), which implies r ≤ 4√(j/ε) δ + 4L. As in Lemma 1, we build a grid inside B with grid length εr/(4j); the number of grid points is O((8j/ε)^j). Moreover, o′ must lie inside V′ by definition, so in this grid we can find a grid point τ such that ‖τ − o′‖ ≤ (ε/(4√j)) r ≤ √ε δ + εL. Thus, ‖τ − o‖ ≤ ‖τ − o′‖ + ‖o′ − o‖ ≤ √ε δ + (1 + ε)L, and the induction step, and hence the lemma, holds.
Proof of Claim 2 for Lemma 6
Since 1 ≥ β_j ≥ 1/n, there is an integer t between 1 and log n such that 2^{t−1} ≤ 1/β_j ≤ 2^t. Thus 2^{t/2−1} δ_opt ≤ δ_opt/√β_j ≤ 2^{t/2} δ_opt. Together with δ̃ ∈ [δ²_opt, (1 + ε)δ²_opt], we have

2^{t/2−1} √(δ̃/(1 + ε)) ≤ δ_opt/√β_j ≤ 2^{t/2} √δ̃.  (67)

Thus, setting r̃_j = 2^{t/2} √δ̃, we have

δ_opt/√β_j ≤ r̃_j ≤ 2(1 + ε) · δ_opt/√β_j.  (68)

We consider the interval I = [√ε j r̃_j/(2(1 + ε)), √ε j r̃_j]. By (68), √ε j δ_opt/√β_j ∈ I. We build a grid in this interval with grid length (ε²/2) · √ε j r̃_j/(2(1 + ε)), i.e., R_j = { ((1 + l·ε²/2)/(2(1 + ε))) · √ε j r̃_j | 0 ≤ l ≤ 4/ε + 2/ε² }. Moreover, the grid length (ε²/2) · √ε j r̃_j/(2(1 + ε)) ≤ (ε²/2) · √ε j δ_opt/√β_j, which implies that there exists r_j ∈ R_j such that

√ε j δ_opt/√β_j ≤ r_j ≤ (1 + ε²/2) √ε j δ_opt/√β_j.  (69)

Note that R_j ⊂ R, where R = ∪_{t=0}^{log n} { ((1 + l·ε²/2)/(2(1 + ε))) · √ε j 2^{t/2} √δ̃ | 0 ≤ l ≤ 4/ε + 2/ε² }. Thus, the claim is true.
Proof of Claim 3 for Lemma 6
Note that δ²_opt = Σ_{j=1}^k β_j δ²_j, and β_j ≤ β_l for each 1 ≤ l ≤ j − 1. Thus, we have δ_l ≤ δ_opt/√β_l ≤ δ_opt/√β_j. Together with √ε j δ_opt/√β_j ≤ r_j (Claim 2) and ‖p_{v_l} − m_l‖ ≤ ε δ_l + (1 + ε) l √ε δ_opt/√β_l (by the induction hypothesis), we have

r_j − ‖p_{v_l} − m_l‖ ≥ √ε j δ_opt/√β_j − (ε δ_l + (1 + ε)(j − 1) √ε δ_opt/√β_l)
 ≥ (1 − (j − 1)ε) √ε δ_opt/√β_j − ε δ_l
 ≥ (1 − (j − 1)ε − √ε) √ε δ_opt/√β_j.  (70)
Since ε ∈ (0, 1/(4k²)) in the input of Algorithm 1, we know that r_j − ‖p_{v_l} − m_l‖ > 0. That is, m_l is covered by the ball B_{j,l}.
| 18,549 |
1809.10736
|
2892472836
|
Open story generation is the problem of automatically creating a story for any domain without retraining. Neural language models can be trained on large corpora across many domains and then used to generate stories. However, stories generated via language models tend to lack direction and coherence. We introduce a policy gradient reinforcement learning approach to open story generation that learns to achieve a given narrative goal state. In this work, the goal is for a story to end with a specific type of event, given in advance. However, a reward based on achieving the given goal is too sparse for effective learning. We use reward shaping to provide the reinforcement learner with a partial reward at every step. We show that our technique can train a model that generates a story that reaches the goal 94% of the time and reduces model perplexity. A human subject evaluation shows that stories generated by our technique are perceived to have significantly higher plausible event ordering and plot coherence over a baseline language modeling technique without perceived degradation of overall quality, enjoyability, or local causality.
|
Early story and plot generation systems relied on symbolic planning @cite_19 @cite_8 @cite_11 @cite_0 @cite_12 @cite_3 or case-based reasoning @cite_22 @cite_13 . These techniques only generated stories for predetermined, well-defined domains, conflating the robustness of manually-engineered knowledge with algorithm suitability. Regardless, symbolic planners in particular are able to provide long-term causal coherence. Early machine learning story generation techniques include textual case-based reasoning trained on blogs @cite_6 and probabilistic graphical models learned from crowdsourced example stories @cite_23 .
|
{
"abstract": [
"MEXICA is a computer model that produces frameworks for short stories based on the engagement-reflection cognitive account of writing. During engagement MEXICA generates material guided by content and rhetorical constraints, avoiding the use of explicit goals or story-structure information. During reflection the system breaks impasses, evaluates the novelty and interestingness of the story in progress and verifies that coherence requirements are satisfied. In this way, MEXICA complements and extends those models of computerised story-telling based on traditional problem-solving techniques where explicit goals drive the generation of stories. This paper describes the engagement-reflection account of writing, the general characteristics of MEXICA and reports an evaluation of the program.",
"",
"Conflict is an essential element of interesting stories, but little research in computer narrative has addressed it directly. We present a model of narrative conflict inspired by narratology research and based on Partial Order Causal Link (POCL) planning. This model informs an algorithm called CPOCL which extends previous research in story generation. Rather than eliminate all threatened causal links, CPOCL marks certain steps in a plan as non-executed in order to preserve the conflicting subplans of all characters without damaging the causal soundness of the overall story.",
"We describe Say Anything, a new interactive storytelling system that collaboratively writes textual narratives with human users. Unlike previous attempts, this interactive storytelling system places no restrictions on the content or direction of the user’s contribution to the emerging storyline. In response to these contributions, the computer continues the storyline with narration that is both coherent and entertaining. This capacity for open-domain interactive storytelling is enabled by an extremely large repository of nonfiction personal stories, which is used as a knowledge base in a case-based reasoning architecture. In this article, we describe the three main components of our case-based reasoning approach: a million-item corpus of personal stories mined from internet weblogs, a case retrieval strategy that is optimized for narrative coherence, and an adaptation strategy that ensures that repurposed sentences from the case base are appropriate for the user’s emerging fiction. We describe a series of evaluations of the system’s ability to produce coherent and entertaining stories, and we compare these narratives with single-author stories posted to internet weblogs.",
"AI planning has featured in a number of Interactive Storytelling prototypes: since narratives can be naturally modelled as a sequence of actions it is possible to exploit state of the art planners in the task of narrative generation. However the characteristics of a \"good\" plan, such as optimality, aren't necessarily the same as those of a \"good\" narrative, where errors and convoluted sequences may offer more reader interest, so some narrative structuring is required. We have looked at injecting narrative control into plan generation through the use of PDDL3.0 state trajectory constraints which enable us to express narrative control information within the planning representation. As part of this we have developed an approach to planning with trajectory constraints. The approach decomposes the problem into a set of smaller subproblems using the temporal orderings described by the constraints and then solves them incrementally. In this paper we outline our method and present results that illustrate the potential of the approach.",
"",
"Story generation is the problem of automatically selecting a sequence of events that meet a set of criteria and can be told as a story. Story generation is knowledge-intensive; traditional story generators rely on a priori defined domain models about fictional worlds, including characters, places, and actions that can be performed. Manually authoring the domain models is costly and thus not scalable. We present a novel class of story generation system that can generate stories in an unknown domain. Our system (a) automatically learns a domain model by crowdsourcing a corpus of narrative examples and (b) generates stories by sampling from the space defined by the domain model. A large-scale evaluation shows that stories generated by our system for a previously unknown topic are comparable in quality to simple stories authored by untrained humans.",
"In this paper we present a system for automatic story generation that reuses existing stories to produce a new story that matches a given user query. The plot structure is obtained by a case-based reasoning (CBR) process over a case base of tales and an ontology of explicitly declared relevant knowledge. The resulting story is generated as a sketch of a plot described in natural language by means of natural language generation (NLG) techniques.",
"Narrative, and in particular storytelling, is an important part of the human experience. Consequently, computational systems that can reason about narrative can be more effective communicators, entertainers, educators, and trainers. One of the central challenges in computational narrative reasoning is narrative generation, the automated creation of meaningful event sequences. There are many factors - logical and aesthetic - that contribute to the success of a narrative artifact. Central to this success is its understandability. We argue that the following two attributes of narratives are universal: (a) the logical causal progression of plot, and (b) character believability. Character believability is the perception by the audience that the actions performed by characters do not negatively impact the audience's suspension of disbelief. Specifically, characters must be perceived by the audience to be intentional agents. In this article, we explore the use of refinement search as a technique for solving the narrative generation problem - to find a sound and believable sequence of character actions that transforms an initial world state into a world state in which goal propositions hold. We describe a novel refinement search planning algorithm - the Intent-based Partial Order Causal Link (IPOCL) planner - that, in addition to creating causally sound plot progression, reasons about character intentionality by identifying possible character goals that explain their actions and creating plan structures that explain why those characters commit to their goals. We present the results of an empirical evaluation that demonstrates that narrative plans generated by the IPOCL algorithm support audience comprehension of character intentions better than plans generated by conventional partial-order planners.",
"In this paper, we describe a method for implementing the behaviour of artificial actors in the context of interactive storytelling. We have developed a fully implemented prototype based on the Unreal Tournament™ game engine, and carried experiments with a simple sitcom-like scenario. We discuss the central role of artificial actors in interactive storytelling and how real-time generation of their behaviour participates in the creation of a dynamic storyline. We follow previous work describing the behaviour of artificial actors through AI planning formalisms, and adapt it to the context of narrative representation. In this context, the narrative equivalent of a character’s behaviour consists in its role. The set of possible roles for a given actor is represented as a Hierarchical Task Network (HTN). The system uses HTN planning to dynamically generate the character roles, by interleaving planning and execution, which supports dynamic interaction between actors, as well as user intervention in the unfolding plot. Finally, we present several examples of short plots and situations generated by the system from the dynamic interaction of artificial actors."
],
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2156264173",
"",
"2396534511",
"2128116572",
"1622242599",
"1498806819",
"25648700",
"2090487795",
"2170516265",
"2103708864"
]
}
|
Controllable Neural Story Plot Generation via Reward Shaping
|
Automated plot generation is the problem of creating a sequence of main plot points for a story in a given domain and with a set of specifications. Many prior approaches to plot generation relied on planning [Lebowitz, 1987; Gervás et al., 2005; Porteous and Cavazza, 2009; Riedl and Young, 2010]. In many cases, these plot generators are provided with a goal, outcome state, or other guiding knowledge to ensure that the resulting story is coherent. However, these approaches also required extensive domain knowledge engineering.
Machine learning approaches to automated plot generation can learn storytelling and domain knowledge from a corpus of existing stories or plot summaries. To date, most existing neural network-based story and plot generation systems lack the ability to receive guidance from the user to achieve a specific goal. For example, one might want a system to create a story that ends in two characters getting married. Neural language modeling-based story generation approaches in particular [Roemmele and Gordon, 2015; Khalifa et al., 2017; Gehring et al., 2017; Martin et al., 2018] are prone to generating stories with little aim, since each sentence, event, word, or letter is generated by sampling from a probability distribution. By themselves, large neural language models have been shown to work well on a variety of short-term tasks, such as understanding short children's stories [Radford et al., 2019]. However, while recurrent neural networks (RNNs) using LSTM or GRU cells can theoretically maintain long-term context in their hidden layers, in practice RNNs only use a relatively small part of the history of tokens [Khandelwal et al., 2018]. Consequently, stories or plots generated by RNNs tend to lose coherence as the generation continues.
One way to address both the control and the coherence issues in story and plot generation is to use reinforcement learning (RL). By providing a reward each time a goal is achieved, an RL agent learns a policy that maximizes the future expected reward. For plot generation, we seek a means to learn a policy model that produces output similar to the plots found in the training corpus and also moves the plot along from a start state s_0 towards a given goal s_g. The system should be able to do this even if there is no comparable example in the training corpus where the plot starts in s_0 and ends in s_g.
Our primary contribution is a reward-shaping technique that reinforces weights in a neural language model which guide the generation of plot points towards a given goal. Reward shaping is the automatic construction of approximate, intermediate rewards by analyzing a task domain [Ng et al., 1999]. We evaluate our technique in two ways. First, we compare our reward-shaping technique to the goal achievement rate and perplexity of a standard language modeling technique. Second, we conduct a human subject study to compare subjective ratings of the output of our system against a conventional language modeling baseline. We show that our technique improves the perception of plausible event ordering and plot coherence over a baseline story generator, in addition to performing computationally better than the baseline.
Reinforcement Learning for Plot Generation
We model story generation as a planning problem: find a sequence of events that transitions the state of the world into one in which the desired goal holds. In this work, the goal is that a given verb (e.g., marry, punish, rescue) occurs in the final event of the story. While simplistic, this goal highlights the challenge of control in story generation.
Specifically, we use reinforcement learning to plan out the events of a story and use policy gradients to learn a policy model. We start by training a language model on a corpus of story plots. A language model P(x_n | x_{n−1}...x_{n−k}; θ) gives a distribution over the possible tokens x_n that are likely to come next, given a history of tokens x_{n−1}...x_{n−k} and the parameters of a model θ. This language model is a first approximation of the policy model. Generating a plot by iteratively sampling from this language model, however, provides no guarantee that the plot will arrive at a desired goal except by coincidence. We use REINFORCE [Williams, 1992] to specialize the language model so that it keeps the local coherence of tokens initially learned while also preferring tokens that move the plot toward a given goal.
If reward is only provided when a generated plot achieves the given goal, then the rewards will be very sparse. Policy gradient learning requires a dense reward signal to provide feedback after every step. As such, our primary contribution is a reward-shaping technique where the original training corpus is automatically analyzed to construct a dense reward function that guides the plot generator toward the given goal.
Initial Language Model
Martin et al. [2018] demonstrated that the predictive accuracy of a plot generator can be improved by switching from natural language sentences to an abstraction called an event. An event is a tuple e = ⟨wn(s), vn(v), wn(o), wn(m)⟩, where v is the verb of the sentence, s is the subject of the verb, o is the object of the verb, and m is a propositional object, indirect object, causal complement, or any other significant noun. The parameters o and m may take the special value empty to denote that there is no object of the verb or no additional sentence information, respectively. We use the same event representation in this work. As in Martin et al., we stem all words and then apply the following functions: the function wn(·) gives the WordNet [Miller, 1995] Synset of the argument two levels up in the hypernym tree (i.e., the grandparent Synset), and the function vn(·) gives the VerbNet [Schuler and Kipper-Schuler, 2005] class of the argument. See Figure for an example of an "eventified" sentence. Where possible, we split sentences into multiple events, creating a potential one-to-many relationship between a sentence and the event(s) it produces. Once produced, events are used sequentially.
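As an illustration of the wn(·) abstraction, the following sketch climbs two hypernym levels with NLTK's WordNet interface; taking the first synset and the first hypernym at each level is our own simplifying assumption, since the text does not specify how senses are disambiguated.

from nltk.corpus import wordnet as wn   # assumes NLTK with the WordNet corpus

def grandparent_synset(word):
    # Mirrors wn(.): map a stemmed noun to the Synset two levels up its
    # hypernym chain (first-sense, first-hypernym selection is an assumption).
    synsets = wn.synsets(word, pos=wn.NOUN)
    if not synsets:
        return word                     # fall back to the raw token
    s = synsets[0]
    for _ in range(2):                  # climb two hypernym levels
        parents = s.hypernyms()
        if not parents:
            break
        s = parents[0]
    return s.name()                     # e.g. 'entity.n.01'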
In this paper, we use an encoder-decoder network [Sutskever et al., 2014] as our starting language model and our baseline. Encoder-decoder networks can be trained to generate sequences of text for dialogue or story generation by pairing one or more sentences in a corpus with the successive sentence and learning a set of weights that captures the relationship between the sentences. Our language model is thus P(e_{i+1} | e_i; θ), where e_i = ⟨s_i, v_i, o_i, m_i⟩.
Policy Gradient Descent
We seek a policy model θ such that P(e_{i+1} | e_i; θ) both matches the distribution over events in the corpus and increases the likelihood of reaching a given goal event in the future. For each input event e_i in the corpus, an action involves choosing the most probable next event e_{i+1} from the probability distribution of the language model. The reward is calculated by determining how far the event e_{i+1} is from our given goal event. The final gradient used for updating the parameters of the network and shifting the distribution of the language model is calculated as follows:
∇_θ J(θ) = R(v(e_{i+1})) ∇_θ log P(e_{i+1} | e_i; θ)  (1)
where e i+1 and R(v(e i+1 )) are, respectively, the event chosen at timestep i + 1 and the reward for the verb in that event. The policy gradient technique thus gives an advantage to highly-rewarding events by facilitating a larger step towards the likelihood of predicting these events in the future, over events which have a lower reward. In the next section we describe how R(v(e i+1 )) is computed.
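A minimal sketch of a corresponding surrogate loss in PyTorch, whose negative gradient matches Equation 1 when the rewards are treated as constants; batching and baseline subtraction are omitted, and the function name is our own.

import torch

def reward_weighted_loss(log_probs, rewards):
    # Surrogate loss realizing Eq. (1): the log-likelihood of each sampled
    # event is scaled by its shaped reward R(v(e_{i+1})).
    # log_probs: log P(e_{i+1} | e_i; theta) for the sampled events.
    # rewards:   R(v(e_{i+1})) values, detached so no gradient flows through them.
    return -(rewards.detach() * log_probs).mean()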
Reward Shaping
For the purpose of controllability in plot generation, we wish to reward the network whenever it generates an event that makes it more likely to achieve the given goal. For the purposes of this paper, the goal is a given VerbNet class that we wish to see at the end of a plot. Reward shaping [Ng et al., 1999] is a technique whereby sparse rewards-such as rewarding the agent only when a given goal is reached-are replaced with a dense reward signal that provides rewards at intermediate states in the exploration leading to the goal.
To produce a smooth, dense reward function, we make the observation that certain events-and thus certain verbs-are more likely to appear closer to the goal than others in story plots. For example, suppose our goal is to generate a plot in which one character admires another (admire-31.2 is the VerbNet class that encapsulates the concept of falling in love). Events that contain the verb meet are more likely to appear nearby events that contain admire, whereas events that contain the verb leave are likely to appear farther away.
To construct the reward function, we pre-process the stories in our training corpus and calculate two key components: (a) the distance of each verb from the target/goal verb, and (b) the frequency of the verbs found in existing stories.
Distance
The distance component of the reward function measures how close the verb v of an event is to the target/goal verb g, which is used to reward the model when it produces events with verbs that are closer to the target verb. The formula for estimating this metric for a verb v is:
r_1(v) = log Σ_{s∈S_{v,g}} (l_s − d_s(v, g))  (2)
where S_{v,g} is the subset of stories in the corpus that contain v prior to the goal verb g, l_s is the length of story s, and d_s(v, g) is the number of events between the event containing v and the event containing g in story s (i.e., the distance within a story). Subtracting this distance from the length of the story produces a larger reward when events containing v and g are closer together.
Story-Verb Frequency
The story-verb frequency component rewards the model based on how likely a verb is to occur in a story before the target verb. It estimates how often a particular verb v appears before the target verb g across the stories in the corpus, discouraging the model from outputting events whose verbs rarely occur before the target verb. The following equation calculates the story-verb frequency metric:
r_2(v) = log (k_{v,g} / N_v)  (3)
where N_v is the count of verb v in the corpus, and k_{v,g} is the number of times v appears before the goal verb g in any story.
Final Reward
The final reward for a verb-and thus event as a whole-is calculated as the product of the distance and frequency metrics. The rewards are normalized across all the verbs in the corpus. The final reward is:
R(v) = α × r_1(v) × r_2(v)  (4)
where α is the normalization constant. When combined, the two r metrics favor verbs that (1) appear close to the target, and (2) appear before the target in stories frequently enough to be considered significant.
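The following sketch computes Equations 2-4 from an eventified corpus represented as lists of verbs; how repeated occurrences of a verb before the goal are accumulated and the exact form of the normalization constant α are not fully specified in the text, so the choices below are our own assumptions.

import math
from collections import defaultdict

def shaped_rewards(stories, goal, alpha=1.0):
    # Eqs. (2)-(4) over an eventified corpus given as lists of verbs; only
    # verbs seen before the goal verb in some story receive a reward.
    dist_sum = defaultdict(float)       # sum over stories of (l_s - d_s(v, g))
    k_before = defaultdict(int)         # k_{v,g}: occurrences of v before g
    count = defaultdict(int)            # N_v: total occurrences of v
    for verbs in stories:
        for v in verbs:
            count[v] += 1
        if goal not in verbs:
            continue
        g_idx = verbs.index(goal)
        for i, v in enumerate(verbs[:g_idx]):
            dist_sum[v] += len(verbs) - (g_idx - i)   # l_s - d_s(v, g)
            k_before[v] += 1                          # each occurrence counted (assumed)
    R = {}
    for v in k_before:
        r1 = math.log(dist_sum[v])                # Eq. (2)
        r2 = math.log(k_before[v] / count[v])     # Eq. (3)
        R[v] = r1 * r2                            # Eq. (4), pre-normalization
    z = sum(abs(x) for x in R.values()) or 1.0    # normalization (assumed form)
    return {v: alpha * x / z for v, x in R.items()}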
Verb Clustering
In order to discourage the model from jumping to the target quickly, we cluster all verbs based on Equation 4 using the Jenks Natural Breaks optimization technique [Jenks and Caspall, 1971]. We restrict the vocabulary of v_out (the model's output verb) to the set of verbs in the (c+1)-th cluster, where c is the index of the cluster that verb v_in (the model's input verb) belongs to. The rest of the event is generated by sampling from the full distribution. The intuition is that by restricting the vocabulary of the output verb, the gradient update in Equation 1 takes greater steps toward verbs that are more likely to occur next (i.e., in the next cluster) in a story headed toward a given goal. If the sampled verb has a low probability, the step will be smaller than if the verb is highly probable according to the language model.
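A sketch of this clustering and vocabulary restriction is given below; it assumes the third-party jenkspy package for Jenks Natural Breaks, and all names are illustrative:

```python
import numpy as np
import jenkspy  # third-party: pip install jenkspy (assumed available)

def cluster_verbs(rewards, n_clusters):
    """Bucket verbs by their shaped reward R(v) via Jenks breaks."""
    values = sorted(rewards.values())
    breaks = jenkspy.jenks_breaks(values, n_classes=n_clusters)
    def cluster_of(value):
        c = int(np.searchsorted(breaks, value, side="right")) - 1
        return min(max(c, 0), n_clusters - 1)
    return {v: cluster_of(r) for v, r in rewards.items()}

def allowed_output_verbs(clusters, v_in):
    """Restrict v_out to the (c+1)-th cluster, c = cluster of v_in."""
    target = clusters[v_in] + 1
    return [v for v, c in clusters.items() if c == target]
```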
Automated Experiments
We ran experiments to measure three properties of our model:
(1) how often our model can produce a plot (a sequence of events) that contains a desired target verb; (2) the perplexity of our model; and (3) the average length of the stories. Perplexity is a measure of the predictive ability of a model; in particular, how "surprised" the model is by occurrences in a corpus. We compare our results to those of a baseline event2event story generation model from Martin et al. [2018].
Corpus Preparation
We use the CMU movie summary corpus [Bamman et al., 2013]. However, this corpus proves to be too diverse; there is high variance between stories, which dilutes event patterns. We used Latent Dirichlet Allocation to cluster the stories from the corpus into 100 "genres". We selected a cluster that appeared to contain soap-opera-like plots. The stories were "eventified" (turned into event sequences, as explained in the Initial Language Model section of the paper).
We chose admire-31.2 and marry-36.2 as two target verbs because those VerbNet classes capture the sentiments of "falling in love" and "getting married", which are appropriate for our sub-corpus. The romance corpus was split into 90% training and 10% testing data. We used consecutive events from the eventified corpus as source and target data, respectively, for training the sequence-to-sequence network.
Model Training
For our experiments we trained the encoder-decoder network using Tensorflow. Both the encoder and the decoder were composed of LSTM units with a hidden layer size of 1024. The network was pre-trained for a total of 200 epochs using minibatch gradient descent with a batch size of 64.
We created three models:
• Seq2Seq: This pre-trained model is identical to the "generalized multiple sequential event2event" model in Martin et al. [2018]. This is our baseline.
• DRL-clustered: Starting with the weights from the Seq2Seq, we continued training using the policy gradient technique and the reward function, along with the clustering and vocabulary restriction in the verb position described in the previous section, while keeping all network parameters constant.
• DRL-unrestricted: This is the same as DRL-clustered but without vocabulary restriction while sampling the verb for the next event during training ( § Verb Clustering).
The DRL-clustered and DRL-unrestricted models are each trained for a further 200 epochs beyond the baseline's pre-training.
Experimental Setup
With each event in our held-out dataset as a seed event, we generated stories with our baseline Seq2Seq, DRL-clustered, and DRL-unrestricted models. For all models, the story generation process was terminated when: (1) the model outputs an event with the target verb; (2) the model outputs an end-of-story token; or (3) the length of the story reaches 15 lines. Goal achievement rate was calculated by measuring the percentage of these stories that ended in the target verb (admire or marry). Additionally, we average generated story lengths to compare to the average story length in our test data where the goal event occurs (setting the length to 15 if it does not occur). Finally, we measure the perplexity for all the models, with the exception of the testing data since it is not a model.
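The rollout protocol above amounts to a simple loop; model.next_event is a hypothetical sampling interface standing in for the trained network:

```python
END_TOKEN = "<end-of-story>"  # assumed end-of-story marker

def generate_story(model, seed_event, goal_verb, max_len=15):
    """Generate events until one of the three stopping conditions fires."""
    story, event = [seed_event], seed_event
    while len(story) < max_len:                # condition (3): 15-event cap
        event = model.next_event(event)        # sample e_{i+1} ~ P(. | e_i)
        if event == END_TOKEN:                 # condition (2)
            break
        story.append(event)
        if event[1] == goal_verb:              # condition (1): verb slot of <s, v, o, m>
            break
    return story
```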
Results and Discussion
Results are summarized in Table 1. Only 22.47% of the stories in the testing set, on average, end in our desired goals, illustrating how rare the chosen goals were in the corpus. The DRL-clustered model generated the given goals on average 93.82% of the time, compared to 37.72% on average for the baseline Seq2Seq and 19.935% for the DRL-unrestricted model. This shows that our policy gradient approach can direct the plot to a pre-specified ending and that our clustering method is integral to doing so. Removing verb clustering from our reward calculation to create the DRL-unrestricted model harms goal achievement; the system rarely sees a verb in the next cluster so the reward is frequently low, making distribution shaping towards the goal difficult.

We use perplexity as a metric to estimate how accurate the learned distribution is for predicting unseen data. We observe that perplexity values drop substantially for the DRL models (7.61 for DRL-clustered and 5.73 for DRL-unrestricted with goal admire; 7.05 for DRL-clustered and 9.78 for DRL-unrestricted with goal marry) when compared with the Seq2Seq baseline (48.06). This can be attributed to the fact that our reward function is based on the distribution of verbs in the story corpus, refining the model's ability to recreate the corpus distribution. Because DRL-unrestricted's rewards are based on subsequent verbs in the corpus instead of verb clusters, it sometimes results in a lower perplexity, but at the expense of not learning how to achieve the goal often.
The average story length is an important metric because it is trivial to train a language model that reaches the goal event in a single step. DRL models do not have to produce stories the same length as those in the testing corpus, as long as the length is not extremely short (leaping to conclusions) or too long (the story generator is timing out). The baseline Seq2Seq model creates stories that are about the same length as the testing corpus stories, showing that the model is mostly mimicking the behavior of the corpus it was trained on. The DRL-unrestricted model produces similar behavior, due to the absence of clustering or vocabulary restriction to prevent the story from rambling. However, the DRL-clustered model creates slightly shorter stories, showing that it is reaching the goal quicker, while not jumping immediately to the goal.
Human Evaluation
The best practice in the evaluation of story/plot generation is human subject evaluation. However, the use of the event representation makes human subject evaluation difficult since events are not easily readable. Martin et al. [2018] used a second neural network to translate events into human-readable sentences, but their technique did not have sufficient accuracy to use in a human evaluation. The use of a second network also makes it impossible to isolate the generation of events from the generation of the final natural language sentence in terms of human perception. To overcome this challenge, we have developed an evaluation protocol that allows us to directly evaluate plots with human judges. Specifically, we recruited and taught individuals to convert event sequences into natural language before giving generated plots to human judges. By having concise, grammatically- and semantically-guaranteed human translations of generated plot events, we know that the human judges are evaluating the raw events and not the creative aspects of the way sentences are written.
Corpus Creation
We collected 5 stories generated by our DRL-clustered system, 5 generated from our Seq2Seq baseline, and 3 from the eventified testing corpus. The stories were selected by randomly picking start events (keeping the same start events across conditions) until we had stories that were 5-10 events long. By keeping a story length limit, we guarantee having DRL stories that reached the goal. The testing corpus was mainly used to verify the translation process's accuracy since we do not expect our models to reach this upper bound; thus only three stories were selected. We trained 26 unbiased people to "translate" events into short natural language sentences.
Each translator was instructed that their "primary goal is to produce faithful translations of stories from an abstract 'event' representation into a natural language sentence." The instructions then continued with: (1) a refresher on parts of speech, (2) the format of the event representation, (3) examples of events and their corresponding sentences, (4) resources on WordNet and VerbNet with details on how to use both, and (5) additional general guidelines and unusual cases they might encounter (e.g., how to handle empty parameters in events). The translators were further instructed to not add extraneous details, swap the order of words in the event, nor choose a better verb even if the plot would be improved.
Pairs of people translated plots individually and then came together to reach a consensus on a final version of the plot. That is, human translators reversed the eventification process to create a human-readable sentence from an event. Table 2 shows an example of an entire eventified story and the corresponding human translations.
Experimental Setup
We recruited 175 participants on Amazon Mechanical Turk. Each participant was compensated $10 for completing the questionnaire. Participants were given one of the translated plots at a time, rating each of a set of statements on a 5-point Likert scale for how much they agreed (Strongly Agree, Somewhat Agree, Neither Agree nor Disagree, Somewhat Disagree, or Strongly Disagree).
Through the equal interval assumption, we turn Likert values into numerals 1 (Strongly Disagree) to 5 (Strongly Agree).
The first seven questions are taken from a tool designed specifically for the evaluation of computer-generated stories which has been validated against human judgments [Purdy et al., 2018]. Each participant answered the questions for all three story conditions. The question about the story being a soap opera was added to determine how the performance of the DRL story generator affects reader perceptions of the theme, since the system was trained on soap-opera-like plots. The single plot question was added to determine if our DRL model was maintaining the plot better than the Seq2Seq model. The questions about correct grammar, interesting language, and avoiding repetition are irrelevant to our evaluation since the natural language was produced by the human translators but were kept for consistency with Purdy et al. [2018].
Finally, participants answered two additional prompts that required short answer responses: (1) Please give a summary of the story above in your own words; and (2) For THIS STORY, please select which of the previous attributes (e.g. enjoyable, plausible, coherent) you found to be the MOST IMPORTANT and explain WHY. The answers to these questions were not evaluated, but if any participants failed to answer the short answer questions, their data was removed from the results. We removed 25 participants' data in total.
Results and Discussion
We performed one-way repeated-measures ANOVA on the data since each participant rated a story from each category, using Tukey HSD as the post-test. We verified that the data is normal, and the variances are not significantly different. The data was collected independently. Average scores and their significance across conditions can be seen in Figure 2.
Questions on interesting language and avoiding repetition are not found to be significant across all three conditions. Since these are not related to event generation model performance, this provides an indication that the translations are fair across all conditions. Grammar was significantly different between testing corpus stories and DRL-generated stories (p < 0.05), which was unanticipated. Upon further analysis, both the baseline Seq2Seq model and the DRL model generated empty values for the object and modifier at higher rates than found in the corpus. It is harder to make complete, grammatical sentences with only two tokens in an event, namely when a verb is transitive and requires at least one object. Beyond more expected results, such as having a better plausible order, the testing corpus stories were also significantly more likely to be perceived as being soap operas (p < 0.01), the genre from which the corpus stories were drawn. It is unclear why this would be the case, except that both the Seq2Seq and DRL models could be failing to learn some aspect of the genre despite being trained on the same corpus. It is also worth noting that randomly selecting 5 generated stories does not guarantee that they will be representative of their respective models.
Stories in the DRL condition were significantly perceived to have more plausible orderings than those in the baseline Seq2Seq condition (p < 0.05) and were significantly more likely to be perceived as following a single plot (p < 0.05). Since stories generated by the baseline Seq2Seq model begin to lose coherence as the story progresses, these results confirm our hypothesis that the DRL's use of reward shaping keeps the plot on track. The DRL is also perceived as generating stories with more local causality than the Seq2Seq, although the results were not statistically significant.

Table 2: An example eventified story from the DRL-clustered system paired with the translation written by a pair of participants.
Event (subject, verb, object, modifier) -> Output
... -> The gathering dispersed to Hawaii.
gathering.n.01, characterize-29.2-1-1, time interval.n.01, empty -> The community remembered their trip.
physical entity.n.01, cheat-10.6, pack, empty -> They robbed the pack.
physical entity.n.01, admire-31.2, social gathering.n.01, empty -> They adored the party.
For all other dimensions, the DRL stories are not found to be significantly different from baseline stories. When further training a pre-trained language model using a reward function instead of the standard cross-entropy loss, there is a non-trivial chance that model updates will degrade any aspect of the model that is not related to goal achievement. Thus, a positive result is one in which DRL-condition stories are never significantly lower than Seq2Seq-condition stories. This shows that we are able to get to the goal state without any significant degradation in other aspects of story generation.
Conclusions
Language model-based story and plot generation systems produce stories that lack direction. Our reward shaping technique learns a policy that generates stories that are probabilistically comparable with the training corpus while also reaching a pre-specified goal ∼93% of the time. Furthermore, the reward-shaping technique improves perplexity when generated plots are compared to the testing corpus. However, in plot generation, the comparison to an existing corpus is not the most significant metric because novel plots may also be good. A human subject study showed that the reward shaping technique significantly improves the plausible ordering of events and the likelihood of producing a sequence of events that is perceived to be a single, coherent plot. We thus demonstrated for the first time that control over neural plot generation can be achieved in the form of providing a goal that indicates how a plot should end.
| 4,284 |
1809.10736
|
2892472836
|
Open story generation is the problem of automatically creating a story for any domain without retraining. Neural language models can be trained on large corpora across many domains and then used to generate stories. However, stories generated via language models tend to lack direction and coherence. We introduce a policy gradient reinforcement learning approach to open story generation that learns to achieve a given narrative goal state. In this work, the goal is for a story to end with a specific type of event, given in advance. However, a reward based on achieving the given goal is too sparse for effective learning. We use reward shaping to provide the reinforcement learner with a partial reward at every step. We show that our technique can train a model that generates a story that reaches the goal 94% of the time and reduces model perplexity. A human subject evaluation shows that stories generated by our technique are perceived to have significantly higher plausible event ordering and plot coherence over a baseline language modeling technique without perceived degradation of overall quality, enjoyability, or local causality.
|
Reinforcement learning (RL) addresses some of the issues of preserving coherence for text generation when sampling from a neural language model. Additionally, it provides the ability to specify a goal. Reinforcement learning @cite_10 is a technique that is used to solve a Markov decision process (MDP). An MDP is a tuple @math where @math is the set of possible world states, @math is the set of possible actions, @math is a transition function @math , @math is a reward function @math , and @math is a discount factor @math . The result of reinforcement learning is a policy @math , which defines which actions should be taken in each state in order to maximize the expected future reward. The policy gradient learning approach to reinforcement learning directly optimizes the parameters of a policy model, which is represented as a neural network. One model-free policy gradient approach, REINFORCE @cite_16 , learns a policy by sampling from the current policy and backpropagating any reward received through the weights of the policy model.
|
{
"abstract": [
"This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning."
],
"cite_N": [
"@cite_16",
"@cite_10"
],
"mid": [
"2119717200",
"2121863487"
]
}
|
Controllable Neural Story Plot Generation via Reward Shaping
|
Automated plot generation is the problem of creating a sequence of main plot points for a story in a given domain and with a set of specifications. Many prior approaches to plot generation relied on planning [Lebowitz, 1987; Gervás et al., 2005; Porteous and Cavazza, 2009; Riedl and Young, 2010]. In many cases, these plot generators are provided with a goal, outcome state, or other guiding knowledge to ensure that the resulting story is coherent. However, these approaches also required extensive domain knowledge engineering.
Machine learning approaches to automated plot generation can learn storytelling and domain knowledge from a corpus of existing stories or plot summaries. To date, most existing neural network-based story and plot generation systems lack the ability to receive guidance from the user to achieve a specific goal. For example, one might want a system to create a story that ends in two characters getting married. Neural language modeling-based story generation approaches in particular [Roemmele and Gordon, 2015; Khalifa et al., 2017; Gehring et al., 2017; Martin et al., 2018] are prone to generating stories with little aim since each sentence, event, word, or letter is generated by sampling from a probability distribution. By themselves, large neural language models have been shown to work well with a variety of short-term tasks, such as understanding short children's stories [Radford et al., 2019]. However, while recurrent neural networks (RNNs) using LSTM or GRU cells can theoretically maintain long-term context in their hidden layers, in practice RNNs only use a relatively small part of the history of tokens [Khandelwal et al., 2018]. Consequently, stories or plots generated by RNNs tend to lose coherence as the generation continues.
One way to address both the control and the coherence issues in story and plot generation is to use reinforcement learning (RL). By providing a reward each time a goal is achieved, a RL agent learns a policy that maximizes the future expected reward. For plot generation, we seek a means to learn a policy model that produces output similar to plots found in the training corpus and also moves the plot along from a start state s 0 towards a given goal s g . The system should be able to do this even if there is no comparable example in the training corpus where the plot starts in s 0 and ends in s g .
Our primary contribution is a reward-shaping technique that reinforces weights in a neural language model which guide the generation of plot points towards a given goal. Reward shaping is the automatic construction of approximate, intermediate rewards by analyzing a task domain [Ng et al., 1999]. We evaluate our technique in two ways. First, we compare our reward-shaping technique to the goal achievement rate and perplexity of a standard language modeling technique. Second, we conduct a human subject study to compare subjective ratings of the output of our system against a conventional language modeling baseline. We show that our technique improves the perception of plausible event ordering and plot coherence over a baseline story generator, in addition to performing computationally better than the baseline.
Reinforcement Learning for Plot Generation
We model story generation as a planning problem: find a sequence of events that transitions the state of the world into one in which the desired goal holds. In the case of this work, the goal is that a given verb (e.g., marry, punish, rescue) occurs in the final event of the story. While simplistic, it highlights the challenge of control in story generation.
Specifically, we use reinforcement learning to plan out the events of a story and use policy gradients to learn a policy model. We start by training a language model on a corpus of story plots. A language model P (x n |x n−1 ...x n−k ; θ) gives a distribution over the possible tokens x n that are likely to come next given a history of tokens x n−1 ...x n−k and the parameters of a model θ. This language model is a first approximation of the policy model. Generating a plot by iteratively sampling from this language model, however, provides no guarantee that the plot will arrive at a desired goal except by coincidence. We use REINFORCE [Williams, 1992] to specialize the language model to keep the local coherence of tokens initially learned and also to prefer selecting tokens that move the plot toward a given goal.
If reward is only provided when a generated plot achieves the given goal, then the rewards will be very sparse. Policy gradient learning requires a dense reward signal to provide feedback after every step. As such, our primary contribution is a reward-shaping technique where the original training corpus is automatically analyzed to construct a dense reward function that guides the plot generator toward the given goal.
Initial Language Model
Martin et al. [2018] demonstrated that the predictive accuracy of a plot generator could be improved by switching from natural language sentences to an abstraction called an event. An event is a tuple e = ⟨wn(s), vn(v), wn(o), wn(m)⟩, where v is the verb of the sentence, s is the subject of the verb, o is the object of the verb, and m is a propositional object, indirect object, causal complement, or any other significant noun. The parameters o and m may take the special value of empty to denote there is no object of the verb or any additional sentence information, respectively. We use the same event representation in this work. As in Martin et al., we stem all words and then apply the following functions. The function wn(·) gives the WordNet [Miller, 1995] Synset of the argument two levels up in the hypernym tree (i.e., the grandparent Synset). The function vn(·) gives the VerbNet [Schuler and Kipper-Schuler, 2005] class of the argument. See Figure for an example of an "eventified" sentence. Where possible, we split sentences into multiple events, which creates a potential one-to-many relationship between a sentence and the event(s) it produces. Once produced, events are used sequentially.
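As a rough sketch, the wn(·) abstraction and event construction could look as follows with NLTK's WordNet interface; the first-sense choice and the stubbed vn_class lookup are simplifying assumptions, not the authors' exact pipeline:

```python
from nltk.corpus import wordnet as wn  # assumes NLTK + WordNet data installed

def wn_grandparent(word, pos=wn.NOUN):
    """wn(.): the Synset two levels up the hypernym tree."""
    synsets = wn.synsets(word, pos=pos)
    if not synsets:
        return word            # fall back to the surface form
    s = synsets[0]             # naive first-sense disambiguation
    for _ in range(2):
        parents = s.hypernyms()
        if not parents:
            break
        s = parents[0]
    return s.name()            # e.g. 'physical_entity.n.01'

def eventify(s, v, o=None, m=None, vn_class=lambda verb: verb):
    """Build <wn(s), vn(v), wn(o), wn(m)>; empty slots become 'empty'."""
    slot = lambda w: wn_grandparent(w) if w else "empty"
    return (slot(s), vn_class(v), slot(o), slot(m))
```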
In this paper, we use an encoder-decoder network [Sutskever et al., 2014] as our starting language model and our baseline. Encoder-decoder networks can be trained to generate sequences of text for dialogue or story generation by pairing one or more sentences in a corpus with the successive sentence and learning a set of weights that captures the relationship between the sentences. Our language model is thus P(e_{i+1} | e_i; θ), where e_i = ⟨s_i, v_i, o_i, m_i⟩.
Policy Gradient Descent
We seek a policy model θ such that P (e i+1 |e i ; θ) matches the distribution over events in the corpus while also increasing the likelihood of reaching a given goal event in the future. For each input event e i in the corpus, an action involves choosing the most probable next event e i+1 from the probability distribution of the language model. The reward is calculated by determining how far the event e i+1 is from our given goal event. The final gradient used for updating the parameters of the network and shifting the distribution of the language model is calculated as follows:
∇_θ J(θ) = R(v(e_{i+1})) ∇_θ log P(e_{i+1} | e_i; θ)    (1)
where e i+1 and R(v(e i+1 )) are, respectively, the event chosen at timestep i + 1 and the reward for the verb in that event. The policy gradient technique thus gives an advantage to highly-rewarding events by facilitating a larger step towards the likelihood of predicting these events in the future, over events which have a lower reward. In the next section we describe how R(v(e i+1 )) is computed.
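For concreteness, a single REINFORCE-style update implementing Equation 1 might look like the sketch below; policy_model, reward_fn, and the event encoding are hypothetical placeholders rather than the authors' released code:

```python
import torch

def reinforce_step(policy_model, optimizer, event_i, reward_fn):
    # Score candidate next events given the current event e_i.
    logits = policy_model(event_i)
    dist = torch.distributions.Categorical(logits=logits)
    # Sample e_{i+1}; a greedy rollout would use logits.argmax(-1) instead.
    event_next = dist.sample()
    reward = reward_fn(event_next)              # R(v(e_{i+1})), Eq. (4)
    loss = -reward * dist.log_prob(event_next)  # -R * log P(e_{i+1} | e_i)
    optimizer.zero_grad()
    loss.backward()                             # gradient follows Eq. (1)
    optimizer.step()
    return event_next
```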
Reward Shaping
For the purpose of controllability in plot generation, we wish to reward the network whenever it generates an event that makes it more likely to achieve the given goal. For the purposes of this paper, the goal is a given VerbNet class that we wish to see at the end of a plot. Reward shaping [Ng et al., 1999] is a technique whereby sparse rewards, such as rewarding the agent only when a given goal is reached, are replaced with a dense reward signal that provides rewards at intermediate states in the exploration leading to the goal.
To produce a smooth, dense reward function, we make the observation that certain events (and thus certain verbs) are more likely to appear closer to the goal than others in story plots. For example, suppose our goal is to generate a plot in which one character admires another (admire-31.2 is the VerbNet class that encapsulates the concept of falling in love). Events that contain the verb meet are more likely to appear near events that contain admire, whereas events that contain the verb leave are likely to appear farther away.
To construct the reward function, we pre-process the stories in our training corpus and calculate two key components: (a) the distance of each verb from the target/goal verb, and (b) the frequency of the verbs found in existing stories.
Distance
The distance component of the reward function measures how close the verb v of an event is to the target/goal verb g, which is used to reward the model when it produces events with verbs that are closer to the target verb. The formula for estimating this metric for a verb v is:
r_1(v) = log Σ_{s ∈ S_{v,g}} ( l_s − d_s(v, g) )    (2)
where S_{v,g} is the subset of stories in the corpus that contain v prior to the goal verb g, l_s is the length of story s, and d_s(v, g) is the number of events between the event containing v and the event containing g in story s (i.e., the distance within a story). Subtracting this distance from the story length produces a larger reward when events containing v and g are closer together.
Story-Verb Frequency
The story-verb frequency component rewards the model based on how likely any verb is to occur in a story before the target verb. It estimates how often a particular verb v appears before the target verb g throughout the stories in the corpus. This discourages the model from outputting events with verbs that rarely occur in stories before the target verb. The following equation is used for calculating the story-verb frequency metric:
r_2(v) = log ( k_{v,g} / N_v )    (3)
where N_v is the count of verb v in the corpus, and k_{v,g} is the number of times v appears before goal verb g in any story.
Final Reward
The final reward for a verb-and thus event as a whole-is calculated as the product of the distance and frequency metrics. The rewards are normalized across all the verbs in the corpus. The final reward is:
R(v) = α × r_1(v) × r_2(v)    (4)
where α is the normalization constant. When combined, the two r metrics favor verbs that (1) appear close to the target and (2) appear before the target in stories frequently enough to be considered significant.
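Equations 2-4 could be computed from a corpus of eventified stories (here reduced to lists of VerbNet-class verbs) along the following lines; this is an illustrative sketch under those assumptions, not the authors' preprocessing code:

```python
import math
from collections import defaultdict

def shaped_rewards(stories, goal, alpha=1.0):
    """Compute R(v) (Eq. 4) for every verb seen before `goal`."""
    dist_sum = defaultdict(float)  # per-verb inner sum of Eq. (2)
    k = defaultdict(int)           # k_{v,g}: times v precedes the goal
    n = defaultdict(int)           # N_v: total corpus count of v
    for story in stories:
        for v in story:
            n[v] += 1
        if goal not in story:
            continue
        g_idx = story.index(goal)
        for i, v in enumerate(story[:g_idx]):
            dist_sum[v] += len(story) - (g_idx - i)  # l_s - d_s(v, g)
            k[v] += 1
    # alpha stands in for the normalization applied across all verbs.
    return {v: alpha * math.log(dist_sum[v]) * math.log(k[v] / n[v])
            for v in k}
```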
Verb Clustering
In order to discourage the model from jumping to the target quickly, we cluster all verbs based on Equation 4 using the Jenks Natural Breaks optimization technique [Jenks and Caspall, 1971]. We restrict the vocabulary of v_out (the model's output verb) to the set of verbs in the (c+1)-th cluster, where c is the index of the cluster that verb v_in (the model's input verb) belongs to. The rest of the event is generated by sampling from the full distribution. The intuition is that by restricting the vocabulary of the output verb, the gradient update in Equation 1 takes greater steps toward verbs that are more likely to occur next (i.e., in the next cluster) in a story headed toward a given goal. If the sampled verb has a low probability, the step will be smaller than if the verb is highly probable according to the language model.
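The clustering and vocabulary restriction could be sketched as below; this assumes the third-party jenkspy package for Jenks Natural Breaks, and all names are illustrative:

```python
import numpy as np
import jenkspy  # third-party: pip install jenkspy (assumed available)

def cluster_verbs(rewards, n_clusters):
    """Bucket verbs by their shaped reward R(v) via Jenks breaks."""
    values = sorted(rewards.values())
    breaks = jenkspy.jenks_breaks(values, n_classes=n_clusters)
    def cluster_of(value):
        c = int(np.searchsorted(breaks, value, side="right")) - 1
        return min(max(c, 0), n_clusters - 1)
    return {v: cluster_of(r) for v, r in rewards.items()}

def allowed_output_verbs(clusters, v_in):
    """Restrict v_out to the (c+1)-th cluster, c = cluster of v_in."""
    target = clusters[v_in] + 1
    return [v for v, c in clusters.items() if c == target]
```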
Automated Experiments
We ran experiments to measure three properties of our model:
(1) how often our model can produce a plot (a sequence of events) that contains a desired target verb; (2) the perplexity of our model; and (3) the average length of the stories. Perplexity is a measure of the predictive ability of a model; in particular, how "surprised" the model is by occurrences in a corpus. We compare our results to those of a baseline event2event story generation model from Martin et al. [2018].
Corpus Preparation
We use the CMU movie summary corpus [Bamman et al., 2013]. However, this corpus proves to be too diverse; there is high variance between stories, which dilutes event patterns. We used Latent Dirichlet Allocation to cluster the stories from the corpus into 100 "genres". We selected a cluster that appeared to contain soap-opera-like plots. The stories were "eventified" (turned into event sequences, as explained in the Initial Language Model section of the paper).
We chose admire-31.2 and marry-36.2 as two target verbs because those VerbNet classes capture the sentiments of "falling in love" and "getting married", which are appropriate for our sub-corpus. The romance corpus was split into 90% training and 10% testing data. We used consecutive events from the eventified corpus as source and target data, respectively, for training the sequence-to-sequence network.
Model Training
For our experiments we trained the encoder-decoder network using Tensorflow. Both the encoder and the decoder were composed of LSTM units with a hidden layer size of 1024. The network was pre-trained for a total of 200 epochs using minibatch gradient descent with a batch size of 64.
We created three models:
• Seq2Seq: This pre-trained model is identical to the "generalized multiple sequential event2event" model in Martin et al. [2018]. This is our baseline.
• DRL-clustered: Starting with the weights from the Seq2Seq, we continued training using the policy gradient technique and the reward function, along with the clustering and vocabulary restriction in the verb position described in the previous section, while keeping all network parameters constant.
• DRL-unrestricted: This is the same as DRL-clustered but without vocabulary restriction while sampling the verb for the next event during training ( § Verb Clustering).
The DRL-clustered and DRL-unrestricted models are each trained for a further 200 epochs beyond the baseline's pre-training.
Experimental Setup
With each event in our held-out dataset as a seed event, we generated stories with our baseline Seq2Seq, DRL-clustered, and DRL-unrestricted models. For all models, the story generation process was terminated when: (1) the model outputs an event with the target verb; (2) the model outputs an end-of-story token; or (3) the length of the story reaches 15 lines. Goal achievement rate was calculated by measuring the percentage of these stories that ended in the target verb (admire or marry). Additionally, we average generated story lengths to compare to the average story length in our test data where the goal event occurs (setting the length to 15 if it does not occur). Finally, we measure the perplexity for all the models, with the exception of the testing data since it is not a model.
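This rollout protocol amounts to a simple loop; model.next_event is a hypothetical sampling interface standing in for the trained network:

```python
END_TOKEN = "<end-of-story>"  # assumed end-of-story marker

def generate_story(model, seed_event, goal_verb, max_len=15):
    """Generate events until one of the three stopping conditions fires."""
    story, event = [seed_event], seed_event
    while len(story) < max_len:                # condition (3): 15-event cap
        event = model.next_event(event)        # sample e_{i+1} ~ P(. | e_i)
        if event == END_TOKEN:                 # condition (2)
            break
        story.append(event)
        if event[1] == goal_verb:              # condition (1): verb slot of <s, v, o, m>
            break
    return story
```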
Results and Discussion
Results are summarized in Table 1. Only 22.47% of the stories in the testing set, on average, end in our desired goals, illustrating how rare the chosen goals were in the corpus. The DRL-clustered model generated the given goals on average 93.82% of the time, compared to 37.72% on average for the baseline Seq2Seq and 19.935% for the DRL-unrestricted model. This shows that our policy gradient approach can direct the plot to a pre-specified ending and that our clustering method is integral to doing so. Removing verb clustering from our reward calculation to create the DRL-unrestricted model harms goal achievement; the system rarely sees a verb in the next cluster so the reward is frequently low, making distribution shaping towards the goal difficult.

We use perplexity as a metric to estimate how accurate the learned distribution is for predicting unseen data. We observe that perplexity values drop substantially for the DRL models (7.61 for DRL-clustered and 5.73 for DRL-unrestricted with goal admire; 7.05 for DRL-clustered and 9.78 for DRL-unrestricted with goal marry) when compared with the Seq2Seq baseline (48.06). This can be attributed to the fact that our reward function is based on the distribution of verbs in the story corpus, refining the model's ability to recreate the corpus distribution. Because DRL-unrestricted's rewards are based on subsequent verbs in the corpus instead of verb clusters, it sometimes results in a lower perplexity, but at the expense of not learning how to achieve the goal often.
The average story length is an important metric because it is trivial to train a language model that reaches the goal event in a single step. DRL models do not have to produce stories the same length as those in the testing corpus, as long as the length is not extremely short (leaping to conclusions) or too long (the story generator is timing out). The baseline Seq2Seq model creates stories that are about the same length as the testing corpus stories, showing that the model is mostly mimicking the behavior of the corpus it was trained on. The DRL-unrestricted model produces similar behavior, due to the absence of clustering or vocabulary restriction to prevent the story from rambling. However, the DRL-clustered model creates slightly shorter stories, showing that it is reaching the goal quicker, while not jumping immediately to the goal.
Human Evaluation
The best practice in the evaluation of story/plot generation is human subject evaluation. However, the use of the event representation makes human subject evaluation difficult since events are not easily readable. Martin et al. [2018] used a second neural network to translate events into human-readable sentences, but their technique did not have sufficient accuracy to use in a human evaluation. The use of a second network also makes it impossible to isolate the generation of events from the generation of the final natural language sentence in terms of human perception. To overcome this challenge, we have developed an evaluation protocol that allows us to directly evaluate plots with human judges. Specifically, we recruited and taught individuals to convert event sequences into natural language before giving generated plots to human judges. By having concise, grammatically- and semantically-guaranteed human translations of generated plot events, we know that the human judges are evaluating the raw events and not the creative aspects of the way sentences are written.
Corpus Creation
We collected 5 stories generated by our DRL-clustered system, 5 generated from our Seq2Seq baseline, and 3 from the eventified testing corpus. The stories were selected by randomly picking start events (keeping the same start events across conditions) until we had stories that were 5-10 events long. By keeping a story length limit, we guarantee having DRL stories that reached the goal. The testing corpus was mainly used to verify the translation process's accuracy since we do not expect our models to reach this upper bound; thus only three stories were selected. We trained 26 unbiased people to "translate" events into short natural language sentences.
Each translator was instructed that their "primary goal is to produce faithful translations of stories from an abstract 'event' representation into a natural language sentence." The instructions then continued with: (1) a refresher on parts of speech, (2) the format of the event representation, (3) examples of events and their corresponding sentences, (4) resources on WordNet and VerbNet with details on how to use both, and (5) additional general guidelines and unusual cases they might encounter (e.g., how to handle empty parameters in events). The translators were further instructed to not add extraneous details, swap the order of words in the event, nor choose a better verb even if the plot would be improved.
Pairs of people translated plots individually and then came together to reach a consensus on a final version of the plot. That is, human translators reversed the eventification process to create a human-readable sentence from an event. Table 2 shows an example of an entire eventified story and the corresponding human translations.
Experimental Setup
We recruited 175 participants on Amazon Mechanical Turk. Each participant was compensated $10 for completing the questionnaire. Participants were given one of the translated plots at a time, rating each of a set of statements on a 5-point Likert scale for how much they agreed (Strongly Agree, Somewhat Agree, Neither Agree nor Disagree, Somewhat Disagree, or Strongly Disagree).
Through the equal interval assumption, we turn Likert values into numerals 1 (Strongly Disagree) to 5 (Strongly Agree).
The first seven questions are taken from a tool designed specifically for the evaluation of computer-generated stories which has been validated against human judgments [Purdy et al., 2018]. Each participant answered the questions for all three story conditions. The question about the story being a soap opera was added to determine how the performance of the DRL story generator affects reader perceptions of the theme, since the system was trained on soap-opera-like plots. The single plot question was added to determine if our DRL model was maintaining the plot better than the Seq2Seq model. The questions about correct grammar, interesting language, and avoiding repetition are irrelevant to our evaluation since the natural language was produced by the human translators but were kept for consistency with Purdy et al. [2018].
Finally, participants answered two additional prompts that required short answer responses: (1) Please give a summary of the story above in your own words; and (2) For THIS STORY, please select which of the previous attributes (e.g. enjoyable, plausible, coherent) you found to be the MOST IMPORTANT and explain WHY. The answers to these questions were not evaluated, but if any participants failed to answer the short answer questions, their data was removed from the results. We removed 25 participants' data in total.
Results and Discussion
We performed one-way repeated-measures ANOVA on the data since each participant rated a story from each category, using Tukey HSD as the post-test. We verified that the data is normal, and the variances are not significantly different. The data was collected independently. Average scores and their significance across conditions can be seen in Figure 2.
Questions on interesting language and avoiding repetition are not found to be significant across all three conditions. Since these are not related to event generation model performance, this provides an indication that the translations are fair across all conditions. Grammar was significantly different between testing corpus stories and DRL-generated stories (p < 0.05), which was unanticipated. Upon further analysis, both the baseline Seq2Seq model and the DRL model generated empty values for the object and modifier at higher rates than found in the corpus. It is harder to make complete, grammatical sentences with only two tokens in an event, namely when a verb is transitive and requires at least one object. Beyond more expected results, such as having a better plausible order, the testing corpus stories were also significantly more likely to be perceived as being soap operas (p < 0.01), the genre from which the corpus stories were drawn. It is unclear why this would be the case, except that both the Seq2Seq and DRL models could be failing to learn some aspect of the genre despite being trained on the same corpus. It is also worth noting that randomly selecting 5 generated stories does not guarantee that they will be representative of their respective models.
Stories in the DRL condition were significantly perceived to have more plausible orderings than those in the baseline Seq2Seq condition (p < 0.05) and were significantly more likely to be perceived as following a single plot (p < 0.05). Since stories generated by the baseline Seq2Seq model begin to lose coherence as the story progresses, these results confirm our hypothesis that the DRL's use of reward shaping keeps the plot on track. The DRL is also perceived as generating stories with more local causality than the Seq2Seq, although the results were not statistically significant.

Table 2: An example eventified story from the DRL-clustered system paired with the translation written by a pair of participants.
Event (subject, verb, object, modifier) -> Output
... -> The gathering dispersed to Hawaii.
gathering.n.01, characterize-29.2-1-1, time interval.n.01, empty -> The community remembered their trip.
physical entity.n.01, cheat-10.6, pack, empty -> They robbed the pack.
physical entity.n.01, admire-31.2, social gathering.n.01, empty -> They adored the party.
For all other dimensions, the DRL stories are not found to be significantly different from baseline stories. When further training a pre-trained language model using a reward function instead of the standard cross-entropy loss, there is a non-trivial chance that model updates will degrade any aspect of the model that is not related to goal achievement. Thus, a positive result is one in which DRL-condition stories are never significantly lower than Seq2Seq-condition stories. This shows that we are able to get to the goal state without any significant degradation in other aspects of story generation.
Conclusions
Language model-based story and plot generation systems produce stories that lack direction. Our reward shaping technique learns a policy that generates stories that are probabilistically comparable with the training corpus while also reaching a pre-specified goal ∼93% of the time. Furthermore, the reward-shaping technique improves perplexity when generated plots are compared to the testing corpus. However, in plot generation, the comparison to an existing corpus is not the most significant metric because novel plots may also be good. A human subject study showed that the reward shaping technique significantly improves the plausible ordering of events and the likelihood of producing a sequence of events that is perceived to be a single, coherent plot. We thus demonstrated for the first time that control over neural plot generation can be achieved in the form of providing a goal that indicates how a plot should end.
| 4,284 |
1809.10252
|
2951622944
|
Sampling-based Motion Planners (SMPs) have become increasingly popular as they provide collision-free path solutions regardless of obstacle geometry in a given environment. However, their computational complexity increases significantly with the dimensionality of the motion planning problem. Adaptive sampling is one of the ways to speed up SMPs by sampling a particular region of a configuration space that is more likely to contain an optimal path solution. Although there are a wide variety of algorithms for adaptive sampling, they rely on hand-crafted heuristics; furthermore, their performance decreases significantly in high-dimensional spaces. In this paper, we present a neural network-based adaptive sampler for motion planning called Deep Sampling-based Motion Planner (DeepSMP). DeepSMP generates samples for SMPs and enhances their overall speed significantly while exhibiting efficient scalability to higher-dimensional problems. DeepSMP's neural architecture comprises a Contractive AutoEncoder which encodes given workspaces directly from a raw point cloud data, and a Dropout-based stochastic deep feedforward neural network which takes the workspace encoding, start and goal configuration, and iteratively generates feasible samples for SMPs to compute end-to-end collision-free optimal paths. DeepSMP is not only consistently computationally efficient in all tested environments but has also shown remarkable generalization to completely unseen environments. We evaluate DeepSMP on multiple planning problems including planning of a point-mass robot, rigid-body, 6-link robotic manipulator in various 2D and 3D environments. The results show that on average our method is at least 7 times faster in point-mass and rigid-body case and about 28 times faster in 6-link robot case than the existing state-of-the-art.
|
RRT* @cite_6 extends RRTs to guarantee asymptotic optimality by incrementally rewiring the RRT graph connections such that the shortest path is asymptotically guaranteed @cite_6 . However, to determine an @math -near optimal path in @math dimensions, roughly @math samples are required, which makes RRT* no better than grid search methods @cite_9 . Likewise, experiments in @cite_0 @cite_3 also confirmed that RRT* exhibits slow convergence rates to the optimal path solution in higher-dimensional spaces. The following sections discuss various existing biased adaptive sampling methods that speed up the convergence of SMPs to an optimal or near-optimal path solution.
|
{
"abstract": [
"Rapidly-exploring Random Tree star (RRT*) is a recently proposed extension of Rapidly-exploring Random Tree (RRT) algorithm that provides a collision-free, asymptotically optimal path regardless of obstacles geometry in a given environment. However, one of the limitation in the RRT* algorithm is slow convergence to optimal path solution. As a result it consumes high memory as well as time due to the large number of iterations utilised in achieving optimal path solution. To overcome these limitations, we propose the potential function based-RRT* that incorporates the artificial potential field algorithm in RRT*. The proposed algorithm allows a considerable decrease in the number of iterations and thus leads to more efficient memory utilization and an accelerated convergence rate. In order to illustrate the usefulness of the proposed algorithm in terms of space execution and convergence rate, this paper presents rigorous simulation based comparisons between the proposed techniques and RRT* under different environmental conditions. Moreover, both algorithms are also tested and compared under non-holonomic differential constraints.",
"Asymptotically-optimal sampling-based motion planners, like RRT*, perform vast amounts of collision checking, and are hence rather slow to converge in complex problems where collision checking is relatively expensive. This paper presents two novel motion planners, Lazy-PRM* and Lazy-RRG*, that eliminate the majority of collision checks using a lazy strategy. They are sampling-based, any-time, and asymptotically complete algorithms that grow a network of feasible vertices connected by edges. Edges are not immediately checked for collision, but rather are checked only when a better path to the goal is found. This strategy avoids checking the vast majority of edges that have no chance of being on an optimal path. Experiments show that the new methods converge toward the optimum substantially faster than existing planners on rigid body path planning and robot manipulation problems.",
"Rapidly-exploring random trees (RRTs) are pop- ular in motion planning because they find solutions efficiently to single-query problems. Optimal RRTs (RRT*s) extend RRTs to the problem of finding the optimal solution, but in doing so asymptotically find the optimal path from the initial state to every state in the planning domain. This behaviour is not only inefficient but also inconsistent with their single-query nature. For problems seeking to minimize path length, the subset of states that can improve a solution can be described by a prolate hyperspheroid. We show that unless this subset is sam- pled directly, the probability of improving a solution becomes arbitrarily small in large worlds or high state dimensions. In this paper, we present an exact method to focus the search by directly sampling this subset. The advantages of the presented sampling technique are demonstrated with a new algorithm, Informed RRT*. This method retains the same probabilistic guarantees on complete- ness and optimality as RRT* while improving the convergence rate and final solution quality. We present the algorithm as a simple modification to RRT* that could be further extended by more advanced path-planning algorithms. We show exper- imentally that it outperforms RRT* in rate of convergence, final solution cost, and ability to find difficult passages while demonstrating less dependence on the state dimension and range of the planning problem.",
""
],
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_3",
"@cite_6"
],
"mid": [
"2211552581",
"1485487406",
"1976930960",
""
]
}
|
Deeply Informed Neural Sampling for Robot Motion Planning
|
Sampling-based Motion Planners (SMPs) have emerged as a promising framework for solving high-dimensional, constrained motion planning problems [1] [2]. SMPs ensure probabilistic completeness, which implies that the probability of finding a feasible path solution, if one exists, approaches one as the number of randomly drawn samples from an obstacle-free space increases to infinity [2]. However, despite their ability to compute motion plans irrespective of the obstacle geometry, these methods exhibit slow convergence to path solutions due to their reliance on the extensive exploration of a given obstacle-free configuration space [3] [4]. Recent research shows that biasing a sample distribution towards the region with high probability of finding a path solution can considerably enhance the performance of classical single-query SMPs such as RRT and RRT* [3]. To the best of our knowledge, there does not exist any effective and reliable solution that uses the knowledge from past planning problems to bias the sampling distribution.

In this paper, we propose a neural network-based adaptive sampler that generates samples in particular regions of a configuration space where an optimal path solution is likely to exist. Our method consists of two neural models: an obstacle-space encoder and a samples generator. We use a Contractive AutoEncoder (CAE) [5] to encode an obstacle space into an invariant, robust feature space. The samples generator comprises a Dropout-based [6] stochastic Deep Neural Network (DNN) that takes the obstacle-space encoding and the start and goal configurations as input, and generates samples distributed over the region of the configuration space containing the path solutions. We evaluate our method on various complex motion planning tasks such as planning of a rigid body (piano-mover problem) and a 6 degree-of-freedom (DOF) robotic arm (UR6), and planning through narrow passages. We also benchmark our method against existing biased-sampling-based state-of-the-art SMPs including Informed-RRT* [7] and Batch Informed Trees (BIT*) [8]. The results show that our algorithm generates samples that enable unbiased SMPs such as RRT* to compute near-optimal paths in considerably less computational time than BIT* and Informed-RRT*.
B. Learning-based Search Methods
Many approaches exist that use learning to computationally improve classical SMPs. A recent method called the Lightning Framework [13] stored paths in a lookup table and used a learned heuristic to write new paths as well as to read and repair old paths. Another similar framework by Coleman et al. [14] is an experience-based strategy that caches experiences in a graph instead of individual trajectories. Although these approaches exhibit superior performance in higher-dimensional spaces when compared to conventional planning methods, lookup tables are memory-inefficient and incapable of generalizing well to new planning problems. Zucker et al. [15] proposed a reinforcement learning-based method to bias samples in discretized workspaces. However, reinforcement learning-based approaches are known for their slow convergence as they require a large number of interactive experiences.
III. PROBLEM DEFINITION
This section presents the notations we will be using in this paper, along with the definitions of fundamental motion planning problems addressed by our work.
Let S be a list of finite length N ∈ N; then S_i maps a given index i ∈ N to the element of S at the i-th index. For the algorithms described in our paper, S_0 and S_T correspond to the initial and last elements of a list, respectively. Let a given state space be denoted as X ⊂ R^d, where d ∈ N_{≥2} is the dimension of the state space. The collision and collision-free state spaces are denoted as X_obs ⊂ X and X_free = X \ X_obs, respectively. Let the initial state and goal region be represented as x_init ∈ X_free and X_goal ⊂ X_free, respectively. Let a trajectory be denoted as a non-empty finite-length list σ : [0, T] → X. For a given path planning problem, a trajectory σ is said to be feasible if it connects x_init and x ∈ X_goal, i.e., σ_0 = x_init and σ_T ∈ X_goal, and the path formed by connecting all consecutive states in σ lies entirely in the obstacle-free space X_free.

Problem 1 (Feasible Path Planning). Given a triplet {X, X_free, X_obs}, an initial state x_init, and a goal region X_goal ⊂ X_free, find a path σ : [0, T] → X_free such that σ_0 = x_init and σ_T ∈ X_goal.
Let a cost function c(·) compute the cost of a given path σ as the sum of Euclidean distances between all consecutive states in σ. Let the set of all feasible path solutions to a given planning problem be denoted as Π. The optimality problem of motion planning is then to find the optimal feasible path solution σ* ∈ Π that has minimum cost among all feasible path solutions, i.e.,
Problem 2 (Optimal Path Planning): Assuming that multiple solutions to Problem 1 exist, find a path σ* ∈ Π such that c(σ*) = min_{σ ∈ Π} c(σ).
Let Ω ⊂ X_free be a potential region containing an optimal/near-optimal path solution. The problem of adaptive sampling, also known as biased sampling, is to generate collision-free samples x ∈ Ω such that SMPs compute the optimal path σ* in the least possible time t ∈ R. The problem of adaptive sampling is formalized as follows.
Problem 3 (Adaptive Sampling): Given a planning problem {x_init, X_goal, X}, generate samples x ∈ Ω, where Ω ⊂ X_free, such that sampling-based motion planning methods compute the optimal path solution σ* in the least possible time t ∈ R.
IV. INFORMED NEURAL SAMPLER
This section presents our novel informed neural sampling algorithm, DeepSMP. It comprises two neural modules. The first module is an autoencoder which learns an invariant and robust feature space in which to embed point cloud data from the obstacle space. The second module is a stochastic DNN which takes the obstacle encoding together with the start and goal configurations and incrementally generates samples for SMPs during online execution. Note that any SMP can utilize these informed samples for rapid convergence to the optimal solution, and that the method works in unseen environments via the obstacle-space encoding. The following sections describe both neural modules, the online sample generation heuristic called DeepSMP, dataset collection, and hyper-parameter initialization.
A. Obstacle Encoding
A Contractive AutoEncoder (CAE) is used to learn a latent-space embedding Z of raw point cloud data x_obs ⊂ X_obs (see Fig. 1(a)). The encoder and decoder functions of the CAE are denoted as f(x_obs; θ^e) and g(f(x_obs); θ^d), respectively, where θ^e and θ^d are the parameters of the corresponding approximating functions. The CAE is trained through unsupervised learning using the following objective function:
L_{CAE}(\theta^e, \theta^d) = \frac{1}{N_{obs}} \sum_{x \in D_{obs}} \|x - g(f(x))\|^2 + \lambda \sum_{ij} (\theta^e_{ij})^2    (1)
where \|x - g(f(x))\|^2 is the reconstruction loss and \lambda \sum_{ij} (\theta^e_{ij})^2 is a regularization term with coefficient λ. Furthermore, D_obs is a dataset of point clouds x_obs ⊂ X_obs from N_obs ∈ ℕ different workspaces. The regularization term forces the feature space Z := f(x_obs) to be contractive in the neighborhood of the training data, which results in invariant and robust feature learning [5].
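A minimal PyTorch sketch of this objective might look as follows; `encoder` and `decoder` stand for the modules f and g, `x` is a mini-batch of flattened point clouds, and replacing the 1/N_obs factor with per-batch averaging (as well as summing the regularizer over all encoder parameters) is our assumption, not the authors' stated implementation:

    import torch

    def cae_loss(encoder, decoder, x, lam=1e-3):
        # Reconstruction term ||x - g(f(x))||^2 of Eq. (1), averaged over the batch.
        x_hat = decoder(encoder(x))
        recon = ((x - x_hat) ** 2).sum(dim=-1).mean()
        # Contractive regularizer lambda * sum_ij (theta^e_ij)^2; summing over
        # all encoder parameters is our simplification.
        reg = sum(torch.sum(w ** 2) for w in encoder.parameters())
        return recon + lam * reg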
1) Model Architecture: Since the decoding function g(f(x_obs)) is the inverse of the encoding function f(x_obs), we present the architectural details of the encoding unit only.
The encoding function consists of three fully-connected linear hidden layers followed by an output linear layer. The output from each hidden layer is passed through a Parametric Rectified Linear Unit (PReLU) [16].
For 2D workspaces, the input point cloud is of size 1400 × 2; the three hidden layers transform the input to 512, 256, and 128 units, respectively. The output layer takes the 128 units and transforms them to a latent-space embedding Z of size 28 units. The decoding function takes the latent-space embedding Z and reconstructs the raw point cloud data.
For 3D workspaces, hidden layers 1, 2, and 3 transform the input point cloud of size 1400 × 3 to 786, 512, and 256 hidden units, respectively. Finally, the output layer transforms the 256 units from the preceding hidden layer to a latent space of size 60 units.
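A possible PyTorch realization of the 2D encoder under these layer sizes; flattening the 1400 × 2 point cloud into a 2800-dimensional input vector is our assumption about the input format:

    import torch.nn as nn

    # Sketch of the 2D-workspace encoder: 2800 -> 512 -> 256 -> 128 -> 28.
    # The layer widths follow the text above; the flattened input is assumed.
    encoder_2d = nn.Sequential(
        nn.Linear(1400 * 2, 512), nn.PReLU(),
        nn.Linear(512, 256), nn.PReLU(),
        nn.Linear(256, 128), nn.PReLU(),
        nn.Linear(128, 28),  # latent embedding Z
    )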
B. Deep Sampler
Deep Sampler is a stochastic feedforward deep neural network with parameters θ. It takes the obstacle encoding Z, the robot state x_t at step t, and the goal state x_T, and produces a next state \hat{x}_{t+1} ∈ X_free that takes the robot closer to the goal region (see Fig. 1(b)), i.e.,
\hat{x}_{t+1} = \mathrm{DeepSampler}((x_t, x_T, Z); \theta)    (2)
We use RRT* [2] to produce near-optimal paths to train DeepSMP. The training paths take the form of tuples σ* = {x_0, x_1, · · · , x_T} such that the path formed by connecting all consecutive states in σ* is a feasible solution. The training objective is to minimize the mean squared error (MSE) between the predicted states \hat{x}_{t+1} and the actual states x_{t+1} given by RRT*, i.e.,
L_{MSE}(\theta) = \frac{1}{N_p} \sum_{j}^{\hat{N}} \sum_{i=0}^{T-1} \|\hat{x}_{j,i+1} - x_{j,i+1}\|^2,    (3)
where N_p ∈ ℕ is the total number of paths \hat{N} multiplied by their respective path lengths.
1) Model Architecture: Deep Sampler is a twelve-layer deep neural network in which each hidden layer is a sandwich of a linear layer, PReLU [16], and Dropout(p) [6], with the exception of the last hidden layer, which does not contain Dropout. The twelfth layer is an output layer that takes the hidden units from the preceding layer and transforms them to the desired output size, which equals the dimension of the robot configuration. The configurations of the 2D point-mass robot, 3D point-mass robot, rigid body, and 6 DOF robot have dimensions 2, 3, 3, and 6, respectively. For all presented problems except the 6 DOF robot, the input to Deep Sampler is the concatenation of the obstacle representation Z, the robot's current state x_t, and the goal state x_T. For the 6 DOF robot we assume a single environment; therefore the input to Deep Sampler comprises the current state x_t and the goal state x_T only.
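A minimal PyTorch sketch of such a stochastic sampler; the hidden width of 512 is illustrative, since the text above fixes only the depth (twelve layers), the PReLU activations, the Dropout placement, and the input/output dimensions:

    import torch.nn as nn

    def make_deep_sampler(in_dim, out_dim, hidden=512, p=0.5):
        # Eleven hidden blocks of Linear + PReLU (+ Dropout on all but the last
        # hidden block), followed by a linear output layer: twelve layers total.
        layers, d = [], in_dim
        for i in range(11):
            layers += [nn.Linear(d, hidden), nn.PReLU()]
            if i < 10:
                layers.append(nn.Dropout(p))
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        return nn.Sequential(*layers)

    # 2D point-mass case: Z (28) + current state (2) + goal state (2) -> next state (2).
    sampler_2d = make_deep_sampler(in_dim=28 + 2 + 2, out_dim=2)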
C. Online Execution of DeepSMP
During the online phase, we use the trained obstacle encoder and DeepSampler to generate samples for a given SMP. Fig. 1 shows the flow of information between the encoder f(x_obs) and DeepSampler. Algorithm 1 outlines DeepSMP, which combines our informed neural sampler with any classical SMP such as RRT*.
Algorithm 1: DeepSMP(x_init, x_goal, x_obs)
1   Initialize SMP(x_init, x_goal, X)
2   x_rand ← x_init
3   Z ← f(x_obs)
4   for i ← 0 to n do
5       if i < n_limit then
6           x_rand ← DeepSampler(Z, x_rand, x_goal)
7       else
8           x_rand ← RandomSampler()
9       σ ← SMP(x_rand)
10      if x_rand ∈ X_goal then
11          x_rand ← x_init
12  if σ_T ∈ X_goal then
13      return σ
14  return ∅
Algorithm 1 starts by initializing the given SMP (Line 1). The obstacle encoder f(x_obs) provides an encoding Z of the raw point cloud data from X_obs (Line 3). DeepSMP runs for n ∈ ℕ iterations (Line 4). DeepSampler incrementally generates samples between the given start and goal configurations while i < n_limit (Lines 5-6), where n_limit < n. Upon reaching the goal configuration, DeepSampler is executed again to produce samples from the start configuration to the goal configuration by re-initializing the random sample x_rand to x_init (Lines 10-11). After n_limit iterations, DeepSMP switches to random sampling (Lines 7-8) to preserve the completeness guarantees of the underlying SMP. Note that σ is a feasible path solution returned by the SMP; it is continually optimized for the given cost function c(·) over the n iterations. Finally, after n iterations, a feasible, optimized path solution σ, if one exists, is returned (Lines 12-13).
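The hybrid sampling strategy of Algorithm 1 is easy to express in code. Below is a minimal Python rendering; `encoder`, `deep_sampler`, and the `smp` object with its methods are assumed interfaces around an underlying planner such as RRT*, not the authors' actual API:

    def deep_smp(x_init, x_goal, x_obs, n, n_limit, encoder, deep_sampler, smp):
        smp.initialize(x_init, x_goal)                    # Line 1
        x_rand, z, sigma = x_init, encoder(x_obs), None   # Lines 2-3
        for i in range(n):                                # Line 4
            if i < n_limit:
                x_rand = deep_sampler(z, x_rand, x_goal)  # Lines 5-6 (neural)
            else:
                x_rand = smp.uniform_sample()             # Lines 7-8 (uniform)
            sigma = smp.extend(x_rand)                    # Line 9
            if smp.in_goal(x_rand):                       # Lines 10-11: restart
                x_rand = x_init                           # sampling from the start
        if sigma is not None and smp.in_goal(sigma[-1]):  # Lines 12-13
            return sigma
        return None                                       # Line 14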
D. Data Collection
The data collection consists of creating a random set of workspaces, sampling collision-free start and goal configurations in those workspaces, and generating a path with a classical motion planner for every start and goal pair. The following paragraphs describe the procedures for creating the workspaces, the start and goal pairs, and the near-optimal paths. 1) Workspaces: Many different 2D and 3D workspaces were generated by randomly placing various quadrilateral blocks, without repetition, in operating regions of 40 × 40 and 40 × 40 × 40, respectively. Each random placement of the obstacle blocks led to a different workspace.
2) Start and goal configuration: For each generated workspace, a number of start and goal configurations were sampled randomly from its obstacle-free space. 3) Near-optimal paths: Finally, for each generated start and goal pair within all workspaces, a feasible, near-optimal path was generated using the RRT* motion planner.
The complete dataset comprised 110 different workspaces for the scenarios presented in the results section, i.e., simple 2D (s2D), complex 2D (c2D), complex 3D (c3D), and rigid-body (rigid). The training dataset contained 100 workspaces with 4000 training paths in each workspace. There were two types of test datasets. The first comprised the 100 already-seen workspaces with 200 unseen start and goal configurations in each. The second comprised 10 entirely unseen workspaces, each containing 2000 unseen start and goal configurations. For the rigid-body case, the range of the angular configuration was scaled to the range of the positional configurations, i.e., −20 to 20, for training and testing. In the case of the 6 DOF robot, we consider only a single environment; thus no environment encoding is included, and only start and goal configurations are sampled to collect example trajectories (50,000) from the collision-free space to train our feedforward neural network (DeepSampler). The test scenario for the 6 DOF robot is to generate paths for unseen start and goal pairs.
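A sketch of this collection pipeline; the block extents, the `planner` object (e.g., RRT*), and the `free_sampler` helper are assumed components with illustrative names:

    import random

    def generate_workspace(n_blocks, size=40, dim=2):
        # Randomly place quadrilateral (or cuboid) blocks without repetition
        # in a size x size (x size) operating region.
        placements = set()
        while len(placements) < n_blocks:
            placements.add(tuple(random.randrange(size) for _ in range(dim)))
        return placements

    def collect_training_paths(workspace, planner, free_sampler, n_pairs):
        # For each random collision-free start/goal pair, record the
        # near-optimal path returned by the classical planner.
        data = []
        for _ in range(n_pairs):
            start, goal = free_sampler(workspace), free_sampler(workspace)
            path = planner.plan(start, goal)
            if path is not None:
                data.append((start, goal, path))
        return data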
E. Hyper-parameters
The DeepSMP neural models were trained in mini-batches using the Adagrad [17] optimizer with a learning rate of 0.1. The CAE was trained on raw point cloud data from N_obs = 30,000 different workspaces, generated randomly as described earlier. The regularization coefficient λ was set to 10^-3. For DeepSampler, the Dropout probability p was kept constant at 0.5 for both training and testing. The number n_limit is set to the number of nodes in the longest path available in the training data. For RRT*, the gamma of the ball radius was set to 1.6, whereas the tree-extension step sizes for the point-mass and rigid-body cases were kept at 0.01 and 0.9, respectively. Finally, for the 6 DOF robot, we used OMPL's RRT* and ROS with their default parameter settings for path generation.
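The stated optimizer settings map directly onto PyTorch. A sketch of one training step, reusing `sampler_2d` from the earlier sketch and assuming the mini-batch tensors come from a data loader over the collected paths:

    import torch

    # Adagrad with lr = 0.1, as stated above.
    opt = torch.optim.Adagrad(sampler_2d.parameters(), lr=0.1)

    def train_step(z, x_t, x_goal, x_next):
        # Minimize the per-step MSE of Eq. (3) on one mini-batch.
        pred = sampler_2d(torch.cat([z, x_t, x_goal], dim=-1))
        loss = ((pred - x_next) ** 2).sum(dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()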
V. RESULTS
This section presents the results of DeepSMP for the motion planning of a point-mass robot, a rigid body, and a Universal 6 DOF robot (UR6) in both 2D and 3D environments. All experiments were carried out on a computer with a 3.40 GHz × 8 Intel Core i7 processor, 16 GB of RAM, and a GeForce GTX 1080 GPU. DeepSMP, implemented in PyTorch, was compared against Informed-RRT* and BIT*, implemented in Python. In the following results, the datasets seen-X_obs and unseen-X_obs comprised 100 workspaces seen by DeepSMP during training and 10 workspaces not seen during training, respectively. The test datasets seen-X_obs and unseen-X_obs contained 200 and 2000 unseen start and goal configurations, respectively, for every workspace. Note that every combination of a seen or unseen environment with an unseen start and goal pair constitutes a new planning problem, i.e., one not seen by DeepSMP during training. For each planning problem, we ran 20 trials of all presented SMPs to calculate the mean computation time. Figs. 2-5 show the example scenarios, named simple 2D (s2D), complex 2D (c2D), complex 3D (c3D), and rigid-body (rigid), in which DeepSMP with the underlying RRT* method plans motions. The mean computation time (in seconds) and the number of iterations taken by DeepSMP in each scenario are denoted as t and n, respectively. Table I presents the mean computation time comparison of DeepSMP with an underlying RRT* SMP against Informed-RRT* and BIT* for computing near-optimal paths in the environments s2D, c2D, c3D, and rigid. Note that the unbiased RRT* method is not included in the comparison, as its computation time for computing near-optimal paths is much higher than that of all presented algorithms. We report the mean (t_mean), maximum (t_max), and minimum (t_min) times taken by each algorithm in every environment. It can be seen that in all test cases, the mean computation time of DeepSMP:RRT* remained consistently around 2 seconds, whereas the mean computation times of Informed-RRT* and BIT* increase significantly with even slight increases in the dimensionality of the planning problem. Furthermore, the rightmost column presents the ratio of the mean computation time of BIT* to that of DeepSMP; on average, our method is at least 7 times faster than BIT*, the current state-of-the-art motion planner.
From the experiments presented so far, it is evident that BIT* outperforms Informed-RRT*; therefore, in the following experiments only DeepSMP and BIT* are compared. Fig. 6 compares the mean computation times of DeepSMP:RRT* and BIT* in two test cases, seen-X_obs and unseen-X_obs. It can be observed that the mean computation time of DeepSMP stays around 2 seconds irrespective of the given problem's dimensionality, whereas the mean computation time of BIT* not only fluctuates but also increases significantly with even slight increases in the dimensionality of the planning problem. Finally, Fig. 7 shows DeepSMP planning motions for the Universal 6 DOF robot. In Fig. 7(a), the robotic manipulator is at the start configuration, and its target configuration is shown as a shadowed region. Fig. 7(b) shows the traces of a path planned by DeepSMP for the given start and goal pair. In this problem, the mean computation times taken by DeepSMP and BIT* are 1.7 and 48.8 seconds, respectively, which makes DeepSMP around 28 times faster than BIT*.
VI. DISCUSSION
A. Stochasticity through Dropout
Our stochastic feedforward DeepSampler uses Dropout [6] in every layer except the last two during both offline and online execution. Dropout is applied layer-wise, and it drops each unit in a hidden layer with probability p ∈ [0, 1]. In our models, the dropped-out units are indicated as dotted circles in Fig. 1. Thus, the resulting neural network is a sliced version of the original deep model, and in every iteration of the online execution, a different model emerges through randomly dropping some hidden units. These Dropout-induced perturbations in DeepSampler enable DeepSMP to generate different samples in the region likely to contain path solutions.
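In PyTorch terms, keeping Dropout active at inference time (rather than the usual eval-mode behavior) is what makes repeated forward passes produce different samples. A minimal illustration, assuming `sampler_2d` from the earlier sketch and input tensors `z`, `x_t`, `x_goal`:

    import torch

    sampler_2d.train()  # keep Dropout active during online sampling, not .eval()
    with torch.no_grad():
        proposals = [sampler_2d(torch.cat([z, x_t, x_goal], dim=-1))
                     for _ in range(10)]
    # Each forward pass drops a different random subset of hidden units, so
    # the ten proposals are distinct samples from the learned promising region.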
Fig. 6: (a) Test-case 1: seen-X_obs; (b) Test-case 2: unseen-X_obs.
B. Bidirectional Sampling
Since our method generates samples incrementally, it can easily be extended to produce samples for bidirectional SMPs such as IB-RRT* [18]. To do so, treat both the start and goal configurations as random variables x_rand1 and x_rand2, respectively, and swap their roles at the end of every iteration of Algorithm 1 (see the sketch below). In this way, the two trees of a bidirectional SMP can be made to march towards each other and rapidly compute end-to-end collision-free paths.
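A sketch of that role swap on top of the earlier deep_smp loop; this is illustrative, not the authors' code:

    def bidirectional_samples(z, x_init, x_goal, deep_sampler, n):
        # Treat the two tree roots as x_rand1 and x_rand2 and swap their roles
        # each iteration, so both trees grow towards each other.
        x_a, x_b = x_init, x_goal
        for _ in range(n):
            x_a = deep_sampler(z, x_a, x_b)  # sample towards the other tree
            yield x_a
            x_a, x_b = x_b, x_a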
C. Completeness
SMPs ensure probabilistic completeness. Let V^SMP_n denote the tree vertices of an SMP after n ∈ ℕ iterations. Since all SMPs begin building a tree from the initial robot state x_init, i.e., V^SMP_0 = {x_init}, and randomly explore the entire configuration space by forming a connected tree as n approaches infinity, they guarantee probabilistic completeness, i.e.,
\lim_{n \to \infty} P(V^{SMP}_n \cap X_{goal} \neq \emptyset) = 1    (4)
DeepSMP also starts generating a connected tree from x_init and, after exploring the region most likely to contain a path solution for n_limit iterations, switches to uniform random sampling (see Algorithm 1). Therefore, if n_limit ≪ n, DeepSMP also ensures probabilistic completeness, i.e., as the number of iterations n approaches infinity, the probability of DeepSMP finding a path solution, if one exists, approaches one:
\lim_{n \to \infty} P(V^{DeepSMP}_n \cap X_{goal} \neq \emptyset) = 1    (5)
D. Asymptotic Optimality
RRT* and its variants are known to ensure asymptotic optimality, i.e., as the number of iterations n approaches infinity (or a large number), the probability of finding a minimum-cost path solution approaches one. This property comes from incrementally rewiring the RRT graph connections such that the shortest path is asymptotically guaranteed in RRT*. We propose that if the underlying SMP of DeepSMP is RRT* or any optimal variant of RRT, DeepSMP is guaranteed to be asymptotically optimal. This follows from the fact that DeepSMP samples a selective region for a fixed number of iterations and switches to uniform random sampling afterwards. Thus, as the number of iterations goes to infinity, asymptotic optimality is also guaranteed through the incremental rewiring of the DeepSMP graph.
E. Computational Complexity
A forward pass through a deep neural network is known to exhibit O(1) complexity. As Algorithm 1 shows, adaptive samples are generated incrementally by forward passes through our stochastic DeepSampler. Hence, the proposed neural sampling method does not add any extra computational overhead to the underlying SMP for path generation, and the computational complexity of the DeepSMP method is essentially the same as that of the underlying SMP in Algorithm 1. For instance, since RRT* is the underlying SMP in the presented experiments, the computational complexity of DeepSMP is O(n log n), where n is the number of nodes in the tree.
Fig. 7: DeepSMP with RRT* planning motions for a 6 DOF manipulator. Fig. (a) shows the robot at the start configuration, with the goal configuration indicated as a shadowed region. Fig. (b) shows the path traces followed by the robot. In this problem, the mean computational times of DeepSMP and BIT* are 1.7 and 48.8 seconds, respectively, making DeepSMP about 28 times faster than BIT*.
VII. CONCLUSIONS AND FUTURE WORK
In this paper, we presented a deep neural network-based sampling method called DeepSMP, which generates samples for sampling-based motion planning algorithms to compute optimal paths rapidly and efficiently. The proposed method 1) adaptively samples a selective region of the configuration space that most likely contains an optimal path solution, 2) when combined with SMP methods, consistently demonstrates a mean execution time of about 2 seconds in all presented experiments, and 3) generalizes to new, unseen environments.
In future work, we plan to propose an incremental online learning method that begins with an SMP method and trains DeepSMP simultaneously, so as to gradually switch from uniform sampling to adaptive sampling. To speed up the incremental online learning process, we plan to propose a method that prioritizes experiences in order to learn from selectively fewer training examples.
| 3,877 |
1809.10252
|
2951622944
|
Sampling-based Motion Planners (SMPs) have become increasingly popular as they provide collision-free path solutions regardless of obstacle geometry in a given environment. However, their computational complexity increases significantly with the dimensionality of the motion planning problem. Adaptive sampling is one way to speed up SMPs by sampling a particular region of the configuration space that is more likely to contain an optimal path solution. Although there is a wide variety of algorithms for adaptive sampling, they rely on hand-crafted heuristics; furthermore, their performance decreases significantly in high-dimensional spaces. In this paper, we present a neural network-based adaptive sampler for motion planning called Deep Sampling-based Motion Planner (DeepSMP). DeepSMP generates samples for SMPs and enhances their overall speed significantly while exhibiting efficient scalability to higher-dimensional problems. DeepSMP's neural architecture comprises a Contractive AutoEncoder, which encodes given workspaces directly from raw point cloud data, and a Dropout-based stochastic deep feedforward neural network, which takes the workspace encoding and the start and goal configurations and iteratively generates feasible samples for SMPs to compute end-to-end collision-free optimal paths. DeepSMP is not only consistently computationally efficient in all tested environments but has also shown remarkable generalization to completely unseen environments. We evaluate DeepSMP on multiple planning problems, including the planning of a point-mass robot, a rigid body, and a 6-link robotic manipulator in various 2D and 3D environments. The results show that on average our method is at least 7 times faster in the point-mass and rigid-body cases and about 28 times faster in the 6-link robot case than the existing state-of-the-art.
|
@cite_3 proposed the Informed-RRT* algorithm, which takes an initial solution from the RRT* algorithm to define an ellipsoidal region from which new samples are drawn to refine the initial solution for a given cost function. Although Informed-RRT* demonstrates enhanced convergence towards an optimal solution, it suffers in situations where finding an initial path solution consumes most of the computation time. To address this limitation, Batch Informed Trees (BIT*) @cite_16 was proposed. BIT* is an incremental graph search technique in which an ellipsoidal subset, containing the configurations used to update the graph, is incrementally enlarged. BIT* has been shown empirically to outperform prior methods such as RRT* and Informed-RRT*. However, confining the graph search to an ellipsoidal region slows the algorithm down in maze-like scenarios, especially where the start and goal configurations are very close to each other but the path between them traverses a complicated maze that stretches waypoints far from the goal. Furthermore, such a method would not translate to non-stationary or unseen environments.
|
{
"abstract": [
"In this paper, we present Batch Informed Trees (BIT*), a planning algorithm based on unifying graph- and sampling-based planning techniques. By recognizing that a set of samples describes an implicit random geometric graph (RGG), we are able to combine the efficient ordered nature of graph-based techniques, such as A*, with the anytime scalability of sampling-based algorithms, such as Rapidly-exploring Random Trees (RRT).",
"Rapidly-exploring random trees (RRTs) are pop- ular in motion planning because they find solutions efficiently to single-query problems. Optimal RRTs (RRT*s) extend RRTs to the problem of finding the optimal solution, but in doing so asymptotically find the optimal path from the initial state to every state in the planning domain. This behaviour is not only inefficient but also inconsistent with their single-query nature. For problems seeking to minimize path length, the subset of states that can improve a solution can be described by a prolate hyperspheroid. We show that unless this subset is sam- pled directly, the probability of improving a solution becomes arbitrarily small in large worlds or high state dimensions. In this paper, we present an exact method to focus the search by directly sampling this subset. The advantages of the presented sampling technique are demonstrated with a new algorithm, Informed RRT*. This method retains the same probabilistic guarantees on complete- ness and optimality as RRT* while improving the convergence rate and final solution quality. We present the algorithm as a simple modification to RRT* that could be further extended by more advanced path-planning algorithms. We show exper- imentally that it outperforms RRT* in rate of convergence, final solution cost, and ability to find difficult passages while demonstrating less dependence on the state dimension and range of the planning problem."
],
"cite_N": [
"@cite_16",
"@cite_3"
],
"mid": [
"1814533834",
"1976930960"
]
}
|
Deeply Informed Neural Sampling for Robot Motion Planning
|
| 3,877 |
1809.10252
|
2951622944
|
Sampling-based Motion Planners (SMPs) have become increasingly popular as they provide collision-free path solutions regardless of obstacle geometry in a given environment. However, their computational complexity increases significantly with the dimensionality of the motion planning problem. Adaptive sampling is one way to speed up SMPs by sampling a particular region of the configuration space that is more likely to contain an optimal path solution. Although there is a wide variety of algorithms for adaptive sampling, they rely on hand-crafted heuristics; furthermore, their performance decreases significantly in high-dimensional spaces. In this paper, we present a neural network-based adaptive sampler for motion planning called Deep Sampling-based Motion Planner (DeepSMP). DeepSMP generates samples for SMPs and enhances their overall speed significantly while exhibiting efficient scalability to higher-dimensional problems. DeepSMP's neural architecture comprises a Contractive AutoEncoder, which encodes given workspaces directly from raw point cloud data, and a Dropout-based stochastic deep feedforward neural network, which takes the workspace encoding and the start and goal configurations and iteratively generates feasible samples for SMPs to compute end-to-end collision-free optimal paths. DeepSMP is not only consistently computationally efficient in all tested environments but has also shown remarkable generalization to completely unseen environments. We evaluate DeepSMP on multiple planning problems, including the planning of a point-mass robot, a rigid body, and a 6-link robotic manipulator in various 2D and 3D environments. The results show that on average our method is at least 7 times faster in the point-mass and rigid-body cases and about 28 times faster in the 6-link robot case than the existing state-of-the-art.
|
Many approaches exist that use learning to make classical SMPs computationally more efficient. A recent method, the Lightning Framework @cite_17, stores paths in a lookup table and uses a learned heuristic to write new paths as well as to read and repair old ones. A similar framework by @cite_10 is an experience-based strategy that caches experiences in a graph instead of as individual trajectories. Although these approaches exhibit superior performance in higher-dimensional spaces compared to conventional planning methods, lookup tables are memory-inefficient and incapable of generalizing well to new planning problems. @cite_13 proposed a reinforcement learning-based method to bias samples in discretized workspaces. However, reinforcement learning-based approaches are known for slow convergence, as they require a large number of interactive experiences.
|
{
"abstract": [
"The widespread success of sampling-based planning algorithms stems from their ability to rapidly discover the connectivity of a configuration space. Past research has found that non-uniform sampling in the configuration space can significantly outperform uniform sampling; one important strategy is to bias the sampling distribution based on features present in the underlying workspace. In this paper, we unite several previous approaches to workspace biasing into a general framework for automatically discovering useful sampling distributions. We present a novel algorithm, based on the REINFORCE family of stochastic policy gradient algorithms, which automatically discovers a locally-optimal weighting of workspace features to produce a distribution which performs well for a given class of sampling-based motion planning queries. We present as well a novel set of workspace features that our adaptive algorithm can leverage for improved configuration space sampling. Experimental results show our algorithm to be effective across a variety of robotic platforms and high- dimensional configuration spaces.",
"We present an experience-based planning framework called Thunder that learns to reduce computation time required to solve high-dimensional planning problems in varying environments. The approach is especially suited for large configuration spaces that include many invariant constraints, such as those found with whole body humanoid motion planning. Experiences are generated using probabilistic sampling and stored in a sparse roadmap spanner (SPARS), which provides asymptotically near-optimal coverage of the configuration space, making storing, retrieving, and repairing past experiences very efficient with respect to memory and time. The Thunder framework improves upon past experience-based planners by storing experiences in a graph rather than in individual paths, eliminating redundant information, providing more opportunities for path reuse, and providing a theoretical limit to the size of the experience graph. These properties also lead to improved handling of dynamically changing environments, reasoning about optimal paths, and reducing query resolution time. The approach is demonstrated on a 30 degrees of freedom humanoid robot and compared with the Lightning framework, an experience-based planner that uses individual paths to store past experiences. In environments with variable obstacles and stability constraints, experiments show that Thunder is on average an order of magnitude faster than Lightning and planning from scratch. Thunder also uses 98.8 less memory to store its experiences after 10,000 trials when compared to Lightning. Our framework is implemented and freely available in the Open Motion Planning Library.",
"We propose a framework, called Lightning, for planning paths in high-dimensional spaces that is able to learn from experience, with the aim of reducing computation time. This framework is intended for manipulation tasks that arise in applications ranging from domestic assistance to robot-assisted surgery. Our framework consists of two main modules, which run in parallel: a planning-from-scratch module, and a module that retrieves and repairs paths stored in a path library. After a path is generated for a new query, a library manager decides whether to store the path based on computation time and the generated path's similarity to the retrieved path. To retrieve an appropriate path from the library we use two heuristics that exploit two key aspects of the problem: (i) A correlation between the amount a path violates constraints and the amount of time needed to repair that path, and (ii) the implicit division of constraints into those that vary across environments in which the robot operates and those that do not. We evaluated an implementation of the framework on several tasks for the PR2 mobile manipulator and a minimally-invasive surgery robot in simulation. We found that the retrieve-and-repair module produced paths faster than planning-from-scratch in over 90 of test cases for the PR2 and in 58 of test cases for the minimally-invasive surgery robot."
],
"cite_N": [
"@cite_13",
"@cite_10",
"@cite_17"
],
"mid": [
"2038131922",
"1485661534",
"1971458750"
]
}
|
Deeply Informed Neural Sampling for Robot Motion Planning
|
Sampling-based Motion Planners (SMPs) have emerged as a promising framework for solving high-dimensional, constrained motion planning problems [1] [2]. SMPs ensure probabilistic completeness, which means that the probability of finding a feasible path solution, if one exists, approaches one as the number of samples drawn randomly from the obstacle-free space approaches infinity [2]. However, despite their ability to compute motion plans irrespective of the obstacles' geometry, these methods exhibit slow convergence to a path solution due to their reliance on extensive exploration of the given obstacle-free configuration space [3] [4]. Recent research shows that biasing the sample distribution towards regions with a high probability of containing a path solution can considerably enhance the performance of classical single-query SMPs such as RRT and RRT* [3]. To the best of our knowledge, there does not yet exist an effective and reliable solution that uses knowledge from past planning problems to bias the sampling distribution.
In this paper, we propose a neural network-based adaptive sampler that generates samples in the regions of a configuration space where an optimal path solution is likely to exist. Our method consists of two neural models: an obstacle-space encoder and a random samples generator. We use a Contractive AutoEncoder (CAE) [5] to encode the obstacle space into an invariant, robust feature space. The samples generator is a Dropout-based [6] stochastic Deep Neural Network (DNN) that takes the obstacle-space encoding together with the start and goal configurations as input, and generates samples distributed over the region of the configuration space containing the path solutions. We evaluate our method on various complex motion planning tasks, such as planning for a rigid body (the piano-mover problem) and a 6 degree-of-freedom (DOF) robotic arm (UR6), and planning through narrow passages. We also benchmark our method against existing state-of-the-art biased-sampling SMPs, including Informed-RRT* [7] and Batch Informed Trees (BIT*) [8]. The results show that our algorithm generates samples that enable unbiased SMPs such as RRT* to compute near-optimal paths in considerably less computational time than BIT* and Informed-RRT*.
B. Learning-based Search Methods
Many approaches exist that use learning to make classical SMPs computationally more efficient. A recent method, the Lightning Framework [13], stores paths in a lookup table and uses a learned heuristic to write new paths as well as to read and repair old ones. A similar framework by Coleman et al. [14] is an experience-based strategy that caches experiences in a graph instead of as individual trajectories. Although these approaches exhibit superior performance in higher-dimensional spaces compared to conventional planning methods, lookup tables are memory-inefficient and incapable of generalizing well to new planning problems. Zucker et al. [15] proposed a reinforcement learning-based method to bias samples in discretized workspaces. However, reinforcement learning-based approaches are known for slow convergence, as they require a large number of interactive experiences.
III. PROBLEM DEFINITION
This section presents the notations we will be using in this paper, along with the definitions of fundamental motion planning problems addressed by our work.
Let S be a list of finite length N ∈ ℕ; then S_i denotes the element of S at index i ∈ ℕ. For the algorithms described in this paper, S_0 and S_T correspond to the initial and last elements of a list, respectively. Let a given state space be denoted X ⊂ R^d, where d ∈ ℕ≥2 is the dimension of the state space. The collision and collision-free state spaces are denoted X_obs ⊂ X and X_free = X \ X_obs, respectively. Let the initial state and goal region be represented as x_init ∈ X_free and X_goal ⊂ X_free, respectively. Let a trajectory be denoted as a non-empty, finite-length list σ : [0, T] → X. For a given path planning problem, a trajectory σ is said to be feasible if it connects x_init and x ∈ X_goal, i.e., σ_0 = x_init and σ_T ∈ X_goal, and the path formed by connecting all consecutive states in σ lies entirely in the obstacle-free space X_free.

Problem 1 (Feasible Path Planning). Given a triplet {X, X_free, X_obs}, an initial state x_init, and a goal region X_goal ⊂ X_free, find a path σ : [0, T] → X_free such that σ_0 = x_init and σ_T ∈ X_goal.
Let a cost function c(·) compute the cost of a given path σ as the sum of Euclidean distances between all consecutive states in σ. Let the set of all feasible path solutions to a given planning problem be denoted Π. The optimality problem of motion planning is then to find the optimal feasible path solution σ* ∈ Π that has minimum cost among all feasible path solutions:

Problem 2 (Optimal Path Planning). Assuming that multiple solutions to Problem 1 exist, find a path σ* ∈ Π such that $c(\sigma^*) = \min_{\sigma \in \Pi} c(\sigma)$.
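Since c(·) is just the accumulated Euclidean segment length, a small NumPy sketch makes the definition concrete (our illustration, not the authors' code):

```python
import numpy as np

def path_cost(sigma):
    """c(sigma): sum of Euclidean distances between consecutive states."""
    sigma = np.asarray(sigma, dtype=float)       # shape (T+1, d)
    return float(np.linalg.norm(np.diff(sigma, axis=0), axis=1).sum())

# Example: a 2D path through three states has cost 5.0 + 4.0 = 9.0.
print(path_cost([[0.0, 0.0], [3.0, 4.0], [3.0, 8.0]]))
```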
Let Ω ⊂ X_free be a potential region containing an optimal or near-optimal path solution. The problem of adaptive sampling, also known as biased sampling, is to generate collision-free samples x ∈ Ω such that SMPs compute the optimal path σ* in the least possible time t ∈ R. The adaptive sampling problem is formalized as follows.
Problem 3 (Adaptive Sampling). Given a planning problem {x_init, X_goal, X}, generate samples x ∈ Ω, where Ω ⊂ X_free, such that sampling-based motion planning methods compute the optimal path solution σ* in the least possible time t ∈ R.
IV. INFORMED NEURAL SAMPLER
This section presents our novel informed neural sampling algorithm, DeepSMP. It comprises two neural modules. The first module is an autoencoder that learns an invariant and robust feature space in which to embed point cloud data from the obstacle space. The second module is a stochastic DNN that takes the obstacle encoding and the start and goal configurations, and incrementally generates samples for SMPs during online execution. Note that any SMP can utilize these informed samples for rapid convergence to the optimal solution, and that the method works in unseen environments via the obstacle-space encoding. The following sections describe both neural modules, the online sample generation heuristic (DeepSMP), dataset collection, and hyper-parameter initialization.
A. Obstacle Encoding
A Contractive AutoEncoder (CAE) is used to learn a latent-space embedding Z of raw point cloud data x_obs ⊂ X_obs (see Fig. 1(a)). The encoder and decoder functions of the CAE are denoted f(x_obs; θ^e) and g(f(x_obs); θ^d), respectively, where θ^e and θ^d are the parameters of the corresponding approximating functions. The CAE is trained through unsupervised learning using the following objective function:
$$L_{CAE}(\theta^e, \theta^d) = \frac{1}{N_{obs}} \sum_{x \in D_{obs}} \|x - g(f(x))\|^2 + \lambda \sum_{ij} (\theta^e_{ij})^2 \qquad (1)$$

where $\|x - g(f(x))\|^2$ is the reconstruction loss and $\lambda \sum_{ij} (\theta^e_{ij})^2$ is a regularization term with coefficient λ. Furthermore, D_obs is a dataset of point clouds x_obs ⊂ X_obs from N_obs ∈ ℕ different workspaces. The regularization term makes the feature space Z := f(x_obs) contractive in the neighborhood of the training data, which results in invariant and robust feature learning [5].
1) Model Architecture: Since the decoding function g(f(x_obs)) is an inverse of the encoding function f(x_obs), we present the architectural details of the encoding unit only.

The encoding function consists of three fully-connected linear hidden layers followed by a linear output layer. The output of each hidden layer is passed through a Parametric Rectified Linear Unit (PReLU) [16].

For 2D workspaces, the input point cloud has size 1400 × 2, and the three hidden layers transform the input to 512, 256, and 128 units, respectively. The output layer transforms the 128 units into a latent-space embedding Z of size 28. The decoding function takes the latent-space embedding Z and reconstructs the raw point cloud data.

For 3D workspaces, hidden layers 1, 2, and 3 transform the 1400 × 3 input point cloud to 786, 512, and 256 hidden units, respectively. Finally, the output layer transforms the 256 units from the preceding hidden layer into a latent space of size 60.
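As an illustration, the following PyTorch sketch implements the 2D encoder/decoder with the layer sizes stated above and the objective of Eq. (1). The class and variable names are ours, and the contractive penalty is realized as the squared-encoder-weight term exactly as it appears in Eq. (1):

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self, in_dim=1400 * 2, latent_dim=28):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.PReLU(),
            nn.Linear(512, 256), nn.PReLU(),
            nn.Linear(256, 128), nn.PReLU(),
            nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(            # mirrors the encoder
            nn.Linear(latent_dim, 128), nn.PReLU(),
            nn.Linear(128, 256), nn.PReLU(),
            nn.Linear(256, 512), nn.PReLU(),
            nn.Linear(512, in_dim))

    def forward(self, x_obs):                    # x_obs: (batch, 2800) flattened
        z = self.encoder(x_obs)
        return self.decoder(z), z

def cae_loss(model, x_obs, lam=1e-3):
    """Eq. (1): reconstruction error plus squared encoder weights."""
    recon, _ = model(x_obs)
    reg = sum((m.weight ** 2).sum()
              for m in model.encoder if isinstance(m, nn.Linear))
    return ((x_obs - recon) ** 2).sum(dim=1).mean() + lam * reg
```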
B. Deep Sampler
Deep Sampler is a stochastic feedforward deep neural network with parameters θ. It takes the obstacle encoding Z, the robot state x_t at step t, and the goal state x_T, and produces a next state x̂_{t+1} ∈ X_free that takes the robot closer to the goal region (see Fig. 1(b)), i.e.,
$$\hat{x}_{t+1} = \text{DeepSampler}((x_t, x_T, Z); \theta) \qquad (2)$$
We use RRT* [2] to produce the near-optimal paths used to train DeepSMP. The training paths have the form of a tuple σ* = {x_0, x_1, ..., x_T} such that the path formed by connecting all consecutive states in σ* is a feasible solution. The training objective is to minimize the mean squared error (MSE) between the predicted states x̂_{t+1} and the actual states x_{t+1} given by RRT*, i.e.,
$$L_{MSE}(\theta) = \frac{1}{N_p} \sum_{j=1}^{\hat{N}} \sum_{i=0}^{T-1} \|\hat{x}_{j,i+1} - x_{j,i+1}\|^2, \qquad (3)$$
where N_p ∈ ℕ corresponds to the total number of paths N̂ times their path lengths.

1) Model Architecture: Deep Sampler is a twelve-layer deep neural network in which each hidden layer is a sandwich of a linear layer, PReLU [16], and Dropout(p) [6], with the exception of the last hidden layer, which does not contain Dropout(p). The twelfth layer is the output layer, which transforms the hidden units from the preceding layer to the desired output size, equal to the dimension of the robot configuration. The configurations of the 2D point-mass robot, 3D point-mass robot, rigid body, and 6 DOF robot have dimensions 2, 3, 3, and 6, respectively. For all presented problems except the 6 DOF robot, the input to Deep Sampler is the concatenation of the obstacle representation Z, the robot's current state x_t, and the goal state x_T. For the 6 DOF robot, we assume a single environment; therefore, the input to Deep Sampler comprises only the current state x_t and the goal state x_T.
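A minimal PyTorch sketch of this architecture is shown below. The hidden width is our assumption (the paper fixes only the layer count, activations, and Dropout placement), and Dropout is deliberately left active at inference to obtain stochastic outputs:

```python
import torch
import torch.nn as nn

class DeepSampler(nn.Module):
    """Stochastic feedforward sampler: 11 hidden layers plus an output layer.
    Keeping the module in train() mode leaves Dropout active at test time."""
    def __init__(self, state_dim, latent_dim, hidden=512, p=0.5):
        super().__init__()
        layers, in_dim = [], 2 * state_dim + latent_dim
        for i in range(11):                      # hidden layers 1..11
            layers += [nn.Linear(in_dim, hidden), nn.PReLU()]
            if i < 10:                           # last hidden layer: no Dropout
                layers.append(nn.Dropout(p))
            in_dim = hidden
        layers.append(nn.Linear(hidden, state_dim))   # twelfth (output) layer
        self.net = nn.Sequential(*layers)

    def forward(self, x_t, x_goal, z):
        return self.net(torch.cat([x_t, x_goal, z], dim=-1))
```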
C. Online Execution of DeepSMP
During the online phase, we use the trained obstacle encoder and DeepSampler to generate random samples for a given SMP. Fig. 1 shows the flow of information between the encoder f(x_obs) and DeepSampler. Algorithm 1 outlines DeepSMP, which combines our informed neural sampler with any classical SMP such as RRT*.

Algorithm 1: DeepSMP(x_init, x_goal, x_obs)
1   Initialize SMP(x_init, x_goal, X)
2   x_rand ← x_init
3   Z ← f(x_obs)
4   for i ← 0 to n do
5       if i < n_limit then
6           x_rand ← DeepSampler(Z, x_rand, x_goal)
7       else
8           x_rand ← RandomSampler()
9       σ ← SMP(x_rand)
10      if x_rand ∈ X_goal then
11          x_rand ← x_init
12  if σ_T ∈ X_goal then
13      return σ
14  return ∅

Algorithm 1 starts by initializing a given SMP (Line 1). The obstacle encoder f(x_obs) provides an encoding Z of the raw point cloud data from X_obs (Line 3). DeepSMP runs for n ∈ ℕ iterations (Line 4). DeepSampler incrementally generates samples between the given start and goal configurations while i < n_limit (Lines 5-6), where n_limit < n. Upon reaching the given goal configuration, DeepSampler is executed again to produce samples from the start configuration to the goal configuration by re-initializing the random sample x_rand to x_init (Lines 10-11). After n_limit iterations, DeepSMP switches to random sampling (Lines 7-8) to preserve the completeness guarantees of the underlying SMP. Note that σ is a feasible path solution returned by the SMP, and σ is continually optimized under the given cost function c(·) over the n iterations. Finally, after n iterations, a feasible, optimized path solution σ, if one exists, is returned as the solution to the given planning problem (Lines 12-13).
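For readers who prefer code, the loop of Algorithm 1 can be rendered schematically in Python as follows. The `smp` interface (`extend`, `in_goal`, `random_sample`, `best_path`) is a placeholder of our own, not an API from any specific planning library:

```python
def deep_smp(x_init, x_goal, x_obs, encoder, sampler, smp, n, n_limit):
    """Schematic rendering of Algorithm 1 with a hypothetical SMP interface."""
    z = encoder(x_obs)            # obstacle-space encoding Z (Line 3)
    x_rand = x_init
    sampler.train()               # keep Dropout active during online sampling
    for i in range(n):            # Lines 4-11
        if i < n_limit:
            x_rand = sampler(x_rand, x_goal, z)   # informed sample
        else:
            x_rand = smp.random_sample()          # fall back to uniform sampling
        smp.extend(x_rand)                        # grow and rewire the tree
        if smp.in_goal(x_rand):                   # goal reached: restart chain
            x_rand = x_init
    return smp.best_path()        # feasible optimized path or None (Lines 12-14)
```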
D. Data Collection
The data collection consists of creating a random set of workspaces, sampling collision-free start and goal configurations in those workspaces, and generating a path with a classical motion planner for every start and goal pair. The following paragraphs describe the procedures for creating the workspaces, the start and goal pairs, and the near-optimal paths. 1) Workspaces: Many different 2D and 3D workspaces were generated by randomly placing various quadrilateral blocks, without repetition, in operating regions of 40 × 40 and 40 × 40 × 40, respectively. Each random placement of the obstacle blocks led to a different workspace.
2) Start and goal configurations: For each generated workspace, a number of start and goal configurations were sampled randomly from its obstacle-free space. 3) Near-optimal paths: Finally, for each start and goal pair within every workspace, a feasible, near-optimal path was generated using the RRT* motion planner.
The complete dataset comprised 110 different workspaces for the scenarios presented in the results section, i.e., simple 2D (s2D), complex 2D (c2D), complex 3D (c3D), and rigid body (rigid). The training dataset contained 100 workspaces with 4000 training paths in each workspace. There were two types of test dataset. The first comprised the 100 already-seen workspaces with 200 unseen start and goal configurations in each workspace. The second comprised 10 entirely unseen workspaces, each containing 2000 unseen start and goal configurations. For the rigid-body case, the range of angular configurations was scaled to the range of positional configurations, i.e., −20 to 20, for training and testing. For the 6 DOF robot, we consider only a single environment; thus no environment encoding is included, and only start and goal configurations are sampled to collect example trajectories (50,000) from collision-free space to train our feedforward neural network (DeepSampler). The test scenario for the 6 DOF robot is to generate paths for unseen start and goal pairs.
E. Hyper-parameters
DeepSMP's neural models were trained in mini-batches using the Adagrad [17] optimizer with a learning rate of 0.1. The CAE was trained on raw point cloud data from N_obs = 30,000 different workspaces, generated randomly as described earlier. The regularization coefficient λ was set to 10^-3. For DeepSampler, the Dropout probability p was kept constant at 0.5 for both training and testing. The number n_limit is set to the number of nodes in the longest path available in the training data. For RRT*, the ball-radius gamma was set to 1.6, and the tree extension step sizes for the point-mass and rigid-body cases were 0.01 and 0.9, respectively. Finally, for the 6 DOF robot, we use OMPL's RRT* and ROS with their default parameter settings for path generation.
V. RESULTS
This section presents the results of DeepSMP for the motion planning of a point-mass robot, a rigid body, and a Universal 6 DOF robot (UR6) in both 2D and 3D environments. All experiments were carried out on a computer with a 3.40 GHz × 8 Intel Core i7 processor, 16 GB of RAM, and a GeForce GTX 1080 GPU. DeepSMP, implemented in PyTorch, was compared against Informed-RRT* and BIT*, implemented in Python. In the following results, the datasets seen-X_obs and unseen-X_obs comprise 100 workspaces seen by DeepSMP during training and 10 workspaces not seen during training, respectively. The test datasets seen-X_obs and unseen-X_obs contain 200 and 2000 unseen start and goal configurations per workspace, respectively. Note that every combination of a seen or unseen environment with an unseen start and goal pair constitutes a new planning problem not seen by DeepSMP during training. For each planning problem, we ran 20 trials of all presented SMPs to calculate the mean computation time. Figs. 2-5 show the example scenarios, named simple 2D (s2D), complex 2D (c2D), complex 3D (c3D), and rigid body (rigid), in which DeepSMP with the underlying RRT* method plans motions. The mean computation time (in seconds) and the number of iterations taken by DeepSMP in each scenario are denoted t and n, respectively. Table I presents the mean computation time comparison of DeepSMP with an underlying RRT* SMP against Informed-RRT* and BIT* for computing near-optimal paths in the environments s2D, c2D, c3D, and rigid. Note that the unbiased RRT* method is not included in the comparison, as its computation time for near-optimal paths is much higher than that of all presented algorithms. We report the mean (t_mean), maximum (t_max), and minimum (t_min) time taken by each algorithm in every environment. In all test cases, the mean computation time of DeepSMP:RRT* remained consistently around 2 seconds, whereas the mean computation times of Informed-RRT* and BIT* increase significantly as the dimensionality of the planning problem increases even slightly. Furthermore, the rightmost column presents the ratio of the mean computation time of BIT* to that of DeepSMP; on average, our method is at least 7 times faster than BIT*, the current state-of-the-art motion planner.
From the experiments presented so far, it is evident that BIT* outperforms Informed-RRT*; therefore, only DeepSMP and BIT* are compared in the following experiments. Fig. 6 compares the mean computation time of DeepSMP:RRT* and BIT* in the two test cases, seen-X_obs and unseen-X_obs. The mean computation time of DeepSMP stays around 2 seconds irrespective of the given problem's dimensionality, whereas the mean computation time of BIT* not only fluctuates but also increases significantly as the dimensionality of the planning problem increases even slightly. Finally, Fig. 7 shows DeepSMP planning motions for a Universal 6 DOF robot. In Fig. 7(a), the robotic manipulator is at the start configuration, and its target configuration is shown as a shadowed region. Fig. 7(b) shows the traces of a path planned by DeepSMP for the given start and goal pair. In this problem, the mean computation times of DeepSMP and BIT* are 1.7 and 48.8 seconds, respectively, making DeepSMP around 28 times faster than BIT*.
VI. DISCUSSION
A. Stochasticity through Dropout
Our stochastic feedforward DeepSampler uses Dropout [6] in every layer except the last two during both offline and online execution. Dropout is applied layer-wise and drops each unit in a hidden layer with probability p ∈ [0, 1]. In our models, the dropped units are indicated as dotted circles in Fig. 1. The resulting neural network is thus a sliced version of the original deep model, and in every iteration of online execution a different model emerges through randomly dropping some of the hidden units. These perturbations of DeepSampler through Dropout enable DeepSMP to generate different samples in the region likely to contain path solutions.
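The effect is easy to demonstrate: if a PyTorch module is kept in training mode so that Dropout stays active, repeated forward passes on the same input give different outputs (a toy illustration with arbitrary layer sizes):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(32, 64), nn.PReLU(), nn.Dropout(0.5),
                    nn.Linear(64, 2))
net.train()                      # leave Dropout on, as DeepSMP does online
x = torch.randn(1, 32)
y1, y2 = net(x), net(x)          # each pass draws a fresh dropout mask
print(torch.allclose(y1, y2))    # almost surely False: the outputs differ
```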
Fig. 6: (a) Test case 1: seen-X_obs; (b) Test case 2: unseen-X_obs.
B. Bidirectional Sampling
Since our method generates samples incrementally, it can easily be extended to produce samples for bidirectional SMPs such as IB-RRT* [18]. To do so, treat both the start and goal configurations as random variables x_rand1 and x_rand2, respectively, and swap their roles at the end of every iteration of Algorithm 1. In this way, the two trees of a bidirectional SMP can be made to march towards each other and rapidly compute end-to-end collision-free paths.
C. Completeness
SMPs ensure probabilistic completeness. Let V^SMP_n denote the tree vertices of an SMP after n ∈ ℕ iterations. Since all SMPs begin building a tree from the initial robot state x_init, i.e., V^SMP_0 = {x_init}, and randomly explore the entire configuration space by forming a connected tree as n approaches infinity, they guarantee probabilistic completeness, i.e.,
$$\lim_{n \to \infty} P(V^{SMP}_n \cap X_{goal} \neq \emptyset) = 1 \qquad (4)$$
DeepSMP also starts generating a connected tree from x_init and, after exploring a region that most likely contains a path solution for n_limit iterations, switches to uniform random sampling (see Algorithm 1). Therefore, provided n_limit ≪ n, DeepSMP also ensures probabilistic completeness: as the number of iterations n approaches infinity, the probability that DeepSMP finds a path solution, if one exists, approaches one:
$$\lim_{n \to \infty} P(V^{DeepSMP}_n \cap X_{goal} \neq \emptyset) = 1 \qquad (5)$$
D. Asymptotic Optimality
RRT* and its variants are known to ensure asymptotic optimality, i.e., as the number of iterations n approaches infinity, the probability of finding a minimum-cost path solution approaches one. This property comes from incrementally rewiring the RRT graph connections such that the shortest path is asymptotically guaranteed in RRT*. We propose that if the underlying SMP of DeepSMP is RRT* or any optimal variant of RRT, DeepSMP is also guaranteed to be asymptotically optimal. This follows from the fact that DeepSMP samples a selective region for a fixed number of iterations and switches to uniform random sampling afterwards. Thus, as the number of iterations goes to infinity, asymptotic optimality is also guaranteed through the incremental rewiring of the DeepSMP graph.
E. Computational Complexity
A forward pass through a deep neural network of fixed size is known to exhibit O(1) complexity. As can be seen in Algorithm 1, adaptive samples are generated incrementally by forward passes through our stochastic DeepSampler. Hence, the proposed neural sampling method does not add any extra computational overhead to the underlying SMP for path generation, and the computational complexity of DeepSMP is essentially the same as that of the underlying SMP in Algorithm 1. For instance, with RRT* as the underlying SMP, as in our experiments, the computational complexity of DeepSMP is O(n log n), where n is the number of nodes in the tree.

Fig. 7: DeepSMP with RRT* planning motions for a 6 DOF manipulator. (a) The robot at the start configuration; the goal configuration is indicated as a shadowed region. (b) The path traces followed by the robot. In this problem, the mean computation times of DeepSMP and BIT* are 1.7 and 48.8 seconds, respectively, making DeepSMP about 28 times faster than BIT*.
VII. CONCLUSIONS AND FUTURE WORK
In this paper, we presented a deep neural network-based sampling method called DeepSMP, which generates samples that allow Sampling-based Motion Planning algorithms to compute optimal paths rapidly and efficiently. The proposed method 1) adaptively samples a selective region of the configuration space that most likely contains an optimal path solution, 2) consistently demonstrates a mean execution time of about 2 seconds across all presented experiments when combined with SMP methods, and 3) generalizes to new, unseen environments.
In future work, we plan to develop an incremental online learning method that begins with an SMP and simultaneously trains DeepSMP so as to gradually switch from uniform to adaptive sampling. To speed up this incremental online learning process, we plan to prioritize experiences so that the model learns from selectively fewer training examples.
| 3,877 |
1809.09299
|
2951122667
|
Joint object detection and semantic segmentation can be applied in many fields, such as self-driving cars and unmanned surface vessels. Initial, important progress towards this goal has been achieved by simply sharing deep convolutional features between the two tasks. However, this simple scheme is unable to make full use of the fact that detection and segmentation are mutually beneficial. To overcome this drawback, we propose a framework called TripleNet, in which triple supervisions, including detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision, are imposed on each layer of the decoder network. Class-agnostic segmentation supervision provides an objectness prior for both semantic segmentation and object detection. Besides the three types of supervisions, two light-weight modules (i.e., an inner-connected module and attention skip-layer fusion) are also incorporated into each layer of the decoder. In the proposed framework, detection and segmentation can sufficiently boost each other. Moreover, class-agnostic and class-aware segmentation on each decoder layer are not performed at the test stage; therefore, no extra computational costs are introduced at the test stage. Experimental results on the VOC2007 and VOC2012 datasets demonstrate that the proposed TripleNet is able to improve both the detection and segmentation accuracies without adding extra computational costs.
|
It aims to classify and locate objects in an image. Generally, object detection methods can be divided into two main classes: two-stage methods and one-stage methods. Two-stage methods first extract candidate object proposals from an image and then classify these proposals into specific object categories. R-CNN @cite_9 and its variants (e.g., Fast R-CNN @cite_14 and Faster R-CNN @cite_17 ) are the most representative frameworks among the two-stage methods. Based on the R-CNN series, researchers have made many improvements @cite_21 @cite_22 @cite_12 . To accelerate detection, Dai et al. @cite_21 proposed R-FCN, which uses position-sensitive feature maps for proposal classification and bounding-box regression. To output multi-scale feature maps with strong semantics, Lin et al. @cite_22 proposed the feature pyramid network (FPN), based on skip-layer connections and a top-down pathway. Recently, Cai et al. @cite_12 trained a sequence of object detectors with increasing IoU thresholds to improve detection quality.
|
{
"abstract": [
"",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets), for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: this https URL",
"In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector, trained with low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing the IoU thresholds. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code will be made available at this https URL.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available."
],
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_9",
"@cite_21",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2949533892",
"2102605133",
"2950800384",
"2772955562",
"2953106684"
]
}
|
Triply Supervised Decoder Networks for Joint Detection and Segmentation
|
Object detection and semantic segmentation are two fundamental and important tasks in the field of computer vision. In recent years, object detection [36,29,26] and semantic segmentation [30,5,1] with deep convolutional networks [20,39,14,17] have each achieved great progress. Most state-of-the-art methods focus on a single task and do not address object detection and semantic segmentation jointly. However, joint object detection and semantic segmentation is necessary and important in many applications, such as self-driving cars and unmanned surface vessels. In fact, object detection and semantic segmentation are highly related. On the one hand, semantic segmentation, usually used as a multi-task supervision, can help improve object detection [31,24]. On the other hand, object detection can be used as prior knowledge to help improve the performance of semantic segmentation [14,34].

Figure 1. Some architectures for joint detection and segmentation. (a) The last layer of the encoder is used for detection and segmentation [2]. (b) The branch for detection is refined by the branch for segmentation [31,47]. (c) Each layer of the decoder detects objects of different scales, and the fused layer is used for segmentation [7]. (d) The proposed PairNet: each layer of the decoder is simultaneously used for detection and segmentation. (e) The proposed TripleNet, which has three types of supervisions and some light-weight modules.
Due to application requirements and task relevance, joint object detection and semantic segmentation has gradually attracted the attention of researchers. Fig. 1 summarizes three typical approaches to joint object detection and semantic segmentation. Fig. 1(a) shows the simplest and most naive way, where one branch for object detection and one branch for semantic segmentation are attached in parallel to the last layers of the encoder [2]. In Fig. 1(b), the branch for object detection is further refined by the features from the branch for semantic segmentation [31,47]. Recently, the encoder-decoder network has been further used for joint object detection and semantic segmentation. In Fig. 1(c), each layer of the decoder is used for multi-scale object detection, and the concatenated feature map from different layers of the decoder is used for semantic segmentation [7]. The above methods have achieved great success in detection and segmentation. However, their performance is still far from the strict demands of real applications such as self-driving cars and unmanned surface vessels. One possible reason is that the mutual benefit between the two tasks is not fully exploited.
To exploit the mutual benefit between the joint object detection and semantic segmentation tasks, in this paper we propose to impose three types of supervisions (i.e., detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision) on each layer of the decoder network. Meanwhile, two light-weight modules (i.e., the inner-connected module and attention skip-layer fusion) are also incorporated. The corresponding network is called TripleNet (see Fig. 1(e)). We also propose a variant that imposes only the detection-oriented supervision and class-aware segmentation supervision on each layer of the decoder, called PairNet (see Fig. 1(d)). The contributions of this paper can be summarized as follows:
(1) Two novel frameworks (i.e., PairNet and TripleNet) for joint object detection and semantic segmentation are proposed. In TripleNet, the detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are imposed on each layer of the decoder. Meanwhile, two light-weight modules (i.e., the inner-connected module and attention skip-layer fusion) is also incorporated into each layer of the decoder.
(2) Substantial synergies are gained from TripleNet: both detection and segmentation accuracies are significantly improved. The improvement does not come at the expense of extra computational cost, because the class-agnostic and class-aware segmentation are not performed on each layer of the decoder at the test stage.
(3) Experiments on the VOC 2007 and VOC 2012 datasets are conducted to demonstrate the effectiveness of the proposed TripleNet.
The rest of this paper is organized as follows. Section 2 reviews some related works of object detection and semantic segmentation. Section 3 introduces our proposed method in detail. Experiments are shown in Section 4. Finally, it is concluded in Section 5.
The proposed methods
In recent years, fully convolutional networks (FCNs) with an encoder-decoder structure have achieved great success on object detection [26,9] and semantic segmentation [1]. For example, DSSD [9,34] and RetinaNet [26] use different layers of the decoder to detect objects of different scales. Using the encoder-decoder structure, SegNet [1] and LargeKernel [33] generate high-resolution logits for semantic segmentation. Based on these observations, a natural and simple idea is that an FCN with an encoder-decoder structure is suitable for joint object detection and semantic segmentation.
In this section, we give a detailed introduction about the proposed paired supervision decoder network (i.e., PairNet) and triply supervised decoder network (i.e., TripleNet) for joint object detection and semantic segmentation.
Paired supervision decoder network (PairNet)
Based on the encoder-decoder structure, a feature pyramid network is naturally proposed to join object detection and semantic segmentation: supervision for both object detection and semantic segmentation is added to each layer of the decoder, which we call PairNet. On the one hand, PairNet uses different layers of the decoder to detect objects of different scales. On the other hand, instead of using only the last high-resolution layer for semantic segmentation, as adopted by most state-of-the-art methods [1,33], PairNet uses each layer of the decoder to parse pixel semantic labels. Though the proposed PairNet is very simple, to the best of our knowledge it has not previously been explored for joint object detection and semantic segmentation. Fig. 2(a) gives the detailed architecture of PairNet. The input image first goes through a fully convolutional network with an encoder-decoder structure. The encoder gradually downsamples the feature map; in this paper, the well-known ResNet-50 or ResNet-101 [15] (i.e., res1-res4) and some newly added residual blocks (i.e., res5-res7) constitute the encoder. The decoder gradually maps the low-resolution feature map to a high-resolution feature map. To enhance context information, skip-layer fusion is used to fuse each feature map of the decoder with the corresponding feature map of the encoder. Fig. 2(b) shows the skip-layer fusion: the feature maps in the decoder are first upsampled by bilinear interpolation and then concatenated with the corresponding feature maps of the same resolution in the encoder. The concatenated feature maps then go through a residual unit to generate the output feature maps.
To join object detection and semantic segmentation, each layer of the decoder is further split into two branches. The object detection branch consists of a 3 × 3 convolutional layer followed by two sibling 1 × 1 convolutional layers for object classification and bounding-box regression. The detection branches at different layers detect objects of different scales: the branch at an earlier, low-resolution layer of the decoder detects large-scale objects, while the branch at a later, high-resolution layer detects small-scale objects.
The semantic segmentation branch consists of a 3 × 3 convolutional layer that generates the logits. There are two ways to compute the segmentation loss: either the segmentation logits are upsampled to the resolution of the ground truth, or the ground truth is downsampled to the resolution of the logits. We found that the first strategy performs slightly better, so it is adopted in what follows.
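To make the per-layer branch structure concrete, the following PyTorch sketch pairs the two heads for a single decoder layer. The anchor count and channel arithmetic are our assumptions; the paper specifies only the kernel sizes and that the logits are upsampled to the ground-truth resolution:

```python
import torch.nn as nn
import torch.nn.functional as F

class PairHeads(nn.Module):
    """Paired detection/segmentation heads for one decoder layer (a sketch)."""
    def __init__(self, in_ch, num_classes, num_anchors=6):
        super().__init__()
        self.det_feat = nn.Conv2d(in_ch, in_ch, 3, padding=1)
        self.cls = nn.Conv2d(in_ch, num_anchors * (num_classes + 1), 1)  # classification
        self.reg = nn.Conv2d(in_ch, num_anchors * 4, 1)                  # box regression
        self.seg = nn.Conv2d(in_ch, num_classes + 1, 3, padding=1)       # seg logits

    def forward(self, x, gt_size):
        f = self.det_feat(x)
        # Upsample logits to the ground-truth resolution (the better of the
        # two loss strategies described above).
        seg_logits = F.interpolate(self.seg(x), size=gt_size,
                                   mode='bilinear', align_corners=False)
        return self.cls(f), self.reg(f), seg_logits
```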
Triply supervised decoder network (TripleNet)
To further improve the performance of joint object detection and semantic segmentation, the triply supervised decoder network (TripleNet) is proposed, in which detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are added to each layer of the decoder. Fig. 3(a) gives the detailed architecture of TripleNet. Compared to PairNet, TripleNet adds some new modules (i.e., multiscale fused segmentation, the inner-connected module, class-agnostic segmentation supervision, and attention skip-layer fusion). In the following, we introduce these modules in detail.
Multiscale fused segmentation. It has been demonstrated that multi-scale features are useful for semantic segmentation [7,49,46]. To use the multi-scale features of different layers in the decoder for better semantic segmentation, the feature maps of different layers in the decoder are upsampled to the same spatial resolution and concatenated together. After that, a 3 × 3 convolutional layer is used to generate the segmentation logits. Compared to segmentation based on a single layer of the decoder, the multilayer fused features make better use of context information. Thus, multiscale fused segmentation is used for the final prediction at the test stage, while the semantic segmentation based on each individual layer of the decoder can be seen as a deep supervision for feature learning.
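A minimal sketch of this fusion step, under the assumption that the decoder feature maps are simply bilinearly upsampled before concatenation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleFusedSeg(nn.Module):
    """Upsample every decoder feature map to one resolution, concatenate,
    and apply a single 3x3 conv to produce the fused segmentation logits."""
    def __init__(self, chans, num_classes, out_size):
        super().__init__()
        self.out_size = out_size
        self.head = nn.Conv2d(sum(chans), num_classes + 1, 3, padding=1)

    def forward(self, decoder_feats):
        ups = [F.interpolate(f, size=self.out_size, mode='bilinear',
                             align_corners=False) for f in decoder_feats]
        return self.head(torch.cat(ups, dim=1))
```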
The inner-connected module. In Section 3.1, PairNet only shares the base network between object detection and semantic segmentation; the two branches do not interact. To further help object detection, an inner-connected module is proposed to refine detection using the logits of semantic segmentation. Fig. 3(b) shows the inner-connected module at layer i. The feature map at layer i first goes through a 3 × 3 convolutional layer to produce the segmentation logits for the semantic segmentation branch. Meanwhile, the segmentation logits go through two 3 × 3 convolutional layers to generate new feature maps, which are then concatenated with the feature maps at layer i. Based on the concatenated feature maps, a 3 × 3 convolutional layer generates the feature map for the object detection branch.
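The module can be sketched as follows; the intermediate channel widths and activations are our assumptions, while the 3 × 3 kernels and the logits-to-detection routing follow the description above:

```python
import torch
import torch.nn as nn

class InnerConnected(nn.Module):
    """Inner-connected module at decoder layer i: segmentation logits are
    mapped back into features and fused into the detection pathway."""
    def __init__(self, in_ch, num_classes):
        super().__init__()
        self.to_logits = nn.Conv2d(in_ch, num_classes + 1, 3, padding=1)
        self.logits_to_feat = nn.Sequential(       # the two 3x3 convs described above
            nn.Conv2d(num_classes + 1, in_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * in_ch, in_ch, 3, padding=1)

    def forward(self, feat):
        seg_logits = self.to_logits(feat)          # feeds the segmentation branch
        det_feat = self.fuse(torch.cat([feat, self.logits_to_feat(seg_logits)], 1))
        return seg_logits, det_feat                # det_feat feeds the detection branch
```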
Class-agnostic segmentation supervision. The semantic segmentation discussed above is class-aware: it aims to simultaneously identify specific object categories and the background. We argue that class-aware semantic segmentation may ignore the discrimination between objects and the background. Therefore, a class-agnostic segmentation supervision module is further added to each layer of the decoder. Specifically, a 3 × 3 convolutional layer is added to generate the logits of class-agnostic semantic segmentation. To generate the ground truth for class-agnostic semantic segmentation, the objects of all categories are merged into one category, and the background is set as another category.

Table 1. Ablation experiments of PairNet and TripleNet on the VOC2012-val-seg set. The backbone model is ResNet-50 [15], and the input image is rescaled to the size of 300 × 300. "MFS" means multiscale fused segmentation, "IC" means inner-connected module, "CAS" means class-agnostic segmentation, and "ASF" means attention skip-layer fusion.

Attention skip-layer fusion. In Section 3.1, PairNet simply fuses the feature maps of the decoder with the corresponding feature maps of the encoder. Generally, the features from a layer of the encoder have relatively low-level semantics, while those from a layer of the decoder have relatively high-level semantics. To enhance informative features and suppress less useful features from the encoder using the features from the decoder, a Squeeze-and-Excitation (SE) [16] block is used. The input of the SE block is the layer of the decoder, and the output of the SE block is used to scale the layer of the encoder. After that, the layer of the decoder and the scaled layer of the encoder are concatenated for fusion.
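The fusion can be sketched in PyTorch as below; the reduction ratio and the exact placement of the upsampling are our assumptions around the stated SE-based design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionSkipFusion(nn.Module):
    """SE-style attention: decoder features produce channel weights that
    rescale the encoder features before concatenation."""
    def __init__(self, dec_ch, enc_ch, reduction=16):
        super().__init__()
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                  # squeeze
            nn.Conv2d(dec_ch, dec_ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dec_ch // reduction, enc_ch, 1), nn.Sigmoid())  # excitation

    def forward(self, dec_feat, enc_feat):
        dec_up = F.interpolate(dec_feat, size=enc_feat.shape[-2:],
                               mode='bilinear', align_corners=False)
        scaled = enc_feat * self.se(dec_feat)      # suppress less useful channels
        return torch.cat([dec_up, scaled], dim=1)  # input to a residual unit
```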
Experiments
Datasets
To demonstrate the effectiveness of proposed methods and compare with same state-of-the-art methods, some experiments on the famous VOC 2007 and VOC 2012 datasets [8] are conducted in this section.
The PASCAL VOC challenge [8] has been held annually since 2006 and consists of three principal challenges (i.e., image classification, object detection, and semantic segmentation). Among these annual challenges, the VOC 2007 and VOC 2012 datasets are usually used to evaluate the performance of object detection and semantic segmentation; they have 20 object categories. The VOC 2007 dataset contains 5011 trainval images and 4952 test images. The VOC 2012 dataset is split into three subsets (i.e., train, val, and test). The train set contains 5717 images for detection and 1464 images for semantic segmentation (called VOC12-train-seg). The val set contains 5823 images for detection and 1449 images for segmentation (called VOC12-val-seg). The test set contains 10991 images for detection and 1456 images for segmentation. To enlarge the training set for semantic segmentation, the additional segmentation data provided by [12] is used, which contains 10582 training images (called VOC12-trainaug-seg).
For object detection, mean average precision (mAP) is used for performance evaluation. On the PASCAL VOC datasets, mAP is calculated with an IoU threshold of 0.5. For semantic segmentation, mean intersection over union (mIoU) is used for performance evaluation.
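For reference, the standard mIoU computation from a confusion matrix can be written as follows (our helper, using the usual definition):

```python
import numpy as np

def mean_iou(conf):
    """mIoU from a confusion matrix; conf[i, j] counts pixels of true class i
    predicted as class j (standard definition, including background)."""
    inter = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    valid = union > 0                    # ignore classes absent from both
    return float((inter[valid] / union[valid]).mean())
```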
Ablation experiments on the VOC 2012 dataset
In this subsection, experiments are conducted on PASCAL VOC 2012 to validate the effectiveness of the proposed method. The VOC12-trainaug-seg set is used for training and the VOC12-val-seg set for performance evaluation; both have ground truth for object detection and semantic segmentation. The input images are rescaled to the size of 300 × 300, and the mini-batch size is 32. The total number of training iterations is 40k: the learning rate is 0.0001 for the first 25k iterations, 0.00001 for the following 10k iterations, and 0.000001 for the last 5k iterations.
The top part of Table 1 shows the ablation experiments for PairNet. When the different layers of the decoder are used only for multi-scale object detection (i.e., Table 1(a)), the mAP of object detection is 78.0%. When the different layers of the decoder are used for semantic segmentation (i.e., Table 1(c)), the mIoU of semantic segmentation is 72.5%. When each layer of the decoder is used for object detection and semantic segmentation together (i.e., Table 1(d)), the mAP and mIoU of PairNet are 78.9% and 73.1%, respectively. In other words, PairNet improves both object detection and semantic segmentation, which indicates that joint object detection and semantic segmentation on each layer of the decoder is useful. Meanwhile, the method using all the different layers of the decoder for segmentation (i.e., Table 1(c)) performs better than the method using only the last layer of the decoder for segmentation (i.e., Table 1(b)), presumably because using each layer of the decoder for semantic segmentation gives a much deeper supervision.

Figure 4. Detection and segmentation results of the four methods in Table 1 (i.e., "only det", "only seg", PairNet, and TripleNet). Columns: GT of det, GT of seg, only det (Table 1(a)), only seg (Table 1(b)), our PairNet (Table 1(d)), and our TripleNet (Table 1(g)). (a) demonstrates that detection and segmentation can both be improved by PairNet and TripleNet. (b) demonstrates that detection is mainly improved by PairNet or TripleNet. (c) demonstrates that segmentation is mainly improved by PairNet or TripleNet.

Table 2. Comparison of BlitzNet, the proposed PairNet, and the proposed TripleNet. All the methods are re-implemented with the same parameter settings.
The bottom part of Table 1 shows the ablation experiments for TripleNet. On top of PairNet, TripleNet adds four modules (i.e., MFS, IC, CAS, and ASF). When adding the MFS module, TripleNet outperforms PairNet by 0.1% on object detection and 0.4% on semantic segmentation. When adding the MFS and IC modules, TripleNet outperforms PairNet by 0.6% on object detection and 0.5% on semantic segmentation. When all four modules are added, TripleNet has the best detection and segmentation performance.

Fig. 4 shows some qualitative results of the four methods in Table 1. The first two columns are the ground truth of detection and segmentation. The results of only detection (Table 1(a)) and only segmentation (Table 1(c)) are shown in the third and fourth columns. The results of PairNet (Table 1(d)) and TripleNet (Table 1(g)) are shown in the fifth to eighth columns. Fig. 4(a) gives examples where both detection and segmentation are improved by joint detection and segmentation. For example, in the first row, "only det" and "only seg" both miss three potted plants, while PairNet misses only one potted plant and TripleNet does not miss any. Fig. 4(b) shows examples of improved detection results. For example, in the first row, "only det" detects only one ship, PairNet detects three ships, and TripleNet detects four ships. Fig. 4(c) shows examples of improved segmentation results. For example, in the second row, "only seg" recognizes a blue bag as a motorbike, but PairNet and TripleNet recognize the blue bag as background.
Meanwhile, the proposed PairNet and TripleNet are also compared to the related BlitzNet [7]. For a fair comparison, BlitzNet was re-implemented with similar parameter settings to the proposed PairNet and TripleNet. PairNet, which simply joins detection and segmentation in each layer of the decoder, is already comparable with BlitzNet. TripleNet outperforms BlitzNet on both object detection and semantic segmentation, which demonstrates that the proposed method makes full use of the mutual information between the two tasks.
Comparison with state-of-the-art methods on the VOC2012 test dataset
In this section, the proposed TripleNet is compared with some state-of-the-art methods on the VOC 2012 test set. Among these methods, SSD [29], RON [18], DSSD [9], DES [47], RefineDet [48], and DFPR [19] are used only for object detection, while ParseNet [27], DeepLab v2 [5], DPN [28], RefineNet [25], PSPNet [49], and DFPN [45] are used only for semantic segmentation. Table 3 shows the object detection results (mAP) and semantic segmentation results (mIoU) of these methods on the VOC 2012 test set. It can be seen that most state-of-the-art methods output only detection results (i.e., SSD, RON, DSSD, DES, RefineDet, and DFPR) or only segmentation results (i.e., FCN, ParseNet, DeepLab, DPN, PSPNet, and DFPN). Only BlitzNet and our proposed TripleNet simultaneously output object detection and semantic segmentation results. The mAP and mIoU of BlitzNet are 79.0% and 75.6%, while those of TripleNet are 81.0% and 82.9%. Thus, TripleNet outperforms BlitzNet by 2.0% on object detection and 7.3% on semantic segmentation. TripleNet also nearly achieves state-of-the-art performance on both object detection and semantic segmentation.
Comparison with some state-of-the-art methods on the VOC 2007 test dataset
In this section, the proposed TripleNet and some state-of-the-art methods (i.e., SSD [29], DES [47], DSSD [9], STDN [50], BlitzNet [7], RefineDet [48], and DFPR [19]) are further compared on the VOC 2007 test set. Because only the ground truth of object detection is provided, these methods are evaluated only on object detection. Table 4 shows the mAP of these methods. The mAP of TripleNet is 82.7%, which is higher than that of all the state-of-the-art methods.
Table 4 (excerpt): RefineDet512 [48], VGG16 backbone, 81.8 mAP.
Conclusion
In this paper, we proposed two fully convolutional networks (i.e., PairNet and TripleNet) for joint object detection and semantic segmentation. PairNet simultaneously predicts objects of different scales with different layers and parses pixel semantic labels with all the different layers. TripleNet adds four modules (i.e., multiscale fused segmentation, the inner-connected module, class-agnostic segmentation supervision, and attention skip-layer fusion) to PairNet. Experiments demonstrate that TripleNet achieves state-of-the-art performance on both object detection and semantic segmentation.
| 3,116 |
1809.09299
|
2951122667
|
Joint object detection and semantic segmentation can be applied in many fields, such as self-driving cars and unmanned surface vessels. Initial, important progress towards this goal has been achieved by simply sharing deep convolutional features between the two tasks. However, this simple scheme is unable to make full use of the fact that detection and segmentation are mutually beneficial. To overcome this drawback, we propose a framework called TripleNet, in which triple supervisions, including detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision, are imposed on each layer of the decoder network. Class-agnostic segmentation supervision provides an objectness prior for both semantic segmentation and object detection. Besides the three types of supervisions, two light-weight modules (i.e., an inner-connected module and attention skip-layer fusion) are also incorporated into each layer of the decoder. In the proposed framework, detection and segmentation can sufficiently boost each other. Moreover, class-agnostic and class-aware segmentation on each decoder layer are not performed at the test stage; therefore, no extra computational costs are introduced at the test stage. Experimental results on the VOC2007 and VOC2012 datasets demonstrate that the proposed TripleNet is able to improve both the detection and segmentation accuracies without adding extra computational costs.
|
One-stage methods directly predict object classes and bounding boxes in a single network. YOLO @cite_44 and SSD @cite_16 are two of the earliest one-stage methods. After that, many variants were proposed @cite_32 @cite_29 @cite_20 @cite_19 . DSSD @cite_32 and RON @cite_29 use an encoder-decoder network to add context information for multi-scale object detection. To train an object detector from scratch, DSOD @cite_20 uses dense layer-wise connections on SSD for deep supervision. Instead of using in-network feature maps of different resolutions for multi-scale object detection, STDN @cite_19 uses a scale-transferrable module to generate high-resolution feature maps of different scales from the last feature map. To address class imbalance during training, RetinaNet @cite_23 introduces the focal loss to down-weight the contribution of easy samples.
|
{
"abstract": [
"We present RON, an efficient and effective framework for generic object detection. Our motivation is to smartly associate the best of the region-based (e.g., Faster R-CNN) and region-free (e.g., SSD) methodologies. Under fully convolutional architecture, RON mainly focuses on two fundamental problems: (a) multi-scale object localization and (b) negative sample mining. To address (a), we design the reverse connection, which enables the network to detect objects on multi-levels of CNNs. To deal with (b), we propose the objectness prior to significantly reduce the searching space of objects. We optimize the reverse connection, objectness prior and object detector jointly by a multi-task loss function, thus RON can directly predict final detection results from all locations of various feature maps. Extensive experiments on the challenging PASCAL VOC 2007, PASCAL VOC 2012 and MS COCO benchmarks demonstrate the competitive performance of RON. Specifically, with VGG-16 and low resolution 384×384 input size, the network gets 81.3 mAP on PASCAL VOC 2007, 80.7 mAP on PASCAL VOC 2012 datasets. Its superiority increases when datasets become larger and more difficult, as demonstrated by the results on the MS COCO dataset. With 1.5G GPU memory at test phase, the speed of the network is 15 FPS, 3 times faster than the Faster R-CNN counterpart. Code will be made publicly available.",
"The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with @math input achieves 81.5 mAP on VOC2007 test, 80.0 mAP on VOC2012 test, and 33.2 mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset.",
"",
"",
"The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"We present Deeply Supervised Object Detector (DSOD), a framework that can learn object detectors from scratch. State-of-the-art object objectors rely heavily on the off-the-shelf networks pre-trained on large-scale classification datasets like ImageNet, which incurs learning bias due to the difference on both the loss functions and the category distributions between classification and detection tasks. Model fine-tuning for the detection task could alleviate this bias to some extent but not fundamentally. Besides, transferring pre-trained models from classification to detection between discrepant domains is even more difficult (e.g. RGB to depth images). A better solution to tackle these two critical problems is to train object detectors from scratch, which motivates our proposed DSOD. Previous efforts in this direction mostly failed due to much more complicated loss functions and limited training data in object detection. In DSOD, we contribute a set of design principles for training object detectors from scratch. One of the key findings is that deep supervision, enabled by dense layer-wise connections, plays a critical role in learning a good detector. Combining with several other principles, we develop DSOD following the single-shot detection (SSD) framework. Experiments on PASCAL VOC 2007, 2012 and MS COCO datasets demonstrate that DSOD can achieve better results than the state-of-the-art solutions with much more compact models. For instance, DSOD outperforms SSD on all three benchmarks with real-time detection speed, while requires only 1 2 parameters to SSD and 1 10 parameters to Faster RCNN. Our code and models are available at: this https URL ."
],
"cite_N": [
"@cite_29",
"@cite_32",
"@cite_44",
"@cite_19",
"@cite_23",
"@cite_16",
"@cite_20"
],
"mid": [
"2962917547",
"2579985080",
"",
"",
"2743473392",
"2193145675",
"2743388417"
]
}
|
Triply Supervised Decoder Networks for Joint Detection and Segmentation
|
Object detection and semantic segmentation are two fundamental and important tasks in the field of computer vision. In recent years, object detection [36,29,26] and semantic segmentation [30,5,1] with deep convolutional networks [20,39,14,17] have each achieved great progress. However, most state-of-the-art methods focus on a single task and do not join object detection and semantic segmentation together, even though joint object detection and semantic segmentation is necessary and important in many applications, such as self-driving cars and unmanned surface vessels.
Figure 1. Some architectures of joint detection and segmentation. (a) The last layer of the encoder is used for detection and segmentation [2]. (b) The branch for detection is refined by the branch for segmentation [31,47]. (c) Each layer of the decoder detects objects of different scales, and the fused layer is used for segmentation [7]. (d) The proposed PairNet, where each layer of the decoder is simultaneously used for detection and segmentation. (e) The proposed TripleNet, which has three types of supervisions and some lightweight modules.
In fact, object detection and semantic segmentation are highly related. On the one hand, semantic segmentation, usually used as a multi-task supervision, can help improve object detection [31,24]. On the other hand, object detection can be used as prior knowledge to help improve the performance of semantic segmentation [14,34].
Due to application requirements and task relevance, joint object detection and semantic segmentation has gradually attracted the attention of researchers. Fig. 1 summarizes three typical approaches to joint object detection and semantic segmentation. Fig. 1(a) shows the simplest and most naive way, where one branch for object detection and one branch for semantic segmentation are attached in parallel to the last layers of the encoder [2]. In Fig. 1(b), the branch for object detection is further refined by the features from the branch for semantic segmentation [31,47]. Recently, the encoder-decoder network has also been used for joint object detection and semantic segmentation. In Fig. 1(c), each layer of the decoder is used for multi-scale object detection, and the concatenated feature map from different layers of the decoder is used for semantic segmentation [7]. The above methods have achieved great success for detection and segmentation. However, the performance is still far from the strict demands of real applications such as self-driving cars and unmanned surface vessels. One possible reason is that the mutual benefit between the two tasks is not fully exploited.
To exploit the mutual benefit between the joint object detection and semantic segmentation tasks, in this paper, we propose to impose three types of supervisions (i.e., detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision) on each layer of the decoder network. Meanwhile, two light-weight modules (i.e., the inner-connected module and attention skip-layer fusion) are also incorporated. The corresponding network is called TripleNet (see Fig. 1(e)). It is noted that we also propose a variant that only imposes the detection-oriented supervision and class-aware segmentation supervision on each layer of the decoder, which is called PairNet (see Fig. 1(d)). The contributions of this paper can be summarized as follows:
(1) Two novel frameworks (i.e., PairNet and TripleNet) for joint object detection and semantic segmentation are proposed. In TripleNet, detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are imposed on each layer of the decoder. Meanwhile, two light-weight modules (i.e., the inner-connected module and attention skip-layer fusion) are also incorporated into each layer of the decoder.
(2) Many synergies are gained from TripleNet: both detection and segmentation accuracies are significantly improved. The improvement does not come at the expense of extra computational costs, because the class-agnostic segmentation and class-aware segmentation are not performed in each layer of the decoder at the test stage.
(3) Experiments on the VOC 2007 and VOC 2012 datasets are conducted to demonstrate the effectiveness of the proposed TripleNet.
The rest of this paper is organized as follows. Section 2 reviews related work on object detection and semantic segmentation. Section 3 introduces the proposed method in detail. Experiments are presented in Section 4. Finally, conclusions are drawn in Section 5.
The proposed methods
In recent years, fully convolutional networks (FCNs) with an encoder-decoder structure have achieved great success in object detection [26,9] and semantic segmentation [1]. For example, DSSD [9,34] and RetinaNet [26] use different layers of the decoder to detect objects of different scales. By using the encoder-decoder structure, SegNet [1] and LargeKernel [33] generate high-resolution logits for semantic segmentation. Based on the above observations, a very natural and simple idea is that an FCN with an encoder-decoder structure is suitable for joint object detection and semantic segmentation.
In this section, we give a detailed introduction about the proposed paired supervision decoder network (i.e., PairNet) and triply supervised decoder network (i.e., TripleNet) for joint object detection and semantic segmentation.
Paired supervision decoder network (PairNet)
Based on the encoder-decoder structure, a feature pyramid network is naturally proposed to join object detection and semantic segmentation. Namely, the supervision of object detection and semantic segmentation is added to each layer of the decoder, which is called PairNet. On the one hand, PairNet uses different layers of the decoder to detect objects of different scales. On the other hand, instead of using only the last high-resolution layer for semantic segmentation, as adopted by most state-of-the-art methods [1,33], PairNet uses each layer of the decoder to parse pixel semantic labels. Though the proposed PairNet is very simple and naive, to the best of our knowledge it has not been explored for joint object detection and semantic segmentation. Fig. 2(a) gives the detailed architecture of PairNet. The input image first goes through a fully convolutional network with an encoder-decoder structure. The encoder gradually down-samples the feature map; in this paper, ResNet-50 or ResNet-101 [15] (i.e., res1-res4) together with some newly added residual blocks (i.e., res5-res7) constructs the encoder. The decoder gradually maps the low-resolution feature map back to a high-resolution feature map. To enhance context information, skip-layer fusion is used to fuse each feature map from the decoder with the corresponding feature map from the encoder. Fig. 2(b) shows the skip-layer fusion module: the feature maps in the decoder are first upsampled by bilinear interpolation and then concatenated with the corresponding feature maps of the same resolution in the encoder. After that, the concatenated feature maps go through a residual unit to generate the output feature maps.
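As a concrete illustration, the following is a minimal PyTorch sketch of the skip-layer fusion step just described (bilinear upsampling, concatenation, then one residual unit); the module name, channel arguments, and the 1 × 1 projection used to match channels for the residual sum are our own assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipLayerFusion(nn.Module):
    """Fuses a decoder feature map with the same-resolution encoder map:
    bilinear upsampling -> concatenation -> one residual unit."""
    def __init__(self, dec_channels, enc_channels, out_channels):
        super().__init__()
        in_ch = dec_channels + enc_channels
        self.conv1 = nn.Conv2d(in_ch, out_channels, 3, padding=1)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, padding=1)
        # 1x1 projection so the residual sum has matching channels (an assumption).
        self.proj = nn.Conv2d(in_ch, out_channels, 1)

    def forward(self, dec_feat, enc_feat):
        dec_up = F.interpolate(dec_feat, size=enc_feat.shape[-2:],
                               mode="bilinear", align_corners=False)
        fused = torch.cat([dec_up, enc_feat], dim=1)
        out = F.relu(self.conv1(fused))
        out = self.conv2(out)
        return F.relu(out + self.proj(fused))  # residual unit
```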
To join object detection and semantic segmentation, each layer of the decoder is further split into two different branches. The branch for object detection consists of a 3 × 3 convolutional layer and two sibling 1 × 1 convolutional layers for object classification and bounding box regression. The detection branches at different layers are used to detect objects of different scales. Specifically, the branch at a front layer of the decoder, with low resolution, is used to detect large-scale objects, while the branch at a latter layer, with high resolution, is used to detect small-scale objects.
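A minimal sketch of such a per-layer detection branch, assuming an SSD-style anchor design; the anchor count, the extra background class, and all names are illustrative assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class DetectionHead(nn.Module):
    """One 3x3 conv followed by two sibling 1x1 convs: one producing per-anchor
    class scores and one producing per-anchor box regression offsets."""
    def __init__(self, in_channels, num_anchors, num_classes):
        super().__init__()
        self.shared = nn.Conv2d(in_channels, in_channels, 3, padding=1)
        # +1 for a background class (an assumption; the text does not specify).
        self.cls = nn.Conv2d(in_channels, num_anchors * (num_classes + 1), 1)
        self.reg = nn.Conv2d(in_channels, num_anchors * 4, 1)

    def forward(self, x):
        x = F.relu(self.shared(x))
        return self.cls(x), self.reg(x)
```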
The branch for semantic segmentation consists of a 3 × 3 convolutional layer that generates the logits. There are two different ways to compute the segmentation loss: the first is to upsample the segmentation logits to the same resolution as the ground truth, and the second is to downsample the ground truth to the same resolution as the logits. We found that the first strategy performs slightly better, so it is adopted in the following experiments.
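A sketch of the first (adopted) strategy, under the assumption that per-pixel cross entropy is the segmentation loss; the void-label value 255 follows the PASCAL VOC convention.

```python
import torch.nn.functional as F

def segmentation_loss(logits, target):
    """Upsample the logits to the ground-truth resolution (the first strategy),
    then apply per-pixel cross entropy. `target` is (N, H, W) with class indices."""
    logits_up = F.interpolate(logits, size=target.shape[-2:],
                              mode="bilinear", align_corners=False)
    # 255 marks void pixels in PASCAL VOC annotations and is ignored.
    return F.cross_entropy(logits_up, target, ignore_index=255)
```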
Triply supervised decoder network (TripleNet)
To further improve the performance of joint object detection and semantic segmentation, a triply supervised decoder network (called TripleNet) is further proposed, where detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are added on each layer of the decoder. Fig. 3(a) gives the detailed architecture of TripleNet. Compared to PairNet, TripleNet adds some new modules (i.e., multiscale fused segmentation, the inner-connected module, class-agnostic segmentation supervision, and attention skip-layer fusion). In the following, we introduce these modules in detail.
Multiscale fused segmentation It has been demonstrated that multi-scale features are useful for semantic segmentation [7,49,46]. To use the multi-scale features of different layers in the decoder for better semantic segmentation, the feature maps of different layers in the decoder are upsampled to the same spatial resolution and concatenated together. After that, a 3 × 3 convolutional layer is used to generate the segmentation logits. Compared to segmentation based on a single layer of the decoder, the multilayer fused features can make better use of context information. Thus, multilayer fused segmentation is used for the final prediction at the test stage. Meanwhile, the semantic segmentation based on each layer of the decoder can be seen as a deep supervision for feature learning.
The inner-connected module In Section 3.1, PairNet only shares the base network between object detection and semantic segmentation, while the branches for the two tasks do not interact. To further help object detection, an inner-connected module is proposed to refine object detection with the logits of semantic segmentation. Fig. 3(b) shows the inner-connected module in layer i. The feature map in layer i first goes through a 3 × 3 convolutional layer to produce the segmentation logits for the semantic segmentation branch. Meanwhile, the segmentation logits go through two 3 × 3 convolutional layers to generate new feature maps, which are further concatenated with the feature maps in layer i. Based on the concatenated feature maps, a 3 × 3 convolutional layer is used to generate the feature map for the object detection branch.
Class-agnostic segmentation supervision Semantic segmentation mentioned above is class-aware, which aims to simultaneously identify specific object categories and the background. We argue that class-aware semantic segmentation may ignore the discrimination between objects and the background. Therefore, a class-agnostic segmentation supervision module is further added to each layer of the decoder. Specifically, a 3 × 3 convolutional layer is added to generate the logits of class-agnostic semantic segmentation. To generate the ground truth of class-agnostic semantic segmentation, the objects of all categories are set as one category, and the background is set as another category.
Table 1. Ablation experiments of PairNet and TripleNet on the VOC2012-val-seg set. The backbone model is ResNet-50 [15], and the input image is rescaled to the size of 300 × 300. "MFS" means multiscale fused segmentation, "IC" means inner-connected module, "CAS" means class-agnostic segmentation, and "ASF" means attention skip-layer fusion.
Attention skip-layer fusion In Section 3.1, PairNet simply fuses the feature maps of the decoder and the corresponding feature maps of the encoder. Generally, the features from the encoder layers have relatively low-level semantics, and those from the decoder layers have relatively high-level semantics. To enhance informative features and suppress less useful features from the encoder using the features from the decoder, a Squeeze-and-Excitation (SE) [16] block is used. The input of the SE block is a layer of the decoder, and the output of the SE block is used to scale the corresponding layer of the encoder. After that, the layer of the decoder and the scaled layer of the encoder are concatenated for fusion.
Experiments
Datasets
To demonstrate the effectiveness of the proposed methods and compare them with some state-of-the-art methods, experiments on the well-known VOC 2007 and VOC 2012 datasets [8] are conducted in this section.
The PASCAL VOC challenge [8] has been held annually since 2006 and consists of three principal challenges (i.e., image classification, object detection, and semantic segmentation). Among these annual challenges, the VOC 2007 and VOC 2012 datasets, which have 20 object categories, are usually used to evaluate the performance of object detection and semantic segmentation. The VOC 2007 dataset contains 5011 trainval images and 4952 test images. The VOC 2012 dataset is split into three subsets (i.e., train, val, and test). The train set contains 5717 images for detection and 1464 images for semantic segmentation (called VOC12-train-seg). The val set contains 5823 images for detection and 1449 images for segmentation (called VOC12-val-seg). The test set contains 10991 images for detection and 1456 images for segmentation. To enlarge the training set for semantic segmentation, the additional segmentation data provided by [12] is used, which contains 10582 training images (called VOC12-trainaug-seg).
For object detection, mean average precision (mAP) is used for performance evaluation. On the PASCAL VOC datasets, mAP is calculated under an IoU threshold of 0.5. For semantic segmentation, mean intersection over union (mIoU) is used for performance evaluation.
Ablation experiments on the VOC 2012 dataset
In this subsection, experiments are conducted on PASCAL VOC 2012 to validate the effectiveness of the proposed method. The VOC12-trainaug-seg set is used for training and the VOC12-val-seg set is used for performance evaluation, since both have ground truth for object detection as well as semantic segmentation. The input images are rescaled to the size of 300 × 300, and the mini-batch size is 32. The total number of training iterations is 40k, where the learning rate is 0.0001 for the first 25k iterations, 0.00001 for the following 10k iterations, and 0.000001 for the last 5k iterations.
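As a small illustration of the schedule just described: the optimizer and any warmup behavior are not specified in the text, so this Python sketch encodes only the stated piecewise-constant rates.

```python
def learning_rate(iteration):
    """Piecewise-constant schedule for the 40k training iterations."""
    if iteration < 25_000:
        return 1e-4   # first 25k iterations
    if iteration < 35_000:
        return 1e-5   # following 10k iterations
    return 1e-6       # last 5k iterations
```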
The top part of Table 1 shows the ablation experiments for PairNet. When all the different layers of the decoder are used only for multi-scale object detection (i.e., Table 1(a)), the mAP of object detection is 78.0%. When all the different layers of the decoder are used for semantic segmentation (i.e., Table 1(c)), the mIoU of semantic segmentation is 72.5%. When all the different layers of the decoder are used for object detection and semantic segmentation together (i.e., Table 1(d)), the mAP and mIoU of PairNet are 78.9% and 73.1%, respectively. Namely, PairNet can improve both object detection and semantic segmentation, which indicates that joint object detection and semantic segmentation on each layer of the decoder is useful. Meanwhile, the method using all the different layers of the decoder for segmentation (i.e., Table 1(c)) has better performance than the method using only the last layer of the decoder for segmentation, which indicates that imposing segmentation supervision on each layer of the decoder can give a much deeper supervision.
Figure 4. Qualitative results of the methods in Table 1 (i.e., "only det", "only seg", PairNet, and TripleNet): (a) shows cases where both detection and segmentation are improved by PairNet and TripleNet; (b) shows cases where detection is mainly improved; (c) shows cases where segmentation is mainly improved.
Table 2. Comparison of BlitzNet, the proposed PairNet, and the proposed TripleNet. All the methods are re-implemented with the same parameter settings.
The bottom part of Table 1 shows the ablation experiments for TripleNet. Based on PairNet, TripleNet adds four modules (i.e., MFS, IC, CAS, and ASF). When adding the MFS module, TripleNet outperforms PairNet by 0.1% on object detection and 0.4% on semantic segmentation, respectively. When adding the MFS and IC modules, TripleNet outperforms PairNet by 0.6% on object detection and 0.5% on semantic segmentation. When adding all four modules, TripleNet has the best detection and segmentation performance. Fig. 4 shows qualitative results of the methods in Table 1. The first two columns are the ground truth of detection and segmentation; the results of only detection (Table 1(a)) and only segmentation (Table 1(c)) are shown in the third and fourth columns; the results of PairNet (Table 1(d)) and TripleNet (Table 1(g)) are shown in the fifth to eighth columns. In Fig. 4(a), examples where both detection and segmentation are improved by joint detection and segmentation are given. For example, in the first row, "only det" and "only seg" both miss three potted plants, while PairNet misses only one potted plant and TripleNet does not miss any. In Fig. 4(b), examples of improved detection results are shown. For example, in the first row, "only det" can detect only one ship, PairNet can detect three ships, and TripleNet can detect four ships. In Fig. 4(c), examples of improved segmentation results are shown. For example, in the second row, "only seg" recognizes a blue bag as a motorbike, but PairNet and TripleNet recognize the blue bag as background.
Meanwhile, the proposed PairNet and TripleNet are also compared to the related BlitzNet [7]. For a fair comparison, BlitzNet is re-implemented with similar parameter settings to the proposed PairNet and TripleNet. PairNet, which simply joins detection and segmentation in each layer of the decoder, is already comparable with BlitzNet. TripleNet outperforms BlitzNet on both object detection and semantic segmentation, which demonstrates that the proposed method can make full use of the mutual information to improve the two tasks.
Comparison with state-of-the-art methods on the VOC2012 test dataset
In this section, the proposed TripleNet is compared with some state-of-the-art methods on the VOC 2012 test set. Among these methods, SSD [29], RON [18], DSSD [9], DES [47], RefineDet [48], and DFPR [19] are used only for object detection, while ParseNet [27], DeepLab V2 [5], DPN [28], RefineNet [25], PSPNet [49], and DFPN [45] are used only for semantic segmentation. Table 3 shows the object detection results (mAP) and semantic segmentation results (mIoU) of these methods on the VOC 2012 test set. It can be seen that most state-of-the-art methods can only output detection results (i.e., SSD, RON, DSSD, DES, RefineDet, and DFPR) or segmentation results (i.e., FCN, ParseNet, DeepLab, DPN, PSPNet, and DFPN). Only BlitzNet and our proposed TripleNet can simultaneously output the results of object detection and semantic segmentation. The mAP and mIoU of BlitzNet are 79.0% and 75.6%, while those of TripleNet are 81.0% and 82.9%. Thus, TripleNet outperforms BlitzNet by 2.0% on object detection and 7.3% on semantic segmentation. It can also be seen that TripleNet achieves nearly state-of-the-art performance on both object detection and semantic segmentation.
Comparison with some state-of-the-art methods on the VOC 2007 test dataset
In this section, the proposed TripleNet and some state-of-the-art methods (i.e., SSD [29], DES [47], DSSD [9], STDN [50], BlitzNet [7], RefineDet [48], and DFPR [19]) are further compared on the VOC 2007 test set. Because only the ground truth of object detection is provided, these methods are evaluated only on object detection. Table 4 shows the mAP of these methods. The mAP of TripleNet is 82.7%, which is higher than that of all the compared state-of-the-art methods.
Table 4 (excerpt): RefineDet512 [48] with a VGG16 backbone achieves 81.8 mAP; the per-class AP columns were not recovered from extraction.
Conclusion
In this paper, we proposed two fully convolutional networks (i.e., PairNet and TripleNet) for joint object detection and semantic segmentation. PairNet simultaneously predicts objects of different scales with different decoder layers and parses pixel semantic labels with all the decoder layers. TripleNet adds four modules (i.e., multiscale fused segmentation, the inner-connected module, class-agnostic segmentation supervision, and attention skip-layer fusion) to PairNet. Experiments demonstrate that TripleNet can achieve state-of-the-art performance on both object detection and semantic segmentation.
| 3,116 |
1809.09299
|
2951122667
|
Joint object detection and semantic segmentation can be applied to many fields, such as self-driving cars and unmanned surface vessels. An initial and important progress towards this goal has been achieved by simply sharing the deep convolutional features for the two tasks. However, this simple scheme is unable to make full use of the fact that detection and segmentation are mutually beneficial. To overcome this drawback, we propose a framework called TripleNet where triple supervisions including detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are imposed on each layer of the decoder network. Class-agnostic segmentation supervision provides an objectness prior knowledge for both semantic segmentation and object detection. Besides the three types of supervisions, two light-weight modules (i.e., inner-connected module and attention skip-layer fusion) are also incorporated into each layer of the decoder. In the proposed framework, detection and segmentation can sufficiently boost each other. Moreover, class-agnostic and class-aware segmentation on each decoder layer are not performed at the test stage. Therefore, no extra computational costs are introduced at the test stage. Experimental results on the VOC2007 and VOC2012 datasets demonstrate that the proposed TripleNet is able to improve both the detection and segmentation accuracies without adding extra computational costs.
|
It aims to predict the semantic label of each pixel in an image and has achieved significant progress based on fully convolutional networks (i.e., FCN @cite_47 ). Generally, methods for semantic segmentation can also be divided into two main classes: encoder-decoder methods and spatial pyramid methods. Encoder-decoder methods contain two subnetworks: an encoder subnetwork and a decoder subnetwork. The encoder subnetwork extracts strong semantic features and reduces the spatial resolution of the feature maps; it is usually based on classical CNN models (e.g., VGG @cite_2 , ResNet @cite_45 , DenseNet @cite_28 ) pre-trained on ImageNet @cite_8 . The decoder subnetwork gradually upsamples the feature maps of the encoder subnetwork. DeconvNet @cite_25 and SegNet @cite_31 use the max-pooling indices of the encoder subnetwork to upsample the feature maps. To extract context information, some methods @cite_13 @cite_33 @cite_3 adopt skip-layer connections to combine the feature maps from the encoder and decoder subnetworks.
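For illustration, the snippet below sketches the SegNet/DeconvNet-style upsampling with pooling indices using PyTorch's built-in max-pool/unpool pair; the tensor sizes are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# Encoder pooling records the argmax indices; the decoder reuses them to place
# values back at their original locations (a non-learned form of upsampling).
pool = nn.MaxPool2d(2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(2, stride=2)

x = torch.randn(1, 64, 32, 32)
pooled, indices = pool(x)            # (1, 64, 16, 16) plus argmax positions
upsampled = unpool(pooled, indices)  # sparse (1, 64, 32, 32); later convs densify it
```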
|
{
"abstract": [
"We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http: mi.eng.cam.ac.uk projects segnet .",
"",
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.",
"Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL .",
"Most existing methods of semantic segmentation still suffer from two aspects of challenges: intra-class inconsistency and inter-class indistinction. To tackle these two problems, we propose a Discriminative Feature Network (DFN), which contains two sub-networks: Smooth Network and Border Network. Specifically, to handle the intra-class inconsistency problem, we specially design a Smooth Network with Channel Attention Block and global average pooling to select the more discriminative features. Furthermore, we propose a Border Network to make the bilateral features of boundary distinguishable with deep semantic boundary supervision. Based on our proposed DFN, we achieve state-of-the-art performance 86.2 mean IOU on PASCAL VOC 2012 and 80.3 mean IOU on Cityscapes dataset.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"One of recent trends [31, 32, 14] in network architecture design is stacking small filters (e.g., 1x1 or 3x3) in the entire network because the stacked small filters is more efficient than a large kernel, given the same computational complexity. However, in the field of semantic segmentation, where we need to perform dense per-pixel prediction, we find that the large kernel (and effective receptive field) plays an important role when we have to perform the classification and localization tasks simultaneously. Following our design principle, we propose a Global Convolutional Network to address both the classification and localization issues for the semantic segmentation. We also suggest a residual-based boundary refinement to further refine the object boundaries. Our approach achieves state-of-art performance on two public benchmarks and significantly outperforms previous results, 82.2 (vs 80.2 ) on PASCAL VOC 2012 dataset and 76.9 (vs 71.8 ) on Cityscapes dataset.",
"We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5 ) among the methods trained with no external data through ensemble with the fully convolutional network."
],
"cite_N": [
"@cite_31",
"@cite_33",
"@cite_8",
"@cite_28",
"@cite_3",
"@cite_45",
"@cite_2",
"@cite_47",
"@cite_13",
"@cite_25"
],
"mid": [
"2963881378",
"",
"2117539524",
"2511730936",
"2799166040",
"2949650786",
"1686810756",
"2952632681",
"2598666589",
"2952637581"
]
}
|
Triply Supervised Decoder Networks for Joint Detection and Segmentation
|
Object detection and semantic segmentation are two fundamental and important tasks in the field of computer vision. In recent years, object detection [36,29,26] and semantic segmentation [30,5,1] with deep convolutional networks [20,39,14,17] have each achieved great progress. However, most state-of-the-art methods focus on a single task and do not join object detection and semantic segmentation together, even though joint object detection and semantic segmentation is necessary and important in many applications, such as self-driving cars and unmanned surface vessels.
Figure 1. Some architectures of joint detection and segmentation. (a) The last layer of the encoder is used for detection and segmentation [2]. (b) The branch for detection is refined by the branch for segmentation [31,47]. (c) Each layer of the decoder detects objects of different scales, and the fused layer is used for segmentation [7]. (d) The proposed PairNet, where each layer of the decoder is simultaneously used for detection and segmentation. (e) The proposed TripleNet, which has three types of supervisions and some lightweight modules.
In fact, object detection and semantic segmentation are highly related. On the one hand, semantic segmentation, usually used as a multi-task supervision, can help improve object detection [31,24]. On the other hand, object detection can be used as prior knowledge to help improve the performance of semantic segmentation [14,34].
Due to application requirements and task relevance, joint object detection and semantic segmentation has gradually attracted the attention of researchers. Fig. 1 summarizes three typical approaches to joint object detection and semantic segmentation. Fig. 1(a) shows the simplest and most naive way, where one branch for object detection and one branch for semantic segmentation are attached in parallel to the last layers of the encoder [2]. In Fig. 1(b), the branch for object detection is further refined by the features from the branch for semantic segmentation [31,47]. Recently, the encoder-decoder network has also been used for joint object detection and semantic segmentation. In Fig. 1(c), each layer of the decoder is used for multi-scale object detection, and the concatenated feature map from different layers of the decoder is used for semantic segmentation [7]. The above methods have achieved great success for detection and segmentation. However, the performance is still far from the strict demands of real applications such as self-driving cars and unmanned surface vessels. One possible reason is that the mutual benefit between the two tasks is not fully exploited.
To exploit the mutual benefit between the joint object detection and semantic segmentation tasks, in this paper, we propose to impose three types of supervisions (i.e., detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision) on each layer of the decoder network. Meanwhile, two light-weight modules (i.e., the inner-connected module and attention skip-layer fusion) are also incorporated. The corresponding network is called TripleNet (see Fig. 1(e)). It is noted that we also propose a variant that only imposes the detection-oriented supervision and class-aware segmentation supervision on each layer of the decoder, which is called PairNet (see Fig. 1(d)). The contributions of this paper can be summarized as follows:
(1) Two novel frameworks (i.e., PairNet and TripleNet) for joint object detection and semantic segmentation are proposed. In TripleNet, detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are imposed on each layer of the decoder. Meanwhile, two light-weight modules (i.e., the inner-connected module and attention skip-layer fusion) are also incorporated into each layer of the decoder.
(2) Many synergies are gained from TripleNet: both detection and segmentation accuracies are significantly improved. The improvement does not come at the expense of extra computational costs, because the class-agnostic segmentation and class-aware segmentation are not performed in each layer of the decoder at the test stage.
(3) Experiments on the VOC 2007 and VOC 2012 datasets are conducted to demonstrate the effectiveness of the proposed TripleNet.
The rest of this paper is organized as follows. Section 2 reviews related work on object detection and semantic segmentation. Section 3 introduces the proposed method in detail. Experiments are presented in Section 4. Finally, conclusions are drawn in Section 5.
The proposed methods
In recent years, fully convolutional networks (FCNs) with an encoder-decoder structure have achieved great success in object detection [26,9] and semantic segmentation [1]. For example, DSSD [9,34] and RetinaNet [26] use different layers of the decoder to detect objects of different scales. By using the encoder-decoder structure, SegNet [1] and LargeKernel [33] generate high-resolution logits for semantic segmentation. Based on the above observations, a very natural and simple idea is that an FCN with an encoder-decoder structure is suitable for joint object detection and semantic segmentation.
In this section, we give a detailed introduction about the proposed paired supervision decoder network (i.e., PairNet) and triply supervised decoder network (i.e., TripleNet) for joint object detection and semantic segmentation.
Paired supervision decoder network (PairNet)
Based on the encoder-decoder structure, a feature pyramid network is naturally proposed to join object detection and semantic segmentation. Namely, the supervision of object detection and semantic segmentation is added to each layer of the decoder, which is called PairNet. On the one hand, PairNet uses different layers of the decoder to detect objects of different scales. On the other hand, instead of using only the last high-resolution layer for semantic segmentation, as adopted by most state-of-the-art methods [1,33], PairNet uses each layer of the decoder to parse pixel semantic labels. Though the proposed PairNet is very simple and naive, to the best of our knowledge it has not been explored for joint object detection and semantic segmentation. Fig. 2(a) gives the detailed architecture of PairNet. The input image first goes through a fully convolutional network with an encoder-decoder structure. The encoder gradually down-samples the feature map; in this paper, ResNet-50 or ResNet-101 [15] (i.e., res1-res4) together with some newly added residual blocks (i.e., res5-res7) constructs the encoder. The decoder gradually maps the low-resolution feature map back to a high-resolution feature map. To enhance context information, skip-layer fusion is used to fuse each feature map from the decoder with the corresponding feature map from the encoder. Fig. 2(b) shows the skip-layer fusion module: the feature maps in the decoder are first upsampled by bilinear interpolation and then concatenated with the corresponding feature maps of the same resolution in the encoder. After that, the concatenated feature maps go through a residual unit to generate the output feature maps.
To join object detection and semantic segmentation, each layer of the decoder is further split into two different branches. The branch for object detection consists of a 3 × 3 convolutional layer and two sibling 1 × 1 convolutional layers for object classification and bounding box regression. The detection branches at different layers are used to detect objects of different scales. Specifically, the branch at a front layer of the decoder, with low resolution, is used to detect large-scale objects, while the branch at a latter layer, with high resolution, is used to detect small-scale objects.
The branch for semantic segmentation consists of a 3 × 3 convolutional layer that generates the logits. There are two different ways to compute the segmentation loss: the first is to upsample the segmentation logits to the same resolution as the ground truth, and the second is to downsample the ground truth to the same resolution as the logits. We found that the first strategy performs slightly better, so it is adopted in the following experiments.
Triply supervised decoder network (TripleNet)
To further improve the performance of joint object detection and semantic segmentation, a triply supervised decoder network (called TripleNet) is further proposed, where detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are added on each layer of the decoder. Fig. 3(a) gives the detailed architecture of TripleNet. Compared to PairNet, TripleNet adds some new modules (i.e., multiscale fused segmentation, the inner-connected module, class-agnostic segmentation supervision, and attention skip-layer fusion). In the following, we introduce these modules in detail.
Multiscale fused segmentation It has been demonstrated that multi-scale features are useful for semantic segmentation [7,49,46]. To use the multi-scale features of different layers in the decoder for better semantic segmentation, the feature maps of different layers in the decoder are upsampled to the same spatial resolution and concatenated together. After that, a 3 × 3 convolutional layer is used to generate the segmentation logits. Compared to segmentation based on a single layer of the decoder, the multilayer fused features can make better use of context information. Thus, multilayer fused segmentation is used for the final prediction at the test stage. Meanwhile, the semantic segmentation based on each layer of the decoder can be seen as a deep supervision for feature learning.
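A minimal PyTorch sketch of this fusion step, under the assumption that the last decoder layer has the highest resolution; the module name and channel bookkeeping are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleFusedSeg(nn.Module):
    """Upsamples all decoder feature maps to a common resolution, concatenates
    them, and predicts segmentation logits with a single 3x3 conv."""
    def __init__(self, channels_per_layer, num_classes):
        super().__init__()
        self.head = nn.Conv2d(sum(channels_per_layer), num_classes, 3, padding=1)

    def forward(self, decoder_feats):
        size = decoder_feats[-1].shape[-2:]  # assumed highest-resolution layer
        ups = [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
               for f in decoder_feats]
        return self.head(torch.cat(ups, dim=1))
```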
The inner-connected module In Section 3.1, PairNet only shares the base network between object detection and semantic segmentation, while the branches for the two tasks do not interact. To further help object detection, an inner-connected module is proposed to refine object detection with the logits of semantic segmentation. Fig. 3(b) shows the inner-connected module in layer i. The feature map in layer i first goes through a 3 × 3 convolutional layer to produce the segmentation logits for the semantic segmentation branch. Meanwhile, the segmentation logits go through two 3 × 3 convolutional layers to generate new feature maps, which are further concatenated with the feature maps in layer i. Based on the concatenated feature maps, a 3 × 3 convolutional layer is used to generate the feature map for the object detection branch.
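The following sketch mirrors the description above (segmentation logits → two 3 × 3 convs → concatenation with the layer's features → a 3 × 3 conv producing detection features); the intermediate channel width and the ReLU activations are assumptions not stated in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InnerConnected(nn.Module):
    """Refines the detection features of decoder layer i with that layer's
    own segmentation logits (layer sizes here are illustrative)."""
    def __init__(self, in_channels, num_classes, mid_channels=64):
        super().__init__()
        self.seg_head = nn.Conv2d(in_channels, num_classes, 3, padding=1)
        self.seg2feat = nn.Sequential(
            nn.Conv2d(num_classes, mid_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(mid_channels, mid_channels, 3, padding=1), nn.ReLU())
        self.det_feat = nn.Conv2d(in_channels + mid_channels, in_channels, 3, padding=1)

    def forward(self, feat):
        seg_logits = self.seg_head(feat)                     # segmentation branch
        aux = self.seg2feat(seg_logits)                      # two 3x3 convs on the logits
        det = self.det_feat(torch.cat([feat, aux], dim=1))   # fused detection features
        return seg_logits, F.relu(det)
```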
Class-agnostic segmentation supervision Semantic segmentation mentioned above is class-aware, which aims to simultaneously identify specific object categories and the background. We argue that class-aware semantic segmentation may ignore the discrimination between objects and the background. Therefore, a class-agnostic segmentation supervision module is further added to each layer of the decoder. Specifically, a 3 × 3 convolutional layer is added to generate the logits of class-agnostic semantic segmentation. To generate the ground truth of class-agnostic semantic segmentation, the objects of all categories are set as one category, and the background is set as another category.
Table 1. Ablation experiments of PairNet and TripleNet on the VOC2012-val-seg set. The backbone model is ResNet-50 [15], and the input image is rescaled to the size of 300 × 300. "MFS" means multiscale fused segmentation, "IC" means inner-connected module, "CAS" means class-agnostic segmentation, and "ASF" means attention skip-layer fusion.
Attention skip-layer fusion In Section 3.1, PairNet simply fuses the feature maps of the decoder and the corresponding feature maps of the encoder. Generally, the features from the encoder layers have relatively low-level semantics, and those from the decoder layers have relatively high-level semantics. To enhance informative features and suppress less useful features from the encoder using the features from the decoder, a Squeeze-and-Excitation (SE) [16] block is used. The input of the SE block is a layer of the decoder, and the output of the SE block is used to scale the corresponding layer of the encoder. After that, the layer of the decoder and the scaled layer of the encoder are concatenated for fusion.
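Two small sketches for the pieces just described: collapsing a class-aware ground-truth map into a class-agnostic one, and an SE-style attention gate applied to encoder features. The reduction ratio of 16 follows the SE paper's default; everything else, including the assumption that both inputs already share a spatial resolution, is ours.

```python
import torch
import torch.nn as nn

def class_agnostic_gt(seg_gt, void_label=255):
    """Collapse all object categories to 1 (foreground); background stays 0."""
    fg = (seg_gt > 0) & (seg_gt != void_label)
    out = seg_gt.clone()
    out[fg] = 1
    return out

class AttentionSkipFusion(nn.Module):
    """SE-style gating: decoder features produce channel weights that rescale
    the encoder features before concatenation (same spatial size assumed)."""
    def __init__(self, dec_channels, enc_channels, reduction=16):
        super().__init__()
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dec_channels, enc_channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(enc_channels // reduction, enc_channels, 1), nn.Sigmoid())

    def forward(self, dec_feat, enc_feat):
        scale = self.se(dec_feat)                 # (N, enc_channels, 1, 1)
        return torch.cat([dec_feat, enc_feat * scale], dim=1)
```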
Experiments
Datasets
To demonstrate the effectiveness of the proposed methods and compare them with some state-of-the-art methods, experiments on the well-known VOC 2007 and VOC 2012 datasets [8] are conducted in this section.
The PASCAL VOC challenge [8] has been held annually since 2006 and consists of three principal challenges (i.e., image classification, object detection, and semantic segmentation). Among these annual challenges, the VOC 2007 and VOC 2012 datasets, which have 20 object categories, are usually used to evaluate the performance of object detection and semantic segmentation. The VOC 2007 dataset contains 5011 trainval images and 4952 test images. The VOC 2012 dataset is split into three subsets (i.e., train, val, and test). The train set contains 5717 images for detection and 1464 images for semantic segmentation (called VOC12-train-seg). The val set contains 5823 images for detection and 1449 images for segmentation (called VOC12-val-seg). The test set contains 10991 images for detection and 1456 images for segmentation. To enlarge the training set for semantic segmentation, the additional segmentation data provided by [12] is used, which contains 10582 training images (called VOC12-trainaug-seg).
For object detection, mean average precision (mAP) is used for performance evaluation. On the PASCAL VOC datasets, mAP is calculated under an IoU threshold of 0.5. For semantic segmentation, mean intersection over union (mIoU) is used for performance evaluation.
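For reference, a small NumPy sketch of the mIoU computation from a confusion matrix (void pixels labeled 255 are excluded, per the VOC convention); mAP is omitted here since it requires the full ranked-detection matching machinery.

```python
import numpy as np

def mean_iou(pred, gt, num_classes, void_label=255):
    """Mean intersection-over-union from a confusion matrix over label maps."""
    valid = gt != void_label
    conf = np.bincount(num_classes * gt[valid].astype(np.int64) + pred[valid],
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    # Classes absent from both prediction and ground truth are skipped via NaN.
    iou = np.where(union > 0, inter / np.maximum(union, 1), np.nan)
    return float(np.nanmean(iou))
```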
Ablation experiments on the VOC 2012 dataset
In this subsection, experiments are conducted on PASCAL VOC 2012 to validate the effectiveness of the proposed method. The VOC12-trainaug-seg set is used for training and the VOC12-val-seg set is used for performance evaluation, since both have ground truth for object detection as well as semantic segmentation. The input images are rescaled to the size of 300 × 300, and the mini-batch size is 32. The total number of training iterations is 40k, where the learning rate is 0.0001 for the first 25k iterations, 0.00001 for the following 10k iterations, and 0.000001 for the last 5k iterations.
The top part of Table 1 shows the ablation experiments for PairNet. When all the different layers of the decoder are used only for multi-scale object detection (i.e., Table 1(a)), the mAP of object detection is 78.0%. When all the different layers of the decoder are used for semantic segmentation (i.e., Table 1(c)), the mIoU of semantic segmentation is 72.5%. When all the different layers of the decoder are used for object detection and semantic segmentation together (i.e., Table 1(d)), the mAP and mIoU of PairNet are 78.9% and 73.1%, respectively. Namely, PairNet can improve both object detection and semantic segmentation, which indicates that joint object detection and semantic segmentation on each layer of the decoder is useful. Meanwhile, the method using all the different layers of the decoder for segmentation (i.e., Table 1(c)) has better performance than the method using only the last layer of the decoder for segmentation, which indicates that imposing segmentation supervision on each layer of the decoder can give a much deeper supervision.
Figure 4. Qualitative results of the methods in Table 1 (i.e., "only det", "only seg", PairNet, and TripleNet): (a) shows cases where both detection and segmentation are improved by PairNet and TripleNet; (b) shows cases where detection is mainly improved; (c) shows cases where segmentation is mainly improved.
Table 2. Comparison of BlitzNet, the proposed PairNet, and the proposed TripleNet. All the methods are re-implemented with the same parameter settings.
The bottom part of Table 1 shows the ablation experiments for TripleNet. Based on PairNet, TripleNet adds four modules (i.e., MFS, IC, CAS, and ASF). When adding the MFS module, TripleNet outperforms PairNet by 0.1% on object detection and 0.4% on semantic segmentation, respectively. When adding the MFS and IC modules, TripleNet outperforms PairNet by 0.6% on object detection and 0.5% on semantic segmentation. When adding all four modules, TripleNet has the best detection and segmentation performance. Fig. 4 shows qualitative results of the methods in Table 1. The first two columns are the ground truth of detection and segmentation; the results of only detection (Table 1(a)) and only segmentation (Table 1(c)) are shown in the third and fourth columns; the results of PairNet (Table 1(d)) and TripleNet (Table 1(g)) are shown in the fifth to eighth columns. In Fig. 4(a), examples where both detection and segmentation are improved by joint detection and segmentation are given. For example, in the first row, "only det" and "only seg" both miss three potted plants, while PairNet misses only one potted plant and TripleNet does not miss any. In Fig. 4(b), examples of improved detection results are shown. For example, in the first row, "only det" can detect only one ship, PairNet can detect three ships, and TripleNet can detect four ships. In Fig. 4(c), examples of improved segmentation results are shown. For example, in the second row, "only seg" recognizes a blue bag as a motorbike, but PairNet and TripleNet recognize the blue bag as background.
Meanwhile, the proposed PairNet and TripleNet are also compared to the related BlitzNet [7]. For a fair comparison, BlitzNet is re-implemented with similar parameter settings to the proposed PairNet and TripleNet. PairNet, which simply joins detection and segmentation in each layer of the decoder, is already comparable with BlitzNet. TripleNet outperforms BlitzNet on both object detection and semantic segmentation, which demonstrates that the proposed method can make full use of the mutual information to improve the two tasks.
Comparison with state-of-the-art methods on the VOC2012 test dataset
In this section, the proposed TripleNet is compared with some state-of-the-art methods on the VOC 2012 test set. Among these methods, SSD [29], RON [18], DSSD [9], DES [47], RefineDet [48], and DFPR [19] are used only for object detection, while ParseNet [27], DeepLab V2 [5], DPN [28], RefineNet [25], PSPNet [49], and DFPN [45] are used only for semantic segmentation. Table 3 shows the object detection results (mAP) and semantic segmentation results (mIoU) of these methods on the VOC 2012 test set. It can be seen that most state-of-the-art methods can only output detection results (i.e., SSD, RON, DSSD, DES, RefineDet, and DFPR) or segmentation results (i.e., FCN, ParseNet, DeepLab, DPN, PSPNet, and DFPN). Only BlitzNet and our proposed TripleNet can simultaneously output the results of object detection and semantic segmentation. The mAP and mIoU of BlitzNet are 79.0% and 75.6%, while those of TripleNet are 81.0% and 82.9%. Thus, TripleNet outperforms BlitzNet by 2.0% on object detection and 7.3% on semantic segmentation. It can also be seen that TripleNet achieves nearly state-of-the-art performance on both object detection and semantic segmentation.
Comparison with some state-of-the-art methods on the VOC 2007 test dataset
In this section, the proposed TripleNet and some state-of-the-art methods (i.e., SSD [29], DES [47], DSSD [9], STDN [50], BlitzNet [7], RefineDet [48], and DFPR [19]) are further compared on the VOC 2007 test set. Because only the ground truth of object detection is provided, these methods are evaluated only on object detection. Table 4 shows the mAP of these methods. The mAP of TripleNet is 82.7%, which is higher than that of all the compared state-of-the-art methods.
Table 4 (excerpt): RefineDet512 [48] with a VGG16 backbone achieves 81.8 mAP; the per-class AP columns were not recovered from extraction.
Conclusion
In this paper, we proposed two fully convolutional networks (i.e., PairNet and TripleNet) for joint object detection and semantic segmentation. PairNet simultaneously predicts objects of different scales with different decoder layers and parses pixel semantic labels with all the decoder layers. TripleNet adds four modules (i.e., multiscale fused segmentation, the inner-connected module, class-agnostic segmentation supervision, and attention skip-layer fusion) to PairNet. Experiments demonstrate that TripleNet can achieve state-of-the-art performance on both object detection and semantic segmentation.
| 3,116 |
1809.09299
|
2951122667
|
Joint object detection and semantic segmentation can be applied to many fields, such as self-driving cars and unmanned surface vessels. An initial and important progress towards this goal has been achieved by simply sharing the deep convolutional features for the two tasks. However, this simple scheme is unable to make full use of the fact that detection and segmentation are mutually beneficial. To overcome this drawback, we propose a framework called TripleNet where triple supervisions including detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are imposed on each layer of the decoder network. Class-agnostic segmentation supervision provides an objectness prior knowledge for both semantic segmentation and object detection. Besides the three types of supervisions, two light-weight modules (i.e., inner-connected module and attention skip-layer fusion) are also incorporated into each layer of the decoder. In the proposed framework, detection and segmentation can sufficiently boost each other. Moreover, class-agnostic and class-aware segmentation on each decoder layer are not performed at the test stage. Therefore, no extra computational costs are introduced at the test stage. Experimental results on the VOC2007 and VOC2012 datasets demonstrate that the proposed TripleNet is able to improve both the detection and segmentation accuracies without adding extra computational costs.
|
Spatial pyramid methods adopt the idea of spatial pyramid pooling @cite_39 to extract multi-scale information from the last output feature maps. Chen et al. @cite_46 @cite_1 @cite_42 @cite_40 proposed to use multiple convolutional layers with different atrous rates in parallel (called ASPP) to extract multi-scale features. Instead of using convolutional layers with different atrous rates, Zhao et al. @cite_30 proposed a pyramid pooling module (called PSPNet), which downsamples and upsamples the feature maps in parallel. Yang et al. @cite_40 proposed to use dense connections so that the generated multi-scale features cover a large scale range densely.
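To make the ASPP idea concrete, here is a minimal PyTorch sketch using the common (6, 12, 18) dilation rates; the extra 1 × 1 branch, the projection layer, and all names are illustrative assumptions rather than any single paper's exact configuration.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Parallel 3x3 convs with different atrous (dilation) rates plus a 1x1
    branch; branch outputs are concatenated and projected back down."""
    def __init__(self, in_channels, out_channels, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_channels, out_channels, 1)] +   # 1x1 branch
            [nn.Conv2d(in_channels, out_channels, 3, padding=r, dilation=r)
             for r in rates])
        self.project = nn.Conv2d(out_channels * (len(rates) + 1), out_channels, 1)

    def forward(self, x):
        feats = [b(x) for b in self.branches]  # padding=dilation keeps sizes equal
        return self.project(torch.cat(feats, dim=1))
```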
|
{
"abstract": [
"Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.",
"Recent advances in deep learning, especially deep convolutional neural networks (CNNs), have led to significant improvement over previous semantic segmentation systems. Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are of both theoretical and practical value. First, we design dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the \"gridding issue\"caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a state-of-art result of 80.1 mIOU in the test set at the time of submission. We also have achieved state-of-theart overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Our source code can be found at https: github.com TuSimple TuSimple-DUC.",
"In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.",
"Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.",
"Semantic image segmentation is a basic street scene understanding task in autonomous driving, where each pixel in a high resolution image is categorized into a set of semantic labels. Unlike other scenarios, objects in autonomous driving scene exhibit very large scale changes, which poses great challenges for high-level feature representation in a sense that multi-scale information must be correctly encoded. To remedy this problem, atrous convolution[14]was introduced to generate features with larger receptive fields without sacrificing spatial resolution. Built upon atrous convolution, Atrous Spatial Pyramid Pooling (ASPP)[2] was proposed to concatenate multiple atrous-convolved features using different dilation rates into a final feature representation. Although ASPP is able to generate multi-scale features, we argue the feature resolution in the scale-axis is not dense enough for the autonomous driving scenario. To this end, we propose Densely connected Atrous Spatial Pyramid Pooling (DenseASPP), which connects a set of atrous convolutional layers in a dense way, such that it generates multi-scale features that not only cover a larger scale range, but also cover that scale range densely, without significantly increasing the model size. We evaluate DenseASPP on the street scene benchmark Cityscapes[4] and achieve state-of-the-art performance.",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online."
],
"cite_N": [
"@cite_30",
"@cite_42",
"@cite_1",
"@cite_39",
"@cite_40",
"@cite_46"
],
"mid": [
"2952596663",
"2592939477",
"2630837129",
"2179352600",
"2799213142",
"2412782625"
]
}
|
Triply Supervised Decoder Networks for Joint Detection and Segmentation
|
Object detection and semantic segmentation are two fundamental and important tasks in the field of computer vision. In recent years, object detection [36,29,26] and semantic segmentation [30,5,1] with deep convolutional networks [20,39,14,17] have achieved great progress, respectively. Most state-of-the-art methods focus only on a single task and do not join object detection and semantic segmentation together. However, joint object detection and semantic segmentation is very necessary and important in many applications, such as self-driving cars and unmanned surface vessels. In fact, object detection and semantic segmentation are highly related. On the one hand, semantic segmentation, usually used as a multi-task supervision, can help improve object detection [31,24]. On the other hand, object detection can be used as prior knowledge to help improve the performance of semantic segmentation [14,34].
[Figure 1. Some architectures of joint detection and segmentation. (a) The last layer of the encoder is used for detection and segmentation [2]. (b) The branch for detection is refined by the branch for segmentation [31,47]. (c) Each layer of the decoder detects objects of different scales, and the fused layer is for segmentation [7]. (d) The proposed PairNet: each layer of the decoder is simultaneously used for detection and segmentation. (e) The proposed TripleNet, which has three types of supervisions and some light-weight modules.]
Due to application requirements and task relevance, joint object detection and semantic segmentation has gradually attracted the attention of researchers. Fig. 1 summarizes three typical methods of joint object detection and semantic segmentation. Fig. 1(a) shows the simplest and most naive way, where one branch for object detection and one branch for semantic segmentation are attached in parallel to the last layers of the encoder [2]. In Fig. 1(b), the branch for object detection is further refined by the features from the branch for semantic segmentation [31,47]. Recently, the encoder-decoder network has further been used for joint object detection and semantic segmentation. In Fig. 1(c), each layer of the decoder is used for multi-scale object detection, and the concatenated feature map from different layers of the decoder is used for semantic segmentation [7]. The above methods have achieved great success for detection and segmentation. However, the performance is still far from the strict demands of real applications such as self-driving cars and unmanned surface vessels. One possible reason is that the mutual benefit between the two tasks is not fully exploited.
To exploit mutual benefit for joint object detection and semantic segmentation tasks, in this paper, we propose to impose three types of supervisions (i.e., detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision) on each layer of the decoder network. Meanwhile, the light-weight modules (i.e., the inner-connected module and attention skip-layer fusion) are also incorporated. The corresponding network is called TripleNet (see Fig. 1(e)). It is noted that we also propose to only impose the detection-oriented supervision and class-aware segmentation supervision on each layer of the decoder, which is called PairNet (see Fig. 1(d)). The contributions of this paper can be summarized as follows:
(1) Two novel frameworks (i.e., PairNet and TripleNet) for joint object detection and semantic segmentation are proposed. In TripleNet, the detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are imposed on each layer of the decoder. Meanwhile, two light-weight modules (i.e., the inner-connected module and attention skip-layer fusion) are also incorporated into each layer of the decoder.
(2) A lot of synergies are gained from TripleNet. Both detection and segmentation accuracies are significantly improved. The improvement does not come at the expense of extra computational costs, because the class-agnostic segmentation and class-aware segmentation are not performed in each layer of the decoder at the test stage.
(3) Experiments on the VOC 2007 and VOC 2012 datasets are conducted to demonstrate the effectiveness of the proposed TripleNet.
The rest of this paper is organized as follows. Section 2 reviews some related works of object detection and semantic segmentation. Section 3 introduces our proposed method in detail. Experiments are shown in Section 4. Finally, it is concluded in Section 5.
The proposed methods
In recent years, fully convolutional networks (FCNs) with an encoder-decoder structure have achieved great success on object detection [26,9] and semantic segmentation [1], respectively. For example, DSSD [9,34] and RetinaNet [26] use different layers of the decoder to detect objects of different scales. By using the encoder-decoder structure, SegNet [1] and LargeKernel [33] generate high-resolution logits for semantic segmentation. Based on the above observations, a very natural and simple idea is that an FCN with an encoder-decoder structure is suitable for joint object detection and semantic segmentation.
In this section, we give a detailed introduction about the proposed paired supervision decoder network (i.e., PairNet) and triply supervised decoder network (i.e., TripleNet) for joint object detection and semantic segmentation.
Paired supervision decoder network (PairNet)
Based on the encoder-decoder structure, a feature pyramid network is naturally proposed to join object detection and semantic segmentation. Namely, the supervision of object detection and semantic segmentation is added to each layer of the decoder, which is called PairNet. On the one hand, PairNet uses different layers of the decoder to detect objects of different scales. On the other hand, instead of using only the last high-resolution layer for semantic segmentation, as adopted by most state-of-the-art methods [1,33], PairNet uses each layer of the decoder to parse pixel semantic labels. Though the proposed PairNet is very simple, it has not been explored for joint object detection and semantic segmentation to the best of our knowledge. Fig. 2(a) gives the detailed architecture of PairNet. The input image first goes through a fully convolutional network with an encoder-decoder structure. The encoder gradually down-samples the feature map. In this paper, the encoder is constructed from ResNet-50 or ResNet-101 [15] (i.e., res1-res4) and some newly added residual blocks (i.e., res5-res7). The decoder gradually maps the low-resolution feature map to a high-resolution feature map. To enhance context information, skip-layer fusion is used to fuse the feature map from the decoder and the corresponding feature map from the encoder. Fig. 2(b) shows the skip-layer fusion: the feature maps in the decoder are first upsampled by bilinear interpolation and then concatenated with the corresponding feature maps of the same resolution in the encoder. After that, the concatenated feature maps go through a residual unit to generate the output feature maps.
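As a rough sketch of the skip-layer fusion in Fig. 2(b) (the channel widths and the simplified residual unit here are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipFusion(nn.Module):
    """Bilinear upsample + concatenation + residual unit, as in Fig. 2(b)."""
    def __init__(self, dec_ch, enc_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(dec_ch + enc_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.shortcut = nn.Conv2d(dec_ch + enc_ch, out_ch, 1)

    def forward(self, dec_feat, enc_feat):
        # Upsample the decoder map to the encoder map's resolution.
        dec_up = F.interpolate(dec_feat, size=enc_feat.shape[-2:],
                               mode="bilinear", align_corners=False)
        x = torch.cat([dec_up, enc_feat], dim=1)
        # Simplified residual unit over the concatenated maps.
        return F.relu(self.shortcut(x) + self.conv2(F.relu(self.conv1(x))))

fused = SkipFusion(256, 128, 256)(torch.randn(1, 256, 16, 16),
                                  torch.randn(1, 128, 32, 32))
print(fused.shape)  # torch.Size([1, 256, 32, 32])
```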
To join object detection and semantic segmentation, each layer of the decoder is further split into two different branches. The branch of object detection consists of a 3 × 3 convolutional layer and two sibling 1 × 1 convolutional layers for object classification and bounding box regression. The detection branches at different layers are used to detect objects of different scales. Specifically, the branch at a front layer of the decoder, with low resolution, is used to detect large-scale objects, while the branch at a latter layer, with high resolution, is used to detect small-scale objects.
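The detection branch just described maps to a small head like the following sketch; the anchor count, class count, and hidden width are assumed values:

```python
import torch
import torch.nn as nn

class DetectionBranch(nn.Module):
    """3x3 conv followed by two sibling 1x1 convs producing per-anchor
    classification scores and bounding-box regression offsets."""
    def __init__(self, in_ch=256, num_anchors=6, num_classes=21):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, 256, 3, padding=1)
        self.cls = nn.Conv2d(256, num_anchors * num_classes, 1)
        self.reg = nn.Conv2d(256, num_anchors * 4, 1)  # 4 box offsets

    def forward(self, x):
        h = torch.relu(self.conv(x))
        return self.cls(h), self.reg(h)

scores, boxes = DetectionBranch()(torch.randn(1, 256, 32, 32))
print(scores.shape, boxes.shape)  # (1, 126, 32, 32) (1, 24, 32, 32)
```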
The branch of semantic segmentation consists of a 3 × 3 convolutional layer to generate the logits. There are two different ways to compute the segmentation loss: either the segmentation logits are upsampled to the resolution of the ground-truth, or the ground-truth is downsampled to the resolution of the logits. We found that the first strategy has slightly better performance, so it is adopted in the following experiments.
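A minimal sketch of the adopted loss strategy (upsampling the logits to the ground-truth resolution); the ignore index of 255 for unlabeled VOC pixels is an assumption based on common practice:

```python
import torch
import torch.nn.functional as F

def seg_loss_upsample_logits(logits, gt):
    """Upsample low-resolution logits to the ground-truth resolution,
    then apply per-pixel cross-entropy (strategy 1 in the text)."""
    logits_up = F.interpolate(logits, size=gt.shape[-2:],
                              mode="bilinear", align_corners=False)
    # ignore_index=255 skips unlabeled pixels, a common VOC convention.
    return F.cross_entropy(logits_up, gt, ignore_index=255)

logits = torch.randn(2, 21, 38, 38)          # low-resolution logits
gt = torch.randint(0, 21, (2, 300, 300))     # full-resolution labels
print(seg_loss_upsample_logits(logits, gt))
```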
Triply supervised decoder network (TripleNet)
To further improve the performance of joint object detection and semantic segmentation, a triply supervised decoder network (called TripleNet) is further proposed, where detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are added to each layer of the decoder. Fig. 3(a) gives the detailed architecture of TripleNet. Compared to PairNet, TripleNet adds some new modules (i.e., multiscale fused segmentation, the inner-connected module, class-agnostic segmentation supervision, and attention skip-layer fusion). In the following sections, we introduce these modules in detail.
Multiscale fused segmentation It has been demonstrated that multi-scale features are useful for semantic segmentation [7,49,46]. To use multi-scale features of different layers in the decoder for better semantic segmentation, the feature maps of different layers in the decoder are upsampled to the same spatial resolution and concatenated together. After that, a 3 × 3 convolutional layer is used to generate the segmentation logits. Compared to the segmentations based on one layer of the decoder, multilayer fused features can make better use of context information. Thus, multilayer fused segmentation is used for final prediction at the test stage. Meanwhile, the semantic segmentation based on each layer of the decoder can be seen as a deep supervision for feature learning.
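A minimal sketch of the multiscale fused segmentation head, assuming the last decoder layer has the highest resolution; channel counts are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleFusedSeg(nn.Module):
    """Upsample every decoder feature map to a common resolution,
    concatenate them, and predict segmentation logits with a 3x3 conv."""
    def __init__(self, layer_channels, num_classes=21):
        super().__init__()
        self.head = nn.Conv2d(sum(layer_channels), num_classes, 3, padding=1)

    def forward(self, feats):
        target = feats[-1].shape[-2:]   # highest decoder resolution
        ups = [F.interpolate(f, size=target, mode="bilinear",
                             align_corners=False) for f in feats]
        return self.head(torch.cat(ups, dim=1))

feats = [torch.randn(1, c, s, s) for c, s in [(256, 10), (256, 19), (128, 38)]]
print(MultiscaleFusedSeg([256, 256, 128])(feats).shape)  # (1, 21, 38, 38)
```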
The inner-connected module In Section 3.1, PairNet only shares the base network for object detection and semantic segmentation, while the branches of object detection and semantic segmentation do not interact. To further help object detection, an inner-connected module is proposed to refine object detection with the logits of semantic segmentation. Fig. 3(b) shows the inner-connected module in layer i. The feature map in layer i first goes through a 3 × 3 convolutional layer to produce the segmentation logits for the branch of semantic segmentation. Meanwhile, the segmentation logits go through two 3 × 3 convolutional layers to generate new feature maps, which are further concatenated with the feature maps in layer i. Based on the concatenated feature maps, a 3 × 3 convolutional layer is used to generate the feature map for the branch of object detection.
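The inner-connected module can be sketched as below; the intermediate channel width (64) is an assumption, since the paper does not specify it here:

```python
import torch
import torch.nn as nn

class InnerConnected(nn.Module):
    """Refine detection features with segmentation logits: the logits are
    transformed by two 3x3 convs, concatenated with the layer's features,
    and fused by a final 3x3 conv (a sketch of Fig. 3(b))."""
    def __init__(self, feat_ch=256, num_classes=21):
        super().__init__()
        self.seg_head = nn.Conv2d(feat_ch, num_classes, 3, padding=1)
        self.logit_convs = nn.Sequential(
            nn.Conv2d(num_classes, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(feat_ch + 64, feat_ch, 3, padding=1)

    def forward(self, feat):
        seg_logits = self.seg_head(feat)          # segmentation branch
        seg_feat = self.logit_convs(seg_logits)   # logits -> features
        det_feat = self.fuse(torch.cat([feat, seg_feat], dim=1))
        return det_feat, seg_logits

det_feat, seg_logits = InnerConnected()(torch.randn(1, 256, 38, 38))
print(det_feat.shape, seg_logits.shape)
```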
Class-agnostic segmentation supervision Semantic segmentation mentioned above is class-aware, which aims to simultaneously identify specific object categories and the background. We argue that class-aware semantic segmentation may ignore the discrimination between objects and the background. Therefore, a class-agnostic segmentation supervision module is further added to each layer of the decoder. Specifically, a 3 × 3 convolutional layer is added to generate the logits of class-agnostic semantic segmentation. To generate the ground-truth of class-agnostic semantic segmentation, the objects of all categories are merged into one category, and the background is set as another category.
[Table 1. Ablation experiments of PairNet and TripleNet on the VOC2012-val-seg set. The backbone model is ResNet50 [15], and the input image is rescaled to the size of 300 × 300. "MFS" means multiscale fused segmentation, "IC" means inner-connected module, "CAS" means class-agnostic segmentation, and "ASF" means attention skip-layer fusion.]
Attention skip-layer fusion In Section 3.1, PairNet simply fuses the feature maps of the decoder and the corresponding feature maps of the encoder. Generally, the features from a layer of the encoder have relatively low-level semantics, while those from a layer of the decoder have relatively high-level semantics. To enhance informative features and suppress less useful features from the encoder using the features from the decoder, a Squeeze-and-Excitation (SE) [16] block is used. The input of the SE block is the layer of the decoder, and the output of the SE block is used to scale the layer of the encoder. After that, the layer of the decoder and the scaled layer of the encoder are concatenated for fusion.
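The ground-truth construction for class-agnostic supervision amounts to collapsing all object categories into a single foreground label. A minimal sketch, assuming VOC-style masks where 0 is background and 255 marks ignored pixels:

```python
import torch

def to_class_agnostic(gt, ignore_index=255):
    """Collapse a class-aware segmentation mask into a binary
    object-vs-background mask: all object categories become 1,
    background stays 0, and ignored pixels are preserved."""
    agnostic = (gt > 0).long()
    agnostic[gt == ignore_index] = ignore_index
    return agnostic

gt = torch.tensor([[0, 3, 15], [255, 0, 7]])
print(to_class_agnostic(gt))  # tensor([[  0,   1,   1], [255,   0,   1]])
```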
Experiments
Datasets
To demonstrate the effectiveness of the proposed methods and to compare with some state-of-the-art methods, experiments on the well-known VOC 2007 and VOC 2012 datasets [8] are conducted in this section.
The PASCAL VOC challenge [8] has been held annually since 2006 and consists of three principal challenges (i.e., image classification, object detection, and semantic segmentation). Among these annual challenges, the VOC 2007 and VOC 2012 datasets are usually used to evaluate the performance of object detection and semantic segmentation; both have 20 object categories. The VOC 2007 dataset contains 5011 trainval images and 4952 test images. The VOC 2012 dataset is split into three subsets (i.e., train, val, and test). The train set contains 5717 images for detection and 1464 images for semantic segmentation (called VOC12-train-seg). The val set contains 5823 images for detection and 1449 images for segmentation (called VOC12-val-seg). The test set contains 10991 images for detection and 1456 for segmentation. To enlarge the training set for semantic segmentation, the additional segmentation data provided by [12] is used, which contains 10582 training images (called VOC12-trainaug-seg).
For object detection, mean average precision (mAP) is used for performance evaluation. On the PASCAL VOC datasets, mAP is calculated under an IoU threshold of 0.5. For semantic segmentation, mean intersection over union (mIoU) is used for performance evaluation.
Ablation experiments on the VOC 2012 dataset
In this subsection, experiments are conducted on PASCAL VOC 2012 to validate the effectiveness of the proposed method. The VOC12-trainaug-seg set is used for training and the VOC12-val-seg set is used for performance evaluation, since both have ground truth for object detection and semantic segmentation. The input images are rescaled to the size of 300 × 300, and the mini-batch size is 32. The total number of iterations in the training stage is 40k, where the learning rate of the first 25k iterations is 0.0001, that of the following 10k iterations is 0.00001, and that of the last 5k iterations is 0.000001.
The top part of Table 1 shows the ablation experiments of PairNet. When all the different layers of the decoder are only used for multi-scale object detection (i.e., Table 1(a)), the mAP of object detection is 78.0%. When all the different layers of the decoder are used for semantic segmentation (i.e., Table 1(c)), the mIoU of semantic segmentation is 72.5%. When all the different layers of the decoder are used for object detection and semantic segmentation together (i.e., Table 1(d)), the mAP and mIoU of PairNet are 78.9% and 73.1%, respectively. Namely, PairNet can improve both object detection and semantic segmentation, which indicates that joint object detection and semantic segmentation on each layer of the decoder is useful. Meanwhile, the method using all the different layers of the decoder for segmentation (i.e., Table 1(c)) has better performance than the method only using the last layer of the decoder for segmentation (i.e., Table 1(b)), which suggests that using each layer of the decoder for semantic segmentation can give a much deeper supervision.
[Figure 4. Qualitative results of the four methods in Table 1 (i.e., "only det", "only seg", PairNet, and TripleNet), alongside the ground-truth of detection ("GT of det") and segmentation ("GT of seg"). (a) demonstrates that detection and segmentation can both be improved by PairNet and TripleNet. (b) demonstrates that detection is mainly improved by PairNet or TripleNet. (c) demonstrates that segmentation is mainly improved by PairNet or TripleNet.]
[Table 2. Comparison of BlitzNet, the proposed PairNet, and the proposed TripleNet. All the methods are re-implemented in the same parameter settings.]
The bottom part of Table 1 shows the ablation experiments of TripleNet. Based on PairNet, TripleNet adds four modules (i.e., MFS, IC, CAS, and ASF). When adding the MFS module, TripleNet outperforms PairNet by 0.1% on object detection and 0.4% on semantic segmentation, respectively. When adding the MFS and IC modules, TripleNet outperforms PairNet by 0.6% on object detection and 0.5% on semantic segmentation. When adding all four modules, TripleNet achieves the best detection and segmentation performance. Fig. 4 shows qualitative results of the four methods in Table 1. The first two columns are the ground-truth of detection and segmentation. The results of only detection in Table 1(a) and only segmentation in Table 1(c) are shown in the third and fourth columns, and the results of PairNet in Table 1(d) and TripleNet in Table 1(g) are shown in the fifth to eighth columns. Fig. 4(a) gives examples where both detection and segmentation are improved by joint detection and segmentation. For example, in the first row, "only det" and "only seg" both miss three potted plants, while PairNet misses only one potted plant and TripleNet does not miss any. Fig. 4(b) shows examples of improved detection results. For example, in the first row, "only det" can detect only one ship, PairNet can detect three ships, and TripleNet can detect four ships. Fig. 4(c) shows examples of improved segmentation results. For example, in the second row, "only seg" recognizes the blue bag as a motorbike, but PairNet and TripleNet recognize the blue bag as background.
Meanwhile, the proposed PairNet and TripleNet are also compared to the related BlitzNet [7]. For a fair comparison, BlitzNet is re-implemented with similar parameter settings as the proposed PairNet and TripleNet. PairNet, which simply joins detection and segmentation in each layer of the decoder, is already comparable with BlitzNet. TripleNet outperforms BlitzNet on both object detection and semantic segmentation, which demonstrates that the proposed method can make full use of the mutual information to improve the two tasks.
Comparison with state-of-the-art methods on the VOC2012 test dataset
In this section, the proposed TripleNet is compared with some state-of-the-art methods on the VOC 2012 test set. Among these methods, SSD [29], RON [18], DSSD [9], DES [47], RefineDet [48], and DFPR [19] are only used for object detection, while ParseNet [27], DeepLab V2 [5], DPN [28], RefineNet [25], PSPNet [49], and DFPN [45] are only used for semantic segmentation. Table 2 shows the object detection results (mAP) and semantic segmentation results (mIoU) of these methods on the VOC2012 test set. It can be seen that most state-of-the-art methods can only output detection results (i.e., SSD, RON, DSSD, DES, RefineDet, and DFPR) or segmentation results (i.e., FCN, ParseNet, DeepLab, DPN, PSPNet, and DFPN). Only BlitzNet and our proposed TripleNet can simultaneously output the results of object detection and semantic segmentation. The mAP and mIoU of BlitzNet are 79.0% and 75.6%, while the mAP and mIoU of TripleNet are 81.0% and 82.9%. Thus, TripleNet outperforms BlitzNet by 2.0% on object detection and 7.3% on semantic segmentation. It can also be seen that TripleNet almost achieves state-of-the-art performance on both object detection and semantic segmentation.
Comparison with some state-of-the-art methods on the VOC 2007 test dataset
In this section, the proposed TripleNet and some state-of-the-art methods (i.e., SSD [29], DES [47], DSSD [9], STDN [50], BlitzNet [7], RefineDet [48], and DFPR [19]) are further compared on the VOC 2007 test set. Because only the ground-truth of object detection is provided, these methods are only evaluated on object detection. Table 4 shows the mAP of these methods. The mAP of TripleNet is 82.7%, which is higher than that of all the state-of-the-art methods.
[Table 4 row (extraction residue): RefineDet512 [48], backbone VGG16, mAP 81.8; remaining columns unrecoverable.]
Conclusion
In this paper, we proposed two fully convolutional networks (i.e., PairNet and TripleNet) for joint object detection and semantic segmentation. PairNet simultaneously predicts objects of different scales with different layers and parses pixel semantic labels with all the different layers. TripleNet adds four modules (i.e., multiscale fused segmentation, the inner-connected module, class-agnostic segmentation supervision, and attention skip-layer fusion) to PairNet. Experiments demonstrate that TripleNet can achieve state-of-the-art performance on both object detection and semantic segmentation.
| 3,116 |
1809.09299
|
2951122667
|
Joint object detection and semantic segmentation can be applied to many fields, such as self-driving cars and unmanned surface vessels. An initial and important progress towards this goal has been achieved by simply sharing the deep convolutional features for the two tasks. However, this simple scheme is unable to make full use of the fact that detection and segmentation are mutually beneficial. To overcome this drawback, we propose a framework called TripleNet where triple supervisions including detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are imposed on each layer of the decoder network. Class-agnostic segmentation supervision provides an objectness prior knowledge for both semantic segmentation and object detection. Besides the three types of supervisions, two light-weight modules (i.e., inner-connected module and attention skip-layer fusion) are also incorporated into each layer of the decoder. In the proposed framework, detection and segmentation can sufficiently boost each other. Moreover, class-agnostic and class-aware segmentation on each decoder layer are not performed at the test stage. Therefore, no extra computational costs are introduced at the test stage. Experimental results on the VOC2007 and VOC2012 datasets demonstrate that the proposed TripleNet is able to improve both the detection and segmentation accuracies without adding extra computational costs.
|
It aims to simultaneously detect objects and predict pixel semantic labels with a single network. Recently, researchers have made some attempts in this direction. Yao @cite_36 proposed to use a graphical model for holistic scene understanding. Teichmann @cite_11 proposed to join object detection and semantic segmentation by sharing the encoder subnetwork. Kokkinos @cite_4 also proposed to integrate multiple computer vision tasks together. Mao @cite_18 found that joint semantic segmentation and pedestrian detection can help improve the performance of pedestrian detection. A similar conclusion is also demonstrated by SDS-RCNN @cite_0 . Meanwhile, joint instance segmentation and object detection has also been proposed. Recently, Dvornik @cite_41 proposed a real-time framework (called BlitzNet) for joint object detection and semantic segmentation. It is based on the encoder-decoder network, where each layer of the decoder is used to detect objects of different scales and a multi-scale fused layer is used for semantic segmentation.
|
{
"abstract": [
"Aggregating extra features has been considered as an effective approach to boost traditional pedestrian detection methods. However, there is still a lack of studies on whether and how CNN-based pedestrian detectors can benefit from these extra features. The first contribution of this paper is exploring this issue by aggregating extra features into CNN-based pedestrian detection framework. Through extensive experiments, we evaluate the effects of different kinds of extra features quantitatively. Moreover, we propose a novel network architecture, namely HyperLearner, to jointly learn pedestrian detection as well as the given extra feature. By multi-task training, HyperLearner is able to utilize the information of given features and improve detection performance without extra inputs in inference. The experimental results on multiple pedestrian benchmarks validate the effectiveness of the proposed HyperLearner.",
"In this work we train in an end-to-end manner a convolutional neural network (CNN) that jointly handles low-, mid-, and high-level vision tasks in a unified architecture. Such a network can act like a swiss knife for vision tasks, we call it an UberNet to indicate its overarching nature. The main contribution of this work consists in handling challenges that emerge when scaling up to many tasks. We introduce techniques that facilitate (i) training a deep architecture while relying on diverse training sets and (ii) training many (potentially unlimited) tasks with a limited memory budget. This allows us to train in an end-to-end manner a unified CNN architecture that jointly handles (a) boundary detection (b) normal estimation (c) saliency estimation (d) semantic segmentation (e) human part segmentation (f) semantic boundary detection, (g) region proposal generation and object detection. We obtain competitive performance while jointly addressing all tasks in 0.7 seconds on a GPU. Our system will be made publicly available.",
"In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.",
"Real-time scene understanding has become crucial in many applications such as autonomous driving. In this paper, we propose a deep architecture, called BlitzNet, that jointly performs object detection and semantic segmentation in one forward pass, allowing real-time computations. Besides the computational gain of having a single network to perform several tasks, we show that object detection and semantic segmentation benefit from each other in terms of accuracy. Experimental results for VOC and COCO datasets show state-of-the-art performance for object detection and segmentation among real time systems.",
"Pedestrian detection is a critical problem in computer vision with significant impact on safety in urban autonomous driving. In this work, we explore how semantic segmentation can be used to boost pedestrian detection accuracy while having little to no impact on network efficiency. We propose a segmentation infusion network to enable joint supervision on semantic segmentation and pedestrian detection. When placed properly, the additional supervision helps guide features in shared layers to become more sophisticated and helpful for the downstream pedestrian detector. Using this approach, we find weakly annotated boxes to be sufficient for considerable performance gains. We provide an in-depth analysis to demonstrate how shared layers are shaped by the segmentation supervision. In doing so, we show that the resulting feature maps become more semantically meaningful and robust to shape and occlusion. Overall, our simultaneous detection and segmentation framework achieves a considerable gain over the state-of-the-art on the Caltech pedestrian dataset, competitive performance on KITTI, and executes 2x faster than competitive methods.",
"While most approaches to semantic reasoning have focused on improving performance, in this paper we argue that computational times are very important in order to enable real time applications such as autonomous driving. Towards this goal, we present an approach to joint classification, detection and semantic segmentation via a unified architecture where the encoder is shared amongst the three tasks. Our approach is very simple, can be trained end-to-end and performs extremely well in the challenging KITTI dataset, outperforming the state-of-the-art in the road segmentation task. Our approach is also very efficient, taking less than 100 ms to perform all tasks."
],
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_36",
"@cite_41",
"@cite_0",
"@cite_11"
],
"mid": [
"2613599172",
"2963498646",
"2137881638",
"2745261679",
"2732843069",
"2951916398"
]
}
|
Triply Supervised Decoder Networks for Joint Detection and Segmentation
|
Object detection and semantic segmentation are two fundamental and important tasks in the field of computer vision. In recent years, object detection [36,29,26] and semantic segmentation [30,5,1] with deep convolutional networks [20,39,14,17] have achieved great progress, respectively. Most state-of-the-art methods focus only on a single task and do not join object detection and semantic segmentation together. However, joint object detection and semantic segmentation is very necessary and important in many applications, such as self-driving cars and unmanned surface vessels. In fact, object detection and semantic segmentation are highly related. On the one hand, semantic segmentation, usually used as a multi-task supervision, can help improve object detection [31,24]. On the other hand, object detection can be used as prior knowledge to help improve the performance of semantic segmentation [14,34].
[Figure 1. Some architectures of joint detection and segmentation. (a) The last layer of the encoder is used for detection and segmentation [2]. (b) The branch for detection is refined by the branch for segmentation [31,47]. (c) Each layer of the decoder detects objects of different scales, and the fused layer is for segmentation [7]. (d) The proposed PairNet: each layer of the decoder is simultaneously used for detection and segmentation. (e) The proposed TripleNet, which has three types of supervisions and some light-weight modules.]
Due to application requirements and task relevance, joint object detection and semantic segmentation has gradually attracted the attention of researchers. Fig. 1 summarizes three typical methods of joint object detection and semantic segmentation. Fig. 1(a) shows the simplest and most naive way, where one branch for object detection and one branch for semantic segmentation are attached in parallel to the last layers of the encoder [2]. In Fig. 1(b), the branch for object detection is further refined by the features from the branch for semantic segmentation [31,47]. Recently, the encoder-decoder network has further been used for joint object detection and semantic segmentation. In Fig. 1(c), each layer of the decoder is used for multi-scale object detection, and the concatenated feature map from different layers of the decoder is used for semantic segmentation [7]. The above methods have achieved great success for detection and segmentation. However, the performance is still far from the strict demands of real applications such as self-driving cars and unmanned surface vessels. One possible reason is that the mutual benefit between the two tasks is not fully exploited.
To exploit mutual benefit for joint object detection and semantic segmentation tasks, in this paper, we propose to impose three types of supervisions (i.e., detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision) on each layer of the decoder network. Meanwhile, the light-weight modules (i.e., the inner-connected module and attention skip-layer fusion) are also incorporated. The corresponding network is called TripleNet (see Fig. 1(e)). It is noted that we also propose to only impose the detection-oriented supervision and class-aware segmentation supervision on each layer of the decoder, which is called PairNet (see Fig. 1(d)). The contributions of this paper can be summarized as follows:
(1) Two novel frameworks (i.e., PairNet and TripleNet) for joint object detection and semantic segmentation are proposed. In TripleNet, the detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are imposed on each layer of the decoder. Meanwhile, two light-weight modules (i.e., the inner-connected module and attention skip-layer fusion) are also incorporated into each layer of the decoder.
(2) A lot of synergies are gained from TripleNet. Both detection and segmentation accuracies are significantly improved. The improvement does not come at the expense of extra computational costs, because the class-agnostic segmentation and class-aware segmentation are not performed in each layer of the decoder at the test stage.
(3) Experiments on the VOC 2007 and VOC 2012 datasets are conducted to demonstrate the effectiveness of the proposed TripleNet.
The rest of this paper is organized as follows. Section 2 reviews some related works of object detection and semantic segmentation. Section 3 introduces our proposed method in detail. Experiments are shown in Section 4. Finally, it is concluded in Section 5.
The proposed methods
In recent years, fully convolutional networks (FCNs) with an encoder-decoder structure have achieved great success on object detection [26,9] and semantic segmentation [1], respectively. For example, DSSD [9,34] and RetinaNet [26] use different layers of the decoder to detect objects of different scales. By using the encoder-decoder structure, SegNet [1] and LargeKernel [33] generate high-resolution logits for semantic segmentation. Based on the above observations, a very natural and simple idea is that an FCN with an encoder-decoder structure is suitable for joint object detection and semantic segmentation.
In this section, we give a detailed introduction about the proposed paired supervision decoder network (i.e., PairNet) and triply supervised decoder network (i.e., TripleNet) for joint object detection and semantic segmentation.
Paired supervision decoder network (PairNet)
Based on the encoder-decoder structure, a feature pyramid network is naturally proposed to join object detection and semantic segmentation. Namely, the supervision of object detection and semantic segmentation is added to each layer of the decoder, which is called PairNet. On the one hand, PairNet uses different layers of the decoder to detect objects of different scales. On the other hand, instead of using only the last high-resolution layer for semantic segmentation, as adopted by most state-of-the-art methods [1,33], PairNet uses each layer of the decoder to parse pixel semantic labels. Though the proposed PairNet is very simple, it has not been explored for joint object detection and semantic segmentation to the best of our knowledge. Fig. 2(a) gives the detailed architecture of PairNet. The input image first goes through a fully convolutional network with an encoder-decoder structure. The encoder gradually down-samples the feature map. In this paper, the encoder is constructed from ResNet-50 or ResNet-101 [15] (i.e., res1-res4) and some newly added residual blocks (i.e., res5-res7). The decoder gradually maps the low-resolution feature map to a high-resolution feature map. To enhance context information, skip-layer fusion is used to fuse the feature map from the decoder and the corresponding feature map from the encoder. Fig. 2(b) shows the skip-layer fusion: the feature maps in the decoder are first upsampled by bilinear interpolation and then concatenated with the corresponding feature maps of the same resolution in the encoder. After that, the concatenated feature maps go through a residual unit to generate the output feature maps.
To join object detection and semantic segmentation, each layer of the decoder is further split into two different branches. The branch of object detection consists of a 3 × 3 convolutional layer and two sibling 1 × 1 convolutional layers for object classification and bounding box regression. The detection branches at different layers are used to detect objects of different scales. Specifically, the branch at a front layer of the decoder, with low resolution, is used to detect large-scale objects, while the branch at a latter layer, with high resolution, is used to detect small-scale objects.
The branch of semantic segmentation consists of a 3 × 3 convolutional layer to generate the logits. There are two different ways to compute the segmentation loss: either the segmentation logits are upsampled to the resolution of the ground-truth, or the ground-truth is downsampled to the resolution of the logits. We found that the first strategy has slightly better performance, so it is adopted in the following experiments.
Triply supervised decoder network (TripleNet)
To further improve the performance of joint object detection and semantic segmentation, a triply supervised decoder network (called TripleNet) is further proposed, where detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are added to each layer of the decoder. Fig. 3(a) gives the detailed architecture of TripleNet. Compared to PairNet, TripleNet adds some new modules (i.e., multiscale fused segmentation, the inner-connected module, class-agnostic segmentation supervision, and attention skip-layer fusion). In the following sections, we introduce these modules in detail.
Multiscale fused segmentation It has been demonstrated that multi-scale features are useful for semantic segmentation [7,49,46]. To use multi-scale features of different layers in the decoder for better semantic segmentation, the feature maps of different layers in the decoder are upsampled to the same spatial resolution and concatenated together. After that, a 3 × 3 convolutional layer is used to generate the segmentation logits. Compared to the segmentations based on one layer of the decoder, multilayer fused features can make better use of context information. Thus, multilayer fused segmentation is used for final prediction at the test stage. Meanwhile, the semantic segmentation based on each layer of the decoder can be seen as a deep supervision for feature learning.
The inner-connected module In Section 3.1, PairNet only shares the base network for object detection and semantic segmentation, while the branches of object detection and semantic segmentation do not interact. To further help object detection, an inner-connected module is proposed to refine object detection with the logits of semantic segmentation. Fig. 3(b) shows the inner-connected module in layer i. The feature map in layer i first goes through a 3 × 3 convolutional layer to produce the segmentation logits for the branch of semantic segmentation. Meanwhile, the segmentation logits go through two 3 × 3 convolutional layers to generate new feature maps, which are further concatenated with the feature maps in layer i. Based on the concatenated feature maps, a 3 × 3 convolutional layer is used to generate the feature map for the branch of object detection.
Class-agnostic segmentation supervision Semantic segmentation mentioned above is class-aware, which aims to simultaneously identify specific object categories and the background. We argue that class-aware semantic segmentation may ignore the discrimination between objects and the background. Therefore, a class-agnostic segmentation supervision module is further added to each layer of the decoder. Specifically, a 3 × 3 convolutional layer is added to generate the logits of class-agnostic semantic segmentation. To generate the ground-truth of class-agnostic semantic segmentation, the objects of all categories are merged into one category, and the background is set as another category.
[Table 1. Ablation experiments of PairNet and TripleNet on the VOC2012-val-seg set. The backbone model is ResNet50 [15], and the input image is rescaled to the size of 300 × 300. "MFS" means multiscale fused segmentation, "IC" means inner-connected module, "CAS" means class-agnostic segmentation, and "ASF" means attention skip-layer fusion.]
Attention skip-layer fusion In Section 3.1, PairNet simply fuses the feature maps of the decoder and the corresponding feature maps of the encoder. Generally, the features from a layer of the encoder have relatively low-level semantics, while those from a layer of the decoder have relatively high-level semantics. To enhance informative features and suppress less useful features from the encoder using the features from the decoder, a Squeeze-and-Excitation (SE) [16] block is used. The input of the SE block is the layer of the decoder, and the output of the SE block is used to scale the layer of the encoder. After that, the layer of the decoder and the scaled layer of the encoder are concatenated for fusion.
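A minimal sketch of the attention skip-layer fusion, using a standard SE block (squeeze by global average pooling, excitation by two linear layers); the reduction ratio of 16 is the SE paper's default, assumed here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionSkipFusion(nn.Module):
    """SE-style attention fusion: the decoder features produce per-channel
    weights that rescale the encoder features before concatenation."""
    def __init__(self, dec_ch, enc_ch, reduction=16):
        super().__init__()
        self.se = nn.Sequential(
            nn.Linear(dec_ch, dec_ch // reduction), nn.ReLU(),
            nn.Linear(dec_ch // reduction, enc_ch), nn.Sigmoid())

    def forward(self, dec_feat, enc_feat):
        # Squeeze: global average pooling over the decoder feature map.
        w = self.se(dec_feat.mean(dim=(2, 3)))        # (N, enc_ch) weights
        enc_scaled = enc_feat * w[:, :, None, None]   # excitation (rescale)
        dec_up = F.interpolate(dec_feat, size=enc_feat.shape[-2:],
                               mode="bilinear", align_corners=False)
        return torch.cat([dec_up, enc_scaled], dim=1)

out = AttentionSkipFusion(256, 128)(torch.randn(1, 256, 16, 16),
                                    torch.randn(1, 128, 32, 32))
print(out.shape)  # torch.Size([1, 384, 32, 32])
```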
Experiments
Datasets
To demonstrate the effectiveness of the proposed methods and to compare with some state-of-the-art methods, experiments on the well-known VOC 2007 and VOC 2012 datasets [8] are conducted in this section.
The PASCAL VOC challenge [8] has been held annually since 2006 and consists of three principal challenges (i.e., image classification, object detection, and semantic segmentation). Among these annual challenges, the VOC 2007 and VOC 2012 datasets are usually used to evaluate the performance of object detection and semantic segmentation; both have 20 object categories. The VOC 2007 dataset contains 5011 trainval images and 4952 test images. The VOC 2012 dataset is split into three subsets (i.e., train, val, and test). The train set contains 5717 images for detection and 1464 images for semantic segmentation (called VOC12-train-seg). The val set contains 5823 images for detection and 1449 images for segmentation (called VOC12-val-seg). The test set contains 10991 images for detection and 1456 for segmentation. To enlarge the training set for semantic segmentation, the additional segmentation data provided by [12] is used, which contains 10582 training images (called VOC12-trainaug-seg).
For object detection, mean average precision (mAP) is used for performance evaluation. On the PASCAL VOC datasets, mAP is calculated under an IoU threshold of 0.5. For semantic segmentation, mean intersection over union (mIoU) is used for performance evaluation.
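For reference, mIoU can be computed from a confusion matrix as in the sketch below (a simplification: classes absent from both prediction and ground truth contribute an IoU of 0 here, whereas a fuller implementation would exclude them):

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Compute mean intersection-over-union from a confusion matrix."""
    valid = gt != ignore_index
    # Accumulate the (gt, pred) confusion matrix over all valid pixels.
    conf = np.bincount(num_classes * gt[valid] + pred[valid],
                       minlength=num_classes ** 2).reshape(num_classes, -1)
    inter = np.diag(conf)
    union = conf.sum(0) + conf.sum(1) - inter
    iou = inter / np.maximum(union, 1)  # avoid division by zero
    return iou.mean()

pred = np.random.randint(0, 21, (300, 300))
gt = np.random.randint(0, 21, (300, 300))
print(mean_iou(pred, gt, num_classes=21))
```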
Ablation experiments on the VOC 2012 dataset
In this subsection, experiments are conducted on PASCAL VOC 2012 to validate the effectiveness of the proposed method. The VOC12-trainaug-seg set is used for training and the VOC12-val-seg set is used for performance evaluation, since both have ground truth for object detection and semantic segmentation. The input images are rescaled to the size of 300 × 300, and the mini-batch size is 32. The total number of iterations in the training stage is 40k, where the learning rate of the first 25k iterations is 0.0001, that of the following 10k iterations is 0.00001, and that of the last 5k iterations is 0.000001.
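The quoted schedule is a plain step decay; a minimal sketch of applying it per iteration (the optimizer type is an assumption — it is not stated in this excerpt):

```python
import torch

def lr_at_iteration(it):
    """Step schedule quoted in the text: 1e-4 for the first 25k
    iterations, 1e-5 for the next 10k, and 1e-6 for the last 5k."""
    if it < 25_000:
        return 1e-4
    if it < 35_000:
        return 1e-5
    return 1e-6

model = torch.nn.Linear(10, 2)  # stand-in for the actual network
opt = torch.optim.SGD(model.parameters(), lr=lr_at_iteration(0))

# Inside the training loop, the rate would be refreshed each iteration:
for it in [0, 24_999, 25_000, 34_999, 35_000, 39_999]:
    for group in opt.param_groups:
        group["lr"] = lr_at_iteration(it)
    print(it, group["lr"])
```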
The top part of Table 1 shows the ablation experiments of PairNet. When all the different layers of the decoder are only used for multi-scale object detection (i.e., Table 1(a)), the mAP of object detection is 78.0%. When all the different layers of the decoder are used for semantic segmentation (i.e., Table 1(c)), the mIoU of semantic segmentation is 72.5%. When all the different layers of the decoder are used for object detection and semantic segmentation together (i.e., Table 1(d)), the mAP and mIoU of PairNet are 78.9% and 73.1%, respectively. Namely, PairNet can improve both object detection and semantic segmentation, which indicates that joint object detection and semantic segmentation on each layer of the decoder is useful. Meanwhile, the method using all the different layers of the decoder for segmentation (i.e., Table 1(c)) has better performance than the method only using the last layer of the decoder for segmentation (i.e., Table 1(b)), which suggests that using each layer of the decoder for semantic segmentation can give a much deeper supervision.
[Figure 4. Qualitative results of the four methods in Table 1 (i.e., "only det", "only seg", PairNet, and TripleNet), alongside the ground-truth of detection ("GT of det") and segmentation ("GT of seg"). (a) demonstrates that detection and segmentation can both be improved by PairNet and TripleNet. (b) demonstrates that detection is mainly improved by PairNet or TripleNet. (c) demonstrates that segmentation is mainly improved by PairNet or TripleNet.]
[Table 2. Comparison of BlitzNet, the proposed PairNet, and the proposed TripleNet. All the methods are re-implemented in the same parameter settings.]
The bottom part of Table 1 shows the ablation experiments of TripleNet. Based on PairNet, TripleNet adds four modules (i.e., MFS, IC, CAS, and ASF). When adding the MFS module, TripleNet outperforms PairNet by 0.1% on object detection and 0.4% on semantic segmentation, respectively. When adding the MFS and IC modules, TripleNet outperforms PairNet by 0.6% on object detection and 0.5% on semantic segmentation. When adding all four modules, TripleNet achieves the best detection and segmentation performance. Fig. 4 shows qualitative results of the four methods in Table 1. The first two columns are the ground-truth of detection and segmentation. The results of only detection in Table 1(a) and only segmentation in Table 1(c) are shown in the third and fourth columns, and the results of PairNet in Table 1(d) and TripleNet in Table 1(g) are shown in the fifth to eighth columns. Fig. 4(a) gives examples where both detection and segmentation are improved by joint detection and segmentation. For example, in the first row, "only det" and "only seg" both miss three potted plants, while PairNet misses only one potted plant and TripleNet does not miss any. Fig. 4(b) shows examples of improved detection results. For example, in the first row, "only det" can detect only one ship, PairNet can detect three ships, and TripleNet can detect four ships. Fig. 4(c) shows examples of improved segmentation results. For example, in the second row, "only seg" recognizes the blue bag as a motorbike, but PairNet and TripleNet recognize the blue bag as background.
Meanwhile, the proposed PairNet and TripleNet are also compared to the related BlitzNet [7]. For a fair comparison, BlitzNet is re-implemented with similar parameter settings as the proposed PairNet and TripleNet. PairNet, which simply joins detection and segmentation in each layer of the decoder, is already comparable with BlitzNet. TripleNet outperforms BlitzNet on both object detection and semantic segmentation, which demonstrates that the proposed method can make full use of the mutual information to improve the two tasks.
Comparison with state-of-the-art methods on the VOC2012 test dataset
In this section, the proposed TripleNet is compared with some state-of-the-art methods on the VOC 2012 test set. Among these methods, SSD [29], RON [18], DSSD [9], DES [47], RefineDet [48], and DFPR [19] are only used for object detection, while ParseNet [27], DeepLab V2 [5], DPN [28], RefineNet [25], PSPNet [49], and DFPN [45] are only used for semantic segmentation. Table 2 shows the object detection results (mAP) and semantic segmentation results (mIoU) of these methods on the VOC2012 test set. It can be seen that most state-of-the-art methods can only output detection results (i.e., SSD, RON, DSSD, DES, RefineDet, and DFPR) or segmentation results (i.e., FCN, ParseNet, DeepLab, DPN, PSPNet, and DFPN). Only BlitzNet and our proposed TripleNet can simultaneously output the results of object detection and semantic segmentation. The mAP and mIoU of BlitzNet are 79.0% and 75.6%, while the mAP and mIoU of TripleNet are 81.0% and 82.9%. Thus, TripleNet outperforms BlitzNet by 2.0% on object detection and 7.3% on semantic segmentation. It can also be seen that TripleNet almost achieves state-of-the-art performance on both object detection and semantic segmentation.
Comparison with some state-of-the-art methods on the VOC 2007 test dataset
In this section, the proposed TripleNet and some state-of-the-art methods (i.e., SSD [29], DES [47], DSSD [9], STDN [50], BlitzNet [7], RefineDet [48], and DFPR [19]) are further compared on the VOC 2007 test set. Because only the ground-truth of object detection is provided, these methods are only evaluated on object detection. Table 4 shows the mAP of these methods. The mAP of TripleNet is 82.7%, which is higher than that of all the state-of-the-art methods.
[Table 4 row (extraction residue): RefineDet512 [48], backbone VGG16, mAP 81.8; remaining columns unrecoverable.]
Conclusion
In this paper, we proposed two fully convolutional networks (i.e., PairNet and TripleNet) for joint object detection and semantic segmentation. PairNet simultaneously predicts objects of different scales with different layers and parses pixel semantic labels with all the different layers. TripleNet adds four modules (i.e., multiscale fused segmentation, the inner-connected module, class-agnostic segmentation supervision, and attention skip-layer fusion) to PairNet. Experiments demonstrate that TripleNet can achieve state-of-the-art performance on both object detection and semantic segmentation.
| 3,116 |
1809.09419
|
2950669295
|
Procedural content generation via Machine Learning (PCGML) is the umbrella term for approaches that generate content for games via machine learning. One of the benefits of PCGML is that, unlike search or grammar-based PCG, it does not require hand authoring of initial content or rules. Instead, PCGML relies on existing content and black box models, which can be difficult to tune or tweak without expert knowledge. This is especially problematic when a human designer needs to understand how to manipulate their data or models to achieve desired results. We present an approach to Explainable PCGML via Design Patterns in which the design patterns act as a vocabulary and mode of interaction between user and model. We demonstrate that our technique outperforms non-explainable versions of our system in interactions with five expert designers, four of whom lack any machine learning expertise.
|
Procedural content generation via Machine Learning @cite_10 is a relatively new field, focused on generating content through machine learning methods. The majority of PCGML approaches represent black box methods, with no prior approach focused on explainability or co-creativity. We note some discussion in the survey paper on potential collaborative approaches. Summerville et al. explored adapting levels to players, but to our knowledge no work looks at adapting models to individual designers.
|
{
"abstract": [
"This survey explores Procedural Content Generation via Machine Learning (PCGML), defined as the generation of game content using machine learning models trained on existing content. As the importance of PCG for game development increases, researchers explore new avenues for generating high-quality content with or without human involvement; this paper addresses the relatively new paradigm of using machine learning (in contrast with search-based, solver-based, and constructive methods). We focus on what is most often considered functional game content such as platformer levels, game maps, interactive fiction stories, and cards in collectible card games, as opposed to cosmetic content such as sprites and sound effects. In addition to using PCG for autonomous generation, co-creativity, mixed-initiative design, and compression, PCGML is suited for repair, critique, and content analysis because of its focus on modeling existing content. We discuss various data sources and representations that affect the resulting generated content. Multiple PCGML methods are covered, including neural networks, long short-term memory (LSTM) networks, autoencoders, and deep convolutional networks; Markov models, @math -grams, and multi-dimensional Markov chains; clustering; and matrix factorization. Finally, we discuss open problems in the application of PCGML, including learning from small datasets, lack of training data, multi-layered learning, style-transfer, parameter tuning, and PCG as a game mechanic."
],
"cite_N": [
"@cite_10"
],
"mid": [
"2586544230"
]
}
|
Explainable PCGML via Game Design Patterns
|
Procedural Content Generation (PCG) represents a field of research into, and a set of techniques for, generating game content algorithmically. PCG historically requires a significant amount of human-authored knowledge to generate content, such as rules, heuristics, and individual components, creating a time and design-expertise burden. Procedural Content Generation via Machine Learning (PCGML) attempts to solve these issues by applying machine learning to extract this design knowledge from existing corpora of game content (Summerville et al. 2017). However, this approach has its own weaknesses: applied naively, these models require machine learning literacy to understand and debug. Machine learning literacy is uncommon, especially among those designers who might most benefit from PCGML.
Explainable AI represents a field of research into opening up black box Artificial Intelligence and Machine Learning models to users (Biran and Cotton 2017). The promise of explainable AI is not just that it will help users understand such models, but also tweak these models to their needs (Olah et al. 2018). If we could include some representation of an individual game designer's knowledge into a model, we could help designers without ML expertise better understand and alter these models to their needs.
Design patterns (Bjork and Holopainen 2004) represent one popular way to represent game design knowledge. A design pattern is a category of game structure that serves a general design purpose across similar games. Researchers tend to derive design patterns via subjective application of design expertise (Hullett and Whitehead 2010), which makes it difficult to broadly apply one set of patterns across different designers and games. The same subjective limitation also means that an individual set of design patterns can serve to clarify what elements of a game matter to an individual designer. Given a set of design patterns specialized to a particular designer one could leverage these design patterns in an Explainable PCGML system to help a designer understand and tweak a model to their needs. We note our usage of the term pattern differs from the literature. Typically, a design pattern generalizes across designers, whereas we apply it to indicate the unique structures across a game important to an individual designer.
We present an underlying system for a potential co-creative PCGML tool, intended for designers without ML expertise. This system takes user-defined design patterns for a target level, and outputs a PCGML model. The design patterns provided by designers and generated by our system can be understood as labels on level structure, which allow our PCGML model to better represent and reflect the design values of an individual designer. This system has two major components: (1) a classification system that learns to classify level structures with the user-specified design pattern labels, ensuring a user does not have to label all existing content in order to train (2) a level generation system that incorporates the user's level design patterns and can use these patterns as a vocabulary with which to interact with the user, for example by generating labels on level structure to represent the model's interpretation of that structure to the user.
The rest of this paper is organized as follows. First, we relate our work to prior, related work. Second, we describe our Explainable PCGML (XPCGML) system in terms of its two major components. Third, we discuss the three evaluations we ran with five expert designers. We end with a discussion of the system's limitations, future work, and conclusions. Our major contributions are the first application of explainable AI to PCGML, the use of a random forest classifier to minimize user effort, and the results of our evaluations. Our results demonstrate both the promise of these pattern labels in improving user interaction and a positive impact on the underlying model's performance.
System Overview
The approach presented in this paper builds an Explainable PCGML model based on existing level structure and an expert labeling design patterns upon that structure. We chose Super Mario Bros. as a domain given its familiarity to the game designers who took part in our evaluation. The general process for building a final model is as follows: First, users label existing game levels with the game design patterns they want to use for communicating with the system. For example, one might label both areas with large amounts of enemies and areas that require precise jumps as "challenges". The exact label can be anything as long as it is used consistently. Given this initial user labeling of level structure, we train a random forest classifier to classify additional level structure according to the labeled level chunks (Liaw, Wiener, and others 2002), which we then use to label all available levels with the user design pattern labels. Given this now larger training set of both level structure and labels, we train a convolutional neural network-based autoencoder on both level structure and its associated labels (Lang 1988; LeCun et al. 1989), which can then be used to generate new level structure and label its generated content with these design pattern labels (Jain et al. 2016).
We make use of Super Mario Bros. as our domain, and, in particular, we utilize those Super Mario Bros. levels present in the Video Game Level Corpus (Summerville et al. 2016b). We do not include underwater or boss/castle Super Mario Bros. levels. We made this choice as we perceived these two level types to be significantly different from all other level types. Further, while we make use of the VGLC levels, we do not make use of any of the VGLC Super Mario Bros. representations, which abstract away level components into higher order groups. Instead, we draw on the image parsing approach introduced in (Guzdial and Riedl 2016), using a spritesheet and OpenCV (Bradski and Kaehler 2000) to parse images of each level for a richer representation.
In total we identified thirty unique classes of level components, and make use of a matrix representation for each level section of size 8 × 8 × 30. The first two dimensions determine the tiles in the x and y axes, while the last dimension represents a one-hot vector of length 30 expressing component class. This vector is all 0's for any empty tile of a Super Mario Bros. level, and otherwise has 1's at the index associated with that particular level component. Thus, we can represent all level components, including background decoration. We note that we treat palette swaps of the same component as equivalent in class.
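As a concrete illustration of this representation, here is a minimal NumPy sketch; the component class id used in the example is hypothetical.

```python
import numpy as np

N_CLASSES, CHUNK = 30, 8  # 30 component classes, 8x8 tile chunks

def encode_chunk(tile_ids):
    """tile_ids: 8x8 grid of component class ids, with -1 marking empty tiles.
    Returns the 8x8x30 one-hot tensor described above."""
    onehot = np.zeros((CHUNK, CHUNK, N_CLASSES), dtype=np.float32)
    for x in range(CHUNK):
        for y in range(CHUNK):
            if tile_ids[x][y] >= 0:      # empty tiles stay all zeros
                onehot[x, y, tile_ids[x][y]] = 1.0
    return onehot

grid = [[-1] * CHUNK for _ in range(CHUNK)]
grid[7][0] = 3                           # hypothetical "ground" class id
print(encode_chunk(grid).shape)          # (8, 8, 30)
```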
We make use of the SciPy random forest classifier (Jones, Oliphant, and Peterson 2014) and TensorFlow for the autoencoder (Abadi et al. 2016).
Design Pattern Label Classifier
Our goal for the design pattern label classifier is to minimize the amount of work and time costs for a potential user of the system. Users have to label level structure with the design patterns they would like to use, but the label classifier ensures they do not have to hand-label all available levels. The classifier for this task must be able to perform given access to whatever small amount of training data a designer is willing to label for it, along with being able to easily update its model given potential feedback from a user. We anticipate the exact amount of training data the system has access to will differ widely between users, but we do not wish to overburden authors with long data labeling tasks. Random forest classifiers are known to perform reasonably under these constraints (Michalski, Carbonell, and Mitchell 2013).
The random forest model takes in an eight by eight level section and returns a level design pattern (either a user-defined design pattern or none). We train the random forest model based on the design pattern labels submitted by the user. We use a forest of size 100 with a maximum depth of 100 in order to encourage generality.
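A minimal sketch of this classifier, assuming scikit-learn (the commonly used random forest implementation in the SciPy ecosystem); the chunks and labels below are random placeholders, not real level data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 8 * 8 * 30))                     # flattened 8x8x30 chunks
y = rng.choice(["challenge", "reward", "none"], 200)  # design pattern labels

clf = RandomForestClassifier(n_estimators=100, max_depth=100)
clf.fit(X, y)
print(clf.predict(X[:1]))                             # e.g. ['challenge']
```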
In the case of an interactive, iterative system the random forest can be easily retrained. In the case where the random forest classifier correctly classifies any new design pattern there is no need for retraining. Otherwise, we can delete a subset of the trees of the random forest that incorrectly classified the design pattern, and retrain an appropriate number of trees to return to the maximum forest size on the existing labels and any additional new information.
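The tree-level replacement described above requires surgery on the forest's internal estimators, which is fiddly in scikit-learn because of its internal class encoding; as a simpler hedged sketch, the helper below refits the whole forest on the augmented data, and only when the forest actually misclassified the new example (the no-retraining case above).

```python
import numpy as np

def update_with_feedback(forest, X, y, x_new, y_new):
    """Fold one user-corrected example into the model. The paper replaces
    only the erring trees; refitting the whole forest (same size and depth)
    is a coarser but safe approximation."""
    X_aug = np.vstack([X, np.atleast_2d(x_new)])
    y_aug = np.append(y, y_new)
    if forest.predict(np.atleast_2d(x_new))[0] != y_new:
        forest.fit(X_aug, y_aug)  # retrain only when the forest was wrong
    return forest, X_aug, y_aug
```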
Even with the design pattern level classifier this system requires the somewhat unusual step of labeling existing level structure with design patterns a user finds important. However, this is a necessary step for the benefit of a shared vocabulary, and labeling content is much easier than designing new content. Further, we note that when two humans collaborate they must negotiate a shared vocabulary.
Generator
The existing level generation system is based on an autoencoder, and we visualize its architecture in Figure 1. The input comes in the form of a chunk of level content and the associated design pattern label, such as "intro" in the figure. This chunk is represented as an eight by eight by thirty input tensor plus a tensor of size n, where n indicates the total number of design pattern labels given by the user. This last vector of size n is a one-hot encoded vector of level design pattern labels.
After input, the level structure and design pattern label vector are separated. The level structure passes through a two layer convolutional neural network (CNN); we placed a dropout layer in between the two CNN layers to allow better generalization. After the CNN layers the output of this section and the design pattern vector recombine and pass through a fully connected layer with ReLU activation to an embedded vector of size 512. We note that, while large, this is much smaller than the 1920+n features of the input layer. The decoder section is an inverse of the encoder section of the architecture, starting with a ReLU fully connected layer, then deconvolutional neural network layers with upsampling handling the level structure. We implemented this model with an Adam optimizer and mean squared error loss. Note that for the purposes of evaluation this is a standard autoencoder; we intend to make use of a variational autoencoder in future work (Kingma and Welling 2013).
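A minimal Keras sketch of this architecture. The paper fixes the overall shape (two conv layers with dropout between them, a 512-unit ReLU embedding after recombining with the label vector, and a mirrored decoder with upsampling, trained with Adam and MSE); filter counts, kernel sizes, the dropout rate, and the label-reconstruction head are our assumptions.

```python
from tensorflow.keras import layers, Model

N_LABELS = 10  # n, the number of user design pattern labels (placeholder)

level_in = layers.Input(shape=(8, 8, 30), name="level")
label_in = layers.Input(shape=(N_LABELS,), name="pattern_label")

# Encoder: two conv layers with dropout in between
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(level_in)
x = layers.Dropout(0.5)(x)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
x = layers.Concatenate()([x, label_in])      # recombine structure and label
z = layers.Dense(512, activation="relu")(x)  # the size-512 embedding

# Decoder: inverse of the encoder, deconvolutions with upsampling
d = layers.Dense(2 * 2 * 64, activation="relu")(z)
d = layers.Reshape((2, 2, 64))(d)
d = layers.UpSampling2D()(d)                 # 2x2 -> 4x4
d = layers.Conv2DTranspose(32, 3, padding="same", activation="relu")(d)
d = layers.UpSampling2D()(d)                 # 4x4 -> 8x8
level_out = layers.Conv2DTranspose(30, 3, padding="same", activation="sigmoid")(d)
label_out = layers.Dense(N_LABELS, activation="sigmoid", name="label_out")(z)

model = Model([level_in, label_in], [level_out, label_out])
model.compile(optimizer="adam", loss="mse")
```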
Evaluation
Our system has two major parts: (1) a random forest classifier that attempts to label additional content with user-provided design patterns to learn the designer's vocabulary and (2) an autoencoder over level structure and associated patterns for generation. In this section we present three evaluations of our system. The first addresses the random forest label classifier, the second the entirety of the system, and the third addresses the limiting factor of time in human-computer interactions. For all three evaluations we make use of a dataset of levels from Super Mario Bros. labeled by five expert designers.
Dataset Collection
We reached out to ten design experts to label three or more Super Mario Bros. levels of their choice to serve as a dataset for this evaluation. We do not include prior, published academic patterns of Super Mario Bros. levels (e.g., (Dahlskog and Togelius 2012)) as these patterns were designed for general automated design instead of explainable co-creation. Our goal in choosing these ten designers was to get as diverse a pool of labels as possible. Of these ten, five responded and took part in this study.
• Adam Le Doux: Le Doux is a game developer and designer best known for his Bitsy game engine. He is currently a Narrative Tool Developer at Bungie.
• Dee Del Rosario: Del Rosario is an events and community organizer in games with organizations such as Different Games Collective and Seattle Indies, along with being a gamedev hobbyist. They currently work as a web developer and educator.
• Kartik Kini: Kini is an indie game developer through his studio Finite Reflection, and an associate producer at Cartoon Network Games.
• Gillian Smith: Smith is an Assistant Professor at WPI. She focuses on game design, AI, craft, and generative design.
• Kelly Snyder: Snyder is an Art Producer at Bethesda and previously a Technical Producer at Bungie.
All five of these experts were asked to label their choice of three levels with labels that established "a common language/vocabulary that you'd use if you were designing levels like this with another human". Of these experts only Smith had any knowledge of the underlying system. She produced two sets of design patterns for the levels she labeled: one including only those patterns she felt the system could understand, and a second including all patterns that matched the above criteria. We refer to these as Smith and Smith-Naive, respectively, throughout the rest of this section.
These experts labeled static images of non-boss and non-underwater Super Mario Bros. levels present in the Video Game Level Corpus (VGLC) (Summerville et al. 2016b). The experts labeled these images by drawing a rectangle over the level structure in which the design pattern occurred with some string to define the pattern. These rectangles could be of arbitrary size, but we translated each into either a single training example centered on the eight by eight chunk our system requires, or multiple training examples if it was larger than eight by eight.
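A sketch of this rectangle-to-chunk translation, assuming the level is already encoded as a (width, height, 30) array and lies far enough from the level edges for the slices to fit; the simple tiling may drop a remainder strip on oversized rectangles.

```python
import numpy as np

def rectangle_to_chunks(level, x0, y0, w, h, chunk=8):
    """One chunk centered on the rectangle if it fits within 8x8,
    otherwise a tiling of 8x8 chunks covering the rectangle."""
    if w <= chunk and h <= chunk:
        cx = max(0, min(x0 + (w - chunk) // 2, level.shape[0] - chunk))
        cy = max(0, min(y0 + (h - chunk) // 2, level.shape[1] - chunk))
        return [level[cx:cx + chunk, cy:cy + chunk]]
    chunks = []
    for x in range(x0, x0 + max(w - chunk, 0) + 1, chunk):
        for y in range(y0, y0 + max(h - chunk, 0) + 1, chunk):
            chunks.append(level[x:x + chunk, y:y + chunk])
    return chunks
```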
We include some summarizing information about these six sets of design pattern labels in Table 1. Specifically, we include the total number of labels and the top three labels, sorted by frequency and alphabetically, of each set. Each expert produced very distinct labels, with less than one percent of labels shared between different experts. We include the first example for the top label for each set of design patterns in Figure 2. Even in the case of Kini and Del Rosario, where there is a similar area and design pattern label, the focus differs. We train six separate models, one for each set of design pattern labels (Smith has two).
Label Classifier Evaluation
In this section we seek to understand how well our random forest classifier is able to identify design patterns in level structure. For the purposes of this evaluation we made use of AlexNet as a baseline (Szegedy et al. 2016), given that a convolutional neural network is the naive way one might anticipate solving this problem; we chose AlexNet given its popularity and success at similar image recognition tasks. In all instances we trained the AlexNet until its error converged. We make use of three-fold cross validation on the labels for this and the remaining evaluations, both to address the variance across even a single expert's labels and because of the small set of labels available for some experts. Our major focus is training and test accuracy across the folds. We summarize the results of this evaluation in Table 2, giving the average training and test accuracies across all folds along with the standard deviation. We note that in all instances our random forest (RF) approach outperformed the AlexNet CNN in terms of training accuracy, and nearly always in terms of test accuracy. Given more training time AlexNet's training accuracy might improve, but at the cost of test accuracy. We further note that AlexNet was on average one and a half times slower than the random forest in terms of training time. These results indicate that our random forest produces a more general classifier than AlexNet.
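The fold-level accuracies in Table 2 can be reproduced with a few lines of scikit-learn; the data below reuses the placeholder shapes from the earlier sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.random((200, 8 * 8 * 30))                     # placeholder chunks
y = rng.choice(["challenge", "reward", "none"], 200)  # placeholder labels

clf = RandomForestClassifier(n_estimators=100, max_depth=100)
scores = cross_validate(clf, X, y, cv=3, return_train_score=True)
print(scores["train_score"].mean(), scores["train_score"].std())
print(scores["test_score"].mean(), scores["test_score"].std())
```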
We note that our random forest performed fairly consistently in terms of training accuracy, at around 85%, but that the test accuracy varied significantly. Notably, the test accuracy did not vary according to the number of training samples or the number of labels per expert. This indicates that individual experts identify patterns that are more or less easy to classify automatically. We further note that Snyder and Del Rosario had very low testing accuracy across the board, which indicates a large amount of variance between tagged examples. Despite this, we demonstrate the utility of this approach in the next section.
Autoencoder Structure Evaluation
We hypothesize that the inclusion of design pattern labels into our autoencoder network will improve its overall representative quality, and further that the use of an automatic label classifier will allow us to gather sufficient training data to train the autoencoder. This evaluation addresses both these hypotheses. We draw upon the same dataset and the same three folds from the prior evaluation and create three variations of our system. The first autoencoder variation has no design pattern labels and is trained on all 8 × 8 chunks of level instead of only those chunks labeled or autolabeled with a design pattern. Given that this means fewer features and smaller input and output tensors, this model should outperform our full model unless the design pattern labels improve overall representative quality. The second autoencoder variation does not make use of the automatic design pattern label classifier, thus greatly reducing the training data. The last variation is simply our full system. For all approaches we trained until training error converged. We note that we trained a single 'no labels' variation and tested it on each expert, but trained separate models for the no automatic classifier and full versions of our approach for each expert.
Given these three variations, we chose to measure the difference in structure when the autoencoder was fed the test portions of each of the three folds. Specifically, we capture the number of incorrect structure features predicted. This can be understood as a stand-in for representation quality, given that the output of the autoencoder for the test sample will be the closest thing the autoencoder can represent to the test sample.
We give the average number and standard deviation of incorrect structural features/tiles over all three folds in Table 3. We note that the minimum value here would be 0 errors and the maximum would be 8 × 8 × 30 = 1920 incorrect structural feature values. For every expert except for Kini, who authored the smallest number of labels, our full system outperformed both variations. While some of these numbers are fairly close between the full and no labels variations, the values in bold were significantly lower according to the paired Wilcoxon Mann Whitney U test (p < 0.001).
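A hedged sketch of this metric and the significance test: we binarize the autoencoder output at 0.5 (an assumption), and read the paper's "paired Wilcoxon Mann Whitney U test" as SciPy's paired Wilcoxon signed-rank test; the error counts below are toy numbers.

```python
import numpy as np
from scipy.stats import wilcoxon

def structural_errors(pred, truth, threshold=0.5):
    """Incorrect structural feature values in one 8x8x30 chunk:
    0 means a perfect reconstruction, 1920 means every value is wrong."""
    return int(np.sum((pred > threshold) != (truth > threshold)))

errs_full = np.array([31, 28, 40, 22, 35])        # toy per-chunk error counts
errs_no_labels = np.array([45, 39, 52, 30, 41])
stat, p = wilcoxon(errs_full, errs_no_labels)     # paired significance test
print(f"p = {p:.4f}")
```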
Given the results in Table 3, we argue that both our hypotheses were shown to be correct, granted that the expert gives sufficient labels, with the cut-off appearing to be between Kini's 28 and Del Rosario's 38. Specifically, the representation quality is improved when labels are used, and applying the label classifier improves performance over not applying it.
Transfer Evaluation
A major concern for any co-creative tool based on Machine Learning is training time. In the prior autoencoder evaluation, both the no labels and full versions of our system took hours to train to convergence. This represents a major weakness, given that in some co-creative contexts designers may not want to wait for an offline training process, especially when we anticipate authors wanting to rapidly update their set of labels. Given these concerns, we evaluate a variation of our approach utilizing transfer learning. This drastically speeds up training time by adapting the weights of a pretrained network on one task to a new task.
We make use of student-teacher or born again neural networks, a transfer learning approach in which the weights of a pre-trained neural network are copied into another network of a different size. In this case we take the weights from our no labels autoencoder from the prior evaluation, copy them into our full architecture, and train from there. We construct two variations of this approach, once again depending on the use of the random forest label classifier or not. We compare both variations to the full and no labels system from the prior evaluation, using the same metric.
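A minimal sketch of this weight transfer in Keras: pairing layers by position and copying wherever the shapes match is a simplification, since the two architectures differ around the label vector; mismatched layers simply keep their fresh initialization.

```python
def transfer_weights(teacher, student):
    """Born-again style initialization: copy weights from the pre-trained
    no-labels autoencoder into the full model where layer shapes agree."""
    for t_layer, s_layer in zip(teacher.layers, student.layers):
        t_w, s_w = t_layer.get_weights(), s_layer.get_weights()
        if len(t_w) == len(s_w) and all(
                a.shape == b.shape for a, b in zip(t_w, s_w)):
            s_layer.set_weights(t_w)
    return student
```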
We present the results of this evaluation in Table 4. We note that, while the best performing variation did not change from the prior evaluation, in all cases except for the Kini models the transfer approaches got closer to the full variation, sometimes off by as little as a fraction of one structure feature. Further, these approaches were significantly faster to train: the transfer approach without automatic labeling trained in an average of 4.48 seconds and the transfer approach with the automatic labeler trained in an average of 144.92 seconds, compared to roughly five hours for the full approach on the same computer. This points to a clear breakdown of when each variation of our approach makes sense, depending on time requirements and processing power. In addition, it continues to support our hypotheses concerning the use of the automatic labeler and personal level design pattern labels.
Qualitative Example
We do not present a front-end or interaction paradigm for the use of this Explainable PCGML system, as we feel such implementation details will depend upon the intended audience. However, it is illustrative to give an example of how the system could be used. In Figure 3 we present the two training examples of the pattern "completionist reward" labeled by the expert Dee Del Rosario. The full system, including the random forest classifier, trains on these examples (and the other labels from Del Rosario), and is then given as input the eight by eight chunk with only the floating bar within it on the left of the image, along with the desired label "completionist reward". One can imagine that Del Rosario as a user wants to add a reward to this section, but doesn't have any strong ideas. Given this input the system outputs the image on the right. We asked Del Rosario what they thought of the performance of the system and whether they considered this output to match their definition of completionist reward. They replied "Yes -I think? I would because I'm focusing on the position of the coins." We note that Del Rosario did not see the most decisive patch when making this statement, which we extracted as in (Olah et al. 2018). This clearly demonstrates some harmony between the learned model and the design intent. However, Del Rosario went on to say "I think if I were to go... more strict with the definition/phrase, I'd think of some other configuration that would make you think, 'oooooh, what a tricky design!!' ". This indicates a desire to further clarify the model. Thus, we imagine an iterative model is necessary for a tool utilizing this system and a user to reach a state of harmonious interaction.
Conclusions
In this paper, we present an approach to explainable PCGML (XPCGML) through user-authored design pattern labels over existing level structure. We evaluate our autoencoder and random forest labeler components on levels labeled by game design experts. These labels serve as a shared language between the user and level design agent, which allows for the possibility of explainability and meaningful collaborative interaction. We intend to take our system and incorporate it into a co-creative tool for novice and expert level designers. To the best of our knowledge this represents the first approach to explainable PCGML.
| 3,846 |
1809.09419
|
2950669295
|
|
Super Mario Bros. (SMB) represents a common area of research into PCGML @cite_24 @cite_14 @cite_8 @cite_17 . Beyond explainability, our approach differs from prior SMB PCGML approaches in terms of representation quality and the size of generated content. First, we focus on the generation of individual level sections instead of entire levels in order to better afford collaborative level building @cite_15 . Second, prior approaches have abstracted away the possible level components into higher order groups, for example treating all enemy types as equivalent and ignoring decorative elements. We make use of a rich representation of all possible level components and an ordering that allows our approach to place decorative elements appropriately.
|
{
"abstract": [
"The procedural generation of video game levels has existed for at least 30 years, but only recently have machine learning approaches been used to generate levels without specifying the rules for generation. A number of these have looked at platformer levels as a sequence of characters and performed generation using Markov chains. In this paper we examine the use of Long Short-Term Memory recurrent neural networks (LSTMs) for the purpose of generating levels trained from a corpus of Super Mario Brothers levels. We analyze a number of different data representations and how the generated levels fit into the space of human authored Super Mario Brothers levels.",
"",
"Procedural content generation and design patterns could potentially be combined in several different ways in game design. This paper discusses how to combine the two, using automatic platform game level design as an example. The paper also present work towards a pattern-based level generator for Super Mario Bros. (SMB), which is based on an analysis of the levels of the original SMB game where we found 23 different patterns.",
"Tanagra is a mixed-initiative tool for level design, allowing a human and a computer to work together to produce a level for a 2-D platformer. An underlying, reactive level generator ensures that all levels created in the environment are playable, and provides the ability for a human designer to rapidly view many different levels that meet their specifications. The human designer can iteratively refine the level by placing and moving level geometry, as well as through directly manipulating the pacing of the level. This paper presents the design environment, its underlying architecture that integrates reactive planning and numerical constraint solving, and an evaluation of Tanagra's expressive range.",
"Procedural content generation has become a popular research topic in recent years. However, most content generation systems are specialized to a single game. We are interested in methods that can generate content for a wide variety of games without a game-specific algorithm design. Statistical approaches are a promising avenue for such generators and, more specifically, map generators. In this paper, we explore Markov models as a means of modeling and generating content for multiple domains. We apply our Markov models to Super Mario Bros. , Loderunner , and Kid Icarus in order to determine how well our models perform in terms of the playability of the content generated, the expressive ranges of the models, and the effects of training data on those expressive ranges."
],
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_24",
"@cite_15",
"@cite_17"
],
"mid": [
"2290393232",
"",
"2027451374",
"2108806781",
"2546680972"
]
}
|
Explainable PCGML via Game Design Patterns
|
| 3,846 |
1809.09419
|
2950669295
|
|
Design patterns represent a well-researched approach to game design @cite_3 . In theory, game design patterns describe general solutions to game design problems that occur across many different games. Game design patterns have been used as heuristics in evolutionary PCG systems, including in the domain of Super Mario Bros. @cite_24 . Researchers tend to derive game design patterns either through rigorous, cross-domain analysis @cite_21 or through their subjective interpretation of game structure. We embrace this subjectivity in our work by having designers create a language of game design patterns unique to themselves with which to interact with a PCGML system.
|
{
"abstract": [
"Procedural content generation and design patterns could potentially be combined in several different ways in game design. This paper discusses how to combine the two, using automatic platform game level design as an example. The paper also present work towards a pattern-based level generator for Super Mario Bros. (SMB), which is based on an analysis of the levels of the original SMB game where we found 23 different patterns.",
"Video games today increasingly situate play in imaginative 3D worlds. As a result, the industry devotes much time and effort to level design. However, this subject has received very little research. Documentation on the process of level design or how designers push or pull players through a level within a video game is very sparse. In this paper, we propose a set of design patterns for level design. The patterns were developed based on a process involving interviews with game designers as well as gameplay analysis of different games. We established face validity of these patterns through expert review; we also established reliability using inter-rater agreement. In addition, we also developed a timeline video annotation method based on these patterns. This visualization method provides a very effective approach to view players' play style and preference as well as level design problems. The patterns as well as the visualization method will be discussed in the paper.",
"PART I BACKGROUND 1 Introduction 2 An Activity-Based Framework for Describing Games 3 Game Design Patterns PART II THE PATTERN 4 Using Design Patterns 5 Game Design Patterns for Game Elements 6 Game Design Patterns for Resource and Resource Management 7 Game Design Patterns for Information, Communication, and Presentation 8 Actions and Events Patterns 9 Game Design Patterns for Narrative Structures, Predictability, and Immersion Patterns 10 Game Design Patterns for Social Interaction 11 Game Design Patterns for Goals 12 Game Design Patterns for Goal Structures 13 Game Design Patterns for Game Sessions 14 Game Design Patterns for Game Mastery and Balancing 15 Game Design Patterns for Meta Games, Replayability, and Learning Curves"
],
"cite_N": [
"@cite_24",
"@cite_21",
"@cite_3"
],
"mid": [
"2027451374",
"1989846335",
"283812555"
]
}
|
Explainable PCGML via Game Design Patterns
|
Procedural Content Generation (PCG) represents a field of research into, and a set of techniques for, generating game content algorithmically. PCG historically requires a significant amount of human-authored knowledge to generate content, such as rules, heuristics, and individual components, creating a time and design expertise burden. Procedural Content Generation via Machine Learning (PCGML) attempts to solve these issues by applying machine learning to extract this design knowledge from existing corpora of game content (Summerville et al. 2017). However, this approach has its own weaknesses: applied naively, these models require machine learning literacy to understand and debug. Machine learning literacy is uncommon, especially among those designers who might most benefit from PCGML.
Explainable AI represents a field of research into opening up black box Artificial Intelligence and Machine Learning models to users (Biran and Cotton 2017). The promise of explainable AI is not just that it will help users understand such models, but also tweak these models to their needs (Olah et al. 2018). If we could include some representation of an individual game designer's knowledge into a model, we could help designers without ML expertise better understand and alter these models to their needs.
Design patterns (Bjork and Holopainen 2004) are one popular way to represent game design knowledge. A design pattern is a category of game structure that serves a general design purpose across similar games. Researchers tend to derive design patterns via subjective application of design expertise (Hullett and Whitehead 2010), which makes it difficult to broadly apply one set of patterns across different designers and games. The same subjective limitation means that an individual set of design patterns can serve to clarify what elements of a game matter to an individual designer. Given a set of design patterns specialized to a particular designer, one could leverage these design patterns in an Explainable PCGML system to help that designer understand and tweak a model to their needs. We note that our usage of the term pattern differs from the literature. Typically, a design pattern generalizes across designers, whereas we apply it to indicate the unique structures across a game important to an individual designer.
We present an underlying system for a potential co-creative PCGML tool, intended for designers without ML expertise. This system takes user-defined design patterns for a target level, and outputs a PCGML model. The design patterns provided by designers and generated by our system can be understood as labels on level structure, which allow our PCGML model to better represent and reflect the design values of an individual designer. This system has two major components: (1) a classification system that learns to classify level structures with the user-specified design pattern labels, which ensures a user does not have to label all existing content; and (2) a level generation system that incorporates the user's level design patterns, and can use these patterns as a vocabulary with which to interact with the user, for example by generating labels on level structure to represent the model's interpretation of that structure to the user.
The rest of this paper is organized as follows. First, we relate our work to prior work. Second, we describe our Explainable PCGML (XPCGML) system in terms of its two major components. Third, we discuss the three evaluations we ran with five expert designers. We end with a discussion of the system's limitations, future work, and conclusions. Our major contributions are the first application of explainable AI to PCGML, the use of a random forest classifier to minimize user effort, and the results of our evaluations. Our results demonstrate both the promise of these pattern labels in improving user interaction and a positive impact on the underlying model's performance.
System Overview
The approach presented in this paper builds an Explainable PCGML model based on existing level structure and an expert labeling design patterns upon that structure. We chose Super Mario Bros. as a domain given its familiarity to the game designers who took part in our evaluation. The general process for building a final model is as follows. First, users label existing game levels with the game design patterns they want to use for communicating with the system. For example, one might label both areas with large amounts of enemies and areas that require precise jumps as "challenges". The exact label can be anything, as long as it is used consistently. Given this initial user labeling of level structure, we train a random forest classifier to classify additional level structure according to the labeled level chunks (Liaw, Wiener, and others 2002), which we then use to label all available levels with the user design pattern labels. Given this now larger training set of both level structure and labels, we train a convolutional neural network-based autoencoder on both level structure and its associated labels (Lang 1988; LeCun et al. 1989), which can then be used to generate new level structure and label its generated content with these design pattern labels (Jain et al. 2016).
We make use of Super Mario Bros. as our domain, and, in particular, we utilize those Super Mario Bros. levels present in the Video Game Level Corpus (Summerville et al. 2016b). We do not include underwater or boss/castle Super Mario Bros. levels, as we perceived these two level types to be significantly different from all other level types. Further, while we make use of the VGLC levels, we do not make use of any of the VGLC Super Mario Bros. representations, which abstract away level components into higher order groups. Instead, we draw on the image parsing approach introduced in (Guzdial and Riedl 2016), using a spritesheet and OpenCV (Bradski and Kaehler 2000) to parse images of each level for a richer representation.
In total we identified thirty unique classes of level components, and make use of a matrix representation for each level section of size 8 × 8 × 30. The first two dimensions determine the tiles in the x and y axes, while the last dimension represents a one-hot vector of length 30 expressing component class. This vector is all 0's for any empty tile of a Super Mario Bros. level, and otherwise has 1's at the index associated with that particular level component. Thus, we can represent all level components, including background decoration. We note that we treat palette swaps of the same component as equivalent in class.
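For illustration, the encoding just described might be implemented as in the following sketch; the tile grid contents and the class-index assignment are hypothetical stand-ins for the OpenCV-parsed components.

```python
import numpy as np

N_CLASSES = 30  # unique level component classes
CHUNK = 8       # chunk width/height in tiles

def encode_chunk(tile_grid):
    """tile_grid: 8x8 array of component-class indices, with -1 for empty."""
    tensor = np.zeros((CHUNK, CHUNK, N_CLASSES), dtype=np.float32)
    for x in range(CHUNK):
        for y in range(CHUNK):
            c = tile_grid[x][y]
            if c >= 0:  # empty tiles keep an all-zero one-hot vector
                tensor[x, y, c] = 1.0
    return tensor

# Example: a chunk whose bottom row is a (hypothetical) ground tile, class 0.
grid = -np.ones((8, 8), dtype=int)
grid[:, 7] = 0
assert encode_chunk(grid).shape == (8, 8, 30)
```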
We make use of the SciPy ecosystem's random forest classifier (Jones, Oliphant, and Peterson 2014) and TensorFlow for the autoencoder (Abadi et al. 2016).
Design Pattern Label Classifier
Our goal for the design pattern label classifier is to minimize the work and time costs for a potential user of the system. Users have to label level structure with the design patterns they would like to use, but the label classifier ensures they do not have to hand-label all available levels. The classifier for this task must be able to perform given access to whatever small amount of training data a designer is willing to label for it, and must be easy to update given potential feedback from a user. We anticipate that the exact amount of training data the system has access to will differ widely between users, but we do not wish to overburden authors with long data labeling tasks. Random forest classifiers are known to perform reasonably under these constraints (Michalski, Carbonell, and Mitchell 2013).
The random forest model takes in an eight by eight level section and returns a level design pattern (either a user-defined design pattern or none). We train the random forest model based on the design pattern labels submitted by the user. We use a forest of size 100 with a maximum depth of 100 in order to encourage generality.
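The following is a minimal scikit-learn sketch of this classifier; the training arrays are placeholders standing in for the user's flattened 8 × 8 × 30 chunks, and the label strings are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: flattened 8x8x30 chunks with hypothetical pattern labels.
X_train = np.random.rand(200, 8 * 8 * 30)
y_train = np.random.choice(["intro", "challenge", "none"], size=200)

clf = RandomForestClassifier(n_estimators=100, max_depth=100)
clf.fit(X_train, y_train)

# Auto-label all remaining level chunks with the user's vocabulary.
X_unlabeled = np.random.rand(1000, 8 * 8 * 30)
auto_labels = clf.predict(X_unlabeled)
```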
In the case of an interactive, iterative system the random forest can be easily retrained. In the case where the random forest classifier correctly classifies any new design pattern there is no need for retraining. Otherwise, we can delete a subset of the trees of the random forest that incorrectly classified the design pattern, and retrain an appropriate number of trees to return to the maximum forest size on the existing labels and any additional new information.
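The tree-replacement retraining just described might be sketched as follows with scikit-learn's warm_start mechanism; dropping trees by slicing the fitted estimators_ list relies on library internals (including that individual trees predict label-encoded class indices), so this is an illustrative sketch rather than an official API.

```python
import numpy as np

def retrain_on_mistake(clf, x_new, y_new, X_all, y_all, forest_size=100):
    # Keep only the trees that already classify the corrected example well.
    x_new = x_new.reshape(1, -1)
    y_enc = np.where(clf.classes_ == y_new)[0][0]  # encoded class index
    kept = [t for t in clf.estimators_ if t.predict(x_new)[0] == y_enc]
    clf.estimators_ = kept
    clf.n_estimators = len(kept)
    # Grow the forest back to full size on the augmented data; with
    # warm_start=True, fit() keeps surviving trees and only adds new ones.
    clf.set_params(warm_start=True, n_estimators=forest_size)
    clf.fit(np.vstack([X_all, x_new]), np.append(y_all, y_new))
    return clf
```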
Even with the design pattern level classifier, this system requires the somewhat unusual step of labeling existing level structure with design patterns a user finds important. However, this is a necessary step for the benefit of a shared vocabulary, and labeling content is much easier than designing new content. Further, we note that when two humans collaborate they must negotiate a shared vocabulary.
Generator
The existing level generation system is based on an autoencoder, and we visualize its architecture in Figure 1. The input comes in the form of a chunk of level content and the associated design patterns label, such as "intro" in the figure. This chunk is represented as an eight by eight by thirty input tensor plus a tensor of size n where n indicates the total number of design pattern labels given by the user. This last vector of size n is a one-hot encoded vector of level design pattern labels.
After input, the level structure and design pattern label vector are separated. The level structure passes through a two-layer convolutional neural network (CNN); we place a dropout layer between the two CNN layers to allow better generalization. After the CNN layers, the output of this section and the design pattern vector recombine and pass through a fully connected layer with relu activation to an embedded vector of size 512. We note that, while large, this is much smaller than the 1920+n features of the input layer. The decoder section is an inverse of the encoder section of the architecture, starting with a relu fully connected layer, followed by deconvolutional neural network layers with upsampling handling the level structure. We implemented this model with an Adam optimizer and mean squared error loss. Note that for the purposes of evaluation this is a standard autoencoder. We intend to make use of a variational autoencoder in future work (Kingma and Welling 2013).
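To make this architecture concrete, the following is a minimal Keras sketch; the filter counts, kernel sizes, stride-based downsampling/upsampling, dropout rate, and output activations are our assumptions, as the text above does not fix them, and n_labels stands for the number of user-defined patterns.

```python
from tensorflow.keras import layers, Model

n_labels = 10  # assumed number of user design-pattern labels

chunk_in = layers.Input(shape=(8, 8, 30), name="level_chunk")
label_in = layers.Input(shape=(n_labels,), name="pattern_label")

# Encoder: two conv layers with dropout in between (filter sizes assumed).
h = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(chunk_in)
h = layers.Dropout(0.3)(h)
h = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(h)
h = layers.Flatten()(h)

# Recombine level structure with the design-pattern vector; embed to 512.
z = layers.Concatenate()([h, label_in])
z = layers.Dense(512, activation="relu")(z)

# Decoder: mirror of the encoder; upsampling via strided transposed convs.
d = layers.Dense(2 * 2 * 64, activation="relu")(z)
d = layers.Reshape((2, 2, 64))(d)
d = layers.Conv2DTranspose(32, 3, strides=2, padding="same",
                           activation="relu")(d)
chunk_out = layers.Conv2DTranspose(30, 3, strides=2, padding="same",
                                   activation="sigmoid", name="chunk_out")(d)
label_out = layers.Dense(n_labels, activation="sigmoid", name="label_out")(z)

model = Model([chunk_in, label_in], [chunk_out, label_out])
model.compile(optimizer="adam", loss="mse")
```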
Evaluation
Our system has two major parts: (1) a random forest classifier that attempts to label additional content with userprovided design patterns to learn the designer's vocabulary and (2) an autoencoder over level structure and associated patterns for generation. In this section we present three evaluations of our system. The first addresses the random forest classifier of labels, the second the entirety of the system, and the third addresses the limiting factor of time in human computer interactions. For all three evaluations we make use of a dataset of levels from Super Mario Bros. labeled by five expert designers.
Dataset Collection
We reached out to ten design experts to label three or more Super Mario Bros. levels of their choice to serve as a dataset for this evaluation. We do not include prior, published academic patterns of Super Mario Bros. levels (e.g. (Dahlskog and Togelius 2012)) as these patterns were designed for general automated design instead of explainable co-creation. Our goal in choosing these ten designers was to get as diverse a pool of labels as possible. Of these ten, five responded and took part in this study.
• Adam Le Doux: Le Doux is a game developer and designer best known for his Bitsy game engine. He is currently a Narrative Tool Developer at Bungie.
• Dee Del Rosario: Del Rosario is an events and community organizer in games with organizations such as Different Games Collective and Seattle Indies, along with being a gamedev hobbyist. They currently work as a web developer and educator.
• Kartik Kini: Kini is an indie game developer through his studio Finite Reflection, and an associate producer at Cartoon Network Games.
• Gillian Smith: Smith is an Assistant Professor at WPI. She focuses on game design, AI, craft, and generative design.
• Kelly Snyder: Snyder is an Art Producer at Bethesda and previously a Technical Producer at Bungie.
All five of these experts were asked to label their choice of three levels with labels that established "a common language/vocabulary that you'd use if you were designing levels like this with another human". Of these experts only Smith had any knowledge of the underlying system. She produced two sets of design patterns for the levels she labeled, one including only those patterns she felt the system could understand and the second including all patterns that matched the above criteria. We refer to these labels as Smith and Smith-Naive through the rest of this section, respectively.
These experts labeled static images of non-boss and non-underwater Super Mario Bros. levels present in the Video Game Level Corpus (VGLC) (Summerville et al. 2016b). The experts labeled these images by drawing a rectangle over the level structure in which the design pattern occurred, together with some string to define the pattern. These rectangles could be of arbitrary size, but we translated each into either a single training example centered on the eight by eight chunk our system requires, or multiple training examples if it was larger than eight by eight.
We include some summarizing information about these six sets of design pattern labels in Table 1. Specifically, we include the total number of labels and the top three labels, sorted by frequency and alphabetically, of each set. Each expert produced very distinct labels, with less than one percent of labels shared between different experts. We include the first example for the top label of each set of design patterns in Figure 2. Even in the case of Kini and Del Rosario, where there is a similar area and design pattern label, the focus differs. We train six separate models, one for each set of design pattern labels (Smith has two).
Label Classifier Evaluation
In this section we seek to understand how well our random forest classifier is able to identify design patterns in level structure. For the purposes of this evaluation we made use of AlexNet as a baseline (Szegedy et al. 2016), given that a convolutional neural network is the naive approach one might anticipate for this problem. We chose AlexNet given its popularity and success at similar image recognition tasks. In all instances we trained the AlexNet until its error converged. We make use of three-fold cross validation on the labels for this and the remaining evaluations, to address the variance across even a single expert's labels and the small set of labels available for some experts. Our major focus is training and test accuracy across the folds. We summarize the results of this evaluation in Table 2, giving the average training and test accuracies across all folds along with the standard deviation. We note that in all instances our random forest (RF) approach outperformed the AlexNet CNN in terms of training accuracy, and nearly always in terms of test accuracy. We note that given more training time AlexNet's training accuracy might improve, but at the cost of test accuracy. We further note that AlexNet was on average one and a half times slower than the random forest in terms of training time. These results indicate that our random forest produces a more general classifier compared to AlexNet.
We note that our random forest performed fairly consistently in terms of training accuracy, at around 85%, but that the test accuracy varied significantly. Notably, the test accuracy did not vary according to the number of training samples or the number of labels per expert. This indicates that individual experts identify patterns that are more or less easy to classify automatically. Further, we note that Snyder and Del Rosario had very low test accuracy across the board, which indicates a large amount of variance between their tagged examples. Despite this, we demonstrate the utility of this approach in the next section.
Autoencoder Structure Evaluation
We hypothesize that the inclusion of design pattern labels into our autoencoder network will improve its overall representative quality, and further, that the use of an automatic label classifier will allow us to gather sufficient training data to train the autoencoder. This evaluation addresses both these hypotheses. We draw upon the same dataset and the same three folds from the prior evaluation and create three variations of our system. The first autoencoder variation has no design pattern labels and is trained on all 8 × 8 chunks of level instead of only those chunks labeled or autolabeled with a design pattern. Given that this means fewer features and smaller input and output tensors, this model should outperform our full model unless the design pattern labels improve overall representative quality. The second autoencoder variation does not make use of the automatic design pattern label classifier, thus greatly reducing the training data. The last variation is simply our full system. For all approaches we trained until training error converged. We note that we trained a single 'no labels' variation and tested it on each expert, but trained separate models for the no-automatic-classifier and full versions of our approach for each expert.
Given these three variations, we chose to measure the difference in structure when the autoencoder was fed the test portions of each of the three folds. Specifically, we capture the number of incorrectly predicted structural features. This can be understood as a stand-in for representation quality, given that the output of the autoencoder for a test sample is the closest thing the autoencoder can represent to that sample.
We give the average number and standard deviation of incorrect structural features/tiles over all three folds in Table 3. We note that the minimum value here would be 0 errors and the maximum value would be 8 × 8 × 30 = 1920 incorrect structural feature values. For every expert except Kini, who authored the smallest number of labels, our full system outperformed both variations. While some of these numbers are fairly close between the full and no-labels variations, the values in bold were significantly lower according to the paired Wilcoxon Mann Whitney U test (p < 0.001).
Given the results in Table 3, we argue that both our hypotheses hold, granted that the expert gives sufficient labels, with the cut-off appearing to be between Kini's 28 and Del Rosario's 38 labels. Specifically, representation quality is improved when labels are used, and the label classifier improves performance over not applying the label classifier.
Transfer Evaluation
A major concern for any co-creative tool based on machine learning is training time. In the prior autoencoder evaluation, both the no-labels and full versions of our system took hours to train to convergence. This represents a major weakness, given that in some co-creative contexts designers may not want to wait for an offline training process, especially when we anticipate authors wanting to rapidly update their set of labels. Given these concerns, we evaluate a variation of our approach utilizing transfer learning, which drastically speeds up training time by adapting the weights of a network pre-trained on one task to a new task.
We make use of student-teacher or born again neural networks, a transfer learning approach in which the weights of a pre-trained neural network are copied into another network of a different size. In this case we take the weights from our no labels autoencoder from the prior evaluation, copy them into our full architecture, and train from there. We construct two variations of this approach, once again depending on the use of the random forest label classifier or not. We compare both variations to the full and no labels system from the prior evaluation, using the same metric.
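Under the assumption that the two models share layer ordering where shapes align, the weight copy might be sketched as follows; teacher and student are hypothetical Keras models standing in for the no-labels and full architectures.

```python
def transfer_weights(teacher, student):
    """Copy weights layer-by-layer wherever shapes match; layers touching
    the label vector have no teacher counterpart and train from scratch."""
    for t_layer, s_layer in zip(teacher.layers, student.layers):
        try:
            s_layer.set_weights(t_layer.get_weights())
        except ValueError:  # shape mismatch
            pass
    return student
```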
We present the results of this evaluation in Table 4. We note that, while the best performing variation did not change from the prior evaluation, in all cases except for the Kini models the transfer approaches got closer to the full variation, sometimes off by as little as a fraction of one structure feature. Further, these approaches were significantly faster to train, with the no-automatic-labeling transfer approach training in an average of 4.48 seconds and the automatic-labeler transfer approach training in an average of 144.92 seconds, compared to the average of roughly five hours of the full approach on the same computer. This points to a clear breakdown of when it makes sense to apply each variation of our approach, depending on time requirements and processing power. In addition, it continues to support our hypotheses concerning the use of the automatic labeler and personal level design pattern labels.
Qualitative Example
We do not present a front-end or interaction paradigm for the use of this Explainable PCGML system, as we feel such implementation details will depend upon the intended audience. However, it is illustrative to give an example of how the system could be used. In Figure 3 we present the two training examples of the pattern "completionist reward" labeled by the expert Dee Del Rosario. The full system, including the random forest classifier, trains on these examples (and the other labels from Del Rosario), and is then given as input the eight by eight chunk with only the floating bar within it, on the left of the image, along with the desired label "completionist reward". One can imagine that Del Rosario as a user wants to add a reward to this section, but doesn't have any strong ideas. Given this input, the system outputs the image on the right. We asked Del Rosario what they thought of the performance of the system and whether they considered this output to match their definition of completionist reward. They replied "Yes -I think? I would because I'm focusing on the position of the coins." We note that Del Rosario did not see the most decisive patch when making this statement, which we extracted as in (Olah et al. 2018). This clearly demonstrates some harmony between the learned model and the design intent. However, Del Rosario went on to say "I think if I were to go... more strict with the definition/phrase, I'd think of some other configuration that would make you think, 'oooooh, what a tricky design!!' ". This indicates a desire to further clarify the model. Thus, we imagine an iterative model is necessary for a tool utilizing this system and a user to reach a state of harmonious interaction.
Conclusions
In this paper, we present an approach to explainable PCGML (XPCGML) through user-authored design pattern labels over existing level structure. We evaluate our autoencoder and random forest labeler components on levels labeled by game design experts. These labels serve as a shared language between the user and level design agent, which allows for the possibility of explainability and meaningful collaborative interaction. We intend to take our system and incorporate it into a co-creative tool for novice and expert level designers. To the best of our knowledge this represents the first approach to explainable PCGML.
| 3,846 |
1809.08159
|
2891804603
|
In many fields of social and industrial sciences, simulation is crucial in comprehending a target system. A major task in simulation is the estimation of optimal parameters to express the observed data; because a simulator is a model built on the expert's domain knowledge, these parameters need to directly elucidate the properties of the target system. However, skilled human experts struggle to obtain the desired parameters. Data assimilation therefore becomes an unavoidable task to reduce the cost of simulator optimization. Another necessary task is extrapolation; in many practical cases, predictions based on simulation results will often be outside of the dominant range of the given data area, a situation referred to as covariate shift. This paper focuses on a regression problem with covariate shift. While parameter estimation for covariate shift has been studied thoroughly in parametric and nonparametric settings, conventional statistical methods of parameter searching are not applicable in the data assimilation of the simulation owing to the properties of the likelihood function: intractable or nondifferentiable. Hence, we propose a novel framework of Bayesian inference based on kernel mean embedding. This framework allows for predictions in covariate shift situations, and its effectiveness is evaluated in both synthetic numerical experiments and a widely used production simulator reproducing real-world manufacturing factories.
|
A series of studies exist for covariate shift assuming that the regression function is an analytical function @cite_4 @cite_2 @cite_1 , such as in kernel ridge regression @cite_0 . In our problem, however, we assume that the functional relation of the regression model @math is only given as a non-analytical function: a simulation. The difference between kernel ridge regression and the formulation of the proposed method is presented in .
|
{
"abstract": [
"As the power of computing has grown over the past few decades, the field of machine learning has advanced rapidly in both theory and practice. Machine learning methods are usually based on the assumption that the data generation mechanism does not change over time. Yet real-world applications of machine learning, including image recognition, natural language processing, speech recognition, robot control, and bioinformatics, often violate this common assumption. Dealing with non-stationarity is one of modern machine learning's greatest challenges. This book focuses on a specific non-stationary environment known as covariate shift, in which the distributions of inputs (queries) change but the conditional distribution of outputs (answers) is unchanged, and presents machine learning theory, algorithms, and applications to overcome this variety of non-stationarity. After reviewing the state-of-the-art research in the field, the authors discuss topics that include learning under covariate shift, model selection, importance estimation, and active learning. They describe such real world applications of covariate shift adaption as brain-computer interface, speaker identification, and age prediction from facial images. With this book, they aim to encourage future research in machine learning, statistics, and engineering that strives to create truly autonomous learning machines able to learn under non-stationarity.",
"",
"Abstract A class of predictive densities is derived by weighting the observed samples in maximizing the log-likelihood function. This approach is effective in cases such as sample surveys or design of experiments, where the observed covariate follows a different distribution than that in the whole population. Under misspecification of the parametric model, the optimal choice of the weight function is asymptotically shown to be the ratio of the density function of the covariate in the population to that in the observations. This is the pseudo-maximum likelihood estimation of sample surveys. The optimality is defined by the expected Kullback–Leibler loss, and the optimal weight is obtained by considering the importance sampling identity. Under correct specification of the model, however, the ordinary maximum likelihood estimate (i.e. the uniform weight) is shown to be optimal asymptotically. For moderate sample size, the situation is in between the two extreme cases, and the weight function is selected by minimizing a variant of the information criterion derived as an estimate of the expected loss. The method is also applied to a weighted version of the Bayesian predictive density. Numerical examples as well as Monte-Carlo simulations are shown for polynomial regression. A connection with the robust parametric estimation is discussed.",
"In supervised learning, we commonly assume that training and test data are sampled from the same distribution. However, this assumption can be violated in practice and then standard machine learning techniques perform poorly. This paper focuses on revealing and improving the performance of Bayesian estimation when the training and test distributions are different. We formally analyze the asymptotic Bayesian generalization error and establish its upper bound under a very general setting. Our important finding is that lower order terms---which can be ignored in the absence of the distribution change---play an important role under the distribution change. We also propose a novel variant of stochastic complexity which can be used for choosing an appropriate model and hyper-parameters under a particular distribution change."
],
"cite_N": [
"@cite_1",
"@cite_0",
"@cite_4",
"@cite_2"
],
"mid": [
"1493730910",
"",
"2034368206",
"2097079088"
]
}
|
Simulator Calibration under Covariate Shift with Kernels
|
Computer simulators are ubiquitous in many areas of science and engineering, with examples including climate science, social science, and epidemics (Winsberg, 2010; Weisberg, 2012). Such tools are useful in understanding and predicting complicated time-evolving phenomena of interest. Computer simulators are also widely used in industrial manufacturing process modeling (Mourtzis et al., 2014), and we use one such simulator, described in Fig. 1-(A), which models an assembling process of certain products in a factory, as our working example.
[Figure 1: (A) The production simulator. In the factory, one product is made from three items (TOPS, BOTTOMS and SCREWS) by the ASSEMBLY machine, and four such products are checked by the INSPECTION machine at the same time. The parameter $\theta$ of the simulation model $r(x, \theta)$ consists of 4 constants: the mean $\theta_1$ and variance $\theta_2$ of the distribution of the processing time in the ASSEMBLY machine, and those (described as $\theta_3$ and $\theta_4$) in the INSPECTION machine. (B) Results of our method without covariate shift adaptation: training data (red points), generated predictive outputs (orange) and their means (brown curve). (C) Results of our method with covariate shift adaptation: training data (red points), generated predictive outputs (light green) and their means (green curve). $q_0(x)$ and $q_1(x)$ are input densities for training and prediction, respectively. More details in Secs. 1 and 5.2.]
In this work we deal with the task of simulator calibration (Kennedy and O'Hagan, 2001), which is necessary
to make simulation-based predictions reliable. To describe this, we introduce some notation used in the paper. We are interested in a system $R(x)$ that takes $x$ as an input and outputs $y = R(x) + \varepsilon$, possibly corrupted by a noise $\varepsilon$. This system $R(x)$ is of interest but not known. Instead, we are given data $(X_i, Y_i)_{i=1}^n$ from the system, where input locations $X_1, \ldots, X_n$ are generated from a distribution $q_0(x)$ and outputs $Y_1, \ldots, Y_n$ from the target system as $Y_i = R(X_i) + \varepsilon_i$. On the other hand, a simulator is defined as a function $r(x, \theta)$ that takes $x$ as an input and outputs $r(x, \theta)$, where $\theta$ is a model parameter. The task of simulator calibration is to tune (or estimate) the parameter $\theta$ so that $r(x, \theta)$ approximates well the unknown target system $R(x)$ by using the data $(X_i, Y_i)_{i=1}^n$. For instance, in Fig. 1, the target system $R(x)$ takes as an input the number $x$ of required products to be manufactured in one day, and outputs the total time $y = R(x) + \varepsilon$ required for producing all the products; the simulator $r(x, \theta)$ models this process (see the "pred mean" curves in Fig. 1-(B)(C)).
There are mainly two challenges in the task of simulator calibration, which distinguish it from standard statistical learning problems. The first one owes to the complexity of the simulation model. Very often, a simulation model $r(x, \theta)$ cannot be written as a simple function of the input $x$ and parameter $\theta$, because the process of producing the output $y = r(x, \theta)$ may involve various numerical algorithms (e.g., solutions for differential equations) and/or IF-ELSE type decision rules of multiple agents. Therefore, one cannot access the gradient of the simulator output $r(x, \theta)$ with respect to the parameter $\theta$, and thus calibration cannot rely on gradient-based methods for optimization (e.g., gradient descent) or sampling (e.g., Hamiltonian Monte Carlo). Moreover, one simulation $y = r(x, \theta)$ for a given input $x$ can be computationally very expensive, so only a limited number of simulations can be performed for calibration. To summarise, the first challenge is that calibration should be done by only making use of forward simulations (or evaluations of $r(x, \theta)$), while the number of simulations cannot be large.
The second challenge is that of covariate shift (or sample selection bias) (Shimodaira, 2000; Sugiyama and Kawanabe, 2012), which is ubiquitous in applications of simulations, but has been rarely discussed in the literature on calibration methods. The situation is that the input distribution $q_1(x)$ for the test (or prediction) phase is different from the input distribution $q_0(x)$ generating the training input locations $X_1, \ldots, X_n$. In other words, the parameter $\theta$ is to be tuned so that the simulator $r(x, \theta)$ accurately approximates the target system $R(x)$ with respect to the distribution $q_1(x)$ (e.g., the error defined as $\int (R(x) - r(x, \theta))^2 q_1(x)\,dx$ is to be small), while training data $(X_i, Y_i)_{i=1}^n$ are only given with respect to another distribution $q_0(x)$.
The covariate shift setting is inherently important and ubiquitous in applications of computer simulation, because the purpose of a simulation is often extrapolation. An illustrative example is climate simulations, where the aim is to answer whether global warming will occur in the future. Here, the input $x$ is a time point and the target system $R(x)$ is the global temperature. Calibration of the simulator $r(x, \theta)$ is to be done based on data from the past, but prediction is required for the future. This means that the training input distribution $q_0(x)$ has support in the past, while that of the test, $q_1(x)$, has support in the future. For our working example in Fig. 1, training input locations $X_1, \ldots, X_n$ from $q_0(x)$ are more densely distributed in the region $x < 110$ than in the region $x \geq 110$, since the data are obtained in a trial period. On the other hand, the test phase (i.e., when the factory is deployed) is targeted on mass production, and thus the test input distribution $q_1(x)$ has mass concentrated in the region $x \geq 110$. Being a parametric model, a simulator only has a finite degree of freedom, and thus cannot capture all the aspects of the target system. Under such model misspecification, covariate shift is known to have a huge effect: the optimal model for the test input distribution may be drastically different from that for the training input distribution (Shimodaira, 2000). In climate simulations, care must be taken in how to tune the simulator, as the data are only from the past; otherwise, the resulting predictions about the future will not be reliable (Winsberg, 2018). In the example of Fig. 1, the behavior of the target system $R(x)$ changes between the trial and test phases: Figs. 1-(B)(C) describe this situation. As can be seen in the training data (red points), the total manufacturing time $R(x)$ becomes significantly larger when the number $x$ of required products is greater than $x = 110$, because of the overload of workers and machines. However, such structural change of the target $R(x)$ is not modeled in the simulator $r(x, \theta)$ (model misspecification). Thus, if calibration is done without taking the covariate shift into account, the resulting simulator makes predictions that fit well to the data in the region $x < 110$, but do not fit well in the region $x \geq 110$, as described in Fig. 1-(B).
Because of the first challenge of simulator calibration, existing methods for covariate shift adaptation, which have been developed for standard statistical and machine learning approaches, cannot be directly employed for the simulator calibration problem; see e.g. Shimodaira (2000); Yamazaki et al. (2007); Gretton et al. (2009); Sugiyama and Kawanabe (2012) and references therein. On the other hand, existing approaches to likelihood-free inference, such as Approximate Bayesian Computation (ABC) methods (e.g. Csilléry et al. (2010); Marin et al. (2012); Nakagome et al. (2013)), are applicable to simulator calibration, but they do not address the problem of covariate shift. Our approach combines these two lines of work and thus enjoys the best of both worlds, offering a solution to the calibration problem with covariate shift adaptation.
This work proposes a novel approach to simulator calibration, dealing explicitly with the setting of covariate shift. Our approach is Bayesian, deriving a certain posterior distribution over the parameter space given observed data. The proposed method is based on Kernel ABC (Nakagome et al., 2013; Fukumizu et al., 2013), which is an approach to ABC based on kernel mean embedding of distributions (Muandet et al., 2017), and a certain importance-weighted kernel that works for covariate shift adaptation. We provide a theoretical analysis of this approach, showing that it produces a distribution over the parameter space that approximates the posterior distribution in which the "observed data" are predictions from the model that minimises the importance-weighted empirical risk. In other words, the proposed method approximates the posterior distribution whose support consists of parameters such that the resulting simulator produces a small generalization error for the test input distribution. For instance, Fig. 1-(C) shows predictions obtained with our method, which fit well in the test region $x \geq 110$ as a result of covariate shift adaptation. This paper is organized as follows. In Sec. 2, we briefly review the setting of covariate shift and the framework of kernel mean embedding. In Sec. 3, we present our method for simulator calibration with covariate shift adaptation, and in Sec. 4 we investigate its theoretical properties. In Sec. 5 we report results of numerical experiments that include calibration of the production simulator in Fig. 1, confirming the effectiveness of the proposed method. Additional experimental results and all theoretical proofs are presented in the Appendix.
Calibration under Covariate Shift
Let $\mathcal{X} \subset \mathbb{R}^{d_{\mathcal{X}}}$ with $d_{\mathcal{X}} \in \mathbb{N}$ be a measurable subset that serves as the input space for the target system and the simulator. Denote by $R : \mathcal{X} \to \mathbb{R}$ the regression function of the (unknown) target system, which is deterministic, and define the true data-generating process as
$$y(x) := R(x) + e(x), \qquad (1)$$
where $e : \mathcal{X} \to \mathbb{R}$ is a (zero-mean) stochastic process that represents error in observations. Observed data $\mathcal{D}_n := \{(X_i, Y_i)\}_{i=1}^n \subset \mathcal{X} \times \mathbb{R}$ are assumed to be generated from the process (1) as
$$X_1, \ldots, X_n \sim q_0 \ \text{(i.i.d.)}, \qquad Y_i = y(X_i), \quad i = 1, \ldots, n,$$
where $q_0$ is a probability density function on $\mathcal{X}$. We use the following notation for the output values:
$$Y^n := (Y_1, \ldots, Y_n) \in \mathbb{R}^n.$$
Let $\Theta \subset \mathbb{R}^{d_\Theta}$ with $d_\Theta \in \mathbb{N}$ be a measurable subset that serves as a parameter space. Let $r : \mathcal{X} \times \Theta \to \mathbb{R}$ be a (measurable) deterministic simulation model that outputs a real value $r(x, \theta) \in \mathbb{R}$ given an input $x \in \mathcal{X}$ and a parameter $\theta \in \Theta$. Assume that we have a prior distribution $\pi(\theta)$ on the parameter space $\Theta$.
In the setting of covariate shift, the input distribution $q_1(x)$ in the test or prediction phase is different from that, $q_0(x)$, of the training data $X_1, \ldots, X_n$, while the input-output relationship (1) remains the same. Thus, the expected loss (or generalization error) to be minimized may be defined as
$$L(\theta) := \int (y(x) - r(x, \theta))^2 q_1(x)\,dx = \int (y(x) - r(x, \theta))^2 \beta(x) q_0(x)\,dx,$$
where $\beta : \mathcal{X} \to \mathbb{R}$ is the importance weight function, defined as the ratio of the two input densities:
$$\beta(x) := q_1(x)/q_0(x).$$
In this work, we assume for simplicity that the importance weights $\beta(X_i)$ at training inputs $X_1, \ldots, X_n$ are known, or estimated in advance. Knowledge of the importance weights is available when $q_0(x)$ and $q_1(x)$ are designed by an experimenter. For estimation of the importance weights, we refer to Gretton et al. (2009) and references therein. Using the importance weights, the expected loss can be estimated as
$$\hat{L}_n(\theta) := \frac{1}{n} \sum_{i=1}^n \beta(X_i) (Y_i - r(X_i, \theta))^2. \qquad (2)$$
Covariate shift has a strong influence on the generalization performance of an estimated model when the true regression function $R(x)$ does not belong to the class of functions realizable by the simulation model $\{r(\cdot, \theta) \mid \theta \in \Theta\}$, i.e., when model misspecification occurs (Shimodaira, 2000; Yamazaki et al., 2007). Such misspecification happens in practice, since the simulation model only has a finite degree of freedom, as the parameter space is finite dimensional. To obtain a model with good prediction performance, one needs to use an importance-weighted loss like (2) for parameter estimation.
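For concreteness, the importance-weighted empirical risk (2) might be computed as in the following sketch, given pointwise (forward-simulation) access to the black-box simulator; the function and argument names are ours.

```python
import numpy as np

def weighted_empirical_risk(theta, X, Y, r, q0_pdf, q1_pdf):
    """Estimate L(theta) via eq. (2); r(x, theta) is evaluated by forward
    simulation only, and q0_pdf/q1_pdf are the training/test input densities
    (assumed known, as in the text above)."""
    beta = q1_pdf(X) / q0_pdf(X)                  # importance weights
    residuals = Y - np.array([r(x, theta) for x in X])
    return float(np.mean(beta * residuals ** 2))
```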
Kernel Mean Embedding of Distributions
This is a framework for representing probability measures as elements in a Reproducing Kernel Hilbert Space (RKHS). We refer to Muandet et al. (2017) and references therein for details.
Let $\Omega$ be a measurable space, $k : \Omega \times \Omega \to \mathbb{R}$ be a measurable positive definite kernel and $\mathcal{H}$ be its RKHS. In this framework, any probability measure $P$ on $\Omega$ is represented as a Bochner integral
$$\mu_P := \int k(\cdot, \theta)\,dP(\theta) \in \mathcal{H},$$
which is called the kernel mean of $P$. Estimation of $P$ can be carried out through that of $\mu_P$, which is usually computationally and statistically easier, thanks to nice properties of the RKHS. Such a strategy is justified if the mapping $P \mapsto \mu_P$ is injective, in which case $\mu_P$ maintains all information of $P$. Kernels satisfying this property are called characteristic, and examples of characteristic kernels on $\Omega = \mathbb{R}^d$ include Gaussian and Matérn kernels (Sriperumbudur et al., 2010).
Proposed Calibration Method
We present our approach to simulator calibration with covariate shift adaptation. We take a Bayesian approach, and our target posterior distribution is described in Sec. 3.1. The proposed approach consists of Kernel ABC using a certain importance-weighted kernel (Sec. 3.2) and posterior sampling with the kernel herding algorithm (Sec. 3.3).
Target Posterior Distribution
We define a vector-valued function $r^n : \Theta \to \mathbb{R}^n$ from the simulator $r(x, \theta)$ as
$$r^n(\theta) := (r(X_1, \theta), \ldots, r(X_n, \theta)) \in \mathbb{R}^n, \quad \theta \in \Theta. \qquad (3)$$
Let $\mathrm{supp}(\pi)$ be the support of $\pi$. Define $\Theta^* \subset \mathrm{supp}(\pi)$ as the set of parameters that minimize the weighted square error, i.e., for all $\theta^* \in \Theta^*$ we have
$$\sum_{i=1}^n \beta(X_i)(Y_i - r(X_i, \theta^*))^2 = \min_{\theta \in \mathrm{supp}(\pi)} \sum_{i=1}^n \beta(X_i)(Y_i - r(X_i, \theta))^2. \qquad (4)$$
We allow $\Theta^*$ to contain multiple elements, but assume that they all give the same simulation outputs, which we denote by $r^* \in \mathbb{R}^n$:
$$r^* := r^n(\theta^*) = r^n(\tilde{\theta}^*), \quad \forall \theta^*, \tilde{\theta}^* \in \Theta^*. \qquad (5)$$
Let $\vartheta \sim \pi$ be a random variable following $\pi$. Then $r^n(\vartheta)$ is also a random variable taking values in $\mathbb{R}^n$, and its distribution is the push-forward measure of $\pi$ under the mapping $r^n$, denoted by $r^n_\sharp \pi$. We write the distribution of the joint random variable $(\vartheta, r^n(\vartheta)) \in \Theta \times \mathbb{R}^n$ as $P_{\Theta R^n}$, and its marginal distributions on $\Theta$ and $\mathbb{R}^n$ as $P_\Theta$ and $P_{R^n}$, respectively. Then by definition we have $P_\Theta = \pi$ and $P_{R^n} = r^n_\sharp \pi$. Let
$$\mathrm{supp}(P_{R^n}) = \mathrm{supp}(r^n_\sharp \pi) = \{r^n(\theta) \mid \theta \in \mathrm{supp}(\pi)\}$$
be the support of the push-forward measure, which is the range of the simulation outputs when the parameter is in the support of the prior.
We consider the conditional distribution on $\Theta$ induced from the joint distribution $P_{\Theta R^n}$ by conditioning on $y \in \mathrm{supp}(P_{R^n})$, which we write
$$P_\pi(\theta \mid y), \quad y \in \mathrm{supp}(P_{R^n}). \qquad (6)$$
Note that, since the conditional distribution on $\mathbb{R}^n$ given $\theta \in \Theta$ is the Dirac distribution at $r^n(\theta)$, one cannot use Bayes' rule to define the conditional distribution. However, the conditional distribution (6) is well-defined as a disintegration, and is uniquely determined up to an almost sure equivalence with respect to $P_{R^n}$ (Chang and Pollard, 1997, Thm. 1 and Example 9); see also Cockayne et al. (2017, Sec. 2.5).
It will turn out in Sec. 4 that our approach provides an estimator for the kernel mean of the conditional distribution (6) with $y = r^*$:
$$P_\pi(\theta \mid r^*), \qquad (7)$$
where $r^*$ is the output of the optimal simulator (5). In other words, (7) is the posterior distribution over the parameters, given that the optimal outputs $r^*$ are observed. Sampling from (7) thus amounts to sampling parameters that provide the optimal simulation outputs.
Finally, we define a predictive distribution of outputs $y$ for any input point $x \in \mathcal{X}$ as the push-forward measure of the posterior (7) under the mapping $r(x, \cdot) : \theta \mapsto r(x, \theta)$, which we denote by
$$P_\pi(y \mid x, r^*). \qquad (8)$$
Kernel ABC with a Weighted Kernel
Let $k_\Theta : \Theta \times \Theta \to \mathbb{R}$ be a kernel on the parameter space and $\mathcal{H}_\Theta$ be its RKHS. We define the kernel mean of the posterior (7) as
$$\mu_{\Theta|r^*} := \int k_\Theta(\cdot, \theta)\,dP_\pi(\theta \mid r^*) \in \mathcal{H}_\Theta. \qquad (9)$$
We propose to use the following weighted kernel on $\mathbb{R}^n$ defined from importance weights. As mentioned, we assume that the importance weight function $\beta(x) = q_1(x)/q_0(x)$ is known or estimated in advance. For $Y^n, \tilde{Y}^n \in \mathbb{R}^n$, the kernel is defined as
$$k_{R^n}(Y^n, \tilde{Y}^n) = \exp\left(-\frac{1}{2\sigma^2} \sum_{i=1}^n \beta(X_i)(Y_i - \tilde{Y}_i)^2\right), \qquad (10)$$
where $\sigma^2 > 0$ is a constant and a parameter of the kernel.
We apply Kernel ABC (Nakagome et al., 2013) with the importance-weighted kernel defined above to estimate the posterior kernel mean (9). First, we independently generate $m \in \mathbb{N}$ parameters from the prior $\pi(\theta)$:
$$\bar{\theta}_1, \ldots, \bar{\theta}_m \sim \pi.$$
Then for each parameter $\bar{\theta}_j$, $j = 1, \ldots, m$, we run the simulator to generate pseudo observations at $X_1, \ldots, X_n$:
$$\bar{Y}^n_j := r^n(\bar{\theta}_j), \quad j = 1, \ldots, m,$$
where $r^n : \Theta \to \mathbb{R}^n$ is defined in (3). Then an estimator of the kernel mean (9) is given by
$$\hat{\mu}_{\Theta|r^*} := \sum_{j=1}^m w_j k_\Theta(\cdot, \bar{\theta}_j) \in \mathcal{H}_\Theta, \qquad (11)$$
$$(w_1, \ldots, w_m)^\top := (G + m\varepsilon I_m)^{-1} \mathbf{k}_{R^n}(Y^n) \in \mathbb{R}^m,$$
where $I_m \in \mathbb{R}^{m \times m}$ is the identity and $\varepsilon > 0$ is a regularization constant; the vector $\mathbf{k}_{R^n}(Y^n) \in \mathbb{R}^m$ and the Gram matrix $G \in \mathbb{R}^{m \times m}$ are computed from the kernel $k_{R^n}$ in (10) with the observed data $Y^n$ as
$$\mathbf{k}_{R^n}(Y^n) := (k_{R^n}(\bar{Y}^n_1, Y^n), \ldots, k_{R^n}(\bar{Y}^n_m, Y^n))^\top \in \mathbb{R}^m,$$
$$G := (k_{R^n}(\bar{Y}^n_j, \bar{Y}^n_{j'}))_{j,j'=1}^m \in \mathbb{R}^{m \times m}.$$
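For concreteness, a NumPy sketch of this estimator follows (our illustration, not reference code from the paper): given the $m$ simulated output vectors, the importance-weighted kernel (10) yields the Gram matrix and the weight vector of (11).

```python
import numpy as np

def kernel_abc_weights(Y_sim, Y_obs, beta, sigma2, eps):
    """Y_sim: (m, n) simulated outputs; Y_obs: (n,) observed data;
    beta: (n,) importance weights beta(X_i)."""
    def k(a, b):  # the weighted Gaussian kernel of eq. (10)
        return np.exp(-0.5 * np.sum(beta * (a - b) ** 2) / sigma2)
    m = Y_sim.shape[0]
    G = np.array([[k(Y_sim[i], Y_sim[j]) for j in range(m)] for i in range(m)])
    k_obs = np.array([k(Y_sim[j], Y_obs) for j in range(m)])
    return np.linalg.solve(G + m * eps * np.eye(m), k_obs)  # eq. (11)
```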
Posterior Sampling with Kernel Herding
We apply kernel herding (Chen et al., 2010), a deterministic sampling method based on kernel mean embedding, to generate parameters $\check{\theta}_1, \ldots, \check{\theta}_m \in \Theta$ from the posterior kernel mean $\hat{\mu}_{\Theta|r^*}$ in (11). The procedure is as follows. The initial point is generated as
$$\check{\theta}_1 := \operatorname*{argmax}_{\theta \in \Theta} \hat{\mu}_{\Theta|r^*}(\theta).$$
Then the subsequent points $\check{\theta}_t$, $t = 2, \ldots, m$, are generated sequentially as
$$\check{\theta}_t := \operatorname*{argmax}_{\theta \in \Theta} \hat{\mu}_{\Theta|r^*}(\theta) - \frac{1}{t} \sum_{j=1}^{t-1} k_\Theta(\theta, \check{\theta}_j).$$
These points are a sample from the approximate posterior, in the sense that their empirical kernel mean $\frac{1}{m} \sum_{j=1}^m k_\Theta(\cdot, \check{\theta}_j)$ approximates $\hat{\mu}_{\Theta|r^*}$.
Prediction. Let $x \in \mathcal{X}$ be any test input location, and recall that the predictive distribution $P_\pi(y \mid x, r^*)$ in (8) is defined as the push-forward measure of the posterior $P_\pi(\theta \mid r^*)$ under the mapping $r(x, \cdot)$. Therefore, predictive outputs can be obtained simply by running simulations with the posterior samples $\check{\theta}_1, \ldots, \check{\theta}_m$:
$$r(x, \check{\theta}_1), \ldots, r(x, \check{\theta}_m),$$
and the predictive distribution is approximated by the empirical distribution
$$\hat{P}_\pi(y \mid x, r^*) := \frac{1}{m} \sum_{j=1}^m \delta(y - r(x, \check{\theta}_j)),$$
where $\delta(\cdot)$ is the Dirac distribution at 0.
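For illustration, the herding loop might be implemented as below, with the continuous argmax replaced by a search over a finite candidate set of parameters — a practical simplification we assume here, not a detail from the paper.

```python
import numpy as np

def kernel_herding(theta_sim, w, candidates, k_theta, m_out):
    """theta_sim, w: parameters and weights defining mu-hat in eq. (11);
    candidates: finite list of parameter vectors to search over."""
    def mu_hat(th):  # estimated posterior kernel mean evaluated at th
        return sum(wj * k_theta(th, tj) for wj, tj in zip(w, theta_sim))
    selected = []
    for t in range(1, m_out + 1):
        scores = [mu_hat(c) - sum(k_theta(c, s) for s in selected) / t
                  for c in candidates]
        selected.append(candidates[int(np.argmax(scores))])
    return selected
```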
Theoretical Analysis
To analyze the proposed method, we first express the estimator (11) in terms of covariance operators on the RKHSs, which is how the estimator was originally proposed (Song et al., 2009; Nakagome et al., 2013). To this end, define joint random variables $(\vartheta, y) \in \Theta \times \mathbb{R}^n$ by
$$\vartheta \sim \pi, \qquad y := r^n(\vartheta),$$
where $r^n : \Theta \to \mathbb{R}^n$ is defined in (3). Let $\mathcal{H}_\Theta$ and $\mathcal{H}_{R^n}$ be the RKHSs of $k_\Theta$ and $k_{R^n}$, respectively.
Covariance operators $C_{\vartheta y} : \mathcal{H}_{R^n} \to \mathcal{H}_\Theta$ and $C_{yy} : \mathcal{H}_{R^n} \to \mathcal{H}_{R^n}$ are then defined as
$$C_{\vartheta y} f := \mathbb{E}[k_\Theta(\cdot, \vartheta) f(y)] \in \mathcal{H}_\Theta, \quad f \in \mathcal{H}_{R^n},$$
$$C_{yy} f := \mathbb{E}[k_{R^n}(\cdot, y) f(y)] \in \mathcal{H}_{R^n}, \quad f \in \mathcal{H}_{R^n}.$$
Note that the parameter-data pairs $(\bar{\theta}_j, \bar{Y}^n_j)_{j=1}^m = (\bar{\theta}_j, r^n(\bar{\theta}_j))_{j=1}^m \subset \Theta \times \mathbb{R}^n$ in Kernel ABC (Sec. 3.2) are i.i.d. copies of the random variables $(\vartheta, y)$. Thus empirical covariance operators $\hat{C}_{\vartheta y} : \mathcal{H}_{R^n} \to \mathcal{H}_\Theta$ and $\hat{C}_{yy} : \mathcal{H}_{R^n} \to \mathcal{H}_{R^n}$ are defined as
$$\hat{C}_{\vartheta y} f := \frac{1}{m} \sum_{j=1}^m k_\Theta(\cdot, \bar{\theta}_j) f(\bar{Y}^n_j), \quad f \in \mathcal{H}_{R^n},$$
$$\hat{C}_{yy} f := \frac{1}{m} \sum_{j=1}^m k_{R^n}(\cdot, \bar{Y}^n_j) f(\bar{Y}^n_j), \quad f \in \mathcal{H}_{R^n}.$$
The estimator (11) is then expressed as
$$\hat{\mu}_{\Theta|r^*} = \hat{C}_{\vartheta y} (\hat{C}_{yy} + \varepsilon I)^{-1} k_{R^n}(\cdot, Y^n). \qquad (12)$$
See the above original references as well as Song et al. (2013); Fukumizu et al. (2013); Muandet et al. (2017) for the derivation.
Recall that $Y^n$ is the observed data from the real process. The issue is that, in our setting, $Y^n$ may not lie in the support of the distribution $P_{R^n}$ of $y = r^n(\vartheta)$, since the simulation model $r(x, \theta)$ is misspecified, i.e., there exists no $\theta \in \Theta$ such that $R(x) = r(x, \theta)$ for all $x \in \mathcal{X}$. The misspecified setting where $Y^n \notin \mathrm{supp}(P_{R^n})$ has not been studied in the literature on kernel mean embeddings, and therefore existing theoretical results on conditional mean embeddings (Grünewälder et al., 2012; Fukumizu, 2015; Singh et al., 2019) are not directly applicable. Our theoretical contribution is to study the estimator (12) in this misspecified setting, which may be of general interest.
Projection and Best Approximation
Let $\mathcal{H}_y \subset \mathcal{H}_{R^n}$ be the Hilbert subspace of $\mathcal{H}_{R^n}$ defined as the completion of the linear span of functions $k_{R^n}(\cdot, \tilde{Y}^n)$ with $\tilde{Y}^n$ from the support of $P_{R^n}$:
$$\mathcal{H}_y := \overline{\mathrm{span}} \left\{ k_{R^n}(\cdot, \tilde{Y}^n) \mid \tilde{Y}^n \in \mathrm{supp}(P_{R^n}) \right\}, \qquad (13)$$
where the closure is taken with respect to the norm of $\mathcal{H}_{R^n}$. In other words, every $h \in \mathcal{H}_y$ may be written in the form $h = \sum_{\ell=1}^\infty \alpha_\ell k_{R^n}(\cdot, \tilde{Y}^n_\ell)$ for some $(\alpha_\ell)_{\ell=1}^\infty \subset \mathbb{R}$ and $(\tilde{Y}^n_\ell)_{\ell=1}^\infty \subset \mathrm{supp}(P_{R^n})$ such that $\|h\|^2_{\mathcal{H}_{R^n}} = \sum_{\ell,j=1}^\infty \alpha_\ell \alpha_j k_{R^n}(\tilde{Y}^n_\ell, \tilde{Y}^n_j) < \infty$.
Since $\mathcal{H}_y$ is a Hilbert subspace, one can consider the orthogonal projection of $k_{R^n}(\cdot, Y^n)$, the "feature vector" of the observed data $Y^n$, onto $\mathcal{H}_y$, which is uniquely determined and denoted by
$$h^* := \operatorname*{argmin}_{h \in \mathcal{H}_y} \| h - k_{R^n}(\cdot, Y^n) \|_{\mathcal{H}_{R^n}}. \qquad (14)$$
Then $k_{R^n}(\cdot, Y^n)$ can be written as $k_{R^n}(\cdot, Y^n) = h^* + h^\perp$, where $h^\perp \in \mathcal{H}_{R^n}$ is orthogonal to $\mathcal{H}_y$.
Note that the estimator (12) is an approximation to the following population expression:
$$C_{\vartheta y} (C_{yy} + \varepsilon I)^{-1} k_{R^n}(\cdot, Y^n). \qquad (15)$$
Our first result below shows that (15) can be written in terms of the projection (14).
Lemma 1. Let $k_\Theta$ be a bounded and continuous kernel and assume that $0 < \beta(X_i) < \infty$ holds for all $i = 1, \ldots, n$. Then (15) is equal to $C_{\vartheta y} (C_{yy} + \varepsilon I)^{-1} h^*$.
We make the following identifiability assumption. It is an assumption on the observed data $Y^n$ (or the data generating process (1)), the simulation model $r(x, \theta)$, and the kernel $k_{R^n}$ (or the importance weight function $\beta(x) = q_1(x)/q_0(x)$; see the definition of $k_{R^n}$ in (10)).
Assumption 1. There exists some $\tilde{Y}^n \in \mathrm{supp}(P_{R^n})$ such that $k_{R^n}(\cdot, \tilde{Y}^n) = h^*$, where $h^*$ is the orthogonal projection of $k_{R^n}(\cdot, Y^n)$ onto the subspace $\mathcal{H}_y$ in (14).
The assumption states that the orthogonal projection of the feature vector $k_{R^n}(\cdot, Y^n)$ of observed data $Y^n$ onto $\mathcal{H}_y$ lies in the set
$$\{ k_{R^n}(\cdot, \tilde{Y}^n) \mid \tilde{Y}^n \in \mathrm{supp}(P_{R^n}) \} = \{ k_{R^n}(\cdot, r^n(\theta)) \mid \theta \in \mathrm{supp}(\pi) \}.$$
Thus the assumption implies that the best approximation $h^*$ of the observed data is given by the simulation model with some parameter $\theta^* \in \mathrm{supp}(\pi)$, i.e., $h^* = k_{R^n}(\cdot, r^n(\theta^*))$. Such $\theta^*$ satisfies
$$\theta^* \in \operatorname*{argmin}_{\theta \in \mathrm{supp}(\pi)} \| k_{R^n}(\cdot, Y^n) - k_{R^n}(\cdot, r^n(\theta)) \|^2_{\mathcal{H}_{R^n}} = \operatorname*{argmax}_{\theta \in \mathrm{supp}(\pi)} k_{R^n}(Y^n, r^n(\theta))$$
$$= \operatorname*{argmax}_{\theta \in \mathrm{supp}(\pi)} \exp\left( -\frac{1}{2\sigma^2} \sum_{i=1}^n \beta(X_i)(Y_i - r(X_i, \theta))^2 \right) = \operatorname*{argmin}_{\theta \in \mathrm{supp}(\pi)} \sum_{i=1}^n \beta(X_i)(Y_i - r(X_i, \theta))^2,$$
where the last identity follows from the exponential function being monotonically increasing. This shows that, under Assumption 1, the parameter $\theta^*$ realizing the projection is a least weighted-squares solution, and thus belongs to the set $\Theta^*$ defined in (4). Moreover, since $h^*$ is uniquely determined, so are the simulation outputs $r^* := r^n(\theta^*)$, in the sense of (5).
By these arguments, Lemma 1 and Assumption 1 lead to the following result.
Theorem 1. Suppose that the assumptions in Lemma 1 and Assumption 1 hold. Let $r^* := r^n(\theta^*)$, where $\theta^*$ is any element satisfying (4). Then (15) is equal to
$$C_{\vartheta y} (C_{yy} + \varepsilon I)^{-1} k_{R^n}(\cdot, r^*).$$
Theorem 1 suggests that the estimator (12) would behave as if the observed data were the optimal simulation outputs $r^*$ obtained as a best approximation for the given data $Y^n$. The convergence result presented below shows that this is indeed the case.
To state the result, we define a function $G : \mathrm{supp}(P_{R^n}) \times \mathrm{supp}(P_{R^n}) \to \mathbb{R}$ as
$$G(Y^n_a, Y^n_b) := \mathbb{E}[k_\Theta(\vartheta, \vartheta') \mid y = Y^n_a, y' = Y^n_b] = \mathbb{E}[k_\Theta(\vartheta, \vartheta') \mid r^n(\vartheta) = Y^n_a, r^n(\vartheta') = Y^n_b], \qquad (16)$$
where $(\vartheta', y')$ is an independent copy of $(\vartheta, y)$.
The following result shows that (12) (or (11)) is a consistent estimator of the kernel mean $\mu_{\Theta|r^*}$ in (9) of the posterior $P_\pi(\theta \mid r^*)$. It is obtained by extending the result of Fukumizu (2015, Theorem 1.3.2) to the misspecified setting where $Y^n \notin \mathrm{supp}(P_{R^n})$ by using Theorem 1. The assumptions made are essentially the same as those in Fukumizu (2015, Theorem 1.3.2). Below, $\mathrm{Range}(C_{yy} \otimes C_{yy})$ denotes the range of the tensor-product operator $C_{yy} \otimes C_{yy}$ on the tensor-product RKHS $\mathcal{H}_{R^n} \otimes \mathcal{H}_{R^n}$ (see Appendix for details).
Theorem 2. Suppose that the assumptions in Lemma 1 and Assumption 1 hold. Assume that the eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq 0$ of $C_{yy}$ satisfy $\lambda_i \leq \beta i^{-b}$ for all $i \in \mathbb{N}$ for some constants $\beta > 0$ and $b > 1$, and that the function $G$ in (16) satisfies $G \in \mathrm{Range}(C_{yy} \otimes C_{yy})$. Let $C > 0$ be any fixed constant, and set the regularization constant $\varepsilon := \varepsilon_m := C m^{-\frac{b}{1+4b}}$ of $\hat{\mu}_{\Theta|r^*}$ in (12) (or (11)). Then we have
$$\| \hat{\mu}_{\Theta|r^*} - \mu_{\Theta|r^*} \|_{\mathcal{H}_\Theta} = O_p\left( m^{-\frac{b}{1+4b}} \right) \quad (m \to \infty).$$
Experiments
We first explain the setting common to all the experiments. In each experiment, we consider regression problems both with and without covariate shift, to see whether the proposed method can deal with covariate shift. In the latter case, which we call "ordinary regression," we set the importance weights to be constant, $\beta(X_i) = 1$ $(i = 1, \ldots, n)$. The noise process $e(x)$ in (1) is independent Gaussian, $\varepsilon \sim N(0, \sigma^2_{\mathrm{noise}})$. We write $N(a, b)$ for the normal distribution with mean $a$ and variance $b$; the multivariate version is denoted similarly.
For the proposed method, we used a Gaussian kernel $k_\Theta(\theta, \theta') = \exp(-\|\theta - \theta'\|^2 / 2\sigma^2_\Theta)$ for the parameter space, where $\sigma^2_\Theta > 0$ is a constant. We set the constants $\sigma^2, \sigma^2_\Theta > 0$ in the kernels $k_{R^n}$ and $k_\Theta$ by the median heuristic (e.g., Garreau et al., 2018) using the simulated pairs $(\bar{\theta}_j, \bar{Y}^n_j)_{j=1}^m$.
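The median heuristic mentioned here can be sketched as choosing the squared bandwidth as the median of squared pairwise distances among the available samples (a standard rule of thumb; this sketch is ours).

```python
import numpy as np

def median_heuristic(samples):
    """samples: array-like of vectors; returns sigma^2 for a Gaussian kernel."""
    samples = np.asarray(samples, dtype=float)
    sq_dists = [np.sum((a - b) ** 2)
                for i, a in enumerate(samples) for b in samples[i + 1:]]
    return float(np.median(sq_dists))
```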
For comparison, we used Markov Chain Monte Carlo (MCMC) for posterior sampling, more specifically the Metropolis-Hastings (MH) algorithm. For this competitor, we assume that the noise process $e(x)$ in (1) is known, so that the likelihood function is available in MCMC (which is of the form $\exp(-\sum_{i=1}^n \beta(X_i)(Y_i - r(X_i, \theta))^2 / 2\sigma^2_{\mathrm{noise}})$ up to a constant). In this sense, we give an unfair advantage to MH over the proposed method, as the latter does not assume knowledge of the noise process, which is usually not available in practice.
For evaluation, we compute the Root Mean Square Error (RMSE) in prediction for each method (and for a different number of simulations, $m$) as follows. Test input locations $\tilde{X}_1, \ldots, \tilde{X}_n$ are generated from $q_0(x)$ in the case of ordinary regression, and from $q_1(x)$ in the covariate shift setting. After sampling parameters $\check{\theta}_1, \ldots, \check{\theta}_m$ with the method under evaluation, the RMSE is computed as
$$\left( \frac{1}{n} \sum_{i=1}^{n} \left( R(\tilde{X}_i) - \frac{1}{m} \sum_{j=1}^m r(\tilde{X}_i, \check{\theta}_j) \right)^2 \right)^{1/2}.$$
Synthetic Experiments
We consider the problem setting of the benchmark experiment in Shimodaira (2000).
Setting. The input space is $\mathcal{X} = \mathbb{R}$, and the data generating process (1) is given by $R(x) = -x + x^3$ and $e(x) = \epsilon$ with $\epsilon \sim N(0, 2)$ being an independent noise. The simulation model is defined by $r(x, \theta) = \theta_0 + \theta_1 x$, where $\theta = (\theta_0, \theta_1) \in \Theta = \mathbb{R}^2$. For demonstration, we treat this model as intractable, i.e., we assume that only evaluation of function values $r(x, \theta)$ is possible once $x$ and $\theta$ are given. The input densities $q_0(x)$ and $q_1(x)$ for training and prediction are those of $N(0.5, 0.5)$ and $N(0, 0.3)$, respectively. We define the prior as the multivariate Gaussian $\pi = N(0, 5 I_2)$, where $I_2 \in \mathbb{R}^{2 \times 2}$ is the identity. We set the size of the training data $(X_i, Y_i)_{i=1}^n$ to $n = 100$.
Results. Figure 2 shows RMSEs for (A) ordinary regression and (B) covariate shift as a function of the number m of simulations, with means and standard deviations calculated over 30 independent trials. For the proposed method, we set the regularization constant to ε = 1.0. We set the proposal distribution of MH to N(0, σ²_p I_2) with σ_p being 0.08, 0.06, and 0.03, tuned so that the acceptance ratios become about 20%, 40%, and 60%, respectively. On the horizontal axis, the number of simulations for MH is the number of all MCMC steps (each of which requires running the simulator), including burn-in and rejected executions. For MH, we used the first 10% of MCMC steps as burn-in and excluded them from predictions. The results show that the proposed method is more efficient than MH, in the sense that it gives better predictions than MH based on a small number of simulations. This is a promising property, since real-world simulators are often computationally expensive, as is the case for the experiment in the next section.
Experiments on Production Simulator
We performed experiments on the manufacturing process simulator mentioned in Sec. 1 (Fig. 1), and a more sophisticated production simulator with 12 parameters. We only describe the former here, and report the latter in the Appendix due to the space limitation.
Setting. We used a simulator constructed with WITNESS, a popular software package for production simulation (https://www.lanner.com/en-us/). We refer to Sec. 1 for an explanation of the simulator. This simulator r(x, θ) has 4 parameters θ ∈ Θ ⊂ ℝ⁴. The input space for regression is X = (0, ∞).
The data generating process (1) is defined as R(x) = r(x, θ^{(0)}) for x < 110 and R(x) = r(x, θ^{(1)}) for x ≥ 110, where θ^{(0)} := (2, 0.5, 5, 1)^⊤ and θ^{(1)} := (3.5, 0.5, 7, 1)^⊤; the noise model is independent noise e(x) = ε ∼ N(0, 30). The input densities are defined as q_0(x) = N(100, 10) (training) and q_1(x) = N(120, 10) (prediction). We constructed this model so that the two regions x < 110 and x ≥ 110 correspond to those for training and prediction, respectively, with θ^{(0)} and θ^{(1)} being the "true" parameters in the respective regions. We defined the prior π(θ) as the uniform distribution over Θ := [0, 5] × [0, 2] × [0, 10] × [0, 2] ⊂ ℝ⁴. The size of the training data (X_i, Y_i)_{i=1}^n (described as red points in Fig. 1 (B)(C)) is n = 50.

Results. Figure 3 shows the averages and standard deviations of RMSEs for the proposed method and MH over 10 independent trials, as the number m of simulations changes. We set the regularization constant of the proposed method as ε = 0.01, and the proposal distribution of MH as N(0, 0.03² I_4), which was tuned to make the acceptance ratio about 40%. The results show that the proposed method is more accurate than MH with a small number of simulations, even though the latter used the full knowledge of the data generating process (1).

Figure 4: Parameters θ̌_1, ..., θ̌_m generated from the proposed method, in the subspace of the coordinates θ_1 and θ_3. (A) Ordinary regression: the generated parameters (orange), their mean (brown), and the "true" parameter θ^{(0)} for the training region x < 110 (red). (B) Covariate shift: the generated parameters (light green), their mean (green), and the "true" parameter θ^{(1)} for the prediction region x ≥ 110 (blue, "true shifted").

Fig. 4 (A) and (B) describe the parameters θ̌_1, ..., θ̌_m generated in one run of the proposed method in the ordinary and covariate shift settings, respectively; the corresponding predictive outputs are shown in Fig. 1 (B) and (C). In both settings, the estimated posterior mean is located near the "true" parameter of each scenario. Fig. 4 (A) and (B) also demonstrate how our method might be useful for sensitivity analysis. Our method generates parameters θ̌_1, ..., θ̌_m so as to approximate the posterior P_π(θ|r*), where r* consists of "optimal" simulation outputs. Larger variation in the coordinate θ_1 therefore indicates that the value of θ_1 is not very important for obtaining optimal simulation outputs. A comparison between (A) and (B) further indicates that, under covariate shift, θ_1 and θ_3 must be only weakly correlated to obtain optimal simulation outputs.
Yamazaki, K., Kawanabe, M., Watanabe, S., Sugiyama, M., and Müller, K.-R. (2007). Asymptotic Bayesian generalization error when training and test distributions are different. In Proceedings of the 24th International Conference on Machine Learning (ICML '07), pages 1079–1086.
Supplementary Materials
Simulator Calibration under Covariate Shift with Kernels
A Proofs
A.1 Proof of Lemma 1
First we note that from the assumption 0 < β(X i ) < ∞ for all i = 1, . . . , n, the importance-weighted kernel (10) is continuous on R n . Therefore Steinwart and Christmann (2008, Lemma 4.33) implies that the RKHS H R n of k R n is separable.
To prove Lemma 1, we need the following result.

Lemma 2. Suppose that the assumptions in Lemma 1 hold. Let (φ_i)_{i=1}^∞ ⊂ H_{R^n} be the eigenfunctions of the covariance operator C_yy associated with positive eigenvalues, and let (φ̄_j)_{j=1}^∞ ⊂ H_{R^n} be an ONB of the null space of C_yy. Then φ̄_j(Ỹ^n) = 0 holds for P_{R^n}-almost every Ỹ^n ∈ ℝ^n.
Proof. By definition of φ̄_j, it holds that
$0 = C_{yy}\bar{\phi}_j = \int k_{R^n}(\cdot, \tilde{Y}^n)\,\bar{\phi}_j(\tilde{Y}^n)\,dP_{R^n}(\tilde{Y}^n) =: \int k_{R^n}(\cdot, \tilde{Y}^n)\,d\nu(\tilde{Y}^n),$
where the measure ν is defined by dν(Ỹ^n) := φ̄_j(Ỹ^n) dP_{R^n}(Ỹ^n). Since the kernel k_{R^n} is bounded on ℝ^n, H_{R^n} consists of bounded functions, and thus φ̄_j ∈ H_{R^n} is bounded. Therefore ν is a finite (signed) measure. But since k_{R^n} is a Gaussian kernel (see (10)), it is c_0-universal, and so Sriperumbudur et al. (2011, Proposition 2) and the integral being zero imply that ν is the zero measure. For ν to be the zero measure, φ̄_j(Ỹ^n) = 0 must hold for P_{R^n}-almost every Ỹ^n, which concludes the proof.
We now prove Lemma 1.
Proof. Let (φ_i)_{i=1}^∞ ⊂ H_{R^n} be the eigenfunctions of the covariance operator C_yy associated with positive eigenvalues λ_1 ≥ λ_2 ≥ ··· > 0, and let (φ̄_j)_{j=1}^∞ ⊂ H_{R^n} be an ONB of the null space of C_yy. To prove the assertion, we first show that (a) ⟨φ_i, h_⊥⟩_{H_{R^n}} = 0 for every φ_i, and that (b) C_{ϑy}φ̄_j = 0 for every φ̄_j.
(a) By definition of φ i , it can be written as
$\phi_i = \lambda_i^{-1} C_{yy}\phi_i = \lambda_i^{-1} \int k_{R^n}(\cdot, \tilde{Y}^n)\,\phi_i(\tilde{Y}^n)\,dP_{R^n}(\tilde{Y}^n).$
Therefore,
$\langle \phi_i, h_\perp \rangle_{\mathcal{H}_{R^n}} = \Big\langle \lambda_i^{-1} \int k_{R^n}(\cdot, \tilde{Y}^n)\,\phi_i(\tilde{Y}^n)\,dP_{R^n}(\tilde{Y}^n),\ h_\perp \Big\rangle_{\mathcal{H}_{R^n}} = \lambda_i^{-1} \int \langle k_{R^n}(\cdot, \tilde{Y}^n), h_\perp \rangle_{\mathcal{H}_{R^n}}\,\phi_i(\tilde{Y}^n)\,dP_{R^n}(\tilde{Y}^n) = 0,$
where the last identity follows from ⟨k_{R^n}(·, Ỹ^n), h_⊥⟩_{H_{R^n}} = 0 for Ỹ^n ∈ supp(P_{R^n}), which in turn follows from the definition of h_⊥.
(b) We have
$C_{\vartheta y}\bar{\phi}_j = \int k_\Theta(\cdot, \theta)\,\bar{\phi}_j(\tilde{Y}^n)\,dP_{\Theta R^n}(\theta, \tilde{Y}^n) = \int \Big( \int k_\Theta(\cdot, \theta)\,dP_\pi(\theta|\tilde{Y}^n) \Big)\,\bar{\phi}_j(\tilde{Y}^n)\,dP_{R^n}(\tilde{Y}^n) = 0,$
where the last identity follows from Lemma 2.
We now prove the assertion. By using (a) and (b), we obtain
\begin{align*}
C_{\vartheta y}(C_{yy}+\varepsilon I)^{-1} k_{R^n}(\cdot, Y^n) &= C_{\vartheta y}(C_{yy}+\varepsilon I)^{-1}(h^* + h_\perp) \\
&= C_{\vartheta y}\sum_{i=1}^\infty (\lambda_i+\varepsilon)^{-1}\langle h^*, \phi_i\rangle_{\mathcal{H}_{R^n}}\phi_i + C_{\vartheta y}\sum_{j=1}^\infty \varepsilon^{-1}\langle h^* + h_\perp, \bar{\phi}_j\rangle_{\mathcal{H}_{R^n}}\bar{\phi}_j \\
&= C_{\vartheta y}\sum_{i=1}^\infty (\lambda_i+\varepsilon)^{-1}\langle h^*, \phi_i\rangle_{\mathcal{H}_{R^n}}\phi_i \\
&= C_{\vartheta y}\sum_{i=1}^\infty (\lambda_i+\varepsilon)^{-1}\langle h^*, \phi_i\rangle_{\mathcal{H}_{R^n}}\phi_i + C_{\vartheta y}\sum_{j=1}^\infty \varepsilon^{-1}\langle h^*, \bar{\phi}_j\rangle_{\mathcal{H}_{R^n}}\bar{\phi}_j \\
&= C_{\vartheta y}(C_{yy}+\varepsilon I)^{-1} h^*,
\end{align*}
where the third and fourth lines use (b).
which completes the proof.
A.2 Proof of Theorem 2
Theorem 2 can be easily proven by combining the proof idea of Fukumizu (2015, Theorem 1.3.2) and Theorem 1, but for completeness we present the proof.
Before presenting it, we introduce some notation and definitions. Below, ‖A‖ for an operator A denotes the operator norm. H_{R^n} ⊗ H_{R^n} denotes the tensor-product RKHS of H_{R^n} with itself, which is the RKHS of the product kernel k_{R^n×R^n} : (ℝ^n × ℝ^n) × (ℝ^n × ℝ^n) → ℝ defined by
$k_{R^n\times R^n}\big((Y^n_a, \tilde{Y}^n_a), (Y^n_b, \tilde{Y}^n_b)\big) = k_{R^n}(Y^n_a, Y^n_b)\, k_{R^n}(\tilde{Y}^n_a, \tilde{Y}^n_b).$
C_yy ⊗ C_yy : H_{R^n} ⊗ H_{R^n} → H_{R^n} ⊗ H_{R^n} is the covariance operator defined by
$(C_{yy} \otimes C_{yy}) F := E[k_{R^n\times R^n}(\cdot, (y, y'))\,F(y, y')], \quad F \in \mathcal{H}_{R^n} \otimes \mathcal{H}_{R^n},$
where y′ is an independent copy of the random variable y.
Note that the covariance operator C_ϑy satisfies ⟨C_ϑy f, g⟩_{H_Θ} = E[f(y) g(ϑ)] for any f ∈ H_{R^n} and g ∈ H_Θ. Similarly, C_yy satisfies ⟨C_yy f, h⟩_{H_{R^n}} = E[f(y) h(y)] for any f, h ∈ H_{R^n}, and C_yy ⊗ C_yy satisfies ⟨(C_yy ⊗ C_yy) F_a, F_b⟩_{H_{R^n} ⊗ H_{R^n}} = E[F_a(y, y′) F_b(y, y′)] for any F_a, F_b ∈ H_{R^n} ⊗ H_{R^n}.
Proof. By the triangle inequality,
\begin{align}
&\big\| \hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1} k_{R^n}(\cdot, Y^n) - \mu_{\Theta|r^*} \big\|_{\mathcal{H}_\Theta} \nonumber\\
&\le \big\| \hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1} k_{R^n}(\cdot, Y^n) - C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} k_{R^n}(\cdot, Y^n) \big\|_{\mathcal{H}_\Theta} + \big\| C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} k_{R^n}(\cdot, Y^n) - \mu_{\Theta|r^*} \big\|_{\mathcal{H}_\Theta} \nonumber\\
&\le \big\| \big( \hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1} - C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} \big) k_{R^n}(\cdot, Y^n) \big\|_{\mathcal{H}_\Theta} \tag{17}\\
&\quad + \big\| C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} k_{R^n}(\cdot, r^*) - \mu_{\Theta|r^*} \big\|_{\mathcal{H}_\Theta}, \tag{18}
\end{align}
where we used Theorem 1 in the last line. Below we derive convergence rates for the two terms (17) and (18) separately, and then determine the decay schedule of ε_m as m → ∞ so that the two terms have the same rate.
The first term (17). We first have
\begin{align*}
&\hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1} - C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} \\
&= \hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1} - \hat{C}_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} + \hat{C}_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} - C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} \\
&= \hat{C}_{\vartheta y}\big( (\hat{C}_{yy}+\varepsilon_m I)^{-1} - (C_{yy}+\varepsilon_m I)^{-1} \big) + (\hat{C}_{\vartheta y} - C_{\vartheta y})(C_{yy}+\varepsilon_m I)^{-1} \\
&= \hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1}(C_{yy} - \hat{C}_{yy})(C_{yy}+\varepsilon_m I)^{-1} + (\hat{C}_{\vartheta y} - C_{\vartheta y})(C_{yy}+\varepsilon_m I)^{-1},
\end{align*}
where the last equality follows from the formula A^{-1} − B^{-1} = A^{-1}(B − A)B^{-1}, which holds for any invertible operators A and B. Note that $\hat{C}_{\vartheta y} = \hat{C}_{\vartheta\vartheta}^{1/2} \hat{W}_{\vartheta y} \hat{C}_{yy}^{1/2}$ holds for some Ŵ_ϑy : H_{R^n} → H_Θ with ‖Ŵ_ϑy‖ ≤ 1 (Baker, 1973, Theorem 1). Using this, we have
\begin{align*}
&\big\| \hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1} - C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} \big\| \\
&\le \big\| \hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1}(C_{yy} - \hat{C}_{yy})(C_{yy}+\varepsilon_m I)^{-1} \big\| + \big\| (\hat{C}_{\vartheta y} - C_{\vartheta y})(C_{yy}+\varepsilon_m I)^{-1} \big\| \\
&= \big\| \hat{C}_{\vartheta\vartheta}^{1/2} \hat{W}_{\vartheta y} \hat{C}_{yy}^{1/2}(\hat{C}_{yy}+\varepsilon_m I)^{-1}(C_{yy} - \hat{C}_{yy})(C_{yy}+\varepsilon_m I)^{-1} \big\| + \big\| (\hat{C}_{\vartheta y} - C_{\vartheta y})(C_{yy}+\varepsilon_m I)^{-1} \big\| \\
&\le \big\| \hat{C}_{\vartheta\vartheta}^{1/2} \big\|\, \varepsilon_m^{-1/2} \big\| (C_{yy} - \hat{C}_{yy})(C_{yy}+\varepsilon_m I)^{-1} \big\| + \big\| (\hat{C}_{\vartheta y} - C_{\vartheta y})(C_{yy}+\varepsilon_m I)^{-1} \big\| \\
&= O_p\big( \varepsilon_m^{-3/2} m^{-1/2} + \sqrt{N(\varepsilon_m)}\, \varepsilon_m^{-1} m^{-1/2} \big) \quad (m \to \infty,\ \varepsilon_m \to 0),
\end{align*}
where the second inequality follows from ‖Ŵ_ϑy‖ ≤ 1 and $\|\hat{C}_{yy}^{1/2}(\hat{C}_{yy}+\varepsilon_m I)^{-1}\| \le \varepsilon_m^{-1/2}$, and the last line from Fukumizu (2015, Lemma 1.5.1); the quantity N(ε) for ε > 0 is defined by N(ε) := Tr[C_yy(C_yy + εI)^{-1}], where Tr(A) denotes the trace of an operator A. Under our assumption on the eigenvalue decay rate of C_yy, we have N(ε) ≤ (βb/(b−1)) ε^{-1/b} (Caponnetto and De Vito, 2007, Proposition 3), which implies that the above rate becomes
$O_p\big( \varepsilon_m^{-3/2} m^{-1/2} + \varepsilon_m^{-1-1/(2b)} m^{-1/2} \big) \quad (m \to \infty,\ \varepsilon_m \to 0).$
From mε_m → ∞ and ε_m → 0 (as implied by the schedule of ε_m determined below), it is easy to show that the second term decays more slowly and thus dominates the above rate. This concludes that the rate of the first term (17) is
$\big\| \big( \hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1} - C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} \big) k_{R^n}(\cdot, Y^n) \big\|_{\mathcal{H}_\Theta} = O_p\big( \varepsilon_m^{-1-1/(2b)} m^{-1/2} \big) \quad (m \to \infty,\ \varepsilon_m \to 0).$
The second term (18). Let (ϑ′, y′) be an independent copy of the random variables (ϑ, y). Note that for any ψ ∈ H_{R^n}, we have
$\langle C_{\vartheta y}\psi, C_{\vartheta y}\psi \rangle_{\mathcal{H}_\Theta} = E[k_\Theta(\vartheta, \vartheta')\psi(y)\psi(y')] = E\big[E[k_\Theta(\vartheta, \vartheta')|y, y']\,\psi(y)\psi(y')\big] = E[G(y, y')\psi(y)\psi(y')] = \langle (C_{yy}\otimes C_{yy})G,\ \psi\otimes\psi \rangle_{\mathcal{H}_{R^n}\otimes\mathcal{H}_{R^n}}.$
Similarly, for any ψ ∈ H_{R^n} and Ỹ^n ∈ supp(P_{R^n}), we have
$\langle C_{\vartheta y}\psi,\ E[k_\Theta(\cdot,\vartheta)|y=\tilde{Y}^n] \rangle_{\mathcal{H}_\Theta} = E\big[ \psi(y')\, E[k_\Theta(\vartheta',\vartheta)|y=\tilde{Y}^n] \big] = E\big[ \psi(y')\, E[k_\Theta(\vartheta',\vartheta)|y=\tilde{Y}^n, y'] \big] = E\big[ \psi(y')\, G(\tilde{Y}^n, y') \big] = \langle (I\otimes C_{yy})G,\ k_{R^n}(\cdot,\tilde{Y}^n)\otimes\psi \rangle_{\mathcal{H}_{R^n}\otimes\mathcal{H}_{R^n}},$
where I : H_{R^n} → H_{R^n} is the identity operator and
$\big((I\otimes C_{yy})G\big)(\cdot, *) := E[G(\cdot, y')\,k_{R^n}(y', *)].$
Now let ψ := (C_yy + ε_m I)^{-1} k_{R^n}(·, r*). Recall μ_{Θ|r*} = E[k_Θ(·, ϑ)|y = r*], which gives ‖μ_{Θ|r*}‖²_{H_Θ} = G(r*, r*). Then the square of (18) can be written as
\begin{align*}
&\big\| C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} k_{R^n}(\cdot, r^*) - \mu_{\Theta|r^*} \big\|^2_{\mathcal{H}_\Theta}
= \|C_{\vartheta y}\psi\|^2_{\mathcal{H}_\Theta} - 2\langle C_{\vartheta y}\psi, \mu_{\Theta|r^*}\rangle_{\mathcal{H}_\Theta} + \|\mu_{\Theta|r^*}\|^2_{\mathcal{H}_\Theta} \\
&= \big\langle (C_{yy}\otimes C_{yy})G,\ (C_{yy}+\varepsilon_m I)^{-1}k_{R^n}(\cdot,r^*)\otimes(C_{yy}+\varepsilon_m I)^{-1}k_{R^n}(\cdot,r^*) \big\rangle_{\mathcal{H}_{R^n}\otimes\mathcal{H}_{R^n}} \\
&\quad - 2\big\langle (I\otimes C_{yy})G,\ k_{R^n}(\cdot,r^*)\otimes(C_{yy}+\varepsilon_m I)^{-1}k_{R^n}(\cdot,r^*) \big\rangle_{\mathcal{H}_{R^n}\otimes\mathcal{H}_{R^n}} + G(r^*,r^*) \\
&= \Big\langle \big( (C_{yy}+\varepsilon_m I)^{-1}C_{yy}\otimes(C_{yy}+\varepsilon_m I)^{-1}C_{yy} - I\otimes(C_{yy}+\varepsilon_m I)^{-1}C_{yy} \\
&\qquad - (C_{yy}+\varepsilon_m I)^{-1}C_{yy}\otimes I + I\otimes I \big)G,\ k_{R^n}(\cdot,r^*)\otimes k_{R^n}(\cdot,r^*) \Big\rangle_{\mathcal{H}_{R^n}\otimes\mathcal{H}_{R^n}} \\
&\le \big\| \big( (C_{yy}+\varepsilon_m I)^{-1}C_{yy}\otimes(C_{yy}+\varepsilon_m I)^{-1}C_{yy} - I\otimes(C_{yy}+\varepsilon_m I)^{-1}C_{yy} \\
&\qquad - (C_{yy}+\varepsilon_m I)^{-1}C_{yy}\otimes I + I\otimes I \big)G \big\|_{\mathcal{H}_{R^n}\otimes\mathcal{H}_{R^n}}\ \big\| k_{R^n}(\cdot,r^*)\otimes k_{R^n}(\cdot,r^*) \big\|_{\mathcal{H}_{R^n}\otimes\mathcal{H}_{R^n}}.
\end{align*}
Let (φ_i)_{i=1}^∞ ⊂ H_{R^n} be the eigenfunctions of C_yy and (λ_i)_{i=1}^∞ the associated eigenvalues with λ_1 ≥ λ_2 ≥ ··· ≥ 0. Then the eigenfunctions and eigenvalues of the operator C_yy ⊗ C_yy are given by (φ_i ⊗ φ_j)_{i,j=1}^∞ and (λ_i λ_j)_{i,j=1}^∞, respectively. Note that $(C_{yy}+\varepsilon_m I)^{-1}C_{yy}^2\phi_i = \frac{\lambda_i^2}{\lambda_i+\varepsilon_m}\phi_i$. Note also that our assumption G ∈ Range(C_yy ⊗ C_yy) implies that there exists some ξ ∈ H_{R^n} ⊗ H_{R^n} such that G = (C_yy ⊗ C_yy)ξ. Using these identities and Parseval's identity, we have
\begin{align*}
&\big\| \big( (C_{yy}+\varepsilon_m I)^{-1}C_{yy}\otimes(C_{yy}+\varepsilon_m I)^{-1}C_{yy} - I\otimes(C_{yy}+\varepsilon_m I)^{-1}C_{yy} - (C_{yy}+\varepsilon_m I)^{-1}C_{yy}\otimes I + I\otimes I \big)(C_{yy}\otimes C_{yy})\xi \big\|^2_{\mathcal{H}_{R^n}\otimes\mathcal{H}_{R^n}} \\
&= \sum_{i,j}\Big( \frac{\lambda_i^2}{\lambda_i+\varepsilon_m}\frac{\lambda_j^2}{\lambda_j+\varepsilon_m} - \lambda_i\frac{\lambda_j^2}{\lambda_j+\varepsilon_m} - \frac{\lambda_i^2}{\lambda_i+\varepsilon_m}\lambda_j + \lambda_i\lambda_j \Big)^2 \langle \phi_i\otimes\phi_j, \xi\rangle^2_{\mathcal{H}_{R^n}\otimes\mathcal{H}_{R^n}} \\
&= \sum_{i,j}\Big( \frac{\varepsilon_m^2\,\lambda_i\lambda_j}{(\lambda_i+\varepsilon_m)(\lambda_j+\varepsilon_m)} \Big)^2 \langle \phi_i\otimes\phi_j, \xi\rangle^2_{\mathcal{H}_{R^n}\otimes\mathcal{H}_{R^n}}
\le \varepsilon_m^4\, \|\xi\|^2_{\mathcal{H}_{R^n}\otimes\mathcal{H}_{R^n}}.
\end{align*}
Figure 5: Illustration of the manufacturing process (metal processing factory) for producing valves.

Table 1: Summary of the true parameters θ_1, ..., θ_12 and their estimates for the experiment on the sophisticated simulation model. T_BF represents the mean time between failures, and T_R the mode of the repair time for each process. The parameter estimates are the posterior means of the generated parameters, averaged over 10 independent trials, with the corresponding standard deviations shown in brackets.

From this, the second term (18) is upper-bounded as
$\big\| C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} k_{R^n}(\cdot, r^*) - \mu_{\Theta|r^*} \big\|_{\mathcal{H}_\Theta} \le \varepsilon_m\, \|\xi\|^{1/2}_{\mathcal{H}_{R^n}\otimes\mathcal{H}_{R^n}}\, \big\| k_{R^n}(\cdot,r^*)\otimes k_{R^n}(\cdot,r^*) \big\|^{1/2}_{\mathcal{H}_{R^n}\otimes\mathcal{H}_{R^n}} = O(\varepsilon_m) \quad (m\to\infty,\ \varepsilon_m\to 0).$
The obtained rates for the two terms (17) and (18) can be balanced by setting $\varepsilon_m = C m^{-b/(1+4b)}$ for any fixed constant C > 0, and this gives the rate in the assertion.
B Experiments on Sophisticated Production Simulator
We performed experiments on a more sophisticated, and more complicated, simulator of industrial manufacturing processes than the one in Sec. 5.2. We used a simulation model constructed with the software package WITNESS (https://www.lanner.com/en-us/), described in Fig. 5. It models a metal processing factory producing valves (products) from metal pipes, with six primary processes: 1) "saw", 2) "coat", 3) "inspection", 4) "harden", 5) "grind", and 6) "clean". Each process consists of complicated procedures such as preparation, waiting, and machine repair in case of a breakdown.
B.1 Setting
As in Sec. 5.2, the input space is X = (0, ∞) and each input x represents the number of products required to make, and the resulting output y(x) = R(x) + e(x) is the length of time needed to produce that number of products.
The mapping x → r(x, θ) consists of the above six processes, and each of them contains two parameters for machine downtime due to failures: the mean time between failures (T_BF) and the mode of the repair time (T_R). Thus, in total, there are 12 parameters, i.e., θ = (θ_1, ..., θ_12)^⊤ ∈ Θ ⊂ ℝ^{12}, where θ_{2j−1} = T_BF^{(j)} and θ_{2j} = T_R^{(j)} for the j-th process, j = 1, ..., 6 (see Table 1). In each process (say the j-th), the time between two failures follows the negative exponential distribution with mean θ_{2j−1} = T_BF^{(j)}, and the time required for a repair follows the Erlang distribution with mode θ_{2j} = T_R^{(j)} and shape parameter 3. We set the prior distribution π(θ) as the product over j = 1, ..., 6 of the uniform distribution over [0, 300] for each θ_{2j−1} and the uniform distribution over [0, 30] for each θ_{2j}.
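A minimal sketch of sampling from this product-of-uniforms prior, under the indexing convention above (odd positions hold T_BF, even positions T_R); the function name is illustrative.

```python
# Sample m draws from the product-of-uniforms prior over the 12 parameters:
# T_BF ~ U[0, 300] and T_R ~ U[0, 30] for each of the six processes.
import numpy as np

def sample_prior(m, rng=np.random.default_rng(0)):
    t_bf = rng.uniform(0.0, 300.0, size=(m, 6))   # mean times between failures
    t_r = rng.uniform(0.0, 30.0, size=(m, 6))     # modes of repair times
    theta = np.empty((m, 12))
    theta[:, 0::2] = t_bf                          # theta_{2j-1}: T_BF^{(j)}
    theta[:, 1::2] = t_r                           # theta_{2j}:   T_R^{(j)}
    return theta
```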
In a similar manner to the experiment in Sec. 5.2, we defined the regression function R(x) of the data generating process as R(x) = r(x, θ^{(0)}) for x < 140 and R(x) = r(x, θ^{(1)}) for x ≥ 140, where θ^{(0)} and θ^{(1)} are the "true" parameters for training and prediction, respectively, and are defined in Table 1. We set the input densities q_0(x) and q_1(x) for training and prediction as N(130, 15) and N(160, 12), respectively. The size of the training data is n = 50, and the number of simulations is m = 400. We set the noise process of the data generating process to be independent Gaussian, e(x) = ε ∼ N(0, 300). We set the constants σ², σ²_Θ > 0 in the kernels k_{R^n} and k_Θ by the median heuristic using the simulated pairs (θ̄_j, Ȳ^n_j)_{j=1}^m, and the regularization constant to ε = 0.1.
B.1.1 Details of the Simulation Model
We explain below qualitative details of the six processes in the simulation model constructed with the WITNESS software package.
Cutting process: The manufacturing process begins with the arrival of pipes, all of which have the same diameter and length of 30 cm. These pipes arrive at a fixed time interval, depending on the vendor's supply schedule. Subsequently, each pipe is cut into 10-cm sections along the length, resulting in three pieces. A worker is assigned for this process to perform changeover, repair, and disconnection operations. This worker takes a break once every eight hours. Then the small pieces obtained are transferred to the coating process by a conveyor belt.
Coating process: The small pieces are coated for protection by a coating machine. The machine processes six pieces at once, in a batch manner. A coating material must have been prepared in the coating machine before the pieces arrive; otherwise, the quality of those pieces will be degraded by heat. When the pieces ride on the belt conveyor, a sensor detects them and the coating material is prepared.
Inspection process: After the coating process, each piece is placed in an inspection waiting buffer. An inspector picks up those pieces one by one from the waiting buffer, and inspects the coating quality. If a piece fails the quality inspection, the inspector places that piece in the recoating waiting buffer. The coating machine must process the pieces of the recoating buffer preferentially. When pieces pass the quality inspection, the inspector sends those pieces to the curing step.
Harden process: In the harden (quenching) process, up to 10 pieces are processed simultaneously on a first-in, first-out basis, and each piece is quenched for at least one hour.
Grind process: The quenched pieces are polished to satisfy a customer's specifications. Two polishing machines with the same priority are available. Each machine uses special jigs to process four pieces simultaneously, and produces two different types of valves. Further, 10 jigs exist in the system, and when not in use, they are placed in a jig storage buffer.
A loader fixes four pieces with a jig and sends it to the polishing machine. The polishing machine sends the jig and the four pieces to an unloader, once polishing is done. The unloader sends the finished pieces to a valve storage area and the jig to a jig return area. The two types of valves are separated, and placed in a dedicated valve storage buffer. When a jig is required to be used again, it is returned by a jig return conveyor to the jig storage buffer.
Cleaning process: Valves issued from a valve storage area are cleaned before shipment. In the washing machine, five stations are available where valves can be placed one at a time, and the valves are cleaned in these stations. Up to 10 valves of each type can be washed simultaneously. When the valve type is changed, the cleaning head must be replaced.
B.2 Results
The 12 true parameters are estimated as the posterior means of the generated parameters, and their averages and standard deviations over 10 independent trials are shown in the bottom rows of Table 1. Most of the true parameters are recovered well in both the ordinary regression and covariate shift settings. Fig. 6 (A) and (B) describe the predictive outputs and their means given by the proposed method, which fit well in both the ordinary and covariate shift settings. The RMSE of the predictive outputs of the proposed method with covariate shift adaptation, calculated on test data generated from q_1(x), is 1.48 × 10². On the other hand, the RMSE on the same test data for the proposed method without covariate shift adaptation (i.e., setting β(X_i) = 1, i = 1, ..., n, in the importance-weighted kernel) is 1.64 × 10³. This confirms that the use of the importance-weighted kernel indeed works for covariate shift adaptation.
In this experiment, approximately 3 seconds were required for one evaluation of the simulation model r(x, θ) in the authors' computational environment. Thus, the dominant factor in the computational cost was that of the simulations.
| 10,555 |
1809.08159
|
2891804603
|
In many fields of social and industrial sciences, simulation is crucial in comprehending a target system. A major task in simulation is the estimation of optimal parameters to express the observed data, which is needed to directly elucidate the properties of the target system, since the modeling is based on the expert's domain knowledge. However, even skilled human experts struggle to obtain the desired parameters. Data assimilation therefore becomes an unavoidable task to reduce the cost of simulator optimization. Another necessary task is extrapolation: in many practical cases, predictions based on simulation results will often be outside the dominant range of the given data, a situation referred to as covariate shift. This paper focuses on a regression problem with covariate shift. While parameter estimation under covariate shift has been studied thoroughly in parametric and nonparametric settings, conventional statistical methods of parameter searching are not applicable in the data assimilation of simulations owing to the properties of the likelihood function, which is intractable or nondifferentiable. Hence, we propose a novel framework of Bayesian inference based on kernel mean embedding. This framework allows for predictions in covariate shift situations, and its effectiveness is evaluated in both synthetic numerical experiments and a widely used production simulator reproducing real-world manufacturing factories.
|
Kernel mean embedding is a framework for mapping distributions into a reproducing kernel Hilbert space (RKHS) @math as a feature space @cite_7. The figure shows a schematic illustration of kernel mean embedding. In this section, we briefly review three applications of kernel mean embedding: kernel ABC, the kernel sum rule, and kernel herding. The detailed formulations of these methods are described together with the proposed method.
|
{
"abstract": [
"A Hilbert space embedding of a distribution---in short, a kernel mean embedding---has recently emerged as a powerful tool for machine learning and inference. The basic idea behind this framework is to map distributions into a reproducing kernel Hilbert space (RKHS) in which the whole arsenal of kernel methods can be extended to probability measures. It can be viewed as a generalization of the original \"feature map\" common to support vector machines (SVMs) and other kernel methods. While initially closely associated with the latter, it has meanwhile found application in fields ranging from kernel machines and probabilistic modeling to statistical inference, causal discovery, and deep learning. The goal of this survey is to give a comprehensive review of existing work and recent advances in this research area, and to discuss the most challenging issues and open problems that could lead to new research directions. The survey begins with a brief introduction to the RKHS and positive definite kernels which forms the backbone of this survey, followed by a thorough discussion of the Hilbert space embedding of marginal distributions, theoretical guarantees, and a review of its applications. The embedding of distributions enables us to apply RKHS methods to probability measures which prompts a wide range of applications such as kernel two-sample testing, independent testing, and learning on distributional data. Next, we discuss the Hilbert space embedding for conditional distributions, give theoretical insights, and review some applications. The conditional mean embedding enables us to perform sum, product, and Bayes' rules---which are ubiquitous in graphical model, probabilistic inference, and reinforcement learning---in a non-parametric way. We then discuss relationships between this framework and other related areas. Lastly, we give some suggestions on future research directions."
],
"cite_N": [
"@cite_7"
],
"mid": [
"2418306335"
]
}
|
Simulator Calibration under Covariate Shift with Kernels
|
Computer simulators are ubiquitous in many areas of science and engineering, with examples including climate science, social science, and epidemiology (Winsberg, 2010; Weisberg, 2012). Such tools are useful in understanding and predicting complicated time-evolving phenomena of interest. Computer simulators are also widely used in industrial manufacturing process modeling (Mourtzis et al., 2014), and we use one such simulator, described in Fig. 1-(A), which models an assembling process of certain products in a factory, as our working example.
Figure 1: (A) In the factory, one product is made from three items (TOPS, BOTTOMS, and SCREWS) by the ASSEMBLY machine, and four such products are checked by the INSPECTION machine at the same time. The parameter θ of the simulation model r(x, θ) consists of 4 constants: the mean θ_1 and variance θ_2 of the distribution of the processing time in the ASSEMBLY machine, and those (θ_3 and θ_4) in the INSPECTION machine. (B) Results of our method without covariate shift adaptation: training data (red points), generated predictive outputs (orange), and their means (brown curve). (C) Results of our method with covariate shift adaptation: training data (red points), generated predictive outputs (light green), and their means (green curve). q_0(x) and q_1(x) are the input densities for training and prediction, respectively. More details in Secs. 1 and 5.2.

In this work we deal with the task of simulator calibration (Kennedy and O'Hagan, 2001), which is necessary
to make simulation-based predictions reliable. To describe it, we introduce some notation used in the paper. We are interested in a system R(x) that takes x as an input and outputs y = R(x) + ε, possibly corrupted by a noise ε. This system R(x) is of interest but not known. Instead, we are given data (X_i, Y_i)_{i=1}^n from the system, where the input locations X_1, ..., X_n are generated from a distribution q_0(x) and the outputs Y_1, ..., Y_n from the target system, Y_i = R(X_i) + ε_i. On the other hand, a simulator is defined as a function r(x, θ) that takes x as an input and outputs r(x, θ), where θ is a model parameter. The task of simulator calibration is to tune (or estimate) the parameter θ so that r(x, θ) "approximates well" the unknown target system R(x), using the data (X_i, Y_i)_{i=1}^n. For instance, in Fig. 1, the target system R(x) takes as an input the number x of products required to be manufactured in one day, and outputs the total time y = R(x) + ε required for producing all the products; the simulator r(x, θ) models this process (see the "pred mean" curves in Fig. 1-(B)(C)).
There are mainly two challenges in the task of simulator calibration, which distinguish it from standard statistical learning problems. The first one owes to the complexity of the simulation model. Very often, a simulation model r(x, θ) cannot be written as a simple function of the input x and parameter θ, because the process of producing the output y = r(x, θ) may involve various numerical algorithms (e.g., solutions of differential equations) and/or IF-ELSE type decision rules of multiple agents. Therefore, one cannot access the gradient of the simulator output r(x, θ) with respect to the parameter θ, and thus calibration cannot rely on gradient-based methods for optimization (e.g., gradient descent) or sampling (e.g., Hamiltonian Monte Carlo). Moreover, one simulation y = r(x, θ) for a given input x can be computationally very expensive. Thus only a limited number of simulations can be performed for calibration. To summarise, the first challenge is that calibration should be done by only making use of forward simulations (or evaluations of r(x, θ)), while the number of simulations cannot be large.
The second challenge is that of covariate shift (or sample selection bias) (Shimodaira, 2000; Sugiyama and Kawanabe, 2012), which is ubiquitous in applications of simulations but has rarely been discussed in the literature on calibration methods. The situation is that the input distribution q_1(x) for the test (or prediction) phase is different from the input distribution q_0(x) generating the training input locations X_1, ..., X_n. In other words, the parameter θ is to be tuned so that the simulator r(x, θ) accurately approximates the target system R(x) with respect to the distribution q_1(x) (e.g., the error defined as ∫(R(x) − r(x, θ))² q_1(x) dx is to be small), while the training data (X_i, Y_i)_{i=1}^n are only given with respect to another distribution q_0(x).
The covariate shift setting is inherently important and ubiquitous in applications of computer simulation, because the purpose of a simulation is often extrapolation. An illustrative example is climate simulation, where the aim is to answer whether global warming will occur in the future. Here, the input x is a time point and the target system R(x) is the global temperature. Calibration of the simulator r(x, θ) is to be done based on data from the past, but prediction is required for the future. This means that the training input distribution q_0(x) has support in the past, but that of the test phase, q_1(x), has support in the future. For our working example in Fig. 1, the training input locations X_1, ..., X_n from q_0(x) are more densely distributed in the region x < 110 than in the region x ≥ 110, since the data are obtained in a trial period. On the other hand, the test phase (i.e., when the factory is deployed) is targeted at mass production, and thus the test input distribution q_1(x) has its mass concentrated in the region x ≥ 110. Being a parametric model, a simulator only has a finite degree of freedom and thus cannot capture all aspects of the target system. Under such model misspecification, covariate shift is known to have a huge effect: the optimal model for the test input distribution may be drastically different from that for the training input distribution (Shimodaira, 2000). In climate simulations, care must be taken in how to tune the simulator, as the data are only from the past; otherwise, the resulting predictions about the future will not be reliable (Winsberg, 2018). In the example of Fig. 1, the behavior of the target system R(x) changes between the trial and test phases: Figs. 1-(B)(C) describe this situation. As can be seen in the training data (red points), the total manufacturing time R(x) becomes significantly larger when the number x of required products is greater than x = 110, because of the overload of workers and machines. However, such a structural change of the target R(x) is not modeled in the simulator r(x, θ) (model misspecification). Thus, if calibration is done without taking the covariate shift into account, the resulting simulator makes predictions that fit well to the data in the region x < 110, but do not fit well in the region x ≥ 110, as described in Fig. 1-(B).
Because of the first challenge of simulator calibration, existing methods for covariate shift adaptation, which have been developed for standard statistical and machine learning approaches, cannot be directly employed for the simulator calibration problem; see, e.g., Shimodaira (2000); Yamazaki et al. (2007); Gretton et al. (2009); Sugiyama and Kawanabe (2012) and references therein. On the other hand, existing approaches to likelihood-free inference, such as Approximate Bayesian Computation (ABC) methods (e.g., Csilléry et al. (2010); Marin et al. (2012); Nakagome et al. (2013)), are applicable to simulator calibration, but they do not address the problem of covariate shift. Our approach combines these two lines of work and thus enjoys the best of both worlds, offering a solution to the calibration problem with covariate shift adaptation.
This work proposes a novel approach to simulator calibration, dealing explicitly with the setting of covariate shift. Our approach is Bayesian, deriving a certain posterior distribution over the parameter space given observed data. The proposed method is based on Kernel ABC (Nakagome et al., 2013;Fukumizu et al., 2013), which is an approach to ABC based on kernel mean embedding of distributions (Muandet et al., 2017), and a certain importance-weighted kernel that works for covariate shift adaptation. We provide a theoretical analysis of this approach, showing that it produces a distribution over the parameter space that approximates the posterior distribution in which the "observed data" is predictions from the model that minimises the importance-weighted empirical risk. In other words, the proposed method approximates the posterior distribution whose support consists of parameters such that the resulting simulator produces a small generalization error for the test input distribution. For instance, Fig. 1-(C) shows predictions obtained with our method, which fit well in the test region x ≥ 110 as a result of covariate shift adaptation. This paper is organized as follows. In Sec. 2, we briefly review the setting of covariate shift and the framework of kernel mean embedding. In Sec. 3, we present our method for simulator calibration with covariate shift adaptation, and in Sec. 4 we investigate its theoretical properties. In Sec. 5 we report results of numerical experiments that include calibration of the production simulator in Fig. 1, confirming the effectiveness of the proposed method. Additional experimental results and all the theoretical proofs are presented in Appendix.
Calibration under Covariate Shift
Let X ⊂ R d X with d X ∈ N be a measurable subset that serves as the input space for a target system and a simulator. Denote by R : X → R the regression function of the (unknown) target system, which is deterministic, and define the true data-generating process as
$y(x) := R(x) + e(x), \quad (1)$
where e : X → ℝ is a (zero-mean) stochastic process that represents observation error. Observed data D_n := {(X_i, Y_i)}_{i=1}^n ⊂ X × ℝ are assumed to be generated from the process (1) as
X 1 , . . . , X n ∼ q 0 (i.i.d.), Y i = y(X i ), (i = 1, . . . , n),
where q 0 is a probability density function on X . We use the following notation to write the output values:
Y n := (Y 1 , . . . , Y n ) ∈ R n .
Let Θ ⊂ R dΘ with d Θ ∈ N be a measurable subset that serves as a parameter space. Let r : X × Θ → R be a (measurable) deterministic simulation model that outputs a real value r(x, θ) ∈ R given an input x ∈ X and a parameter θ ∈ Θ. Assume that we have a prior distribution π(θ) on the parameter space Θ.
In the setting of covariate shift, the input distribution q_1(x) in the test (or prediction) phase is different from that, q_0(x), of the training data X_1, ..., X_n, while the input-output relationship (1) remains the same. Thus, the expected loss (or the generalization error) to be minimized may be defined as
$L(\theta) := \int (y(x) - r(x,\theta))^2\, q_1(x)\,dx = \int (y(x) - r(x,\theta))^2\, \beta(x)\, q_0(x)\,dx,$
where β : X → R is the importance weight function, defined as the ratio of the two input densities:
β(x) := q 1 (x)/q 0 (x).
In this work, we assume for simplicity that the importance weights β(X_i) at the training inputs X_1, ..., X_n are known, or estimated in advance. The knowledge of the importance weights is available when q_0(x) and q_1(x) are designed by an experimenter. For estimation of the importance weights, we refer to Gretton et al. (2009) and references therein. Using the importance weights, the expected loss can be estimated as
$\hat{L}_n(\theta) := \frac{1}{n}\sum_{i=1}^n \beta(X_i)\,(Y_i - r(X_i,\theta))^2. \quad (2)$
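A direct transcription of (2) as code, assuming a callable simulator `r(x, theta)` and precomputed weights `beta_vals` = (β(X_1), ..., β(X_n)):

```python
# Importance-weighted empirical loss (2): mean of beta(X_i) * squared residuals.
import numpy as np

def weighted_loss(theta, X, Y, beta_vals, r):
    resid = Y - np.array([r(x, theta) for x in X])
    return np.mean(beta_vals * resid ** 2)
```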
Covariate shift has a strong influence on the generalization performance of an estimated model when the true regression function R(x) does not belong to the class of functions realizable by the simulation model {r(·, θ) | θ ∈ Θ}, i.e., when model misspecification occurs (Shimodaira, 2000; Yamazaki et al., 2007). Such misspecification happens in practice, since the simulation model has only a finite degree of freedom, the parameter space being finite dimensional. To obtain a model with good prediction performance, one needs to use an importance-weighted loss like (2) for parameter estimation.
Kernel Mean Embedding of Distributions
This is a framework for representing probability measures as elements of a reproducing kernel Hilbert space (RKHS). We refer to Muandet et al. (2017) and references therein for details.
Let Ω be a measurable space, k : Ω × Ω → R be a measurable positive definite kernel and H be its RKHS.
In this framework, any probability measure P on Ω is represented as the Bochner integral
$\mu_P := \int k(\cdot, \theta)\,dP(\theta) \in \mathcal{H},$
which is called the kernel mean of P. Estimation of P can be carried out by that of µ_P, which is usually computationally and statistically easier, thanks to nice properties of the RKHS. Such a strategy is justified if the mapping P → µ_P is injective, in which case µ_P maintains all information of P. Kernels satisfying this property are called characteristic, and examples of characteristic kernels on Ω = ℝ^d include Gaussian and Matérn kernels (Sriperumbudur et al., 2010).
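As a small illustration, the empirical counterpart (1/n) Σ_i k(·, θ_i) of a kernel mean, built from samples θ_1, ..., θ_n ~ P, can be evaluated pointwise; the Gaussian kernel and names below are illustrative.

```python
# Empirical kernel mean: average of kernel feature maps over the samples,
# evaluated at an arbitrary point of the domain.
import numpy as np

def gauss_kernel(a, b, sigma2):
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma2))

def empirical_kernel_mean(eval_point, samples, sigma2):
    return np.mean([gauss_kernel(eval_point, s, sigma2) for s in samples])
```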
Proposed Calibration Method
We present our approach to simulator calibration with covariate shift adaptation. We take a Bayesian approach, and our target posterior distribution is described in Sec. 3.1. The proposed approach consists of Kernel ABC using a certain importance-weighted kernel (Sec. 3.2) and posterior sampling with the kernel herding algorithm (Sec. 3.3).
Target Posterior Distribution
We define a vector-valued function r_n : Θ → ℝ^n from the simulator r(x, θ) as
$r_n(\theta) := (r(X_1, \theta), \ldots, r(X_n, \theta))^\top \in \mathbb{R}^n, \quad \theta \in \Theta. \quad (3)$
Let supp(π) be the support of π. Define Θ* ⊂ supp(π) as the set of parameters that minimize the weighted squared error, i.e., for all θ* ∈ Θ* we have
$\sum_{i=1}^n \beta(X_i)\,(Y_i - r(X_i, \theta^*))^2 = \min_{\theta \in \mathrm{supp}(\pi)} \sum_{i=1}^n \beta(X_i)\,(Y_i - r(X_i, \theta))^2. \quad (4)$
We allow Θ* to contain multiple elements, but assume that they all give the same simulation outputs, which we denote by r* ∈ ℝ^n:
$r^* := r_n(\theta^*) = r_n(\tilde{\theta}^*), \quad \forall\, \theta^*, \tilde{\theta}^* \in \Theta^*. \quad (5)$
Let ϑ ∼ π be a random variable following π. Then r n (ϑ) is also a random variable taking values in R n and its distribution is the push-forward measure of π under the mapping r n , denoted by r n π. We write the distribution of the joint random variable (ϑ, r n (ϑ)) ∈ Θ × R n as P ΘR n , and their marginal distributions on Θ and R n as P Θ and P R n , respectively. Then by definition we have P Θ = π and P R n = r n π. Let
supp(P R n ) = supp(r n π) = {r n (θ) | θ ∈ supp(π)}
be the support of the push-forward measure, which is the range of the simulation outputs when the parameter is in the support of the prior.
We consider the conditional distribution on Θ induced from the joint distribution P ΘR n by conditioning on y ∈ supp(P R n ), which we write
$P_\pi(\theta \mid y), \quad y \in \mathrm{supp}(P_{R^n}). \quad (6)$
Note that, since the conditional distribution on R n given θ ∈ Θ is the Dirac distribution at r n (θ), one cannot use Bayes' rule to define the conditional distribution. However, the conditional distribution (6) is well-defined as a disintegration, and is uniquely determined up to an almost sure equivalence with respect to P R n (Chang and Pollard, 1997, Thm. 1 and Example 9); see also Cockayne et al. (2017, Sec. 2.5).
It will turn out in Sec. 4 that our approach provides an estimator for the kernel mean of the conditional distribution (6) with y = r * :
$P_\pi(\theta \mid r^*), \quad (7)$
where r * is the outputs of the optimal simulator (5). In other words, (7) is the posterior distribution on the parameters, given that the optimal outputs r * are observed. Sampling from (7) thus amounts to sampling parameters that provide the optimal simulation outputs.
Finally, we define a predictive distribution of outputs y for any input point x ∈ X as the push-forward measure of the posterior (7) under the mapping r(x, ·) : θ → r(x, θ), which we denote by P_π(y|x, r*). (8)
Kernel ABC with a Weighted Kernel
Let k_Θ : Θ × Θ → ℝ be a kernel on the parameter space and H_Θ its RKHS. We define the kernel mean of the posterior (7) as
$\mu_{\Theta|r^*} := \int k_\Theta(\cdot, \theta)\,dP_\pi(\theta|r^*) \in \mathcal{H}_\Theta. \quad (9)$
We propose to use the following weighted kernel on R n defined from importance weights. As mentioned, we assume that the importance weight function β(x) = q 1 (x)/q 0 (x) is known or estimated in advance. For Y n ,Ỹ n ∈ R n , the kernel is defined as
$k_{R^n}(Y^n, \tilde{Y}^n) = \exp\Big( -\frac{1}{2\sigma^2} \sum_{i=1}^n \beta(X_i)\,(Y_i - \tilde{Y}_i)^2 \Big), \quad (10)$
where σ 2 > 0 is a constant and a parameter of the kernel.
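A direct transcription of (10), with `beta_vals` holding the weights β(X_i) at the training inputs:

```python
# Importance-weighted Gaussian kernel (10) on R^n.
import numpy as np

def k_weighted(Ya, Yb, beta_vals, sigma2):
    return np.exp(-np.sum(beta_vals * (Ya - Yb) ** 2) / (2.0 * sigma2))
```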
We apply Kernel ABC (Nakagome et al., 2013) with the importance-weighted kernel defined above to estimate the posterior kernel mean (9). First, we independently generate m ∈ ℕ parameters from the prior π(θ): θ̄_1, ..., θ̄_m ∼ π.
Then, for each parameter θ̄_j, j = 1, ..., m, we run the simulator to generate pseudo observations at X_1, ..., X_n: Ȳ^n_j := r_n(θ̄_j), j = 1, ..., m, where r_n : Θ → ℝ^n is defined in (3). An estimator of the kernel mean (9) is then given by
$\hat{\mu}_{\Theta|r^*} := \sum_{j=1}^m w_j\, k_\Theta(\cdot, \bar{\theta}_j) \in \mathcal{H}_\Theta, \quad (11)$
$(w_1, \ldots, w_m)^\top := (G + m\varepsilon I_m)^{-1}\, \mathbf{k}_{R^n}(Y^n) \in \mathbb{R}^m,$
where I_m ∈ ℝ^{m×m} is the identity and ε > 0 is a regularization constant; the vector k_{R^n}(Y^n) ∈ ℝ^m and the Gram matrix G ∈ ℝ^{m×m} are computed from the kernel k_{R^n} in (10) and the observed data Y^n as
$\mathbf{k}_{R^n}(Y^n) := \big(k_{R^n}(\bar{Y}^n_1, Y^n), \ldots, k_{R^n}(\bar{Y}^n_m, Y^n)\big)^\top \in \mathbb{R}^m, \quad G := \big(k_{R^n}(\bar{Y}^n_j, \bar{Y}^n_{j'})\big)_{j,j'=1}^m \in \mathbb{R}^{m\times m}.$
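Putting the pieces together, a minimal sketch of this Kernel ABC step; the `simulate` interface (mapping a parameter to the n-vector r_n(θ)) is an assumption of ours.

```python
# Kernel ABC with the importance-weighted kernel: simulate pseudo data for
# prior draws, then solve the regularized linear system for the weights (11).
import numpy as np

def kernel_abc_weights(thetas, simulate, Y, beta_vals, sigma2, eps):
    m = len(thetas)
    sims = np.array([simulate(th) for th in thetas])            # (m, n) pseudo data

    def k(a, b):                                                # weighted kernel (10)
        return np.exp(-np.sum(beta_vals * (a - b) ** 2) / (2.0 * sigma2))

    G = np.array([[k(sims[i], sims[j]) for j in range(m)] for i in range(m)])
    kY = np.array([k(sims[j], Y) for j in range(m)])
    w = np.linalg.solve(G + m * eps * np.eye(m), kY)            # (G + m*eps*I)^{-1} k(Y^n)
    return w, sims
```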
Posterior Sampling with Kernel Herding
We apply kernel herding (Chen et al., 2010), a deterministic sampling method based on kernel mean embedding, to generate parameters θ̌_1, ..., θ̌_m ∈ Θ from the posterior kernel mean μ̂_{Θ|r*} in (11). The procedure is as follows. The initial point θ̌_1 is generated as θ̌_1 := argmax_{θ∈Θ} μ̂_{Θ|r*}(θ). Then the subsequent points θ̌_t, t = 2, ..., m, are generated sequentially as
$\check{\theta}_t := \operatorname*{argmax}_{\theta\in\Theta}\ \hat{\mu}_{\Theta|r^*}(\theta) - \frac{1}{t}\sum_{j=1}^{t-1} k_\Theta(\theta, \check{\theta}_j).$
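A hedged sketch of these updates, replacing the continuous argmax over Θ by a search over a finite candidate set, which is a common practical simplification not prescribed by the paper.

```python
# Kernel herding over a finite candidate grid: greedily maximize
# mu_hat(theta) - (1/t) * sum_{j<t} k_Theta(theta, selected_j).
import numpy as np

def kernel_herding(candidates, thetas, w, sigma2_theta, m_out):
    def kT(a, b):                                     # Gaussian kernel on Theta
        return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2.0 * sigma2_theta))

    # mu_hat from Kernel ABC, evaluated at every candidate: sum_j w_j k(c, theta_j)
    mu = np.array([np.sum(w * kT(c, thetas)) for c in candidates])
    selected = []
    penalty = np.zeros(len(candidates))               # running sum of k(c, chosen)
    for t in range(1, m_out + 1):
        scores = mu - penalty / t
        idx = int(np.argmax(scores))
        selected.append(candidates[idx])
        penalty += np.array([kT(c, candidates[idx]) for c in candidates])
    return np.array(selected)
```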
These points form a sample from the approximate posterior, in the sense that their empirical kernel mean (1/m) ∑_{j=1}^m k_Θ(·, θ̌_j) approximates μ̂_{Θ|r*} in the RKHS norm.

Prediction. Let x ∈ X be any test input location, and recall that the predictive distribution P_π(y|x, r*) in (8) is defined as the push-forward measure of the posterior P_π(θ|r*) under the mapping r(x, ·). Therefore, predictive outputs can be obtained simply by running simulations with the posterior samples θ̌_1, ..., θ̌_m:
r(x,θ 1 ), . . . , r(x,θ m ),
and the predictive distribution is approximated by the empirical distribution
$\hat{P}_\pi(y \mid x, r^*) := \frac{1}{m}\sum_{j=1}^m \delta\big(y - r(x, \check{\theta}_j)\big),$
where δ(·) is the Dirac distribution at 0.
Theoretical Analysis
To analyze the proposed method, we first express the estimator (11) in terms of covariance operators on the RKHSs, which is how the estimator was originally proposed (Song et al., 2009;Nakagome et al., 2013). To this end, define joint random variables (ϑ, y) ∈ Θ × R n by ϑ ∼ π, y := r n (ϑ),
where r n : Θ → R n is defined in (3). Let H Θ and H R n be the RKHSs of k Θ and k R n , respectively.
Covariance operators C_ϑy : H_{R^n} → H_Θ and C_yy : H_{R^n} → H_{R^n} are then defined as
$C_{\vartheta y} f := E[k_\Theta(\cdot, \vartheta)\, f(y)] \in \mathcal{H}_\Theta, \quad C_{yy} f := E[k_{R^n}(\cdot, y)\, f(y)] \in \mathcal{H}_{R^n}, \quad f \in \mathcal{H}_{R^n}.$
Note that the parameter-data pairs (θ̄_j, Ȳ^n_j)_{j=1}^m = (θ̄_j, r_n(θ̄_j))_{j=1}^m ⊂ Θ × ℝ^n in Kernel ABC (Sec. 3.2) are i.i.d. copies of the random variables (ϑ, y). Thus, empirical covariance operators Ĉ_ϑy : H_{R^n} → H_Θ and Ĉ_yy : H_{R^n} → H_{R^n} are defined as
$\hat{C}_{\vartheta y} f := \frac{1}{m}\sum_{j=1}^m k_\Theta(\cdot, \bar{\theta}_j)\, f(\bar{Y}^n_j), \quad \hat{C}_{yy} f := \frac{1}{m}\sum_{j=1}^m k_{R^n}(\cdot, \bar{Y}^n_j)\, f(\bar{Y}^n_j), \quad f \in \mathcal{H}_{R^n}.$
The estimator (11) is then expressed as
$\hat{\mu}_{\Theta|r^*} = \hat{C}_{\vartheta y}(\hat{C}_{yy} + \varepsilon I)^{-1} k_{R^n}(\cdot, Y^n). \quad (12)$
See the above original references as well as Song et al. (2013); Fukumizu et al. (2013); Muandet et al. (2017) for the derivation.
Recall that Y^n is the observed data from the real process. The issue is that, in our setting, Y^n may not lie in the support of the distribution P_{R^n} of y = r_n(ϑ), since the simulation model r(x, θ) is misspecified, i.e., there exists no θ ∈ Θ such that R(x) = r(x, θ) for all x ∈ X. The misspecified setting where Y^n ∉ supp(P_{R^n}) has not been studied in the literature on kernel mean embeddings, and therefore existing theoretical results on conditional mean embeddings (Grünewälder et al., 2012; Fukumizu, 2015; Singh et al., 2019) are not directly applicable. Our theoretical contribution is to study the estimator (12) in this misspecified setting, which may be of general interest.
Projection and Best Approximation
Let H y ⊂ H R n be the Hilbert subspace of H R n defined as the completion of the linear span of functions k R n (·,Ỹ n ) withỸ n from the support of P R n :
$\mathcal{H}_y := \overline{\mathrm{span}}\,\{ k_{R^n}(\cdot, \tilde{Y}^n) \mid \tilde{Y}^n \in \mathrm{supp}(P_{R^n}) \}, \quad (13)$
where the closure is taken with respect to the norm of H_{R^n}. In other words, every h ∈ H_y may be written in the form $h = \sum_{\ell=1}^\infty \alpha_\ell\, k_{R^n}(\cdot, \tilde{Y}^n_\ell)$ for some (α_ℓ)_{ℓ=1}^∞ ⊂ ℝ and (Ỹ^n_ℓ)_{ℓ=1}^∞ ⊂ supp(P_{R^n}) such that $\|h\|^2_{\mathcal{H}_{R^n}} = \sum_{\ell,j=1}^\infty \alpha_\ell \alpha_j\, k_{R^n}(\tilde{Y}^n_\ell, \tilde{Y}^n_j) < \infty$.
Since H y is a Hilbert subspace, one can consider the orthogonal projection of k R n (·, Y n ), the "feature vector" of the observed data Y n , onto H y , which is uniquely determined and denoted by
$h^* := \operatorname*{argmin}_{h \in \mathcal{H}_y} \big\| h - k_{R^n}(\cdot, Y^n) \big\|_{\mathcal{H}_{R^n}}. \quad (14)$
Then k R n (·, Y n ) can be written as
k R n (·, Y n ) = h * + h ⊥ , where h ⊥ ∈ H R n is orthogonal to H y .
Note that the estimator (12) is an approximation to the following population expression:
$C_{\vartheta y}(C_{yy} + \varepsilon I)^{-1} k_{R^n}(\cdot, Y^n). \quad (15)$
Our first result below shows that (15) can be written in terms of the projection (14).
Lemma 1. Let k_Θ be a bounded and continuous kernel, and assume that 0 < β(X_i) < ∞ holds for all i = 1, ..., n. Then (15) is equal to C_ϑy(C_yy + εI)^{-1} h*.

We make the following identifiability assumption. It is an assumption on the observed data Y^n (or the data generating process (1)), the simulation model r(x, θ), and the kernel k_{R^n} (or the importance weight function β(x) = q_1(x)/q_0(x); see the definition of k_{R^n} in (10)).
Assumption 1. There exists someỸ n ∈ supp(P R n ) such that k R n (·,Ỹ n ) = h * , where h * is the orthogonal projection of k R n (·, Y n ) onto the subspace H y in (14).
The assumption states that the orthogonal projection of the feature vector k_{R^n}(·, Y^n) of the observed data Y^n onto H_y lies in the set
$\{ k_{R^n}(\cdot, \tilde{Y}^n) \mid \tilde{Y}^n \in \mathrm{supp}(P_{R^n}) \} = \{ k_{R^n}(\cdot, r_n(\theta)) \mid \theta \in \mathrm{supp}(\pi) \}.$
Thus the assumption implies that the best approximation h * of the observed data is given by the simulation model with some parameter θ * ∈ supp(π), i.e., h * = k R n (·, r n (θ * )). Such θ * satisfies
\begin{align*}
\theta^* &\in \operatorname*{argmin}_{\theta\in\mathrm{supp}(\pi)} \big\| k_{R^n}(\cdot, Y^n) - k_{R^n}(\cdot, r_n(\theta)) \big\|^2_{\mathcal{H}_{R^n}} = \operatorname*{argmax}_{\theta\in\mathrm{supp}(\pi)} k_{R^n}(Y^n, r_n(\theta)) \\
&= \operatorname*{argmax}_{\theta\in\mathrm{supp}(\pi)} \exp\Big( -\frac{1}{2\sigma^2}\sum_{i=1}^n \beta(X_i)\,(Y_i - r(X_i,\theta))^2 \Big) = \operatorname*{argmin}_{\theta\in\mathrm{supp}(\pi)} \sum_{i=1}^n \beta(X_i)\,(Y_i - r(X_i,\theta))^2,
\end{align*}
where the last identity follows from the exponential function being monotonically increasing. This shows that, under Assumption 1, the parameter θ* realizing the projection is a weighted least-squares solution, and thus belongs to the set Θ* defined in (4). Moreover, since h* is uniquely determined, so are the simulation outputs r* := r_n(θ*), in the sense of (5).
By these arguments, Lemma 1 and Assumption 1 lead to the following result.
Theorem 1. Suppose that the assumptions in Lemma 1 and Assumption 1 hold. Let r * := r n (θ * ) where θ * is any element satisfying (4). Then (15) is equal to
C ϑy (C yy + εI) −1 k R n (·, r * ).
Theorem 1 suggests that the estimator (12) behaves as if the observed data were the optimal simulation outputs r*, obtained as a best approximation of the given data Y^n. The convergence result presented below shows that this is indeed the case.
To state the result, we define a function G : supp(P_{R^n}) × supp(P_{R^n}) → ℝ as
$G(Y^n_a, Y^n_b) := E[k_\Theta(\vartheta, \vartheta') \mid y = Y^n_a,\ y' = Y^n_b] = E[k_\Theta(\vartheta, \vartheta') \mid r_n(\vartheta) = Y^n_a,\ r_n(\vartheta') = Y^n_b], \quad (16)$
where (ϑ , y ) is an independent copy of (ϑ, y).
The following result shows that (12) (or (11)) is a consistent estimator of the kernel mean µ Θ|r * (9) of the posterior P π (θ|r * ). It is obtained by extending the result of Fukumizu (2015, Theorem 1.3.2) to the misspecified setting where Y n ∈ supp(P R n ) by using Theorem1. The assumptions made are essentially the same those in Fukumizu (2015, Theorem 1.3.2). Below Range(C yy ⊗ C yy ) denotes the range of the tensorproduct operator C yy ⊗ C yy on the tensor-product RKHS H R n ⊗ H R n (see Appendix for details).
Theorem 2. Suppose that the assumptions in Lemma 1 and Assumption 1 hold. Assume that the eigenvalues λ 1 ≥ λ 2 ≥ · · · ≥ 0 of C yy satisfy λ i ≤ βi −b for all i ∈ N for some constants β > 0 and b > 1, and that the function G in (16) satisfies G ∈ Range(C yy ⊗ C yy ). Let C > 0 be any fixed constant, and set the regularization constant ε := ε m := Cm − b 1+4b ofμ Θ|r * in (12) (or (11)). Then we have
μ Θ|r * − µ Θ|r * HΘ = O p m − b 1+4b (m → ∞).
Experiments
We first explain the setting common for all the experiments. In each experiment, we consider both regression problems with and without covariate shift, to see whether the proposed method can deal with covariate shift. In the latter case, which we call "ordinary regression," we set the importance weights to be constant, β(X i ) = 1 (i = 1, ..., n). The noise process e(x) in (1) is independent Gaussian ε ∼ N (0, σ 2 noise ). We write N (a, b) for the normal distribution with mean a and variance b; the multivariate version is denoted similarly.
For the proposed method, we used a Gaussian kernel k Θ (θ, θ ) = exp(− θ − θ 2 /2σ 2 Θ ) for the parameter space, where σ 2 Θ > 0 is a constant. We set the constants σ 2 , σ 2 Θ > 0 in the kernels k R n and k Θ by the median heuristic (e.g. Garreau et al., 2018) using the simulated pairs (θ j ,Ȳ n j ) m j=1 .
For comparison, we used Markov Chain Monte Carlo (MCMC) for posterior sampling, more specifically the Metropolis-Hastings (MH) algorithm. For this competitor, we assume that the noise process e(x) in (1) is known, so that the likelihood function is available in MCMC (which is of the form exp(− n i=1 β(X i ) (Y i − r(X i , θ)) 2 /2σ 2 noise ) up to constant). In this sense, we give an unfair advantage for MH over the proposed method, as the latter does not assume the knowledge of the noise process, which is usually not available in practice.
For evaluation, we compute Root Mean Square Er- ror (RMSE) in prediction for each method (and for a different number of simulations, m) as follows. Test input locationsX 1 , . . . ,X n are generated from q 0 (x) in the case of ordinary regression, and from q 1 (x) in the covaraite shift setting. After sampling parameterš θ 1 , . . . ,θ m with the method for evaluation, the RMSE is computed as ( 1
n n i=1 (R(X i ) − 1 m m j=1 r(X i ,θ j )) 2 ) 1/2 .
Synthetic Experiments
We consider the problem setting of the benchmark experiment in Shimodaira (2000).
Setting. The input space is X = R, and the data generating process (1) is given by R(x) = −x + x 3 and e(x) = with ∼ N (0, 2) being an independent noise. The simulation model is defined by r(x, θ) = θ 0 + θ 1 x, where θ = (θ 1 , θ 2 ) ∈ Θ = R d . For demonstration, we treat this model as intractable, i.e., we assume that only evaluation of function values r(x, θ) is possible once x and θ are given. The input densities q 0 (x) and q 1 (x) for for training and prediction are those of N (0.5, 0.5) and N (0, 0.3), respectively. We define the prior as multivariate Gaussian π = N (0, 5I 2 ), where I 2 ∈ R 2×2 is the identity. We set the size of training data (X i , Y i ) n i=1 as n = 100.
Results. Figure 2 shows RMSEs for (A) ordinary regression and (B) covariate shift as a function of the number m of simulations, with the means and standard deviations calculated from 30 independent trials. For the proposed method, we set the regularization constant to be ε = 1.0. We set the proposal distribution of MH to be N (0, σ 2 p I 2 ) with σ p being 0.08, 0.06, and 0.03, which were tuned so that the acceptance ratios become about 20%, 40%, and 60% respectively. In the horizontal axis, the number of simulations for MH is the number of all MCMC steps (which all require running the simulator) including burn-in and rejected executions. For MH, we used the first 10% MCMC steps for burn-in, and excluded them for predictions. The results show that the proposed method is more efficient than MH, in the sense that it gives better predictions than MH based on a small number of simulations. This is a promising property, since real-world simulators are often computationally expensive, as is the case for the experiment in the next section.
Experiments on Production Simulator
We performed experiments on the manufacturing process simulator mentioned in Sec. 1 (Fig. 1), and a more sophisticated production simulator with 12 parameters. We only describe the former here, and report the latter in the Appendix due to the space limitation.
Setting. We used a simulator constructed with WIT-NESS, a popular software package for production simulation (https://www.lanner.com/en-us/). We refer to Sec. 1 for an explanation of the simulator. This simulator r(x, θ) has 4 parameters θ ∈ Θ ⊂ R 4 . The input space for regression is X = (0, ∞).
The data generating process (1) is defined as R(x) = r(x, θ (0) ) for x < 110 and R(x) = r(x, θ (1) ) for x ≥ 110, where θ (0) := (2, 0.5, 5, 1) and θ (1) := (3.5, 0.5, 7, 1) ; the noise model is an independent noise e(x) = ∼ N (0, 30). The input densities are defined as q 0 (x) = N (100, 10) (training) and q 1 (x) = N (120, 10) (prediction). We constructed this model so that the two regions x < 110 and x ≥ 110 correspond to those for training and prediction, respectively, with θ (0) and θ (1) being the "true" parameters in the respective regions. We defined the prior π(θ) as the uniform distribution over Θ := [0, 5] × [0, 2] × [0, 10] × [0, 2] ⊂ R 4 . The size of training data (X i , Y i ) n i=1 (which are described in Fig. 1 (B)(C) as red points) is n = 50. Figure 3 shows the averages and standard deviations of RMSEs for the proposed method and MH of 10 independent trials, changing the number m Figure 4: Parametersθ 1 , . . . ,θ m generated from the proposed method, in the subspace of coordinates of θ 1 and θ 3 . (A): Ordinary regression: the generated parameters (orange), the mean of them (brown), and the "true" parameter θ (0) for the training region x < 110 (red). (B) Covariate shift: the generated parameters (light green), the mean of them (green), and the "true" parameter θ (1) for the prediction region x ≥ 110 (blue, "true shifted"). of simulations. We set the regularization constant of the proposed method as ε = 0.01, and the proposal distribution of MH as N (0, 0.03 2 I 4 ), which was tuned to make the acceptance about 40%. 2 The results show that the proposed method is more accurate than MH with a small number of simulations, even though the latter used the full knowledge of the data generating process (1). Fig. 4 (A) and (B) describe parametersθ 1 , . . . ,θ m generated in one run of the proposed method in the ordinary and covariate shift settings, respectively; the corresponding predictive outputs are shown in Fig. 1 (B) and (C). In both settings, the estimated posterior mean is located near the "true" parameter of each scenario. Fig. 4 (A) and (B) also demonstrate how our method might be useful for sensitivity analysis. Our method generates parametersθ 1 , . . . ,θ m so as to approximate the posterior P π (θ|r * ), where r * is "optimal" simulation outputs. Therefore, the more variation in the coordinate θ 1 indicates that the value of θ 1 is not very important to obtain optimal simulation outputs. But a comparison between (A) and (B) indicates that, under covariate shift, there should be small correlation between θ 1 and θ 3 to obtain optimal simulation outputs.
Results.
Yamazaki, K., Kawanabe, M., Watanabe, S., Sugiyama, M., and Müller, K.-R. (2007). Asymptotic Bayesian generalization error when training and test distributions are different. Proceedings of the 24th international conference on Machine learning -ICML '07, pages 1079-1086.
Supplementary Materials
Simulator Calibration under Covariate Shift with Kernels
A Proofs
A.1 Proof of Lemma 1
First we note that from the assumption 0 < β(X i ) < ∞ for all i = 1, . . . , n, the importance-weighted kernel (10) is continuous on R n . Therefore Steinwart and Christmann (2008, Lemma 4.33) implies that the RKHS H R n of k R n is separable.
To prove Lemma 1, we need the following result.

Lemma 2. Suppose that the assumptions in Lemma 1 hold. Let $(\phi_i)_{i=1}^\infty \subset \mathcal{H}_{\mathbb{R}^n}$ be the eigenfunctions of the covariance operator $C_{yy}$ associated with positive eigenvalues, and let $(\tilde{\phi}_j)_{j=1}^\infty \subset \mathcal{H}_{\mathbb{R}^n}$ be an ONB of the null space of $C_{yy}$. Then $\tilde{\phi}_j(\tilde{Y}^n) = 0$ holds for $P_{\mathbb{R}^n}$-almost every $\tilde{Y}^n \in \mathbb{R}^n$.
Proof. By the definition of $\tilde{\phi}_j$, it holds that
$$0 = C_{yy}\tilde{\phi}_j = \int k_{\mathbb{R}^n}(\cdot, \tilde{Y}^n)\,\tilde{\phi}_j(\tilde{Y}^n)\,dP_{\mathbb{R}^n}(\tilde{Y}^n) =: \int k_{\mathbb{R}^n}(\cdot, \tilde{Y}^n)\,d\nu(\tilde{Y}^n),$$
where the measure $\nu$ is defined by $d\nu(\tilde{Y}^n) := \tilde{\phi}_j(\tilde{Y}^n)\,dP_{\mathbb{R}^n}(\tilde{Y}^n)$. Since the kernel $k_{\mathbb{R}^n}$ is bounded on $\mathbb{R}^n$, $\mathcal{H}_{\mathbb{R}^n}$ consists of bounded functions, and thus $\tilde{\phi}_j \in \mathcal{H}_{\mathbb{R}^n}$ is bounded. Therefore $\nu$ is a finite measure. But since $k_{\mathbb{R}^n}$ is a Gaussian kernel (see (10)), it is $c_0$-universal, and so Sriperumbudur et al. (2011, Proposition 2) together with the integral being zero implies that $\nu$ is the zero measure. For $\nu$ to be the zero measure, $\tilde{\phi}_j(\tilde{Y}^n) = 0$ must hold for $P_{\mathbb{R}^n}$-almost every $\tilde{Y}^n$, which concludes the proof.
We now prove Lemma 1.
Proof. Let $(\phi_i)_{i=1}^\infty \subset \mathcal{H}_{\mathbb{R}^n}$ be the eigenfunctions of the covariance operator $C_{yy}$ associated with the positive eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots > 0$, and let $(\tilde{\phi}_j)_{j=1}^\infty \subset \mathcal{H}_{\mathbb{R}^n}$ be an ONB of the null space of $C_{yy}$. To prove the assertion, we first show that (a) $\langle \phi_i, h_\perp \rangle_{\mathcal{H}_{\mathbb{R}^n}} = 0$ for every $\phi_i$, and that (b) $C_{\vartheta y}\tilde{\phi}_j = 0$ for every $\tilde{\phi}_j$.

(a) By the definition of $\phi_i$, it can be written as
$$\phi_i = \lambda_i^{-1} C_{yy}\phi_i = \lambda_i^{-1} \int k_{\mathbb{R}^n}(\cdot, \tilde{Y}^n)\,\phi_i(\tilde{Y}^n)\,dP_{\mathbb{R}^n}(\tilde{Y}^n).$$
Therefore,
$$\langle \phi_i, h_\perp \rangle_{\mathcal{H}_{\mathbb{R}^n}} = \lambda_i^{-1} \left\langle \int k_{\mathbb{R}^n}(\cdot, \tilde{Y}^n)\,\phi_i(\tilde{Y}^n)\,dP_{\mathbb{R}^n}(\tilde{Y}^n),\ h_\perp \right\rangle_{\mathcal{H}_{\mathbb{R}^n}} = \lambda_i^{-1} \int \langle k_{\mathbb{R}^n}(\cdot, \tilde{Y}^n), h_\perp \rangle_{\mathcal{H}_{\mathbb{R}^n}}\,\phi_i(\tilde{Y}^n)\,dP_{\mathbb{R}^n}(\tilde{Y}^n) = 0,$$
where the last identity follows from $\langle k_{\mathbb{R}^n}(\cdot, \tilde{Y}^n), h_\perp \rangle_{\mathcal{H}_{\mathbb{R}^n}} = 0$ for $\tilde{Y}^n \in \mathrm{supp}(P_{\mathbb{R}^n})$, which follows from the definition of $h_\perp$.
(b) We have
$$C_{\vartheta y}\tilde{\phi}_j = \int k_\Theta(\cdot, \theta)\,\tilde{\phi}_j(\tilde{Y}^n)\,dP_{\Theta \mathbb{R}^n}(\theta, \tilde{Y}^n) = \int \left( \int k_\Theta(\cdot, \theta)\,dP_\pi(\theta\,|\,\tilde{Y}^n) \right) \tilde{\phi}_j(\tilde{Y}^n)\,dP_{\mathbb{R}^n}(\tilde{Y}^n) = 0,$$
where the last identity follows from Lemma 2.
We now prove the assertion. Using (a) and (b), we obtain
$$C_{\vartheta y}(C_{yy}+\varepsilon I)^{-1} k_{\mathbb{R}^n}(\cdot, Y^n) = C_{\vartheta y}(C_{yy}+\varepsilon I)^{-1}(h^* + h_\perp)$$
$$= C_{\vartheta y}\sum_{i=1}^\infty (\lambda_i+\varepsilon)^{-1}\langle h^*, \phi_i\rangle_{\mathcal{H}_{\mathbb{R}^n}}\,\phi_i + C_{\vartheta y}\sum_{j=1}^\infty \varepsilon^{-1}\langle h^* + h_\perp, \tilde{\phi}_j\rangle_{\mathcal{H}_{\mathbb{R}^n}}\,\tilde{\phi}_j$$
$$= C_{\vartheta y}\sum_{i=1}^\infty (\lambda_i+\varepsilon)^{-1}\langle h^*, \phi_i\rangle_{\mathcal{H}_{\mathbb{R}^n}}\,\phi_i$$
$$= C_{\vartheta y}\sum_{i=1}^\infty (\lambda_i+\varepsilon)^{-1}\langle h^*, \phi_i\rangle_{\mathcal{H}_{\mathbb{R}^n}}\,\phi_i + C_{\vartheta y}\sum_{j=1}^\infty \varepsilon^{-1}\langle h^*, \tilde{\phi}_j\rangle_{\mathcal{H}_{\mathbb{R}^n}}\,\tilde{\phi}_j = C_{\vartheta y}(C_{yy}+\varepsilon I)^{-1} h^*,$$
where the second equality uses (a) to drop the $h_\perp$ components in the range of $C_{yy}$, and the third and fourth equalities use (b) to remove and re-insert the null-space terms. This completes the proof.
A.2 Proof of Theorem 2
Theorem 2 can be proven easily by combining the proof idea of Fukumizu (2015, Theorem 1.3.2) with Theorem 1; for completeness, we present the proof here.
Before presenting the proof, we introduce some notation and definitions. Below, $\|A\|$ denotes the operator norm of an operator $A$. $\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}$ denotes the tensor-product RKHS of $\mathcal{H}_{\mathbb{R}^n}$ with itself, which is the RKHS of the product kernel $k_{\mathbb{R}^n \times \mathbb{R}^n}: (\mathbb{R}^n \times \mathbb{R}^n)^2 \to \mathbb{R}$ defined by
$$k_{\mathbb{R}^n \times \mathbb{R}^n}\big((Y^n_a, \tilde{Y}^n_a), (Y^n_b, \tilde{Y}^n_b)\big) = k_{\mathbb{R}^n}(Y^n_a, Y^n_b)\,k_{\mathbb{R}^n}(\tilde{Y}^n_a, \tilde{Y}^n_b).$$
The operator $C_{yy} \otimes C_{yy}: \mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n} \to \mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}$ is the covariance operator defined by
$$(C_{yy} \otimes C_{yy})F := \mathbb{E}[k_{\mathbb{R}^n \times \mathbb{R}^n}(\cdot, (y, y'))\,F(y, y')], \quad F \in \mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n},$$
where $y'$ is an independent copy of the random variable $y$.

Note that the covariance operator $C_{\vartheta y}$ satisfies $\langle C_{\vartheta y} f, g \rangle_{\mathcal{H}_\Theta} = \mathbb{E}[f(y)g(\vartheta)]$ for any $f \in \mathcal{H}_{\mathbb{R}^n}$ and $g \in \mathcal{H}_\Theta$. Similarly, $C_{yy}$ satisfies $\langle C_{yy} f, h \rangle_{\mathcal{H}_{\mathbb{R}^n}} = \mathbb{E}[f(y)h(y)]$ for any $f, h \in \mathcal{H}_{\mathbb{R}^n}$, and $C_{yy} \otimes C_{yy}$ satisfies $\langle (C_{yy} \otimes C_{yy})F_a, F_b \rangle_{\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}} = \mathbb{E}[F_a(y, y')F_b(y, y')]$ for any $F_a, F_b \in \mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}$.
Proof. By the triangle inequality,
$$\big\|\hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1} k_{\mathbb{R}^n}(\cdot, Y^n) - \mu_{\Theta|r^*}\big\|_{\mathcal{H}_\Theta} \leq \big\|\hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1} k_{\mathbb{R}^n}(\cdot, Y^n) - C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} k_{\mathbb{R}^n}(\cdot, Y^n)\big\|_{\mathcal{H}_\Theta} + \big\|C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} k_{\mathbb{R}^n}(\cdot, Y^n) - \mu_{\Theta|r^*}\big\|_{\mathcal{H}_\Theta}$$
$$\leq \big\|\big(\hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1} - C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1}\big)k_{\mathbb{R}^n}(\cdot, Y^n)\big\|_{\mathcal{H}_\Theta} \qquad (17)$$
$$\quad + \big\|C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} k_{\mathbb{R}^n}(\cdot, r^*) - \mu_{\Theta|r^*}\big\|_{\mathcal{H}_\Theta}, \qquad (18)$$
where we used Theorem 1 in the last line. Below we derive convergence rates for the two terms (17) and (18) separately, and then determine the decay schedule of $\varepsilon_m$ as $m \to \infty$ so that the two terms have the same rate.
The first term (17). We first have
$$\hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1} - C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} = \hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1} - \hat{C}_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} + \hat{C}_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} - C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1}$$
$$= \hat{C}_{\vartheta y}\big((\hat{C}_{yy}+\varepsilon_m I)^{-1} - (C_{yy}+\varepsilon_m I)^{-1}\big) + (\hat{C}_{\vartheta y} - C_{\vartheta y})(C_{yy}+\varepsilon_m I)^{-1}$$
$$= \hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1}(C_{yy}-\hat{C}_{yy})(C_{yy}+\varepsilon_m I)^{-1} + (\hat{C}_{\vartheta y} - C_{\vartheta y})(C_{yy}+\varepsilon_m I)^{-1},$$
where the last equality follows from the formula $A^{-1} - B^{-1} = A^{-1}(B-A)B^{-1}$, which holds for any invertible operators $A$ and $B$. Note that $\hat{C}_{\vartheta y} = \hat{C}_{\vartheta\vartheta}^{1/2} W_{\vartheta y} \hat{C}_{yy}^{1/2}$ holds for some $W_{\vartheta y}: \mathcal{H}_{\mathbb{R}^n} \to \mathcal{H}_\Theta$ with $\|W_{\vartheta y}\| \leq 1$ (Baker, 1973, Theorem 1). Using this, we have
$$\|\hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1} - C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1}\| \leq \|\hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1}(C_{yy}-\hat{C}_{yy})(C_{yy}+\varepsilon_m I)^{-1}\| + \|(\hat{C}_{\vartheta y}-C_{\vartheta y})(C_{yy}+\varepsilon_m I)^{-1}\|$$
$$= \|\hat{C}_{\vartheta\vartheta}^{1/2} W_{\vartheta y} \hat{C}_{yy}^{1/2}(\hat{C}_{yy}+\varepsilon_m I)^{-1}(C_{yy}-\hat{C}_{yy})(C_{yy}+\varepsilon_m I)^{-1}\| + \|(\hat{C}_{\vartheta y}-C_{\vartheta y})(C_{yy}+\varepsilon_m I)^{-1}\|$$
$$\leq \|\hat{C}_{\vartheta\vartheta}^{1/2}\|\,\varepsilon_m^{-1/2}\,\|(C_{yy}-\hat{C}_{yy})(C_{yy}+\varepsilon_m I)^{-1}\| + \|(\hat{C}_{\vartheta y}-C_{\vartheta y})(C_{yy}+\varepsilon_m I)^{-1}\|$$
$$= O_p\!\left(\varepsilon_m^{-3/2} m^{-1/2} + \sqrt{N(\varepsilon_m)}\,\varepsilon_m^{-1} m^{-1/2}\right) \quad (m \to \infty,\ \varepsilon_m \to 0),$$
where the second inequality follows from $\|W_{\vartheta y}\| \leq 1$ and $\|\hat{C}_{yy}^{1/2}(\hat{C}_{yy}+\varepsilon_m I)^{-1}\| \leq \varepsilon_m^{-1/2}$, and the last line from Fukumizu (2015, Lemma 1.5.1); here the quantity $N(\varepsilon)$ is defined for any $\varepsilon > 0$ by $N(\varepsilon) := \mathrm{Tr}[C_{yy}(C_{yy}+\varepsilon I)^{-1}]$, where $\mathrm{Tr}(A)$ denotes the trace of an operator $A$. Under our assumption on the eigenvalue decay rate of $C_{yy}$, we have $N(\varepsilon) \leq \frac{\beta b}{b-1}\varepsilon^{-1/b}$ (Caponnetto and De Vito, 2007, Proposition 3), which implies that the above rate becomes
$$O_p\!\left(\varepsilon_m^{-3/2} m^{-1/2} + \varepsilon_m^{-1-\frac{1}{2b}} m^{-1/2}\right) \quad (m \to \infty,\ \varepsilon_m \to 0).$$
Since $m\varepsilon_m \to \infty$ and $\varepsilon_m \to 0$ (as we determine the schedule of $\varepsilon_m$ below), it is easy to show that the second term decays more slowly and thus dominates the above rate. This shows that the rate of the first term (17) is
$$\big\|\big(\hat{C}_{\vartheta y}(\hat{C}_{yy}+\varepsilon_m I)^{-1} - C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1}\big)k_{\mathbb{R}^n}(\cdot, Y^n)\big\|_{\mathcal{H}_\Theta} = O_p\!\left(\varepsilon_m^{-1-\frac{1}{2b}} m^{-1/2}\right) \quad (m \to \infty,\ \varepsilon_m \to 0).$$
The second term (18). Let $(\vartheta', y')$ be an independent copy of the random variables $(\vartheta, y)$. Note that for any $\psi \in \mathcal{H}_{\mathbb{R}^n}$, we have
$$\langle C_{\vartheta y}\psi, C_{\vartheta y}\psi \rangle_{\mathcal{H}_\Theta} = \mathbb{E}[k_\Theta(\vartheta, \vartheta')\psi(y)\psi(y')] = \mathbb{E}\big[\mathbb{E}[k_\Theta(\vartheta, \vartheta')\,|\,y, y']\,\psi(y)\psi(y')\big] = \mathbb{E}[G(y, y')\psi(y)\psi(y')] = \langle (C_{yy} \otimes C_{yy})G,\ \psi \otimes \psi \rangle_{\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}}.$$
Similarly, for any $\psi \in \mathcal{H}_{\mathbb{R}^n}$ and $\tilde{Y}^n \in \mathrm{supp}(P_{\mathbb{R}^n})$, we have
$$\langle C_{\vartheta y}\psi,\ \mathbb{E}[k_\Theta(\cdot, \vartheta)\,|\,y = \tilde{Y}^n] \rangle_{\mathcal{H}_\Theta} = \mathbb{E}\big[\psi(y')\,\mathbb{E}[k_\Theta(\vartheta', \vartheta)\,|\,y = \tilde{Y}^n]\big] = \mathbb{E}\big[\psi(y')\,\mathbb{E}[k_\Theta(\vartheta', \vartheta)\,|\,y = \tilde{Y}^n, y']\big] = \mathbb{E}[\psi(y')\,G(\tilde{Y}^n, y')] = \langle (I \otimes C_{yy})G,\ k_{\mathbb{R}^n}(\cdot, \tilde{Y}^n) \otimes \psi \rangle_{\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}},$$
where $I: \mathcal{H}_{\mathbb{R}^n} \to \mathcal{H}_{\mathbb{R}^n}$ is the identity operator and
$$\big((I \otimes C_{yy})G\big)(\cdot, *) := \mathbb{E}[G(\cdot, y')\,k_{\mathbb{R}^n}(y', *)].$$
Now let $\psi := (C_{yy}+\varepsilon_m I)^{-1} k_{\mathbb{R}^n}(\cdot, r^*)$. Recall $\mu_{\Theta|r^*} = \mathbb{E}[k_\Theta(\cdot, \vartheta)\,|\,y = r^*]$, which gives $\|\mu_{\Theta|r^*}\|^2_{\mathcal{H}_\Theta} = G(r^*, r^*)$. Then the square of (18) can be written as
$$\big\|C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} k_{\mathbb{R}^n}(\cdot, r^*) - \mu_{\Theta|r^*}\big\|^2_{\mathcal{H}_\Theta} = \|C_{\vartheta y}\psi\|^2_{\mathcal{H}_\Theta} - 2\langle C_{\vartheta y}\psi, \mu_{\Theta|r^*} \rangle_{\mathcal{H}_\Theta} + \|\mu_{\Theta|r^*}\|^2_{\mathcal{H}_\Theta}$$
$$= \big\langle (C_{yy} \otimes C_{yy})G,\ (C_{yy}+\varepsilon_m I)^{-1}k_{\mathbb{R}^n}(\cdot, r^*) \otimes (C_{yy}+\varepsilon_m I)^{-1}k_{\mathbb{R}^n}(\cdot, r^*) \big\rangle_{\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}} - 2\big\langle (I \otimes C_{yy})G,\ k_{\mathbb{R}^n}(\cdot, r^*) \otimes (C_{yy}+\varepsilon_m I)^{-1}k_{\mathbb{R}^n}(\cdot, r^*) \big\rangle_{\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}} + G(r^*, r^*)$$
$$= \big\langle \big((C_{yy}+\varepsilon_m I)^{-1}C_{yy} \otimes (C_{yy}+\varepsilon_m I)^{-1}C_{yy} - I \otimes (C_{yy}+\varepsilon_m I)^{-1}C_{yy} - (C_{yy}+\varepsilon_m I)^{-1}C_{yy} \otimes I + I \otimes I\big)G,\ k_{\mathbb{R}^n}(\cdot, r^*) \otimes k_{\mathbb{R}^n}(\cdot, r^*) \big\rangle_{\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}}$$
$$\leq \big\| \big((C_{yy}+\varepsilon_m I)^{-1}C_{yy} \otimes (C_{yy}+\varepsilon_m I)^{-1}C_{yy} - I \otimes (C_{yy}+\varepsilon_m I)^{-1}C_{yy} - (C_{yy}+\varepsilon_m I)^{-1}C_{yy} \otimes I + I \otimes I\big)G \big\|_{\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}}\ \big\|k_{\mathbb{R}^n}(\cdot, r^*) \otimes k_{\mathbb{R}^n}(\cdot, r^*)\big\|_{\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}}.$$
Let $(\phi_i)_{i=1}^\infty \subset \mathcal{H}_{\mathbb{R}^n}$ be the eigenfunctions of $C_{yy}$ and $(\lambda_i)_{i=1}^\infty$ the associated eigenvalues, with $\lambda_1 \geq \lambda_2 \geq \cdots \geq 0$. Then the eigenfunctions and eigenvalues of the operator $C_{yy} \otimes C_{yy}$ are given by $(\phi_i \otimes \phi_j)_{i,j=1}^\infty$ and $(\lambda_i \lambda_j)_{i,j=1}^\infty$, respectively. Note that $(C_{yy}+\varepsilon_m I)^{-1}C_{yy}^2\,\phi_i = \frac{\lambda_i^2}{\lambda_i + \varepsilon_m}\phi_i$. Note also that our assumption $G \in \mathrm{Range}(C_{yy} \otimes C_{yy})$ implies that there exists some $\xi \in \mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}$ such that $G = (C_{yy} \otimes C_{yy})\xi$. Using these identities and Parseval's identity, we have
$$\big\| \big((C_{yy}+\varepsilon_m I)^{-1}C_{yy} \otimes (C_{yy}+\varepsilon_m I)^{-1}C_{yy} - I \otimes (C_{yy}+\varepsilon_m I)^{-1}C_{yy} - (C_{yy}+\varepsilon_m I)^{-1}C_{yy} \otimes I + I \otimes I\big)(C_{yy} \otimes C_{yy})\xi \big\|^2_{\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}}$$
$$= \sum_{i,j}\left( \frac{\lambda_i^2}{\lambda_i+\varepsilon_m}\cdot\frac{\lambda_j^2}{\lambda_j+\varepsilon_m} - \lambda_i\,\frac{\lambda_j^2}{\lambda_j+\varepsilon_m} - \frac{\lambda_i^2}{\lambda_i+\varepsilon_m}\,\lambda_j + \lambda_i\lambda_j \right)^2 \langle \phi_i \otimes \phi_j, \xi \rangle^2_{\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}}$$
$$= \sum_{i,j}\left( \frac{\varepsilon_m^2\,\lambda_i\lambda_j}{(\lambda_i+\varepsilon_m)(\lambda_j+\varepsilon_m)} \right)^2 \langle \phi_i \otimes \phi_j, \xi \rangle^2_{\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}} \leq \varepsilon_m^4\,\|\xi\|^2_{\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}}.$$
Figure 5: Illustration of the manufacturing process (metal processing factory) for producing valves.

Table 1: Summary of the true and estimated parameters for the experiment on the sophisticated simulation model. $T_{BF}$ represents the mean time between failures, and $T_R$ the mode of the repair time for each process. The parameter estimates are the posterior means of the generated parameters, averaged over 10 independent trials, with the corresponding standard deviations shown in brackets.

From this, the second term (18) is upper-bounded as
$$\big\|C_{\vartheta y}(C_{yy}+\varepsilon_m I)^{-1} k_{\mathbb{R}^n}(\cdot, r^*) - \mu_{\Theta|r^*}\big\|_{\mathcal{H}_\Theta} \leq \varepsilon_m\,\|\xi\|^{1/2}_{\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}}\,\big\|k_{\mathbb{R}^n}(\cdot, r^*) \otimes k_{\mathbb{R}^n}(\cdot, r^*)\big\|^{1/2}_{\mathcal{H}_{\mathbb{R}^n} \otimes \mathcal{H}_{\mathbb{R}^n}} = O(\varepsilon_m) \quad (m \to \infty,\ \varepsilon_m \to 0).$$
The obtained rates for the two terms (17) and (18) can be balanced by setting $\varepsilon_m = Cm^{-\frac{b}{1+4b}}$ for any fixed constant $C > 0$, and this gives the rate in the assertion.
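For the reader's convenience, the balancing step (a routine computation not spelled out above) proceeds as
$$\varepsilon_m^{-1-\frac{1}{2b}}\,m^{-\frac{1}{2}} \asymp \varepsilon_m \;\Longleftrightarrow\; \varepsilon_m^{\frac{4b+1}{2b}} \asymp m^{-\frac{1}{2}} \;\Longleftrightarrow\; \varepsilon_m \asymp m^{-\frac{b}{1+4b}},$$
and substituting this schedule back into either term yields the overall rate $O_p\big(m^{-\frac{b}{1+4b}}\big)$.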
B Experiments on Sophisticated Production Simulator
We performed experiments on a more sophisticated, and more complicated, simulator of industrial manufacturing processes than the one in Sec. 5.2. We used a simulation model constructed with the software package WITNESS (https://www.lanner.com/en-us/), described in Fig. 5. It models a metal processing factory producing valves (products) from metal pipes, with six primary processes: 1) "saw", 2) "coat", 3) "inspection", 4) "harden", 5) "grind", and 6) "clean". Each process consists of complicated procedures such as preparation, waiting, and machine repair in case of a failure.
B.1 Setting
As in Sec. 5.2, the input space is X = (0, ∞); each input x represents the number of products to be made, and the resulting output y(x) = R(x) + e(x) is the length of time needed to produce that number of products.
The mapping x → r(x, θ) consists of the above six processes, and each of them contains two parameters for machine downtime due to failures: the mean time between failures (T_BF) and the mode of the repair time (T_R). q_0(x) and q_1(x) are the input densities for training and prediction, respectively.
Thus, in total, there are 12 parameters, i.e., θ = (θ_1, ..., θ_12) ∈ Θ ⊂ R^12, where θ_{2j-1} = T^{(j)}_{BF} and θ_{2j} = T^{(j)}_{R} for the j-th process (j = 1, ..., 6); see Table 1. In each process (say the j-th), the time between two failures follows the negative exponential distribution with mean time θ_{2j-1} = T^{(j)}_{BF}, and the time required for repair follows the Erlang distribution with mode of repair time θ_{2j} = T^{(j)}_{R} and shape parameter 3. We set the prior distribution π(θ) by taking the uniform distribution over [0, 300] for each θ_{2j-1} and over [0, 30] for each θ_{2j} (j = 1, ..., 6), and forming the product of these uniform distributions over all the parameters.
In a similar manner to the experiment in Section 5.2, we defined the regression function R(x) of the data generating process as R(x) = r(x, θ^(0)) for x < 140 and R(x) = r(x, θ^(1)) for x ≥ 140, where θ^(0) and θ^(1) are the "true" parameters for training and prediction, respectively, defined in Table 1. We set the input densities q_0(x) and q_1(x) for training and prediction to N(130, 15) and N(160, 12), respectively. The size of the training data is n = 50, and the number of simulations is m = 400. We set the noise process of the data generating process to be independent Gaussian, e(x) = ξ with ξ ∼ N(0, 300). We set the constants σ², σ²_Θ > 0 in the kernels k_{R^n} and k_Θ by the median heuristic using the simulated pairs (θ̃_j, Ȳ^n_j)_{j=1}^m, and the regularization constant to ε = 0.1.
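The median heuristic mentioned above is a common recipe for Gaussian kernel bandwidths; a minimal Python sketch (illustrative only, since the paper does not spell out this computation, and the array names are hypothetical):

import numpy as np
from scipy.spatial.distance import pdist

def median_heuristic(samples):
    # Set sigma^2 to the squared median of all pairwise Euclidean distances.
    X = np.asarray(samples, dtype=float)
    X = X.reshape(len(X), -1)          # (num_samples, dim)
    return np.median(pdist(X)) ** 2

# Usage with hypothetical arrays, one bandwidth per kernel as in the experiments:
# sigma2_Theta = median_heuristic(thetas)    # thetas:  (m, 12) simulated parameters
# sigma2_Y     = median_heuristic(outputs)   # outputs: (m, n) simulated output vectors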
B.1.1 Details of the Simulation Model
We explain below qualitative details of the six processes in the simulation model constructed with the WITNESS software package.
Cutting process: The manufacturing process begins with the arrival of pipes, all of which have the same diameter and a length of 30 cm. These pipes arrive at a fixed time interval, depending on the vendor's supply schedule. Each pipe is then cut into 10-cm sections along its length, resulting in three pieces. A worker is assigned to this process to perform changeover, repair, and disconnection operations. This worker takes a break once every eight hours. The small pieces obtained are then transferred to the coating process by a conveyor belt.
Coating process: The small pieces are coated for protection by a coating machine. The machine processes six pieces in a batch manner at once. A coating material must have been prepared in the coating machine, before those pieces have arrived; otherwise, the quality of those pieces will be degraded by heat. When the pieces ride on the belt conveyor, a sensor detects them and the coating material is prepared.
Inspection process: After the coating process, each piece is placed in an inspection waiting buffer. An inspector picks up those pieces one by one from the waiting buffer, and inspects the coating quality. If a piece fails the quality inspection, the inspector places that piece in the recoating waiting buffer. The coating machine must process the pieces of the recoating buffer preferentially. When pieces pass the quality inspection, the inspector sends those pieces to the curing step.
Harden process: In the harden (quenching) process, up to 10 pieces are processed simultaneously on a first-in, first-out basis, and each piece is quenched for at least one hour.
Grind process: The quenched pieces are polished to satisfy a customer's specifications. Two polishing machines with the same priority are available. Each machine uses special jigs to process four pieces simultaneously, and produces two different types of valves. Further, 10 jigs exist in the system, and when not in use, they are placed in a jig storage buffer.
A loader fixes four pieces with a jig and sends it to the polishing machine. The polishing machine sends the jig and the four pieces to an unloader, once polishing is done. The unloader sends the finished pieces to a valve storage area and the jig to a jig return area. The two types of valves are separated, and placed in a dedicated valve storage buffer. When a jig is required to be used again, it is returned by a jig return conveyor to the jig storage buffer.
Cleaning process: Valves issued from a valve storage area are cleaned before shipment. In the washing machine, five stations are available where valves can be placed one at a time, and the valves are cleaned in these stations. Up to 10 valves of each type can be washed simultaneously. When the valve type is changed, the cleaning head must be replaced.
B.2 Results
The true 12 parameters are estimated by the posterior means of the generated parameters, and their averages and standard deviations over 10 independent trials are shown in the bottom rows of Table 1. Most of the true parameters are recovered well in both the ordinary regression and covariate shift settings. Fig. 6 (A) and (B) show the predictive outputs and their means given by the proposed method, which fit well in both the ordinary and covariate shift settings. The RMSE of the predictive outputs of the proposed method with covariate shift adaptation, calculated on test data generated from q_1(x), is 1.48 × 10². On the other hand, the RMSE on the same test data for the proposed method without covariate shift adaptation (i.e., setting β(X_i) = 1, i = 1, ..., n in the importance-weighted kernel) is 1.64 × 10³. This confirms that the use of the importance-weighted kernel indeed works for covariate shift adaptation.
In this experiment, approximately 3 s were required for one evaluation of the simulation model r(x, θ) in the authors' computational environment. Thus, the dominant factor in the computational cost was that of the simulations.
| 10,555 |
1809.08304
|
2890241187
|
Recent progress in logic programming (e.g., the development of the Answer Set Programming paradigm) has made it possible to teach it to general undergraduate and even middle/high school students. Given the limited exposure of these students to computer science, the complexity of downloading, installing and using tools for writing logic programs could be a major barrier for logic programming to reach a much wider audience. We developed onlineSPARC, an online answer set programming environment with a self-contained file system and a simple interface. It allows users to type/edit logic programs and perform several tasks over programs, including asking a query to a program, getting the answer sets of a program, and producing a drawing/animation based on the answer sets of a program.
|
As ASP has been applied to more and more problems, the importance of ASP software development tools has been realized by the community. Some integrated development environment (IDE) tools, e.g., APE @cite_12 , ASPIDE @cite_20 , iGROM @cite_16 and SeaLion @cite_15 have previously been developed. They provide a graphical user interface for users to carry out a sequence of tasks from editing an ASP program to debugging that program, easing the use of ASP significantly. However, the target audience of these tools is experienced software developers. Compared with the existing environments, our environment is online, self-contained (i.e., fully independent of the users' local computers) and provides a very simple interface, focusing on teaching only. The interface is operable by any person who is able to use a typical web site and traverse a file system.
|
{
"abstract": [
"We report about the current state and designated features of the tool SeaLion, aimed to serve as an integrated development environment (IDE) for answer-set programming (ASP). A main goal of SeaLion is to provide a user-friendly environment for supporting a developer to write, evaluate, debug, and test answer-set programs. To this end, new support techniques have to be developed that suit the requirements of the answer-set semantics and meet the constraints of practical applicability. In this respect, SeaLion benefits from the research results of a project on methods and methodologies for answer-set program development in whose context SeaLion is realised. Currently, the tool provides source-code editors for the languages of Gringo and DLV that offer syntax highlighting, syntax checking, refactoring functionality, and a visual program outline. Further implemented features are a documentation generator, support for external solvers, and visualisation as well as visual editing of answer sets. SeaLion comes as a plugin of the popular Eclipse platform and provides itself interfaces for future extensions of the IDE.",
"Answer Set Programming (ASP) is a truly-declarative programming paradigm proposed in the area of non-monotonic reasoning and logic programming. In the last few years, several tools for ASP-program development have been proposed, including (more or less advanced) editors and debuggers. However, ASP still lacks an Integrated Development Environment (IDE) supporting the entire life-cycle of ASP development, from (assisted) programs editing to application deployment. In this paper we present ASPIDE, a comprehensive IDE for ASP, integrating a cutting-edge editing tool (featuring dynamic syntax highlighting, on-line syntax correction, autocompletion, code-templates, quick-fixes, refactoring, etc.) with a collection of user-friendly graphical tools for program composition, debugging, profiling, database access, solver execution configuration and output-handling.",
"It has been recognised that better programming tools are required to support the logic programming paradigm of Answer Set Programming (ASP), especially when larger scale applications need to be developed. In order to meet this demand, the aspects of programming in ASP that require better support need to be investigated, and suitable tools to support them identified and implemented. In this paper we detail an exploratory development approach to implementing an Integrated Development Environment (IDE) for ASP, the AnsProlog* Programming Environment (APE). APE is implemented as a plug-in for the Eclipse platform. Given that an IDE is itself composed of a set of programming tools, this approach is used to identify a set of tool requirements for ASP, together with suggestions for improvements to existing tools and programming practices.",
""
],
"cite_N": [
"@cite_15",
"@cite_20",
"@cite_12",
"@cite_16"
],
"mid": [
"1654219887",
"1512925707",
"33233687",
""
]
}
|
onlineSPARC: a Programming Environment for Answer Set Programming
|
Answer Set Programming (ASP) (Gelfond and Kahl 2014) is becoming a dominant language in the knowledge representation community (McIlraith 2011; Kowalski 2014) because it has offered elegant and effective solutions not only to classical Artificial Intelligence problems but also to many challenging application problems. Thanks to its simplicity and clarity in both informal and formal semantics, ASP provides a "natural" modeling of many problems. At the same time, the fully declarative nature of ASP also clears a major barrier to teaching logic programming, as the procedural features of classical logic programming systems such as PROLOG are regarded as a source of misconceptions in students' learning of logic programming (Mendelsohn et al. 1990).
ASP has been taught to undergraduate students in the course of Artificial Intelligence at Texas Tech for more than a decade. We believe ASP has become mature enough to be used to introduce programming and problem solving to high school students. We have offered many sessions to students at New Deal High School and a three-week-long ASP course to high school students involved in the TexPREP program (http://www.math.ttu.edu/texprep/). In our teaching practice, we found that ASP is well accepted by the students and that the students were able to focus on problem solving instead of the language itself. The students were able to write programs to answer questions about the relationships (e.g., parent, ancestor) among family members and to find solutions for Sudoku problems.
In our teaching practices, particularly when approaching high school students, we identified two challenges. One is the installation, management and use of the solvers and their development tools. The other is to find a more vivid and intuitive presentation of results (answer sets) of logic programs to inspire students' interest in learning.
To overcome these challenges we have designed and built onlineSPARC, an online development environment for ASP. Its URL is http://goo.gl/UwJ7Zj. The environment eliminates software installation and management, and its very simple interface also eases the use of the software. Specifically, it provides an easy to use editor for users to edit their programs, an online file system for them to store and retrieve their programs, and a few simple buttons allowing the users to query the program and to get answer sets of the program. The query capacity can also help a teacher to quickly raise students' interest in ASP based problem solving and modeling. The environment uses SPARC as the ASP language. SPARC is designed to further facilitate the teaching of logic programming by introducing sorts (or types), which simplify the difficult programming concept of domain variables in classical ASP systems such as Clingo (Gebser et al. 2011) and help programmers to identify errors early thanks to sort information. An initial experiment of teaching SPARC to high school students is promising (Reyes et al. 2016).
For the second challenge, onlineSPARC introduces drawing and animation predicates for students to present their solutions to problems in a more visually straightforward and exciting manner (instead of answer sets, which are simply sets of literals). As an example, with this facility, students can show a straightforward visual solution to a Sudoku problem. We also note observations in the literature that multimedia and visualization play a positive role in promoting students' learning (Guzdial 2001; Clark et al. 2009).
We have been using this environment in our teaching of AI classes at both undergraduate and graduate levels and in our outreach to middle/high schools since 2016. Preparation of documents on installation or management of the software is no longer needed. We got very few questions from students on the use of the environment, and the online system is rarely down. With onlineSPARC, one of our Master students was able to offer a short-term lesson on ASP based modeling by himself to New Deal High School students. All his preparation was on the teaching materials, but not on the software or its use.
The rest of the paper is organized as follows. SPARC is recalled in Section 2. The design, implementation and a preliminary test of the online environment are presented in Section 3. The design and rendering of the drawing and animation predicates are presented in Section 4. Related work is reviewed in Section 5, and the paper is concluded in Section 6.
SPARC - an Answer Set Programming Language
SPARC is an Answer Set Programming (ASP) language which allows for the explicit representation of sorts. There are many excellent introductory materials on ASP, including (Brewka et al. 2011) and (Gelfond and Kahl 2014); we give a brief introduction to SPARC here. The syntax and semantics of SPARC can be found in the SPARC manual, which, together with the solver, is freely available (Balai 2013).
A SPARC program consists of three sections: sorts, predicates and rules. We will use the map coloring problem as an example to illustrate SPARC: can the USA map be colored using red, green and blue such that no two neighboring states have the same color?
The first step is to identify the objects and their sorts in the problem. For example, the three colors are important and they form the sort of colors for this problem. In SPARC syntax, we use #color = {red, green, blue} to represent the objects and their sort. The sorts section of the SPARC program is

sorts % the keyword to start the sorts section
#color = {red,green,blue}.
#state = {texas, colorado, newMexico, ......}.
The next step is to identify the relations in the problem and declare, in the predicates section, the sorts of the parameters of the predicates corresponding to those relations. The predicates section of the program is

predicates % the keyword to start the predicates section
% neighbor(X, Y) denotes that state X is a neighbor of state Y.
neighbor(#state, #state).
% ofColor(X, C) denotes that state X has color C.
ofColor(#state, #color).
The last step is to identify the knowledge needed in the problem and translate it into rules. The rules section of a SPARC program consists of rules in the typical ASP syntax; for the map coloring problem it includes the following.
rules % the keyword to start the rules section
% Texas is a neighbor of Colorado
neighbor(texas, colorado).
% The neighbor relation is symmetric
neighbor(S1, S2) :- neighbor(S2, S1).
% Any state has one of the three colors: red, green and blue
ofColor(S, red) | ofColor(S, green) | ofColor(S, blue).
% No two neighbors have the same color
:- ofColor(S1, C), ofColor(S2, C), neighbor(S1, S2), S1 != S2.
% Every state has at most one color
:- ofColor(S, C1), ofColor(S, C2), C1 != C2.
The current SPARC solver defines a query to be either an atom or the negation of an atom. Given an atom a, the complement of a is ¬a, and the complement of ¬a is a. The answer to a ground query l with respect to a program P is yes if l is in every answer set of P, no if the complement of l is in every answer set of P, and unknown otherwise. An answer to a query with variables is a set of ground terms for the variables in the query such that the answer to the query resulting from replacing the variables by the corresponding ground terms is yes. Formal definitions of queries and answers to queries can be found in Section 2.2 of (Gelfond and Kahl 2014).
The SPARC solver is able to answer queries with respect to a program and to compute one answer set or all answer sets of a program. The SPARC solver translates a SPARC program into a DLV or Clingo program and then uses the corresponding solver to find the answer sets of the resulting program. When the SPARC solver answers queries, it computes all answer sets of the given program. onlineSPARC directly calls the SPARC solver to get answers to a query or to get answer sets.
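As an illustration of what "directly calls the SPARC solver" amounts to, a minimal Python sketch is given below; the jar name sparc.jar, the -A flag, and the .sp suffix are assumptions for illustration rather than the documented interface of the solver:

import subprocess, tempfile, os

def get_answer_sets(program_text, timeout_s=20):
    # Write the program to a temporary file and run the solver on it.
    with tempfile.NamedTemporaryFile("w", suffix=".sp", delete=False) as f:
        f.write(program_text)
        path = f.name
    try:
        result = subprocess.run(
            ["java", "-jar", "sparc.jar", path, "-A"],   # assumed invocation
            capture_output=True, text=True, timeout=timeout_s)
        return result.stdout
    finally:
        os.unlink(path)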
Online Development Environment Design and Implementation
Environment Design
The design principle we followed is that the environment, with the simplest possible interface, should provide full support, from writing programs to getting the answer sets of a program, in order to help with the education of Answer Set Programming.
The design of the interface is shown in Figure 1. When logged in, it consists of four components: 1) the editor to edit a program, 2) the file navigation system, 3) the operations (including query answering, obtaining answer sets and executing actions in the answer sets) over the program, and 4) the result/display area. One can edit a SPARC program directly inside the editor, which has syntax highlighting features (area 1). The file inside the editor can be saved by clicking the "Save" button (2.4). The files and folders are displayed in area 2.1. The user can traverse them using the mouse, as in a file system on a typical operating system. Files can be deleted and their names can be changed. To create a folder or a file, one clicks the "New" button (2.3). The panel showing files/folders can be toggled by clicking the "Directory" button (2.2) (so that users can have more space for the editing or result area (4)). To ask queries about the program inside the editor, one can type a query in the text box (3.1) and then press the "Submit" button (3.1); the answer to the query will be shown in area 4. To see the answer sets of a program, one clicks the "Get Answer Sets" button (3.2) and the result will be shown in area (4). When the "Execute" button (3.3) is clicked, a list of buttons (with number labels) will be shown; when a button is clicked, the atoms for drawing and animation in the answer set of the program corresponding to the button label will be rendered in the display area (4).
A user can only access the full interface discussed above after login. The user will log out by clicking the "Logout" button (5). Without login, the interface is much simpler, with all the file navigation related functionalities invisible. Such an interface is convenient for a quick test or demo of a SPARC program.
Implementation
The architecture of the online environment (see Fig 2) follows that of a typical web application. It consists of a front-end component and a back-end component. The front-end provides the user interface and sends users' requests to the back-end, while the back-end fulfills the requests and returns results, if needed, to the front-end. After getting the results from the back-end, the front-end updates the interface correspondingly (e.g., displays query answers in the result area). Details about the components and their interactions are given below.
Front-end. The front-end is implemented with HTML and JavaScript. The editor in our front-end uses ACE which is an embeddable (to any web page) code editor written in JavaScript (https://ace.c9.io/). The panel for file/folder navigation is based on JavaScript code by Yuez.me.
Back-end and Interactions between the Front-end and the Back-end. The backend is mainly implemented using PHP and is hosted on the server side. It has three components: 1) file system management, 2) an inference engine (SPARC solver) and 3) processors for fulfilling user interface functionalities in terms of the results from the inference engine.
The file system management uses a database to manage the files and folders of all users of the environment. The Entity/Relationship (ER) diagram of the system is shown in Fig 3. The SPARC files are saved in the server file system, not in a database table. Sharing is managed by the sharing information in the relevant database tables. In our implementation, we use a MySQL database system. The file management system receives requests such as creating a new file/folder, deleting a file, saving a file, getting the files and folders, etc., from the front-end. It then updates the tables and the local file system correspondingly and returns the needed results to the front-end. After the front-end gets the results, it updates the graphical user interface (e.g., displays the program returned from the back-end inside the editor) if needed.

Fig. 2. The architecture for a simple use case: submitting a SPARC program to get the answer sets. First, a SPARC program is typed in the editor of the front-end. After the "Get Answer Sets" button is pressed, the program and the command for getting answer sets are sent to the request handler in the back-end. The request handler runs the SPARC solver with the program and pipes the output (answer sets of the program) into the answer sets processor. The processor first formats the answer sets into XML and then employs XSLT to translate the XML format into an HTML ordered list element (i.e., <ol>). The <ol> element encodes the answer sets for a user friendly display. The <ol> element is then sent to the front-end, and the front-end inserts the <ol> element into the <div> element of the web page of the user interface. Because of the change to the web page, the browser will re-render the page and the answer sets will be displayed in the result area of the user interface. For other functionalities (e.g., answering a query) in the user interface, the answer sets, the command and the program are handled by their corresponding processor.

Fig. 3. The Entity/Relationship (ER) diagram for file/folder management. Most names have a straightforward meaning. Folderurl and Fileurl refer to the full path of the folder/file in the file system.
The answer set processors include those for generating all answer sets of a SPARC program (see details in the caption of Figure 2), for rendering a drawing/animation (see Section 4.2), and for answering queries. The processor for answering queries calls the SPARC solver to get the answers, translates the answers into an HTML element and passes the element to the front-end.
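The formatting step can be pictured with the following Python sketch (illustrative only: the real processor is written in PHP and goes through XML and XSLT, and the one-answer-set-per-line parsing is an assumption about the solver's output format):

import html

def answer_sets_to_html(solver_output):
    # Turn each non-empty output line (one answer set) into a list item of
    # the <ol> element that the front-end inserts into the result <div>.
    answer_sets = [line.strip() for line in solver_output.splitlines() if line.strip()]
    items = "".join("<li>{}</li>".format(html.escape(s)) for s in answer_sets)
    return "<ol>{}</ol>".format(items)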
Preliminary onlineSPARC Performance Test
Given that the worst-case complexity of answering a query or getting an answer set of a program is exponential, it is interesting to know how well onlineSPARC can support the programming activities commonly used for teaching and learning purposes. First, it is hard, with our limited resources, for onlineSPARC to support programs related to hard search problems that need a lot of computation time; thus onlineSPARC is not designed for an audience aiming to solve computationally hard problems and/or manage very complex programs. Second, onlineSPARC should provide strong support for typical programs used in teaching (for general students, including middle and high school students) and learning.
To obtain a preliminary idea of problems with which to test the performance of onlineSPARC, we consider the textbook by Gelfond and Kahl (2014), which is a good candidate for teaching knowledge representation methodologies based on ASP. We select, from there, the programs for the family problem and the Sudoku problem, which are widely used in teaching ASP. We also select the graph (map) coloring problem (also a popular problem in teaching ASP and constraints in AI). Programs for teaching/learning purposes usually involve smaller data sets. To make the problem more challenging, we use the map data from ASP Competition 2015.
When we consider the performance of onlineSPARC, we are mainly interested in how many users it can support in a given time window and in its response time to users' requests. Response time is defined as the difference between the time when a request was sent and the time when the response has been fully received. We employ the tool JMeter (https://jmeter.apache.org/) to carry out the performance tests. onlineSPARC is installed on a Dell PowerEdge R330 rack server with a CPU E3-1230 (v5, 3.4 GHz, 8 MB cache, 4 cores/8 threads), 32 GB of memory and CentOS 7.4. The server runs Apache 2.4.6 (HTTP server) and MySQL 14.14 (database server).
Our first test runs 300 requests (the first 100 requests in 10 seconds, the second 100 requests in 300 seconds and the third 100 requests in 100 seconds) for the map coloring problem. Each request is to get all answer sets for the map coloring problem, for which the SPARC solver returns a message that there are too many answer sets. This test crashed the server and we had to reboot it manually. For a single request with no other requests present, the response time is 2.2 seconds. It can be inferred from this test that onlineSPARC has limited capacity for solving hard problems in a relatively short time window. Given the size of this problem (with more than 1500 edges), it is more suitable for a homework assignment than for an in-class assignment (where the time window is shorter). Given a larger time window (at the level of days), onlineSPARC (on a single server) should still support a decent number of students.
To get an idea of how window size may impact onlineSPARC performance, we fix the number of requests at 100 and change the time window. The first time window is 240s. Since the requests are evenly distributed in the time window, each request has enough time to occupy the server by itself. The average response time for each request is 2.2s. When the time window becomes 60s, the maximum number of concurrent requests being attended to by the server becomes 18, and the average response time becomes 13s. When the time window becomes 30s, the maximum number of concurrent requests is 91, and the average response time becomes 1023s.
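The tests above were run with JMeter; a comparable load test can be sketched in Python as follows (the endpoint URL and form field are assumptions, and requests is the third-party HTTP client library):

import time
from concurrent.futures import ThreadPoolExecutor
import requests

def timed_request(url, program):
    # One load-test request; returns the response time in seconds.
    t0 = time.monotonic()
    requests.post(url, data={"program": program})
    return time.monotonic() - t0

def load_test(url, program, n_requests=100, window_s=60):
    # Spread n_requests evenly over window_s seconds, JMeter-style,
    # and report the average response time.
    gap = window_s / n_requests
    with ThreadPoolExecutor(max_workers=n_requests) as pool:
        futures = []
        for _ in range(n_requests):
            futures.append(pool.submit(timed_request, url, program))
            time.sleep(gap)          # even spacing within the window
        times = [f.result() for f in futures]
    return sum(times) / len(times)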
When teaching in middle school and in high school, it seems necessary to combine lecture and lab time, with the teacher demonstrating and the students practicing together. So, it must be possible for both teachers and students to work on the programs at the same time within a short time window. onlineSPARC is expected to provide good support for the programs that are likely to be used in class teaching.
We first consider the Sudoku problem. It is possible for a teacher to demo it, but there is some chance a group of students will run it at the same time during the class. This occurrence is fine, since our tests show that 100 requests in a time window of 3s have an average response time of 7.7s.
We next consider the family problem. In our teaching experience, there has been a good chance that students will practice (parts of) this problem in class and send many requests in a relatively short period of time. With a time window of 3s, 500 requests can be processed by onlineSPARC with an average response time of 13.5s. JMeter seems to have a limit on the number of requests it can send (in a non-distributed environment), so we did not test larger numbers of requests.
In summary, onlineSPARC (even on a single server with limited capacity, like our PowerEdge R330) can support a good number of students in their teaching and learning activities during class (mainly because the programs used during class are computationally cheap). For harder homework problems that need more computation time than class problems, thanks to the longer time window during which students may be active, onlineSPARC may be able to support a decent number of students. In the case of the map coloring problem, assuming students work evenly over a period of 8 hours, onlineSPARC can support at least 13000 requests (under certain assumptions on the programs). On the other hand, as shown by the first test, which crashed the server, a smaller number of requests (at the level of tens or hundreds) for solving very hard problems, even in a time window at the level of days, could make the server unstable. To reduce the potential negative impact of hard problems on the server, in the current onlineSPARC the maximal timeout is 50s, with the default being 20s. An instructor using onlineSPARC should be aware of this limitation.
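As a sanity check, the 13000-request figure is what the measured single-request response time of 2.2 s implies for an 8-hour window:
$$\frac{8\ \mathrm{h} \times 3600\ \mathrm{s/h}}{2.2\ \mathrm{s/request}} \approx 13{,}000\ \mathrm{requests}.$$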
Drawing and Animation Design and Implementation
We first present our design of the drawing/animation predicates in Section 4.1 and its implementation in Section 4.2. A full SPARC example of animation is discussed in Section 4.3. In Section 4.4, we present an "extension" of SPARC (implemented by preprocessing rather than natively) that allows more teaching- and learning-friendly programming; we also show example programs there. In the last subsection, we provide an online link to a set of drawing/animation programs in SPARC or the "extended" SPARC.
Drawing and Animation Design
To allow programmers to create drawings and animations using SPARC, we design two predicates, called display predicates: one for drawing and one for animation. The atoms using these predicates are called display atoms. To use these atoms in a SPARC program, a programmer needs to include sorts (e.g., sorts of colors, fonts and numbers) and the corresponding predicate declarations, which are predefined (see Appendix). In the following, we focus only on the atoms and their use for drawing and animation. More details can be found in Section 4.3.
Drawing.
A drawing predicate is of the form draw(c), where c is called a drawing command. Intuitively, an atom containing this predicate draws text and graphics as instructed by the command c. By drawing a picture, we mean that a shape is drawn with a style. We define a shape as either text or a geometric line or curve. A style specifies the graphical properties of the shape it is applied to; for example, visual properties include color, thickness, and font. For modularity, we introduce style names, which are labels that can be associated with different styles so that the same style may be reused without being redefined. A drawing is completed by associating this shape and style with a certain position on the canvas, which is simply the display board. Note that the origin of the coordinate system is at the top left corner of the canvas.
Here is an example of drawing a red line from point (0, 0) to (2, 2). First, we introduce a style name redline and associate it with the red color by the style command line_color(redline, red). With this defined style, we then draw the red line by the shape command draw_line(redline, 0, 0, 2, 2). Style commands and shape commands constitute all drawing commands. The SPARC program rules to draw the given line are

draw(line_color(redline, red)).
draw(draw_line(redline, 0, 0, 2, 2)).
We now present the possible style and shape commands recognized in atoms like the two above.
The style commands of our system include the following:
• line_width(sn, t) specifies that lines drawn with style name sn should be drawn with line thickness t.
• text_font(sn, fs, ff) specifies that text drawn with style name sn should be drawn with font size fs and font family ff.
• line_cap(sn, c) specifies that lines drawn with style name sn should be drawn with a capping c, such as an arrowhead.
• text_align(sn, al) specifies that text drawn with style name sn should be drawn with alignment al on the page.
• line_color(sn, c) specifies that lines drawn with style name sn should be drawn with color c.
• text_color(sn, c) specifies that text drawn with style name sn should be drawn with color c.
The shape commands include the following:
• draw_line(sn, xs, ys, xe, ye) draws a line from starting point (xs, ys) to ending point (xe, ye) with style name sn;
• draw_quad_curve(sn, xs, ys, bx, by, xe, ye) draws a quadratic Bezier curve, with style name sn, from the start point (xs, ys) to the end point (xe, ye) using the control point (bx, by);
• draw_bezier_curve(sn, xs, ys, b1x, b1y, b2x, b2y, xe, ye) draws a cubic Bezier curve, using style name sn, from the start point (xs, ys) to the end point (xe, ye) using the control points (b1x, b1y) and (b2x, b2y);
• draw_arc_curve(sn, xs, ys, r, sa, se) draws an arc using style name sn; the arc is centered at (xs, ys) with radius r, starting at angle sa and ending at angle se, going in the clockwise direction;
• draw_text(sn, x, xs, ys) prints the value of x as text to the screen starting from point (xs, ys), using style name sn.
Animation. A frame, a basic concept in animation, is defined as a drawing. When a sequence of frames, whose contents are normally related, is shown on the screen in rapid succession (usually 24, 25, 30, or 60 frames per second), a seemingly fluid animation is created. To design an animation, a designer specifies the drawing for each frame.
Given that the order of frames matters, we give each frame a value equal to its index in the sequence of frames. We introduce the animation predicate animate(c, i), which indicates a desire to draw a picture at the i-th frame using drawing command c. The index of the first frame of an animation is always 0. The frames are shown on the screen at a rate of 60 frames per second, and the i-th frame is shown at time i/60 seconds (from the start of the animation) for a duration of 1/60 of a second.
As an example, we elaborate on an animation where a red box with a side length of 10 pixels moves its top left coordinate from the point (1, 70) to (200, 70). We create 200 frames, with the box at the point (i + 1, 70) in the i-th frame.
Let the variable I be of a sort called frame, defined from 0 to some large number. In every frame I, we specify the drawing style redline:

animate(line_color(redline, red), I).

To make a box at the I-th frame, we need to draw the box's four sides using the style associated with style name redline. The four sides of the box at frame I are: bottom, from (I + 1, 70) to (I + 11, 70); left, from (I + 1, 70) to (I + 1, 60); top, from (I + 1, 60) to (I + 11, 60); and right, from (I + 11, 60) to (I + 11, 70). Hence we have the rules

animate(draw_line(redline, I+1, 70, I+11, 70), I).
animate(draw_line(redline, I+1, 70, I+1, 60), I).
animate(draw_line(redline, I+1, 60, I+11, 60), I).
animate(draw_line(redline, I+11, 60, I+11, 70), I).

Note that the drawing predicate produces the intended drawing throughout all the frames, creating a static drawing. The animation predicate, on the other hand, produces a drawing only for a specific frame.
Finally, we use the atom draw(set_number_of_frames(N)) to specify that the number of frames of the animation is N.
Implementation
Informally, to produce a drawing or animation, a programmer writes a SPARC program using display predicates. The answer sets of the SPARC program are obtained by calling the SPARC solver. Our rendering algorithm extracts all the display atoms (intuitively, instructions for drawing) from an answer set and translates them into a JavaScript function (using the HTML5 Canvas API) and a button element. The JavaScript and the button are used by the browser to show the drawing or animation.
The architecture for rendering the drawing and animation specified by a SPARC program is shown in Fig 4. The main rendering component is the processor inside the dashed box in the back-end. As a simple use case, a user types, in the editor in the front-end, a SPARC program specifying the intended drawing or animation using the predicates introduced in the previous section. The user then presses the "Execute" button in the web page. The command and the SPARC program are sent to the request handler at the back-end. The handler runs the SPARC solver with the program and pipes the output (answer sets of the program) into the drawing and animation processor. The processor first checks whether there are any errors in the display atoms; an example of such an error is a typo in the name of a shape command. The processor then produces an HTML program segment and passes it to the front-end, which inserts the segment into the current web page. The browser, equipped with HTML5 Canvas capabilities, uses this segment to let the user navigate and see the drawing/animation for each answer set. We now discuss the details of the algorithm for the processor. The input to the translation algorithm is a SPARC program P. Let the number of answer sets of P be n. The output is an HTML5 program segment that consists of a canvas element, a script element containing JavaScript code for n animation functions, and a sequence of n buttons, each of which has a number on it. Informally, the display atoms in the i-th answer set of P are rendered by the i-th animation function. When a button with label i is clicked, the web browser (supporting HTML5 canvas methods) invokes/executes the i-th animation function in the script element (to render the display atoms in the i-th answer set of P).
In the following, we use an example to demonstrate how a drawing command is implemented by JavaScript code using canvas methods. Consider the two display atoms

draw(line_color(redline, red)).
draw(draw_line(redline, 0, 0, 2, 2)).

When we render the shape command draw_line, we need to know the meaning of the redline style. From the style command line_color, we know it means red. In the JavaScript program, we first create a context object ctx for a given canvas (simply identified by a name) where we would like to render the display atoms. The object offers methods to render graphics in the canvas. We then use the following JavaScript code to implement the shape command drawing a line from (0,0) to (2,2):

ctx.beginPath();
ctx.moveTo(0,0);
ctx.lineTo(2,2);
ctx.stroke();

To make the line red, we have to insert the following JavaScript statement before ctx.stroke() in the code above:
ctx.strokeStyle="red";
The meaning of the canvas methods in the code above is straightforward, so we don't explain them further. Now we are in a position to present the algorithm.
Algorithm:
• Input: a SPARC program P .
• Output: an HTML program segment HP which allows the rendering of the display atoms in all answer sets of P in an Internet browser.
• Steps:

1. Call the SPARC solver to obtain the answer sets of P.

2. Add to HP the following HTML elements:
- the canvas element <canvas id="myCanvas" width="500" height="500"> </canvas>;
- the script element <script> </script>; insert into the script element a JavaScript function, denoted by mainf, which contains code associating the drawings in the script element with the canvas element above.

3. For each answer set S of P,
(a) Extract all display atoms from S.
(b) Let script be an array of empty strings; script[i] will hold the JavaScript statements that render the graphics for frame i.
(c) For each display atom a in S:
- If any syntax error is found in the display atoms, terminate with an error message detailing the incorrect usage of the atoms.
- If a contains a shape command, let its style name be sn and find all style commands defining sn. For each such style command, translate it into the corresponding JavaScript code Ps that modifies the styling of the canvas pen (an HTML Canvas concept). Then translate the shape command into JavaScript code Pr that renders the command. Let Pd be the proper combination of Ps and Pr to render a.
- If a is a drawing atom, append Pd to script[i] for every frame i of the animation.
- If a is an animation atom, let i be the frame referred to in a, and append Pd to script[i].
(d) Let S be the i-th answer set (i ≥ 1). Create a JavaScript function animate(i-1)() whose body consists of an array drawings, initialized with the contents of the script array, together with generic JavaScript code that executes the statements in drawings[i − 1] when the time to show the i-th frame starts.
(e) Append animate(i-1) to the end of the body of the mainf function in the script element of HP.

4. Let n be the number of answer sets of P. For each number i ∈ 0..n−1, create a button element containing the number i and associating a click of the button with the animation function animatei(). An example button element is <button onclick="animate2()"> 2 </button>. Append this list of button elements to the end of HP.
End of algorithm.
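To make step 3 concrete, here is a minimal Python sketch of the per-answer-set translation (illustrative only: the real processor is written in PHP, only the line_color and draw_line commands are handled, and the exact atom string format is an assumption):

import re

def atoms_to_frame_scripts(display_atoms, num_frames):
    # Atoms are strings such as 'draw(line_color(redline,red))' or
    # 'animate(draw_line(redline,3,70,13,70),2)'.
    styles = {}  # first pass: style name -> canvas pen statement
    for atom in display_atoms:
        m = re.fullmatch(r"draw\(line_color\((\w+),(\w+)\)\)", atom)
        if m:
            styles[m.group(1)] = 'ctx.strokeStyle="{}";'.format(m.group(2))
    script = ["" for _ in range(num_frames)]
    shape = re.compile(r"(draw|animate)\(draw_line\((\w+),(\d+),(\d+),(\d+),(\d+)\)(?:,(\d+))?\)")
    for atom in display_atoms:  # second pass: shape commands
        m = shape.fullmatch(atom)
        if not m:
            continue
        kind, sn, xs, ys, xe, ye, frame = m.groups()
        js = styles.get(sn, "") + \
             "ctx.beginPath();ctx.moveTo({},{});ctx.lineTo({},{});ctx.stroke();".format(xs, ys, xe, ye)
        frames = range(num_frames) if kind == "draw" else [int(frame)]
        for f in frames:        # draw: all frames; animate: the given frame
            script[f] += js
    return script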
Moving Box Elaboration under SPARC
In order to use the drawing and animation predicates draw(c) and animate(c, I), a SPARC program requires the corresponding predicate declarations, which in turn require the definition of many sorts, to establish a basis for animation. In this section, we first present the predicate declarations and sort definitions, and then discuss how to write a SPARC program for the earlier example of a box moving from (1, 70) to (200, 70) over 200 frames. We also add a title to the moving box example, to demonstrate how static drawings can be used together with animations.
Predicate Declarations and Sort Definitions
First let us look at some important parameters for drawing and animation. The values of these parameters are defined using the SPARC directive #const:

#const canvasWidth = 500.
#const canvasHeight = 500.
#const canvasSize = 500.
#const numFrames = 200.
Here we have defined constants for the dimensions of the canvas and the number of frames; these will be used later in sort definitions, making it easier to understand the purpose of certain sorts. We will use a canvas of 500 by 500 pixels, and we will animate for 200 frames. These values can be changed by programmers according to their needs. Note that canvasSize must be the smaller of canvasWidth and canvasHeight. Now we begin the sorts section:

sorts
#frame = 0..numFrames.
#drawing_command = #draw_line+#draw_quad_curve+#draw_bezier_curve+
#draw_arc_curve+#draw_text+#line_width+#text_font+
#line_cap+#text_align+#line_color+#text_color+
#set_number_of_frames.
We have defined two sorts. The first is a simple set of integers, corresponding to the frames of an animation. We use the sort name #frame, but it is important to note that this name and the other sort names introduced below are not predefined and thus can be changed by the programmer, as long as they are changed consistently across the SPARC program. The second sort, #drawing_command, defines all the drawing commands introduced earlier in our design. It is the union (represented as + in SPARC) of the sorts defining the shape commands and the style commands. The sort names for the shape commands are prefixed with draw_, and the sort names for the style commands are prefixed with line_ or text_. Let us examine the definitions of these sorts:

% sorts for shape commands
#draw_line = draw_line(#stylename,#col,#row,#col,#row).
% sort to set the number of frames
#set_number_of_frames = set_number_of_frames(#frame).
Each of these sorts takes the form of what is called a record in SPARC.
A record is built from a record name and a list of other sorts. For example, the sort #draw_line defines all shape commands for drawing lines. Recall from Section 4.1 that the line drawing command is of the form draw_line(sn, xs, ys, xe, ye), which draws a line from starting point (xs, ys) to ending point (xe, ye) with style name sn. The record name of the sort #draw_line is draw_line, and it is followed by the sorts for each parameter: #stylename for sn, #col for xs and xe, and #row for ys and ye. Since the sorts above all contain record names that are recognized by the animation software as specific drawing commands, no record names should be modified, or the results of an executed animation will not be as expected.
Each of the records above uses other basic sorts such as #stylename. We will touch on only a few here. The sort #stylename consists of the names for the styles the programmer would like to apply to the animation later on. The style name sort is important, as it is something a programmer can manipulate freely, to include as many styles for different objects in an animation as desired. For now, we have no predefined styles and we do not put any names here. The sort #text consists of all the strings that will be used in an animation. As a limitation of SPARC, we are not able to represent strings containing spaces; we approximate a string by a constant. These, like the style name sort elements, are decided upon by the programmer, so we do not include specific elements in the definition above.
The other sorts defined above, as well as all other basic sorts not defined above, have much more restricted values. The row and col sorts must contain numerical values, although those values can be decided upon by the programmer, as can all numerical sorts used. For example, the definition of #row as 1..canvasHeight means that the sort #row contains all integers from 1 to the value of the constant canvasHeight. The sort #color contains only a small sample of the available colors, and all color names used by a programmer must come from a predefined set of accepted colors. A complete listing of the accepted colors of the #color sort and of other basic sorts, such as the font sort #fontfamily, can be found in the appendix. Moving on from sorts, we may continue with the predicates section:
predicates
% drawing command applies at specified frame
animate(#drawing_command, #frame).
% static drawing command
draw(#drawing_command).
Here we have defined two predicates, one for static drawings and one for animations. They both take a drawing command to execute if a corresponding atom exists in the answer set of the executed program. The animate predicate also takes a frame.
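To make the distinction between the two predicates concrete, here is a minimal sketch (myStyle is a placeholder style name of our own, not a predefined one):

draw(line_color(myStyle, red)).        % in effect in every frame
animate(line_color(myStyle, red), 5).  % in effect only in frame 5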
Write a SPARC Program for the Moving Box Example
To write a SPARC program for the moving box example, programmers have to include the definition of the parameters (found in the earlier subsection) and all the predicate declarations and associated sort definitions (found either in the earlier subsection or in the appendix) in the predicates and sorts sections. They can simply copy and paste those constructs into the right sections of their program.
The programmers have to populate the two sorts #stylename and #text. For our example, we define them, in the sorts section, as follows:
#stylename = {redline, title}.
#text = {aDemonstrationOfAMovingRedBox}.
The new style name, title, is the style we will use to print the text on the screen. The element in the #text sort is the text to show.
The rules section below concludes the example, and is responsible for the actual animation of a box moving beneath the demonstration title.

draw(set_number_of_frames(numFrames)).
draw(text_font(title, 18, arial)).
draw(text_color(title, blue)).
draw(draw_text(title, aDemonstrationOfAMovingRedBox, 5, 25)).
animate(line_color(redline, red), I).
animate(draw_line(redline, I+1, 70, I+11, 70), I).
animate(draw_line(redline, I+1, 70, I+1, 60), I).
animate(draw_line(redline, I+1, 60, I+11, 60), I).
animate(draw_line(redline, I+11, 60, I+11, 70), I).
The first line sets the number of frames of the animation. The second and third lines signify that the style name title means a style of blue arial font of size 18. The next line signifies that aDemonstrationOfAMovingRedBox should be drawn with the style title at position (5, 25); the constant will be shown starting from (5, 25) in the blue arial font of size 18. This completes the title display. As one can see, drawings are simple, since they do not occur over time, but simply exist in the canvas. Thus, the title does not need to be associated with any frames, and will be present as expected throughout any animation.
The next lines have to do with animating the red box from (1, 70) to (200, 70). We begin by setting the style redline to be red. Note that the variable I can take on the value of any item in the sort frame, 0 through numFrames. Thus, one can expect in the answer set one atom per frame that sets the redline style to red.
The next four lines correspond to the four sides of the box. For each frame I, four atoms are expected to exist of the form animate(draw line(redline, coordinates), I), corresponding to the four sides of the box, meaning that the animation will include a drawing of four lines at each frame. If one looks closely at the rules, one can see that I is used in the rule to calculate the starting and ending x coordinate of the sides of the box. This means that as the frame increases, so will the starting and ending coordinates, causing the box to appear to move in the positive x direction, which is to the right.
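As a hypothetical variation (not part of the original example), the y coordinates could depend on I as well; for instance, the following four rules would move the box diagonally, one pixel down and one pixel right per frame:

animate(draw_line(redline, I+1, 70+I, I+11, 70+I), I).
animate(draw_line(redline, I+1, 70+I, I+1, 60+I), I).
animate(draw_line(redline, I+1, 60+I, I+11, 60+I), I).
animate(draw_line(redline, I+11, 60+I, I+11, 70+I), I).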
Design Facilitating Teaching
From the examples given in the earlier subsections, one may see that the programs are unnecessarily complex due to a syntax restriction of SPARC: a program has to contain the sort definitions and declarations for the display predicates. For teaching purposes, however, students are expected to focus mainly on the substance of drawing and animation, instead of on tedious sort definitions and declarations of display predicates.
In principle, the sort definitions and declarations of the display predicates should be written by their designers, while programmers should simply be able to use them. To follow this principle, we introduce the #include directive, as in the C language, which includes a specified file into the file containing the directive. With this mechanism, the designer can provide a file containing the sort definitions, predicate declarations and common rules, and the programmer can simply include this file in their program.
For this approach, there is a challenge. As one can see, the sort #stylename (and #text) is expected to be defined by programmers, but it is used by the signature of the display predicates. It is further complicated by the following expectation: we would like to provide a default set of style names to further simplify the programming tasks for novices so that they can focus on the logic and substance of drawing/animation. To achieve the requirement and expectation above, we introduce a subtype (Pierce 2002), called a subsort here, with the following syntax:

extend S with S1.

where S is a sort name and S1 is a sort expression (e.g., a set of new constants) to be added to S. As an example, the following program P1 includes the header file drawing.sp and uses both a default style (redPen) and a user defined style (myPen):

#include <drawing.sp>.
sorts
extend #stylename with {myPen}.
extend #text with {drawingAndAnimation}.
predicates
rules
draw(text_color(myPen, green)).
draw(draw_text(redPen, drawingAndAnimation, 10, 10)).
draw(draw_text(myPen, drawingAndAnimation, 10, 30)).
In this program, the new style name myPen is introduced using a subsort statement and it is defined as green by the first rule.
Our design is implemented through a preprocessor whose output is a classical SPARC program. When the preprocessor sees the directive to include the file drawing.sp, it will include the contents of the sections of drawing.sp at the beginning of the corresponding sections of the file P1. However, the subsort statements will not be included. For each sort name occurring in a subsort statement, the preprocessor will maintain a list of all its subsorts. The meaning of a sort with subsorts is the union of all its subsorts. After scanning P1 (and thus all included files), for each sort S with subsorts S1, ..., Sn, the preprocessor inserts the following sort definition at the beginning of the sorts section:
S = S1 + ... + Sn.
In our example, the file (not including comments or the basic drawing/animation sorts and predicates) after preprocessing is

sorts
#stylename = {redPen, blackPen} + {myPen}.
#text = {drawingAndAnimation}.
predicates
......
rules
draw(text_color(redPen, red)).
draw(text_color(myPen, green)).
% make drawing
draw(draw_text(redPen, drawingAndAnimation, 10, 10)).
draw(draw_text(myPen, drawingAndAnimation, 10, 30)).
Once a sort name occurs in a subsort statement, it will be an error to define this sort name using =.
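For example, under this restriction the following pair of statements would be rejected (a sketch; myPen is a placeholder name):

extend #stylename with {myPen}.
#stylename = {redPen}. % error: #stylename already occurs in a subsort statement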
Default Styles and Default Values of any Style
We will first introduce the default styles onlineSPARC currently offers, give an example using default and user defined styles, and discuss how the default values of the styles (default or user defined) are set using ASP rules.
The current onlineSPARC offers regular, thin and thick styles. The regular styles include redPen, blackPen and greenPen. These styles are always associated with the color shown in their names. When they are applied to the draw_text command, they use the arial font with a font size of 11. When they are applied to other drawing commands, they use a line width of 2 points.

The thin styles include redPenThin, blackPenThin and greenPenThin. They are similar to the regular styles except that they are thinner: when applied to draw_text, they use a font size of 10, and when applied to other drawing commands, they use a line width of 1 point.

The thick styles include redPenThick, blackPenThick and greenPenThick. They are similar to the regular styles except that they are thicker: when applied to draw_text, they use a font size of 12, and when applied to other drawing commands, they use a line width of 3 points.
To model the example in Section 4.3.2, we have to specify the animation length, i.e., the number of frames of our animation. We define our own constant myFrames for this number. Note that with this local constant, we have to add a constraint on the animation length to each rule. Now a complete program for the example in Section 4.3.2, using the include directive (assuming the header file for drawing and animation is called drawing.sp) and subsorts, is

#include <drawing.sp>.
#const myFrames = 60.
sorts
extend #stylename with {title}.
extend #text with {aDemonstrationOfAMovingRedBox}.
predicates
rules
draw(set_number_of_frames(myFrames)).
% associate title style with 18-point arial font and blue color
draw(text_color(title, blue)).
draw(text_font(title, 18, arial)).
draw(draw_text(title, aDemonstrationOfAMovingRedBox, 5, 25)).
animate(draw_line(redPen, I+1, 70, I+11, 70), I) :- I < myFrames.
animate(draw_line(redPen, I+1, 70, I+1, 60), I) :- I < myFrames.
animate(draw_line(redPen, I+1, 60, I+11, 60), I) :- I < myFrames.
animate(draw_line(redPen, I+11, 60, I+11, 70), I) :- I < myFrames.

This new program contains minimal distraction, and the substantial information for drawing and animation stands out.
We will next discuss how the default values for default or user defined styles are set. For a text drawing style, there are four aspects resulting from the style commands: color, font, font size and alignment. For a line drawing style, there are three aspects: color, line width and line cap. It is well known that ASP is good at representing defaults. We use the color of a style as an example to illustrate how a default value is associated with it: normally, the text color of a style is black.
We introduce nonDefaultValueDefined_drawing(X, txtColor) to mean that some non-default value (say, red) has been associated with the text color of style X (through the style command text_color). So, we have the rule

nonDefaultValueDefined_drawing(X, txtColor) :-
    draw(text_color(X, Y)),
    Y != black.
A style X has a text color of black if it does not have a non-default color associated:
draw(text_color(X, black)) :-
    not nonDefaultValueDefined_drawing(X, txtColor).
We have similar rules for the styles related to the animate predicate:
nonDefaultValueDefined_animation(X, txtColor, I) :-
    animate(text_color(X, Y), I),
    Y != black.
animate(text_color(X, black), I) :-
    not nonDefaultValueDefined_animation(X, txtColor, I).
The default values of styles are defined as follows: the default value of the color (text or line) of a style is black; those of the font and font size are arial and 11 points; that of text alignment is left; that of line width is 2 points; and that of line cap is butt.
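The same pattern plausibly extends to the other properties. For instance, the 2-point default line width could be encoded as follows (a sketch; the property label lnWidth is our own placeholder and need not match the name actually used in drawing.sp):

nonDefaultValueDefined_drawing(X, lnWidth) :-
    draw(line_width(X, W)),
    W != 2.
draw(line_width(X, 2)) :-
    not nonDefaultValueDefined_drawing(X, lnWidth).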
Finally, one may note that we allow defining a style using the animate predicate. That means the same style name may refer to different values of its properties (e.g., color or font) in different frames. This allows one to use the same style name to represent changing properties (which might not be known a priori) without introducing a style name for each of the changing values. An example is a moving line that grows fatter. The growing effect is achieved by increasing the width of the line. Two rules are needed (assuming the number of frames is smaller than 100):
% specify the style growingLine
animate(line_width(growingLine, J), I) :- J = I/6 + 1.
% draw a line using the growingLine style
animate(draw_line(growingLine, 2*I+1, 110, 2*I+71, 110), I).
By default, the style defined by the draw predicate will be used only for the drawing command inside the draw predicate. However, sometimes we may want the styles defined using draw to be usable in frames. In this case, at any frame i, for any style s and property p, we would like to use the value v of property p of style s as defined in draw, unless p of s takes a value other than v by animate at frame i. The expectation above can be represented naturally by ASP rules. The following is an example for the line color property. The atom styleDefinedInFrame(X, lineColor, I) means that style X has a line color defined, different from the one defined for X in the draw predicate, in frame I.

styleDefinedInFrame(X, lineColor, I) :-
    animate(line_color(X, Y), I),
    draw(line_color(X, Value)),
    Y != Value.
animate(line_color(X, Value), I) :-
    draw(line_color(X, Value)),
    not styleDefinedInFrame(X, lineColor, I).

The first rule is a straightforward definition of styleDefinedInFrame. The second rule says that the color of style X is also Value at frame I if Value is the color of X by draw, and there is no style command defining the color of X to be different from Value in frame I. Programmers can include such rules in their program. In <drawing.sp>, we have introduced general rules to make any style defined by draw usable in any frame, in the manner illustrated above.
More Drawing/Animation Example Programs
More example programs for drawing and animation can be found at https://goo.gl/nLD4LD. Some of the programs use the extended SPARC and some use the original SPARC. Some examples show different ways to write drawing and animation programs using the original SPARC.
Conclusion and Discussion
When we reached out to a local high school several years ago, even with the great tool ASPIDE, we needed an experienced student to communicate with the school lab several times before the installation of the software on their computers could be completed. A carefully drafted document was prepared for students to install the software on their own computers. There were still unexpected issues when students used or installed the software at home, and thus they lost the opportunity to practice programming outside of class. The flow of teaching the class was often interrupted by problems or issues associated with the use of the tools. Thus, the strong technical support needed for the management and use of the tools inside and outside of class was, and still is, prohibitive for teaching ASP to general undergraduate or K-12 students.
With the availability of our online environment, we now only need to focus on teaching the content of ASP without worrying about the technical support. We hope our environment, and other online environments for knowledge representation systems, will expand the teaching of knowledge representation to a much wider audience in the future.
The drawing and animation features are relatively new features of onlineSPARC and have not been tested in high school teaching. However, we used them in a senior-year course (special topics in AI) in Spring 2017 at Texas Tech University. Students demonstrated strong interest in drawing and animation (more than in the representation of, and reasoning about, a family) and they were able to produce interesting animations. In the example link given earlier, we include a vivid animation, produced by a team from this class, demonstrating geometric transformations including translation, reflection and rotation. The instructor provided the team with only the necessary information on drawing and animation, no more than is presented here in Section 4.3. (We did not have the include directive and subsort statement then.) The team found the topic and project idea themselves (the context is that every team in the class was asked to find and solve problems from science, math and the arts at the secondary school level).
Unlike the drawing and animation features, the general online environment has been used in our teaching of AI classes at both the undergraduate and graduate levels and in our outreach to local school districts, including middle and high schools. The outreach includes offering short-term lessons or demonstrations to teachers or district administrators. onlineSPARC was first installed on an Amazon AWS server and later on a local server. According to Google Analytics, from May 2016 to September 2017, 1,206 new users were added, with accesses from 41 countries. The top three countries with the most accesses are the USA, the UK and Russia.
We noted that it can be very slow for ASP solvers to produce the answer sets of an animation program when the space of its ground instances is big. The space depends on the canvas size, the number of frames and the number of parameters of a drawing command. As an example, assume we have a canvas size of 1000 and produce 1000 frames. If we use the following atom in the head of a rule,

animate(draw_bezier_curve(redPen, X1, Y1, X2, Y2, X3, Y3, X4, Y4), I),

where (X1, Y1), ..., (X4, Y4) are four points and I is a frame index, the number of possible ground instances will be at the level of 1000^9. We would like to see research progress on any aspect of dealing with ASP programs with a large space of ground instances.
The include directive and subsort statements are only part of the onlineSPARC environment, not part of the official SPARC language. The current needs of drawing and animation programs provide compelling reasons to reexamine SPARC to see how best it can be refined to support them. The very preliminary work reported in this paper on the preprocessor-based extension of SPARC indicates some promising directions for refining SPARC. In the future, we would like to conduct a thorough and rigorous study of introducing the subtype (subsort) and the #include directive into SPARC. We would also like to examine the use of type inference in SPARC, which we believe may enjoy the benefits of both sorted and non-sorted ASP, while also providing students better learning experiences and supporting the development and maintenance of programs for practical applications in the real world.
It is not technically hard to allow other ASP systems, such as DLV or Clingo, in onlineSPARC. Adding them to onlineSPARC may further promote its use by a much wider audience. Since our display predicates are nothing more or less than predicates in an ASP program, they can be directly used in a DLV or Clingo program. Our rendering algorithm, with minimal changes to recognize the minor differences between the format of the answer sets of the SPARC solver and those of the DLV/Clingo solvers, can be applied to DLV or Clingo programs.
The error/warning messages from a programming environment are important for general programming learners. The current onlineSPARC simply passes the error report from the SPARC solver to the users. It is ongoing work to make the error report more friendly and understandable to a general audience, and to make correction suggestions for some syntax and semantic errors (e.g., a typo in a constant of a sort).
There has been a notable effort in designing tools to help debug ASP programs (e.g., (Dodaro et al. 2015)). It will be interesting to see how well those tools can be integrated into an online ASP environment and how the integrated environment may help students in learning programming.
| 9,185 |
1809.08304
|
2890241187
|
Recent progress in logic programming (e.g., the development of the Answer Set Programming paradigm) has made it possible to teach it to general undergraduate and even middle/high school students. Given the limited exposure of these students to computer science, the complexity of downloading, installing and using tools for writing logic programs could be a major barrier for logic programming to reach a much wider audience. We developed onlineSPARC, an online answer set programming environment with a self-contained file system and a simple interface. It allows users to type and edit logic programs and to perform several tasks over programs, including asking a query to a program, getting the answer sets of a program, and producing a drawing or animation based on the answer sets of a program.
|
As for online systems, in addition to IDP mentioned earlier, there are several others. Both DLV and Clingo offer online environments (http://asptut.gibbi.com and http://potassco.sourceforge.net/clingo.html, respectively) which provide an editor and a window to show the direct output of the execution of the DLV or Clingo command, but provide no other functionalities. We also note SWISH (http://lpsdemo.interprolog.com), which offers an online environment for Prolog and for a more recent computer language, Logic-based Production Systems @cite_10 . A recent online system, LoIDE @cite_21 , allows a user to edit ASP programs and find answer sets of the programs. LoIDE also allows a programmer to highlight names in answer sets.
|
{
"abstract": [
"Logic-based paradigms are nowadays widely used in many different fields, also thanks to the availability of robust tools and systems that allow the development of real-world and industrial applications. In this work, we present LoIDE, an advanced and modular web-editor for logic-based languages that also integrates with state-of-the-art solvers.",
"In previous work, we proposed a logic-based framework in which computation is the execution of actions in an attempt to make reactive rules of the form if antecedent then consequent true in a canonical model of a logic program determined by an initial state, sequence of events, and the resulting sequence of subsequent states. In this model-theoretic semantics, reactive rules are the driving force, and logic programs play only a supporting role. In the canonical model, states, actions, and other events are represented with timestamps. But in the operational semantics (OS), for the sake of efficiency, timestamps are omitted and only the current state is maintained. State transitions are performed reactively by executing actions to make the consequents of rules true whenever the antecedents become true. This OS is sound, but incomplete. It cannot make reactive rules true by preventing their antecedents from becoming true, or by proactively making their consequents true before their antecedents become true. In this paper, we characterize the notion of reactive model, and prove that the OS can generate all and only such models. In order to focus on the main issues, we omit the logic programming component of the framework."
],
"cite_N": [
"@cite_21",
"@cite_10"
],
"mid": [
"2776874285",
"2222166069"
]
}
|
onlineSPARC: a Programming Environment for Answer Set Programming
|
Answer Set Programming (ASP) (Gelfond and Kahl 2014) is becoming a dominant language in the knowledge representation community (McIlraith 2011; Kowalski 2014) because it has offered elegant and effective solutions not only to classical Artificial Intelligence problems but also to many challenging application problems. Thanks to its simplicity and clarity in both its informal and formal semantics, ASP provides a "natural" modeling of many problems. At the same time, the fully declarative nature of ASP also clears a major barrier to teaching logic programming, as the procedural features of classical logic programming systems such as PROLOG are taken to be a source of misconceptions in students' learning of logic programming (Mendelsohn et al. 1990).
ASP has been taught to undergraduate students in the Artificial Intelligence course at Texas Tech for more than a decade. We believe ASP has become mature enough to be used to introduce programming and problem solving to high school students. We have offered many sessions to students at New Deal High School and a three-week-long ASP course to high school students involved in the TexPREP program (http://www.math.ttu.edu/texprep/). In our teaching practice, we found that ASP is well accepted by the students and that the students were able to focus on problem solving, instead of on the language itself. The students were able to write programs to answer questions about the relationships (e.g., parent, ancestor) among family members and to find solutions to Sudoku problems.
In our teaching practices, particularly when approaching high school students, we identified two challenges. One is the installation, management and use of the solvers and their development tools. The other is to find a more vivid and intuitive presentation of results (answer sets) of logic programs to inspire students' interest in learning.
To overcome these challenges we have designed and built onlineSPARC, an online development environment for ASP. Its URL is http://goo.gl/UwJ7Zj. The environment eliminates software installation and management, and its very simple interface also eases the use of the software. Specifically, it provides an easy-to-use editor for users to edit their programs, an online file system for them to store and retrieve their programs, and a few simple buttons allowing the users to query a program and to get its answer sets. The query capability can also help a teacher quickly raise students' interest in ASP-based problem solving and modeling. The environment uses SPARC as the ASP language. SPARC is designed to further facilitate the teaching of logic programming by introducing sorts (or types), which simplify the difficult programming concept of domain variables in classical ASP systems such as Clingo (Gebser et al. 2011) and help programmers identify errors early thanks to the sort information. Initial experiments in teaching SPARC to high school students are promising (Reyes et al. 2016).
For the second challenge, onlineSPARC introduces drawing and animation predicates for students to present their solutions to problems in a more visually straightforward and exciting manner (instead of as answer sets, which are simply sets of literals). As an example, with this facility students can show a straightforward visual solution to a Sudoku problem. We also note observations in the literature that multimedia and visualization play a positive role in promoting students' learning (Guzdial 2001; Clark et al. 2009).
We have been using this environment in our teaching of AI classes at both undergraduate and graduate levels and in our outreach to middle/high schools since 2016. Preparation of documents on installation or management of the software is no longer needed. We got very few questions from students on the use of the environment, and the online system is rarely down. With onlineSPARC, one of our Master students was able to offer a short-term lesson on ASP based modeling by himself to New Deal High School students. All his preparation was on the teaching materials, but not on the software or its use.
The rest of the paper is organized as follows. SPARC is recalled in Section 2. The design, implementation and a preliminary test of the online environment are presented in Section 3. The design and rendering of the drawing and animation predicates are presented in Section 4. Related work is reviewed in Section 5, and the paper is concluded in Section 6.
SPARC -an Answer Set Programming Language
SPARC is an Answer Set Programming (ASP) language which allows for the explicit representation of sorts. There are many excellent introductory materials on ASP, including (Brewka et al. 2011) and (Gelfond and Kahl 2014). We will give a brief introduction to SPARC. The syntax and semantics of SPARC can be found in , and the SPARC manual and solver are freely available (Balai 2013).
A SPARC program consists of three sections: sorts, predicates and rules. We will use the map coloring problem as an example to illustrate SPARC: can the USA map be colored using red, green and blue such that no two neighboring states have the same color?
The first step is to identify the objects and their sorts in the problem. For example, the three colors are important and they form the sort color for this problem. In SPARC syntax, we use #color = {red, green, blue} to represent the objects and their sort. The sorts section of the SPARC program is

sorts % the keyword to start the sorts section
#color = {red, green, blue}.
#state = {texas, colorado, newMexico, ......}.
The next step is to identify relations in the problem and declare, in the predicates section, the sorts of the parameters of the predicates corresponding to the relations. The predicates section of the program is

predicates % the keyword to start the predicates section
% neighbor(X, Y) denotes that state X is a neighbor of state Y.
neighbor(#state, #state).
% ofColor(X, C) denotes that state X has color C
ofColor(#state, #color).
The last step is to identify the knowledge needed in the problem and translate it into rules. The rules section of a SPARC program consists of rules in the typical ASP syntax; for our example, it will include the following.
rules % the keyword to start the rules section
% Texas is a neighbor of Colorado
neighbor(texas, colorado).
% The neighbor relation is symmetric
neighbor(S1, S2) :- neighbor(S2, S1).
% Any state has one of the three colors: red, green and blue
ofColor(S, red) | ofColor(S, green) | ofColor(S, blue).
% No two neighbors have the same color
:- ofColor(S1, C), ofColor(S2, C), neighbor(S1, S2), S1 != S2.
% Every state has at most one color
:- ofColor(S, C1), ofColor(S, C2), C1 != C2.
The current SPARC solver defines a query to be either an atom or the negation of an atom. Given an atom a, the complement of a is ¬a, and the complement of ¬a is a. The answer to a ground query l with respect to a program P is yes if l is in every answer set of P, no if the complement of l is in every answer set of P, and unknown otherwise. An answer to a query with variables is a set of ground terms for the variables in the query such that the answer to the query resulting from replacing the variables by the corresponding ground terms is yes. Formal definitions of queries and answers to queries can be found in Section 2.2 of (Gelfond and Kahl 2014).
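For instance, against the map coloring program above, one would expect the following answers (illustrative only; the queries are typed into the query box of the interface described in Section 3):

neighbor(texas, colorado)  % answer: yes (it is a fact of the program)
ofColor(texas, red)        % answer: unknown (texas is red in some answer sets but not all)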
The SPARC solver is able to answer queries with respect to a program and to compute one or all answer sets of a program. The SPARC solver translates a SPARC program into a DLV or Clingo program and then uses the corresponding solver to find the answer sets of the resulting program. When the SPARC solver answers queries, it computes all answer sets of the given program. onlineSPARC directly calls the SPARC solver to get answers to a query or to get answer sets.
Online Development Environment Design and Implementation
Environment Design
The principle of design we followed is that the environment, with the simplest possible interface, should provide full support, from writing programs to getting the answer sets of a program, in order to help with the education of Answer Set Programming.
The design of the interface is shown in Figure 1. When logged in, it consists of four components: 1) the editor to edit a program, 2) the file navigation system, 3) the operations (including query answering, obtaining answer sets and executing actions in the answer sets) over the program, and 4) the result/display area. One can edit a SPARC program directly inside the editor, which has syntax highlighting features (area 1). The file inside the editor can be saved by clicking the "Save" button (2.4). The files and folders are displayed in area 2.1. The user can traverse them using the mouse, as in the file system of a typical operating system. Files can be deleted and their names can be changed. To create a folder or a file, one clicks the "New" button (2.3). The panel showing files/folders can be toggled by clicking the "Directory" button (2.2) (so that users can have more space for the editing or result area (4)). To ask queries of the program inside the editor, one can type a query in the text box (3.1) and then press the "Submit" button (3.1). The answer to the query will be shown in area 4. To see the answer sets of a program, one clicks the "Get Answer Sets" button (3.2) and the result will be shown in area 4. When the "Execute" button (3.3) is clicked, a list of buttons (with number labels) will be shown; when a button is clicked, the atoms for drawing and animation in the answer set of the program corresponding to the button label will be rendered in the display area (4).
A user can only access the full interface discussed above after login. The user will log out by clicking the "Logout" button (5). Without login, the interface is much simpler, with all the file navigation related functionalities invisible. Such an interface is convenient for a quick test or demo of a SPARC program.
Implementation
The architecture of the online environment (see Fig 2) follows that of a typical web application. It consists of a front end-component and a back-end component. The frontend provides the user interface and sends users' requests to the back-end while the backend fulfills the request and returns results, if needed, back to the front-end. After getting the results from the back-end, the front-end will update the interface correspondingly (e.g., display query answers to the result area). Details about the components and their interactions are given below.
Front-end. The front-end is implemented with HTML and JavaScript. The editor in our front-end uses ACE which is an embeddable (to any web page) code editor written in JavaScript (https://ace.c9.io/). The panel for file/folder navigation is based on JavaScript code by Yuez.me.
Back-end and Interactions between the Front-end and the Back-end. The backend is mainly implemented using PHP and is hosted on the server side. It has three components: 1) file system management, 2) an inference engine (SPARC solver) and 3) processors for fulfilling user interface functionalities in terms of the results from the inference engine.
The file system management uses a database to manage the files and folders of all users of the environment. The Entity/Relationship (ER) diagram of the system is shown in Fig 3. The SPARC files are saved in the server file system, not in a database table. The sharing is managed by the sharing information in the relevant database tables. In our implementation, we use a mySQL database system. The file management system gets requests such as creating a new file/folder, deleting a file, saving a file, getting the files and folders, etc., from the front-end. It then updates the tables and local file system correspondingly and returns the needed results to the front-end. After the front-end gets the results, it will update the graphical user interface (e.g., display the program returned from the back-end inside the editor) if needed.

Fig. 2. The architecture for a simple use case: submitting a SPARC program to get the answer sets. First a SPARC program is typed in the editor of the front-end. After the "Get Answer Sets" button is pressed, the program and the command for getting answer sets are sent to the request handler in the back-end. The request handler runs the SPARC solver with the program and pipes the output (answer sets of the program) into the answer sets processor. The processor first formats the answer sets into XML and then employs XSLT to translate the XML format into an HTML ordered list element (i.e., <ol>). The <ol> element encodes the answer sets for a user friendly display. The <ol> element is then sent to the front-end and the front-end inserts the <ol> element into the <div> element of the web page of the user interface. Because of the change of the web page, the browser will re-render the web page and the answer sets will be displayed in the result area of the user interface. For other functionalities (e.g., answering a query) in the user interface, the answer sets, the command and the program are handled by their corresponding processor.

Fig. 3. The Entity/Relationship (ER) diagram for file/folder management. Most names have a straightforward meaning. The Folderurl and Fileurl above refer to the full paths of the folder/file in the file system.
The answer sets processors include those for generating all answer sets of a SPARC program (see details in the caption of Figure 2), for rendering a drawing/animation (see Section 4.2), and for answering queries. The processor for answering queries calls the SPARC solver to get the answers, translates the answers into an HTML element and passes the element to the front-end.
Preliminary onlineSPARC Performance Test
Given that the worst-case complexity of answering a query or getting an answer set of a program is exponential, it is interesting to know how well onlineSPARC can support the programming activities commonly used for teaching and learning purposes. First, it is hard, with our limited resources, for onlineSPARC to support programs related to hard search problems that need a lot of computation time; thus onlineSPARC is not designed for an audience aiming to solve computationally hard problems and/or manage very complex programs. Second, onlineSPARC should have great support for typical programs used in teaching (for general students, including middle and high school students) and learning.
To obtain a preliminary idea of problems to test the performance of onlineSPARC, we consider the textbook by Gelfond and Kahl (2014), which is a good candidate for teaching the knowledge representation methodologies based on ASP. We select, from there, the programs for the family problem and the Sudoku problem, which are widely used in teaching ASP. We also select the graph (map) coloring problem (which is also a popular problem in teaching ASP and constraints in AI). Programs for teaching/learning purposes usually involve smaller data sets. To make the problem more challenging we use the map data from ASP Competition 2015.
When we consider the performance of onlineSPARC, we are mainly interested in how many users it can support in a given time window and in its response time to users' requests. Response time is defined as the difference between the time when a request was sent and the time when a response has been fully received. We employ the tool JMeter (https://jmeter.apache.org/) to carry out the performance tests. The onlineSPARC is installed on a Dell PowerEdge R330 Rack Server with CPU E3-1230 (v5 3.4GHz, 8M cache, 4Cores/8Threads), utilizing a memory of 32GB and CentOS 7.4. The server is installed with Apache/2.4.6 (HTTP server) and MySQL 14.14 (database server).
Our first test is to run 300 requests (the first 100 requests in 10 seconds, the second 100 requests in 300 seconds and the third 100 requests in 100 seconds) for the map coloring problem. Each request is to get all answer sets for the map coloring problem, for which the SPARC solver returns a message that there are too many answer sets. This test crashes the server and we have to manually reboot it. When issuing a single request with no other requests present, the response time is 2.2 seconds. It can be inferred from this test that onlineSPARC has limited capacity for solving hard problems in a relatively short time window. Given the size of this problem (more than 1500 edges), it is more suitable for use as a homework assignment than as an in-class assignment (which implies a shorter time window). Given a larger time window (at the level of days), onlineSPARC (on a single server) should still support a decent number of students.
To get an idea on how window size may impact onlineSPARC performance, we fix the number of requests to be 100 and change the time window. The first time window is 240s. Since the requests are evenly distributed in the time window, each request has enough time to solely occupy the server. The average response time for each request is 2.2s. When the time window becomes 60s, the maximum number of concurrent requests being attended to by the server becomes 18, and the average response time becomes 13s. When time window becomes 30s, the maximum number of concurrent requests is 91, and the average response time becomes 1023s.
When teaching in middle school and in high school, it seems that we need to combine the lecture and lab time, demonstrating and having students practice together. So, it must be possible for both teachers and students to work on the programs at the same time and in a short time window. onlineSPARC is expected to provide a good support of programs that are likely used in class teaching.
We first consider the Sudoku problem. It is possible for a teacher to demo it, but there is some chance a group of students will run it at the same time during the class. This occurrence is fine, since our tests show that 100 requests in a time window of 3s have an average response time of 7.7s.
We next consider the family problem. In our teaching experience, there has been a good chance that students will practice (parts of) this problem in class and send many requests in a relatively short period of time. With a time window of 3s, 500 requests can be processed by onlineSPARC with an average response time of 13.5s. JMeter seems to have a limit on the number of requests it can send (in a non-distributed environment), so we did not test larger numbers of requests.
In summary, onlineSPARC (even on a single server with limited capacity, like our PowerEdge R330) can provide support to a good number of students with their teaching and learning activities during class (mainly because the programs used during class are computationally cheap). For harder homework problems which need more computation time than class problems, thanks to the longer time window during which students may be active, onlineSPARC may be able to support a decent number of students. In the case of the map coloring problem, assuming students work evenly over a period of 8 hours, onlineSPARC can support at least 13000 requests (with certain assumptions on the programs). On the other hand, as shown in the first test that crashes the server, a smaller number of requests (at the level of tens or hundreds of requests) for solving very hard problems, even in a time window at the level of days, could make the server unstable. To reduce the potential negative impacts of hard problems on the server, in the current onlineSPARC, the maximal timeout is 50s, with the default one being 20s. An instructor using onlineSPARC should be aware of this limitation.
Drawing and Animation Design and Implementation
We will first present our design of drawing/animation predicates in Section 4.1 and its implementation in Section 4.2. One full example, in SPARC, on animation will be discussed in Section 4.3. In Section 4.4, we present an "extension" (in a preprocessing manner, instead of a native manner) of SPARC to allow more teaching and learning friendly programming. We also show example programs there. In the last subsection, we provide an online link for a set of drawing/animation programs in SPARC or the "extended" SPARC.
Drawing and Animation Design
To allow programmers to create drawings and animations using SPARC, we design two predicates, called display predicates: one for drawing and one for animation. The atoms using these predicates are called display atoms. To use these atoms in a SPARC program, a programmer needs to include sorts (e.g., sorts of colors, fonts and numbers) and the corresponding predicate declarations, which are predefined (see Appendix). In the following, we only focus on the atoms and their use for drawing and animation. More details can be found in Section 4.3.
Drawing.
A drawing predicate is of the form draw(c), where c is called a drawing command. Intuitively, an atom containing this predicate draws text and graphics as instructed by the command c. By drawing a picture, we mean that a shape is drawn with a style. We define a shape as either text or a geometric line or curve. A style specifies the graphical properties of the shape it is applied to; for example, visual properties include color, thickness, and font. For modularity, we introduce style names, which are labels that can be associated with different styles so that the same style may be reused without being redefined. A drawing is completed by associating this shape and style with a certain position on the canvas, which is simply the display board. Note that the origin of the coordinate system is at the top left corner of the canvas.
Here is an example of drawing a red line from point (0, 0) to (2, 2). First, we introduce a style name redline and associate it with the red color by the style command line_color(redline, red). With this defined style, we then draw the red line by the shape command draw_line(redline, 0, 0, 2, 2). Style commands and shape commands form all drawing commands. The SPARC program rules to draw the given line are

draw(line_color(redline, red)).
draw(draw_line(redline, 0, 0, 2, 2)).
We now present the possible style and shape commands recognized in atoms like the two above.
The style commands of our system include the following:
• line_width(sn, t) specifies that lines drawn with style name sn should be drawn with a line thickness t.
• text_font(sn, fs, ff) specifies that text drawn with style name sn should be drawn with a font size fs and a font family ff.
• line_cap(sn, c) specifies that lines drawn with style name sn should be drawn with a capping c, such as an arrowhead.
• text_align(sn, al) specifies that text drawn with style name sn should be drawn with an alignment on the page al.
• line_color(sn, c) specifies that lines drawn with style name sn should be drawn with a color c.
• text_color(sn, c) specifies that text drawn with style name sn should be drawn with a color c.
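Putting several of these together, a programmer might define a complete style as follows (a sketch; bluePen is a programmer-chosen name, and the concrete values are our assumptions):

draw(line_color(bluePen, blue)).
draw(line_width(bluePen, 3)).
draw(line_cap(bluePen, butt)).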
The shape commands include the following:
• draw_line(sn, xs, ys, xe, ye) draws a line from starting point (xs, ys) to ending point (xe, ye) with style name sn;
• draw_quad_curve(sn, xs, ys, bx, by, xe, ye) draws a quadratic Bezier curve, with style name sn, from the start point (xs, ys) to the end point (xe, ye) using the control point (bx, by);
• draw_bezier_curve(sn, xs, ys, b1x, b1y, b2x, b2y, xe, ye) draws a cubic Bezier curve, using style name sn, from the start point (xs, ys) to the end point (xe, ye) using the control points (b1x, b1y) and (b2x, b2y);
• draw_arc_curve(sn, xs, ys, r, sa, se) draws an arc using style name sn; the arc is centered at (xs, ys) with radius r, starting at angle sa and ending at angle se, going in the clockwise direction;
• draw_text(sn, x, xs, ys) prints the value of x as text to the screen from point (xs, ys) using style name sn.
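For example, the following sketch draws an arc and labels it, reusing the bluePen style defined above (helloWorld is a placeholder text constant, and we assume the angle values are acceptable to the angle sorts defined in the appendix):

draw(draw_arc_curve(bluePen, 250, 250, 100, 0, 180)).
draw(draw_text(bluePen, helloWorld, 240, 130)).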
Animation. A frame, a basic concept in animation, is defined as a drawing. When a sequence of frames, whose contents are normally related, is shown on the screen in rapid succession (usually 24, 25, 30, or 60 frames per second), a fluid animation is seemingly created. To design an animation, a designer specifies the drawing for each frame.
Given that the order of frames matters, we give a frame a value equal to its index in a sequence of frames. We introduce the animation predicate animate(c, i), which indicates a desire to draw a picture at the i-th frame using drawing command c. The index of the first frame of an animation is always 0. The frames will be shown on the screen at a rate of 60 frames per second, and the i-th frame will be shown at time i/60 seconds (from the start of the animation) for a duration of 1/60 of a second.
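For example, under this scheme frame 30 appears half a second into the animation (30 * 1/60 = 0.5 s), and a 200-frame animation lasts 200/60, i.e., roughly 3.3 seconds.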
As an example, we would like to elaborate on an animation where a red box with a side length of 10 pixels moves its top left corner from the point (1, 70) to (200, 70). We will create 200 frames with the box at the point (i + 1, 70) in the i-th frame.
Let the variable I be of a sort called frame, defined from 0 to some large number. In every frame I, we specify the drawing style redline:

animate(line_color(redline, red), I).
To make a box at the I-th frame, we need to draw the box's four sides using the style associated with style name redline. The following describes the four sides of a box at any frame: bottom - (I+1, 70) to (I+11, 70), left - (I+1, 70) to (I+1, 60), top - (I+1, 60) to (I+11, 60), and right - (I+11, 60) to (I+11, 70). Hence we have the rules

animate(draw_line(redline, I+1, 70, I+11, 70), I).
animate(draw_line(redline, I+1, 70, I+1, 60), I).
animate(draw_line(redline, I+1, 60, I+11, 60), I).
animate(draw_line(redline, I+11, 60, I+11, 70), I).

Note that the drawing predicate produces the intended drawing throughout all the frames, creating a static drawing. On the other hand, the animation predicate produces a drawing only for a specific frame.
Finally, we use the atom draw(set_number_of_frames(N)) to specify that the number of frames of the animation is N.
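For the 200-frame example above, this would be:

draw(set_number_of_frames(200)).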
Implementation
Informally, to achieve some drawing or animation, a programmer will write a SPARC program using display predicates. The answer sets of the SPARC program will be obtained by calling the SPARC solver. Our rendering algorithm extracts all the display atoms (intuitively, instructions for making drawings) from an answer set and translates them into a JavaScript function (using the HTML5 Canvas library) and a button element. The JavaScript and the button will be used by the browser to show the drawing or animation.
The architecture for rendering the drawing and animation specified by a SPARC program is shown in Fig 4. The main rendering component is the processor inside the dashed box in the back-end. As a simple use case, a user will type, in the editor in the front-end, a SPARC program specifying the intended drawing or animation using the predicates introduced in the previous section. The user then presses the "Execute" button in the web page. The command and the SPARC program are sent to the request handler at the back-end. The handler runs the SPARC solver with the program and pipes the output (answer sets of the program) into the drawing and animation processor. The processor first checks whether there are any errors in the display atoms; an example of an error is a typo in the name of a shape command. The processor then produces an HTML program segment and passes it to the front-end, which inserts the segment into the current web page. The browser, equipped with HTML5 Canvas capacity, will use this segment to let the user navigate and see the drawing/animation for each answer set. We now discuss the details of the algorithm for the processor. The input to the translation algorithm is a SPARC program P. Let the number of answer sets of P be n. The output is an HTML5 program segment that consists of a canvas element, a script element containing JavaScript code of n animation functions, and a sequence of n buttons, each of which has a number on it. Informally, the display atoms in the i-th answer set of P are rendered by the i-th animation function. When a button with label i is clicked, the web browser (supporting HTML5 canvas methods) will invoke/execute the i-th animation function in the script element (to render the display atoms in the i-th answer set of P).
In the following, we will use an example to demonstrate how a drawing command is implemented by JavaScript code using canvas methods. Consider two display atoms:

draw(line_color(redline, red)).
draw(draw_line(redline, 0, 0, 2, 2)).
When we render the shape command draw_line, we need to know the meaning of the redline style. From the style command line_color, we know it means red. In the JavaScript program, we first create a context object ctx for a given canvas (simply identified by a name) where we would like to render the display atoms. The object offers methods to render the graphics in the canvas. We then use the following JavaScript code to implement the shape command to draw a line from (0,0) to (2,2):
ctx.beginPath();
ctx.moveTo(0,0);
ctx.lineTo(2,2);
ctx.stroke();

To make the line red, we have to insert the following JavaScript statement before ctx.stroke() in the code above:
ctx.strokeStyle="red";
The meaning of the canvas methods in the code above is straightforward, so we don't explain them further. Now we are in a position to present the algorithm.
Algorithm:
• Input: a SPARC program P .
• Output: an HTML program segment HP which allows the rendering of the display atoms in all answer sets of P in an Internet Browser. • Steps:
1. Call the SPARC solver to obtain the answer sets of P.
2. Add to HP the following HTML elements:
- the canvas element <canvas id="myCanvas" width="500" height="500"> </canvas>;
- the script element <script> </script>; insert, into the script element, a JavaScript function, denoted by mainf, which contains code associating the drawings in the script element with the canvas element above.
3. For each answer set S of P,
(a) Extract all display atoms from S.
(b) Let script be an array of empty strings; script[i] will hold the JavaScript statements to render the graphics for frame i.
(c) For each display atom a in S,
- If any syntax error is found in the display atoms, terminate with the output being an error message detailing the incorrect usage of the atoms.
- If a contains a shape command, let its style name be sn and find all style commands defining sn. For each style command, translate it into the corresponding JavaScript code Ps that modifies the styling of the canvas pen (an HTML Canvas concept). Then translate the shape command into JavaScript code Pr that renders that command. Let Pd be the proper combination of Ps and Pr to render a.
- If a is a drawing atom, append Pd to script[i] for every frame i of the animation.
- If a is an animation atom, let i be the frame referred to in a, and append Pd to script[i].
(d) Let S be the i-th answer set (i ≥ 1). Create a JavaScript function animate(i-1)() whose body consists of an array drawings, initialized with the content of the script array, and generic JavaScript code executing the statements in drawings[j] when the time to show frame j arrives.
(e) Append animate(i-1) to the end of the body of the mainf function in the script element of HP.
4. Let n be the number of answer sets of P. For each number i ∈ 0..n−1, create a button element containing the number i and associate the click of the button with the animation function animatei(). An example button element is <button onclick="animate2()"> 2 </button>. Append this list of button elements to the end of HP.
End of algorithm.
Moving Box Elaboration under SPARC
In order to use the drawing and animation predicate design of draw(c) and animate(c, I) a SPARC program requires corresponding predicate declarations, which in turn require the definition of many sorts, to establish a basis for animation to occur. In this section, we will first present the predicate declarations and sort definitions, and then discuss how to write a SPARC program for the earlier example of a box moving from (1,70) to (200,70) over 200 frames. We will also add to the moving box example a title, to demonstrate how static drawings can be used together with animations.
Predicate Declarations and Sort Definitions
First let us look at some important parameters for drawing and animation. The values of these parameters are defined using the SPARC directive #const:

#const canvasWidth = 500.
#const canvasHeight = 500.
#const canvasSize = 500.
#const numFrames = 200.
Here we have defined some constants for the dimensions of the canvas and the number of frames; they will be used later in sort definitions, making it easier to understand the purpose of certain sorts. We will be using a canvas with a dimension of 500 by 500 pixels, and we will animate for 200 frames. These values can be changed by programmers according to their needs. Note that canvasSize must be the smaller of canvasWidth and canvasHeight. Now we begin the sorts section:

sorts
#frame = 0..numFrames.
#drawing_command = #draw_line + #draw_quad_curve + #draw_bezier_curve +
                   #draw_arc_curve + #draw_text + #line_width + #text_font +
                   #line_cap + #text_align + #line_color + #text_color +
                   #set_number_of_frames.
We have defined two sorts. The first is a simple set of integers, corresponding to the frames of an animation. We use the sort name frame, but it is important to note that this name and the other sort names introduced below are not predefined and thus can be changed by the programmer, as long as they are changed consistently across the SPARC program. The second sort, drawing_command, defines all the drawing commands introduced earlier in our design. It is the union (represented as + in SPARC) of the sorts defining the shape commands and the style commands. The sort names for the shape commands are prefixed with draw_, and the sort names for the style commands are prefixed with line_ or text_. Let us examine the definitions of these sorts:

% sorts for shape commands
#draw_line = draw_line(#stylename, #col, #row, #col, #row).
% sort to set the number of frames
#set_number_of_frames = set_number_of_frames(#frame).
Each of these sorts takes the form of what is called a record in SPARC.
A record is built from a record name and a list of other sorts. For example, the sort #draw_line defines all shape commands for drawing lines. Recall from Section 4.1 that the line drawing command is of the form draw_line(sn, xs, ys, xe, ye), which draws a line from starting point (xs, ys) to ending point (xe, ye) with style name sn. The record name of the sort #draw_line is draw_line and is followed by the sorts for each parameter: #stylename for sn, #col for xs and xe, and #row for ys and ye. Since the sorts above all contain record names that are recognized by the animation software as specific drawing commands, no record names should be modified, or the results of an executed animation will not be as expected.
Each of the records above uses other basic sorts such as #stylename. We will touch on only a few here. The sort #stylename consists of the names for the styles the programmers would like to apply in their animation later on. The style name sort is important, as it is something a programmer can manipulate freely, to include as many styles for different objects in an animation as desired. For now, we do not have predefined styles and we do not put any names here. The sort #text consists of all the strings that will be used in an animation. As a limitation of SPARC, we are not able to represent strings containing spaces; we approximate a string by a constant. These, like the style name sort elements, are decided upon by the programmer, so we do not include specific elements in the definition above.
The other sorts defined above, as well as all other basic sorts not defined above, have much more restricted values. The #row and #col sorts must contain numerical values, although those values can be decided upon by the programmer, as can all numerical sorts used. For example, the definition of #row as 1..canvasHeight means that the sort #row contains all integers from 1 to the value of the constant canvasHeight. The sort #color contains only a small sample of the colors available; all color names used by a programmer must be from a predefined set of accepted colors. A complete definition of all accepted colors of the #color sort, and of other basic sorts such as the font sort #fontfamily, can be found in the appendix. Moving on from sorts, we may continue with the predicates section:
predicates
% drawing command applies at specified frame
animate(#drawing_command, #frame).
% static drawing command
draw(#drawing_command).
Here we have defined two predicates, one for static drawings and one for animations. They both take a drawing command to execute if a corresponding atom exists in the answer set of the executed program. The animate predicate also takes a frame.
Write a SPARC Program for the Moving Box Example
To write a SPARC program for the moving box example, programmers have to include the definition of the parameters (found in the earlier subsection) and include all the predicate declarations and the associated sort definitions (found either in the earlier subsection or in the appendix) in the predicates section and sorts section. They can simply copy and paste those constructs into the right sections of their program.
The programmers have to populate the two sorts #stylename and #text. For our example, we define them, in the sorts section, as follows:
#stylename = {redline, title}.
#text = {aDemonstrationOfAMovingRedBox}.
The new style name, title, is the style we will use to print the text on screen. The element in the #text sort is the text to show.
The rules section below concludes the example and is responsible for the actual animation of a box moving beneath the demonstration title.

draw(set_number_of_frames(numFrames)).
draw(text_font(title, 18, arial)).
draw(text_color(title, blue)).
draw(draw_text(title, aDemonstrationOfAMovingRedBox, 5, 25)).
animate(line_color(redline, red), I).
animate(draw_line(redline, I+1, 70, I+11, 70), I).
animate(draw_line(redline, I+1, 70, I+1, 60), I).
animate(draw_line(redline, I+1, 60, I+11, 60), I).
animate(draw_line(redline, I+11, 60, I+11, 70), I).
The first line sets the number of frames of the animation. The second and third lines signify that the style name title means a blue arial font of size 18. The next line signifies that aDemonstrationOfAMovingRedBox should be drawn with the style title at position (5, 25); the constant will be shown from (5, 25) in a blue arial font of size 18. This completes the title display. As one can see, static drawings are simple: since they do not occur over time but simply exist on the canvas, the title does not need to be associated with any frames and will be present, as expected, throughout the animation.
The next lines animate the red box from (1, 70) to (200, 70). We begin by setting the style redline to be red. Note that the variable I can take on the value of any item in the sort frame, 0 through numFrames. Thus, one can expect in the answer set one atom per frame that sets the redline style to red.
The next four lines correspond to the four sides of the box. For each frame I, four atoms of the form animate(draw_line(redline, coordinates), I) are expected to exist, corresponding to the four sides of the box, meaning that the animation will include a drawing of four lines at each frame. If one looks closely at the rules, one can see that I is used to calculate the starting and ending x coordinates of the sides of the box. This means that as the frame number increases, so do the starting and ending coordinates, causing the box to appear to move in the positive x direction, which is to the right.
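To illustrate how the animation software consumes such atoms, here is a small Python sketch of a rendering loop. It is illustrative only: the actual onlineSPARC renderer (implemented for the web), its atom format, and its drawing backend may differ.

import re
from collections import defaultdict

def render(answer_set_atoms, draw_command):
    """Group animate/2 atoms by frame and replay them in frame order;
    draw/1 atoms are static and applied to every frame."""
    static = []                    # commands from draw/1 atoms
    frames = defaultdict(list)     # frame index -> commands from animate/2
    for atom in answer_set_atoms:
        compact = atom.replace(" ", "")
        m = re.match(r"animate\((.+),(\d+)\)$", compact)
        if m:
            frames[int(m.group(2))].append(m.group(1))
            continue
        m = re.match(r"draw\((.+)\)$", compact)
        if m:
            static.append(m.group(1))
    for i in sorted(frames):
        for cmd in static + frames[i]:
            draw_command(cmd, i)   # hand each drawing command to the backend

For example, calling render(atoms, print) on the answer set of the moving box program would print the four line-drawing commands for each of the 200 frames, together with the static title commands.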
Design Facilitating Teaching
From the examples given in the earlier subsections, one may see that the programs are unnecessarily complex due to the syntax restrictions of SPARC: a program has to contain the sort definitions and the declarations of the display predicates. However, for teaching purposes, students are expected to focus mainly on the substance of drawing and animation, instead of the tedious sort definitions and declarations of display predicates.
In principle, the sort definitions and declarations of display predicates should be written by their designers, while programmers should simply be able to use them. To follow this principle, we introduce the #include directive, as in the C language, which includes a specified file into the file containing the directive. With this mechanism, the designer can provide a file containing the sort definitions, predicate declarations, and common rules, and the programmer can simply include this file in their program.
For this approach, there is a challenge. As one can see, the sort #stylename (and #text) is expected to be defined by programmers, but it is used by the signatures of the display predicates. It is further complicated by the following expectation: we would like to provide a default set of style names to further simplify the programming tasks for novices so that they can focus on the logic and substance of drawing/animation. To achieve the requirement and expectation above, we introduce a subtype (Pierce 2002), called a subsort here, with the following syntax:

extend S with {c1, ..., cn}.

which extends the sort S with the new constants c1, ..., cn. As an example, consider the following program P1, which assumes that the included file drawing.sp provides the basic sorts, display predicates, and the default style names redPen and blackPen:

#include <drawing.sp>.
sorts
extend #stylename with {myPen}.
extend #text with {drawingAndAnimation}.
rules
draw(text_color(myPen, green)).
draw(draw_text(redPen, drawingAndAnimation, 10, 10)).
draw(draw_text(myPen, drawingAndAnimation, 10, 30)).
In this program, the new style name myPen is introduced using a subsort statement and it is defined as green by the first rule.
Our design is implemented through a preprocessor whose output is a classical SPARC program. When the preprocessor sees the directive to include the file drawing.sp, it includes the contents of the sections of drawing.sp at the beginning of the corresponding sections of file P1. However, the subsort statements are not included verbatim. For each sort name occurring in a subsort statement, the preprocessor maintains a list of all its subsorts. The meaning of a sort with subsorts is the union of all its subsorts. After scanning P1 (and thus all included files), for each sort S with subsorts S1, ..., Sn, the preprocessor inserts the following sort definition at the beginning of the sorts section:
S = S1 + ... + Sn.
In our example, the file (not including comments or the basic drawing/animation sorts and predicates) after preprocessing is:

sorts
#stylename = {redPen, blackPen} + {myPen}.
#text = {drawingAndAnimation}.
predicates
......
rules
draw(text_color(redPen, red)).
draw(text_color(myPen, green)).
% make drawing
draw(draw_text(redPen, drawingAndAnimation, 10, 10)).
draw(draw_text(myPen, drawingAndAnimation, 10, 30)).
Once a sort name occurs in a subsort statement, it will be an error to define this sort name using =.
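The following Python sketch captures the essence of this preprocessing pass. It is our own illustration, not the onlineSPARC implementation: we assume here that drawing.sp itself introduces its default style names through subsort statements (e.g., extend #stylename with {redPen, blackPen}.), so that after expansion one obtains exactly the union definition shown in the preprocessed file above.

import re

def preprocess(program_text, include_dir="."):
    """Minimal sketch of the subsort preprocessor described above; the exact
    section-aware splicing of onlineSPARC is simplified here."""
    # Step 1: expand #include directives (one level, for illustration).
    lines = []
    for line in program_text.splitlines():
        m = re.match(r"#include\s*<(.+)>", line.strip())
        if m:
            with open(f"{include_dir}/{m.group(1)}") as f:
                lines.extend(f.read().splitlines())
        else:
            lines.append(line)

    # Step 2: record subsort statements and drop them from the output.
    subsorts = {}   # sort name -> list of its subsort expressions, in order
    kept = []
    for line in lines:
        m = re.match(r"extend\s+(#\w+)\s+with\s+(\{.*\})\s*\.", line.strip())
        if m:
            subsorts.setdefault(m.group(1), []).append(m.group(2))
        else:
            kept.append(line)

    # Step 3: the meaning of a sort with subsorts is the union of all its
    # subsorts, so emit "S = S1 + ... + Sn." at the start of the sorts section.
    defs = [f"{s} = {' + '.join(parts)}." for s, parts in subsorts.items()]
    out = []
    for line in kept:
        out.append(line)
        if line.strip() == "sorts":
            out.extend(defs)
    return "\n".join(out)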
Default Styles and Default Values of any Style
We will first introduce the default styles onlineSPARC currently offers, give an example using default and user defined styles, and discuss how the default values of the styles (default or user defined) are set using ASP rules.
The current onlineSPARC offers regular, thin, and thick styles. The regular styles include redPen, blackPen, and greenPen. These styles are always associated with the color shown in their name. When they are applied to the draw_text command, they use the arial font with a font size of 11. When they are applied to other drawing commands, they use a line width of 2 points.
The thin styles include redPenThin, blackPenThin, and greenPenThin. They are similar to the regular styles except that they are thinner. When applied to draw_text, they use a font size of 10, and when applied to other drawing commands, they use a line width of 1 point.
The thick styles include redPenThick, blackPenThick, and greenPenThick. They are similar to the regular styles except that they are thicker. When applied to draw_text, they use a font size of 12, and when applied to other drawing commands, they use a line width of 3 points.
To model the example in Section 4.3.2, we have to specify the animation length, i.e., the number of frames of our animation. We define our own constant myFrames for this number. Note that with this local constant, we have to add the constraint on the animation length to each rule. Now a complete program for the example in Section 4.3.2, using the include directive (assuming the header file for drawing and animation is called drawing.sp) and subsorts, is:

#include <drawing.sp>.
#const myFrames = 60.
sorts
extend #stylename with {title}.
extend #text with {aDemonstrationOfAMovingRedBox}.
predicates
rules
draw(set_number_of_frames(myFrames)).
% associate title style with 18-point arial font and blue color
draw(text_color(title, blue)).
draw(text_font(title, 18, arial)).
draw(draw_text(title, aDemonstrationOfAMovingRedBox, 5, 25)).
animate(draw_line(redPen, I+1, 70, I+11, 70), I) :- I <= myFrames.
animate(draw_line(redPen, I+1, 70, I+1, 60), I) :- I <= myFrames.
animate(draw_line(redPen, I+1, 60, I+11, 60), I) :- I <= myFrames.
animate(draw_line(redPen, I+11, 60, I+11, 70), I) :- I <= myFrames.

This new program contains minimal distraction, and the substantial information for drawing and animation stands out.
We will next discuss how the default values of the styles (default or user defined) are set. For a text drawing style, there are four aspects resulting from the style commands: color, font, font size, and alignment. For a line drawing style, there are three aspects: color, line width, and line cap. It is well known that ASP is good at representing defaults. We use the color of a style as an example to illustrate how a default value is associated with the color of a style: normally, the text color of a style is black.
We introduce nonDefaultValueDefined_drawing(X, txtColor) to mean that some non-default value (say, red) has been associated with the text color of style X (through the style command text_color). So, we have the rule:

nonDefaultValueDefined_drawing(X, txtColor) :-
    draw(text_color(X, Y)), Y != black.
A style X has a text color of black if it does not have a non-default color associated:
draw(text_color(X, black)) :-
    not nonDefaultValueDefined_drawing(X, txtColor).
We have similar rules for the styles related to the animate predicate:
nonDefaultValueDefined_animation(X, txtColor, I) :-
    animate(text_color(X, Y), I), Y != black.
animate(text_color(X, black), I) :-
    not nonDefaultValueDefined_animation(X, txtColor, I).
The default values of styles are defined as follows. The default value of the color (text or line) of a style is black, that of font and font size are arial and 11 point, that of text alignment is left, that of line width is 2 points, and that of line cap is butt.
Finally, one may note that we allow defining a style using the animate predicate. That means the same style name may refer to different values of its properties (e.g., color or font) in different frames. It allows one to use the same style name to represent changing properties (which might not be known a priori) without the need to introduce a style name for each of the changing properties. An example is a moving line growing fat: the growing effect is achieved by increasing the width of the line. Two rules are needed (assuming the number of frames is smaller than 100):
% specify the style growingLine
animate(line_width(growingLine, J), I) :- J = I/6 + 1.
% draw a line using the growingLine style
animate(draw_line(growingLine, 2*I+1, 110, 2*I+71, 110), I).
By default, a style defined by the draw predicate will be used only for the drawing commands inside the draw predicate. However, sometimes we may want the styles defined using draw to be usable in frames. In this case, at any frame i, for any style s and property p, we would like to use the value v of property p of style s as defined in draw, unless p of s takes a value other than v by animate at frame i. This expectation can be represented naturally by ASP rules. The following is an example for the line color property. The atom styleDefinedInFrame(X, lineColor, I) means that style X has a line color defined in frame I that is different from the one defined for X in the draw predicate. The first rule is a straightforward definition of styleDefinedInFrame. The second rule says that the color of style X is also Value at frame I if Value is the color of X by draw, and there is no style command defining the color of X to be different from Value in frame I.

styleDefinedInFrame(X, lineColor, I) :-
    animate(line_color(X, V1), I),
    draw(line_color(X, V2)),
    V1 != V2.
animate(line_color(X, Value), I) :-
    draw(line_color(X, Value)),
    not styleDefinedInFrame(X, lineColor, I).

Programmers can include such rules in their program. In <drawing.sp>, we have introduced general rules that make a style defined by draw usable in any frame, in the manner illustrated above.
More Drawing/Animation Example Programs
More example programs for drawing and animation can be found at https://goo.gl/nLD4LD. Some of the programs use the extended SPARC and some use the original SPARC. Some examples show different ways to write drawing and animation programs using the original SPARC.
Conclusion and Discussion
When we reached out to a local high school several years ago, even with the great tool ASPIDE, we needed an experienced student to communicate with the school lab several times before the final installation of the software on their computers could be completed. A carefully drafted document was prepared for students to install the software on their own computers. There were still unexpected issues when students used or installed the software at home, and thus they lost the opportunity to practice programming outside of class. The flow of teaching the class was often interrupted by problems or issues associated with the use of the tools. Thus, the strong technical support needed for the management and use of the tools inside and outside of class was, and still is, prohibitive for teaching ASP to general undergraduate or K-12 students.
With the availability of our online environment, we now only need to focus on teaching the content of ASP without worrying about the technical support. We hope our environment, and other online environments for knowledge representation systems, will expand the teaching of knowledge representation to a much wider audience in the future.
The drawing and animation features are relatively new features of onlineSPARC and have not been tested in high school teaching. However, we used them in a senior-year course, special topics in AI, in Spring 2017 at Texas Tech University. Students demonstrated strong interest in drawing and animation (more than in the representation of, and reasoning with, a family) and were able to produce interesting animations. In the example link given earlier, we include a program produced by a team in this class: a vivid animation demonstrating geometric transformations, including translation, reflection, and rotation. The instructor provided the team with only the necessary information on drawing and animation, no more than is presented here in Section 4.3. (We did not have the include directive and subsort statement then.) The team found the topic and project idea themselves (the context is that every team in the class was asked to find and solve problems from science, math, and the arts at the secondary school level).
Unlike the drawing and animation features, we have been using the general online environment in our teaching of AI classes at both the undergraduate and graduate levels and in our outreach to local school districts, including middle and high schools. The outreach includes offering short-term lessons or demonstrations to teachers and district administrators. onlineSPARC was first installed on an Amazon AWS server and later on a local server. According to Google Analytics, from May 2016 to September 2017, 1,206 new users were added, with accesses from 41 countries. The top three countries with the most accesses are the USA, the UK, and Russia.
We noted that it can be very slow for ASP solvers to produce the answer sets of an animation program when the space of its ground instances is big. The space depends on the canvas size, the number of frames, and the number of parameters of a drawing command. As an example, assume we have a canvas size of 1000 and produce 1000 frames. If we use the following atom in the head of a rule:

animate(draw_bezier_curve(redPen, X1, Y1, X2, Y2, X3, Y3, X4, Y4), I),
where (X1, Y1), ..., (X4, Y4) are four points and I is a frame index, the number of possible ground instances will be on the order of 1000^9. We would like to see research progress on any aspect of dealing with ASP programs with a large space of ground instances.
The include directive and subsort statements are currently only part of the onlineSPARC environment, not part of the official SPARC language. The current needs of drawing and animation programs provide compelling reasons to reexamine SPARC to see how it can best be refined to support them. The very preliminary work reported in this paper on the preprocessor-based extension of SPARC indicates some promising directions for refining SPARC. In the future, we would like to conduct a thorough and rigorous study of introducing the subtype (subsort) and the #include directive into SPARC. We would also like to examine the use of type inference in SPARC, which we believe may enjoy the benefits of both the world of sorted ASP and that of non-sorted ASP, while also providing students better learning experiences and providing support for the development and maintenance of programs for practical applications in the real world.
It is not technically hard to support other ASP systems, such as DLV or Clingo, in onlineSPARC. Adding them to onlineSPARC may further promote its use among a much wider audience. Since our display predicates are nothing more than ordinary predicates of an ASP program, they can be used directly in a DLV or Clingo program. Our rendering algorithm, with minimal changes to account for the minor differences between the format of the answer sets of the SPARC solver and those of the DLV/Clingo solvers, can be applied to DLV or Clingo programs.
Error and warning messages from a programming environment are important for programming learners in general. The current onlineSPARC simply passes the error report from the SPARC solver on to the user. It is ongoing work to make the error report friendlier and more understandable to a general audience, and to make correction suggestions for some syntax and semantic errors (e.g., a typo in a constant of a sort).
There has been a notable effort in designing tools to help debug ASP programs (e.g., (Dodaro et al. 2015)). It will be interesting to see how well those tools can be integrated into an online ASP environment and how the integrated environment may help students in learning programming.
| 9,185 |
1809.08400
|
2892371805
|
Collaborative filtering (CF) has been successfully employed by many modern recommender systems. Conventional CF-based methods use the user-item interaction data as the sole information source to recommend items to users. However, CF-based methods are known to suffer from cold start and data sparsity problems. Hybrid models that utilize auxiliary information on top of interaction data have increasingly gained attention. A few "collaborative learning"-based models, which tightly bridge two heterogeneous learners through mutual regularization, have recently been proposed for hybrid recommendation. However, the "collaboration" in the existing methods is actually asynchronous due to the alternating optimization of the two learners. Leveraging recent advances in variational autoencoders (VAE), we propose a model consisting of two streams of mutually linked VAEs, named the variational collaborative model (VCM). Unlike the mutual regularization used in previous works, where the two learners are optimized asynchronously, VCM enables a synchronous collaborative learning mechanism. Besides, the two-stream VAE setup allows VCM to fully leverage the Bayesian probabilistic representations in collaborative learning. Extensive experiments on three real-life datasets have shown that VCM outperforms several state-of-the-art methods.
|
Compared to the CF-based approach, the hybrid model relies on two sources of information to mitigate the sparsity problem. Based on how tightly the interaction data and auxiliary information are integrated, hybrid models can be divided into two subcategories: loosely coupled and tightly coupled methods @cite_12 . Loosely coupled methods combine the output from separate collaborative and content-based systems into a final recommendation via a linear combination @cite_23 or a voting scheme @cite_14 . Tightly coupled methods take the processed auxiliary information as a feature of the collaborative method @cite_24 . However, they all assume that the features are good representations, which is usually not the case. Collaborative topic regression (CTR) @cite_10 is a method that explicitly integrates latent Dirichlet allocation (LDA) @cite_4 and PMF for the two sources of information, with promising results. However, its representation ability is limited by the topic model.
|
{
"abstract": [
"",
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.",
"Homophily and stochastic equivalence are two primary features of interest in social networks. Recently, the multiplicative latent factor model (MLFM) is proposed to model social networks with directed links. Although MLFM can capture stochastic equivalence, it cannot model well homophily in networks. However, many real-world networks exhibit homophily or both homophily and stochastic equivalence, and hence the network structure of these networks cannot be modeled well by MLFM. In this paper, we propose a novel model, called generalized latent factor model (GLFM), for social network analysis by enhancing homophily modeling in MLFM. We devise a minorization-maximization (MM) algorithm with linear-time complexity and convergence guarantee to learn the model parameters. Extensive experiments on some real-world networks show that GLFM can effectively model homophily to dramatically outperform state-of-the-art methods.",
"",
"Researchers have access to large online archives of scientific articles. As a consequence, finding relevant papers has become more difficult. Newly formed online communities of researchers sharing citations provides a new way to solve this problem. In this paper, we develop an algorithm to recommend scientific articles to users of an online community. Our approach combines the merits of traditional collaborative filtering and probabilistic topic modeling. It provides an interpretable latent structure for users and items, and can form recommendations about both existing and newly published articles. We study a large subset of data from CiteULike, a bibliography sharing service, and show that our algorithm provides a more effective recommender system than traditional collaborative filtering.",
"Collaborative filtering (CF) is a successful approach commonly used by many recommender systems. Conventional CF-based methods use the ratings given to items by users as the sole source of information for learning to make recommendation. However, the ratings are often very sparse in many applications, causing CF-based methods to degrade significantly in their recommendation performance. To address this sparsity problem, auxiliary information such as item content information may be utilized. Collaborative topic regression (CTR) is an appealing recent method taking this approach which tightly couples the two components that learn from two different sources of information. Nevertheless, the latent representation learned by CTR may not be very effective when the auxiliary information is very sparse. To address this problem, we generalize recent advances in deep learning from i.i.d. input to non-i.i.d. (CF-based) input and propose in this paper a hierarchical Bayesian model called collaborative deep learning (CDL), which jointly performs deep representation learning for the content information and collaborative filtering for the ratings (feedback) matrix. Extensive experiments on three real-world datasets from different domains show that CDL can significantly advance the state of the art."
],
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_24",
"@cite_23",
"@cite_10",
"@cite_12"
],
"mid": [
"",
"1880262756",
"1348457",
"",
"2135790056",
"2950316093"
]
}
|
Variational Collaborative Learning for User Probabilistic Representation
|
With the rapid growth of information online, recommender systems have been playing an increasingly important role in alleviating information overload. Existing models for recommender systems can be broadly classified into three categories (Adomavicius and Tuzhilin 2005): content-based models, CF-based models, and hybrid models. The content-based models (Lang 1995; Pazzani and Billsus 1997) recommend items similar to what the user liked in the past, utilizing user profiles or item descriptions. CF-based methods (Mnih and Salakhutdinov 2008; He et al. 2017; Liang et al. 2018) model user preferences based on historic user-item interactions and recommend what people with similar preferences have liked. Although CF-based models generally achieve higher recommendation accuracy than content-based methods, their accuracy drops significantly in the case of sparse interaction data. Therefore, hybrid methods (Li, Yeung, and Zhang 2011; Wang and Blei 2011), utilizing both interaction data and auxiliary information, have been largely adopted in real-world recommender systems.
Collaborative Deep Learning (CDL) (Wang, Wang, and Yeung 2015) and Collaborative Variational Autoencoder (CVAE) (Li and She 2017) have recently been proposed as unified models to integrate interaction data and auxiliary information and have shown promising results. Both methods leverage probabilistic matrix factorization (PMF) (Mnih and Salakhutdinov 2008) to learn user/item latent factors from interaction data through point estimation. Meanwhile, a stacked denoising autoencoder (SDAE) (Vincent et al. 2010) (or a VAE (Kingma and Welling 2013)) is employed to learn a latent representation from the auxiliary information. The two learners are integrated through mutual regularization, i.e., the latent representation in the SDAE/VAE and the corresponding latent factor in PMF are used to regularize each other. However, the two learners are actually optimized alternately, making the "collaboration" asynchronous: regularization is one-directional in any given iteration. Besides, due to the point-estimate nature of the latent factors in PMF, the regularization fails to fully leverage the Bayesian representation of the latent variable from the SDAE/VAE.
To address the aforementioned problems, we propose a deep generative probabilistic model under the collaborative learning framework, named the variational collaborative model for user preference (VCM). The overall architecture of the model is illustrated in Figure 1. Two parallel extended VAEs are collaboratively employed to simultaneously learn comprehensive representations of the user latent variables from user interaction data and auxiliary review text data.
Unlike CVAE and CDL, which learn separate user/item latent factors with a point-estimation nature through PMF, VCM uses the VAE for CF (Liang et al. 2018) to efficiently infer the variational distribution from interaction data as the probabilistic representation of the user latent variable (without item latent factors). We also provide an alternative interpretation of the Kullback-Leibler (KL) divergence regularization in the VAE for CF: we view it as an upper bound on the amount of information preserved in the variational distribution, which can allocate proper user-level capacity and avoid over-fitting, especially for the sparse signals from inactive users.
Benefiting from the probabilistic representations of both the interaction data and the auxiliary information, we design a synchronous collaborative learning mechanism: unlike the asynchronous "collaboration" of CDL and CVAE, we adopt the KL divergence to make the probabilistic representations learned from the two data views match each other at every iteration of the optimization. Compared with previous works, this provides a simple but more effective way to make information flow between user interaction data and auxiliary user information bidirectionally rather than in one direction. Furthermore, because of the versatility of the VAE, the VCM model is not limited to taking reviews as the auxiliary information: different multimedia modalities, e.g., images and other texts, are unified in the framework. Our contributions can be summarized as follows:
• Unlike previous hybrid models that learn user/item latent factors by attaining maximum a posteriori estimates for interaction data, we propose a two-stream VAE setup to learn the probabilistic representation of the user latent variable and provide user-level capacity.
• Unlike the asynchronous mutual regularization used in previous models, we have the two components learn from each other under a synchronous collaborative learning mechanism, which allows the model to make full use of the Bayesian probabilistic representations from interaction data and auxiliary information.
• Extensive experiments on three real-world datasets have shown that VCM can significantly outperform state-of-the-art models. Ablation studies have further verified that the improvements come from the specific components.
Methodology
Similar to the work in (Hu, Koren, and Volinsky 2008), the recommendation task we address in this paper uses implicit feedback. We use a binary matrix $X \in \mathbb{N}^{U \times I}$ to indicate the click history between users and items (we use the verb "click" for concreteness to indicate any interaction, including "check-in," "purchase," and "watch"). We use $u \in \{1, \ldots, U\}$ to index users and $i \in \{1, \ldots, I\}$ to index items. The lower-case $x_u = [x_{u1}, \ldots, x_{uI}] \in \mathbb{N}^I$ is a binary vector indicating the click history of user $u$ over the items. Each user's reviews are merged into one document; let $Y \in \mathbb{N}^{U \times V}$ be the bag-of-words representation of the review documents of the $U$ users (where $V$ is the size of the vocabulary). We use $v \in \{1, \ldots, V\}$ to index words. The lower-case $y_u = [y_{u1}, \ldots, y_{uV}] \in \mathbb{N}^V$ is a bag-of-words vector with the count of each word in the document of user $u$.
Architecture
The architecture of our proposed model is shown in Figure 1. The model consists of two parallel extended VAEs: one VAE (VAE_x) takes a user's click history $x_u$ as input and outputs a probability distribution over items; the other VAE (VAE_y) takes the user's review text $y_u$ as input and outputs a probability distribution over words. Each VAE uses its encoder to compress the input into a variational distribution, then passes the latent variable sampled from the posterior to its decoder to get the generative distribution for prediction. The KL divergence between the two variational distributions is employed for the cooperation between VAE_x and VAE_y.

Figure 1: VCM model architecture.
Encoders. We assume that the click history $x_u$ can be generated by a user latent variable $z_u \in \mathbb{R}^K$, and that the review document $y_u$ can be generated by another user latent variable $r_u \in \mathbb{R}^K$. We introduce the variational distributions $q_{\phi_x}(z_u|x_u)$ and $q_{\phi_y}(r_u|y_u)$ to approximate the true posteriors $p(z_u|x_u)$ and $p(r_u|y_u)$, which represent the user's click behavior preference and the semantic content of the review document, respectively. Here, we employ the parameterized diagonal Gaussian $\mathcal{N}(\mu_{\phi_x}, \operatorname{diag}\{\sigma^2_{\phi_x}\})$ as $q_{\phi_x}(z_u|x_u)$, and $\mathcal{N}(\mu_{\phi_y}, \operatorname{diag}\{\sigma^2_{\phi_y}\})$ as $q_{\phi_y}(r_u|y_u)$. We define the inference process of the probabilistic encoders as below:

1. Construct vector representations of the observed data for user $u$:
$$j_u = f^{\mathrm{DNN}}_{\phi_x}(x_u), \quad e_u = f^{\mathrm{DNN}}_{\phi_y}(y_u).$$

2. Parameterize the variational distributions over the user latent variables $z_u$ and $r_u$:
$$[\mu_{\phi_x}(x_u), \sigma_{\phi_x}(x_u)] = l_{\phi_x}(j_u) \in \mathbb{R}^{2K}, \quad [\mu_{\phi_y}(y_u), \sigma_{\phi_y}(y_u)] = l_{\phi_y}(e_u) \in \mathbb{R}^{2K}.$$

$f^{\mathrm{DNN}}_{\phi_x}(\cdot)$ and $f^{\mathrm{DNN}}_{\phi_y}(\cdot)$ can be any type of deep neural network (DNN) suitable for the observed data. $l_{\phi_x}(\cdot)$ and $l_{\phi_y}(\cdot)$ are linear transformations computing the parameters of the variational distributions. $\phi_x$ consists of the parameters of $f^{\mathrm{DNN}}_{\phi_x}(\cdot)$ and $l_{\phi_x}$, whereas $\phi_y$ consists of the parameters of $f^{\mathrm{DNN}}_{\phi_y}(\cdot)$ and $l_{\phi_y}$.

Decoders. We define the generation process of the two softmax decoders as below:

1. Draw samples $z_u \in \mathbb{R}^K$ and $r_u \in \mathbb{R}^K$ from the variational posteriors $q_{\phi_x}(z_u|x_u)$ and $q_{\phi_y}(r_u|y_u)$, respectively.

2. Produce the probability distributions over the $I$ items and $V$ words for each user through a DNN and the softmax function:
$$\pi_{ui} = \frac{\exp(f^{\mathrm{DNN}}_{\theta_x}(z_u)_i)}{\sum_{i'=1}^{I} \exp(f^{\mathrm{DNN}}_{\theta_x}(z_u)_{i'})}, \quad p_{uv} = \frac{\exp(f^{\mathrm{DNN}}_{\theta_y}(r_u)_v)}{\sum_{v'=1}^{V} \exp(f^{\mathrm{DNN}}_{\theta_y}(r_u)_{v'})}.$$
3. Reconstruct the data from the two multinomial distributions, respectively:
$$x_u \sim \mathrm{Mult}(N_u, \pi_u), \quad y_u \sim \mathrm{Mult}(W_u, p_u),$$
where $f^{\mathrm{DNN}}_{\theta_x}$ and $f^{\mathrm{DNN}}_{\theta_y}$ are two DNNs with parameters $\theta_x$ and $\theta_y$, $N_u = \sum_{i=1}^{I} x_{ui}$ is the total number of clicks, and $W_u = \sum_{v=1}^{V} y_{uv}$ is the total number of words in the review document of user $u$; the observed data $x_u$ and $y_u$ can be generated from the two multinomial distributions, respectively. Therefore, a suitable goal for learning the distribution of the latent variable $z_u$ is to maximize the marginal log-likelihood of the click behavior data in expectation over the distribution of $z_u$:
$$\max_{\theta_x, \phi_x} \mathbb{E}_{q_{\phi_x}(z_u|x_u)}[\log p_{\theta_x}(x_u|z_u)], \quad \log p_{\theta_x}(x_u|z_u) = \sum_{i=1}^{I} x_{ui} \log \pi_{ui}.$$
A similar likelihood function can be obtained for the review document; we omit the analogous derivation due to space limitations.
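To make the two branches concrete, here is a minimal PyTorch-style sketch of one branch (VAE_x over items; the word branch VAE_y is analogous). The class name, layer sizes, and helper names are our own illustrative choices under the architecture described above, not the authors' released code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchVAE(nn.Module):
    """One VCM branch: a Gaussian encoder q(z|x) = N(mu, diag(sigma^2))
    and a softmax decoder producing a multinomial over the input dimension."""
    def __init__(self, n_input, n_hidden=600, n_latent=100):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_input, n_hidden), nn.Tanh())
        self.to_params = nn.Linear(n_hidden, 2 * n_latent)  # l_phi: [mu, log sigma^2]
        self.dec = nn.Sequential(nn.Linear(n_latent, n_hidden), nn.Tanh(),
                                 nn.Linear(n_hidden, n_input))

    def encode(self, x):
        mu, logvar = self.to_params(self.enc(x)).chunk(2, dim=-1)
        return mu, logvar

    def decode_logprob(self, z):
        # log pi_u: log-softmax over the I items (or the V words)
        return F.log_softmax(self.dec(z), dim=-1)

def multinomial_loglik(x, log_probs):
    # log p(x_u | z_u) = sum_i x_ui * log pi_ui (up to the multinomial constant)
    return (x * log_probs).sum(dim=-1)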
User-level Capacity. We introduce a limitation on $q_{\phi_x}(z_u|x_u)$ to control the capacity allotted to different users. This can be achieved by matching $q_{\phi_x}(z_u|x_u)$ to an uninformative prior, such as the isotropic unit Gaussian used in (Higgins et al. 2016; Higgins et al. 2017). Hence, we get the following constrained optimization problem for the marginal log-likelihood of the click behavior data:
$$\max_{\theta_x, \phi_x} \mathbb{E}_{q_{\phi_x}(z_u|x_u)}[\log p_{\theta_x}(x_u|z_u)] \quad \text{subject to} \quad \mathrm{KL}(q_{\phi_x}(z_u|x_u) \,\|\, p(z_u)) < c_u.$$
$\mathrm{KL}(q_{\phi_x}(z_u|x_u) \,\|\, p(z_u))$ has the property of being zero if the posterior distribution equals the uninformative prior, which means the model learns nothing from the data. Thus, the hidden variable $c_u$ can be seen as an upper bound on the amount of information preserved in the variational distribution of each user's preference. According to the complementary slackness KKT conditions (Kuhn 1951; Karush 1939), solving this optimization problem is equivalent to maximizing the lower bound below:
$$\mathcal{L}_x(\theta_x, \phi_x; x_u, z_u, \beta_x) = \underbrace{\mathbb{E}_{q_{\phi_x}(z_u|x_u)}[\log p_{\theta_x}(x_u|z_u)]}_{\text{reconstruction loss}} - \beta_x \underbrace{\mathrm{KL}(q_{\phi_x}(z_u|x_u) \,\|\, p(z_u))}_{\text{capacity limitation regularization}}$$
So far, we have the lower bound $\mathcal{L}_x$ for VAE_x; a similar process yields the lower bound $\mathcal{L}_y$ for VAE_y:
$$\mathcal{L}_y(\theta_y, \phi_y; y_u, r_u, \beta_y) = \mathbb{E}_{q_{\phi_y}(r_u|y_u)}[\log p_{\theta_y}(y_u|r_u)] - \beta_y \mathrm{KL}(q_{\phi_y}(r_u|y_u) \,\|\, p(r_u))$$
Varying the KKT multipliers $\beta_x, \beta_y$ puts different strength into pushing the variational distributions to align with the unit Gaussian prior. A proper choice of $\beta_x, \beta_y$ balances the trade-off between the reconstruction loss and the capacity limitation.
Collaborative Learning Mechanism. To improve the generalization performance of the variational CF model VAE_x, we use VAE_y as a teacher that provides the review semantic content, in the form of the posterior $q_{\phi_y}$, to guide the learning process of VAE_x. To measure the match between the two posterior distributions $q_{\phi_x}$ and $q_{\phi_y}$, we adopt the KL divergence. The KL distance from $q_{\phi_y}$ to $q_{\phi_x}$ is computed as:
$$\mathrm{KL}(q_{\phi_x}(z_u|x_u) \,\|\, q_{\phi_y}(r_u|y_u)).$$
Algorithm 1: VCM collaborative training with annealed stochastic gradient descent
Input: click matrix $X \in \mathbb{N}^{U \times I}$, bag-of-words representation of reviews $Y \in \mathbb{N}^{U \times V}$, $\beta$, anneal steps
1: Randomly initialize $\phi, \theta$
2: for iteration in anneal steps do
3:   Sample a batch of users $\mathcal{U}$
4:   for all $u \in \mathcal{U}$ do
5:     Compute $z_u$ and $r_u$ via the reparameterization trick
6:     Compute noisy gradients $\nabla_\phi \mathcal{L}$, $\nabla_\theta \mathcal{L}$ with $z_u$ and $r_u$
7:   end for
8:   Average the noisy gradients from the batch
9:   $\beta_x = \beta_y = \min(\beta, \text{iteration}/\text{anneal steps})$
10:  Update $\phi$ and $\theta$ by taking a gradient step with $\beta_x, \beta_y$
11: end for
12: return $\phi, \theta$

Similarly, to improve VAE_y's ability to learn representations of semantic meaning, we use VAE_x as the teacher to provide click behavior preference information, in the form of its posterior $q_{\phi_x}$, to guide VAE_y in capturing the semantic content of the review document, so the KL distance from $q_{\phi_x}$ to $q_{\phi_y}$ is computed as:
$$\mathrm{KL}(q_{\phi_y}(r_u|y_u) \,\|\, q_{\phi_x}(z_u|x_u)).$$
We adopt this bidirectional KL divergence to make the probabilistic representations learned from the two data views match each other, which allows VCM to fully leverage the two probabilistic representations.
Objective Function
We form the objective for user $u$ under the collaborative learning mechanism as follows (the objective for the whole dataset is obtained by averaging the objective over all users):
$$\begin{aligned} \mathcal{L}(\phi, \theta; x_u, z_u, y_u, r_u, \beta_x, \beta_y) = \;& \mathcal{L}_x(\theta_x, \phi_x; x_u, z_u, \beta_x) - \beta_x\, \mathrm{KL}(q_{\phi_x}(z_u|x_u) \,\|\, q_{\phi_y}(r_u|y_u)) \\ +\;& \mathcal{L}_y(\theta_y, \phi_y; y_u, r_u, \beta_y) - \beta_y\, \mathrm{KL}(q_{\phi_y}(r_u|y_u) \,\|\, q_{\phi_x}(z_u|x_u)) \end{aligned}$$
Note that the parameters to optimize are $\phi = \{\phi_x, \phi_y\}$ and $\theta = \{\theta_x, \theta_y\}$.
We can obtain an unbiased estimate of $\mathcal{L}$ by sampling $z_u \sim q_{\phi_x}$ and $r_u \sim q_{\phi_y}$ and then performing stochastic gradient ascent to optimize it. By applying the reparameterization trick (Kingma and Welling 2013), where we sample $\varepsilon \sim \mathcal{N}(0, I_K)$ and reparameterize $z_u = \mu_{\phi_x}(x_u) + \varepsilon \odot \sigma_{\phi_x}(x_u)$ and $r_u = \mu_{\phi_y}(y_u) + \varepsilon \odot \sigma_{\phi_y}(y_u)$, the stochasticity of the sampling process is isolated, and the gradient with respect to $\phi$ can be backpropagated through the sampled $z_u$ and $r_u$. With $\mathcal{L}$ as the final lower bound, we train the two VAEs synchronously at each iteration according to Algorithm 1.
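Putting the pieces together, one training step for the per-user objective $\mathcal{L}$ can be sketched as follows (reusing BranchVAE, multinomial_loglik, and gaussian_kl from the sketches above; the beta annealing follows Algorithm 1, while everything else is an illustrative assumption rather than the authors' code):

import torch

def vcm_loss(vae_x, vae_y, x, y, beta_x, beta_y):
    mu_x, lv_x = vae_x.encode(x)
    mu_y, lv_y = vae_y.encode(y)
    # Reparameterization trick: z = mu + eps * sigma, with eps ~ N(0, I_K).
    z = mu_x + torch.randn_like(mu_x) * (0.5 * lv_x).exp()
    r = mu_y + torch.randn_like(mu_y) * (0.5 * lv_y).exp()
    rec_x = multinomial_loglik(x, vae_x.decode_logprob(z))
    rec_y = multinomial_loglik(y, vae_y.decode_logprob(r))
    zeros = torch.zeros_like(mu_x)
    # beta-weighted KL to the unit-Gaussian prior (user-level capacity).
    prior_x = gaussian_kl(mu_x, lv_x, zeros, zeros)
    prior_y = gaussian_kl(mu_y, lv_y, zeros, zeros)
    # Bidirectional collaborative KL between the two posteriors.
    collab_x = gaussian_kl(mu_x, lv_x, mu_y, lv_y)
    collab_y = gaussian_kl(mu_y, lv_y, mu_x, lv_x)
    # Negate the lower bound so a gradient-descent optimizer maximizes it.
    return -(rec_x - beta_x * (prior_x + collab_x)
             + rec_y - beta_y * (prior_y + collab_y)).mean()

# During training, beta_x = beta_y = min(beta, step / anneal_steps) as in Algorithm 1.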
Prediction
We now describe how we make predictions given a trained model. Given a user's click history $x_u$, we rank all items by the predicted multinomial probability $\pi_u$. The latent variable $z_u$ for $x_u$ is constructed as follows: we simply take the mean,
$$z_u = \mu_{\phi_x}(x_u).$$
We denote this prediction method as VCM.
Benefiting from collaborative learning, our model allows for bidirectional prediction (review2click and click2review). To predict the click behavior corresponding to a user's review semantic content, we infer the latent variable $r_u$ by presenting the review $y_u$ to the encoder of VAE_y; we again simply take the mean $r_u = \mu_{\phi_y}(y_u)$ to construct the latent variable $r_u$, and use the decoder of VAE_x, with $r_u$ as input, to generate the predicted multinomial probability $\pi_u$. Thus, given only a user's review document, our model can encode the text into the latent variable and decode it into click behavior. We denote this cross-domain prediction method as VCM-CD.
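The two prediction modes then amount to choosing which encoder feeds VAE_x's decoder. A sketch under the same assumptions as the snippets above:

import torch

@torch.no_grad()
def predict_vcm(vae_x, x):
    # VCM: encode the click history, take the posterior mean, decode to item scores.
    mu, _ = vae_x.encode(x)
    return vae_x.decode_logprob(mu)        # rank items by this score

@torch.no_grad()
def predict_vcm_cd(vae_x, vae_y, y):
    # VCM-CD: encode the review text with VAE_y's encoder,
    # then decode with VAE_x's decoder (review2click).
    mu, _ = vae_y.encode(y)
    return vae_x.decode_logprob(mu)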
Experiments

Datasets
We experimented with three publicly accessible datasets from various domains with different scales and sparsity.
• Yelp: This data contains users' ratings and reviews of businesses from the Yelp Dataset Challenge 2.
• Amazon Clothing (Clothing): This data contains the consumption records with reviews from Amazon.com. We use the Clothing, Shoes and Jewelry category 5-core (He and McAuley 2016). We only keep users with at least five products in their shopping records and products that are bought by at least 5 users.
• Amazon Movies (Movies): This data (He and McAuley 2016) contains the user-movie ratings with reviews from the Movies and TV 5-core. We only keep users with at least 5 watching records and movies that are watched by at least 10 users.
For each dataset, we binarize the explicit data by keeping ratings of four or higher and interpreting them as implicit feedback. We merge each user's reviews into one document, follow the same process as (Miao, Yu, and Blunsom 2016) to remove stop words from each document, and keep the most common $V = 10{,}000$ words across all documents as the vocabulary. Table 1 summarizes the characteristics of all the datasets after pre-processing.
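A sketch of this preprocessing, with the thresholds as stated above; the tokenizer and the stop-word handling are simplified stand-ins of our own, and the per-dataset 5-core filtering is omitted:

from collections import Counter

def preprocess(ratings, reviews, stop_words, min_rating=4, vocab_size=10000):
    """ratings: iterable of (user, item, rating) triples;
    reviews: dict mapping user -> list of review strings."""
    # Binarize: keep ratings of four or higher as implicit feedback.
    clicks = {(u, i) for u, i, r in ratings if r >= min_rating}
    # Merge each user's reviews into one document and drop stop words.
    docs = {u: [w for text in texts for w in text.lower().split()
                if w not in stop_words]
            for u, texts in reviews.items()}
    # Keep the most common vocab_size words across all documents.
    counts = Counter(w for doc in docs.values() for w in doc)
    vocab = {w for w, _ in counts.most_common(vocab_size)}
    # Bag-of-words counts restricted to the vocabulary (the rows of Y).
    bow = {u: Counter(w for w in doc if w in vocab) for u, doc in docs.items()}
    return clicks, bow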
Metric
We use two ranking-based metrics: the truncated normalized discounted cumulative gain (NDCG@R) and Recall@R.
2 https://www.kaggle.com/c/yelp-recsys-2013/data

For each user, both metrics compare the predicted rank of the held-out items with their true rank. We obtain the predicted rank by sorting the multinomial probability $\pi_u$. Formally, we define $w(r)$ as the item at rank $r$, $\mathbb{I}[\cdot]$ as the indicator function, and $I_u$ as the set of held-out items that user $u$ clicked on.
$$\mathrm{Recall@}R(u, w) = \frac{\sum_{r=1}^{R} \mathbb{I}[w(r) \in I_u]}{\min(R, |I_u|)}$$
The expression in the denominator is the minimum of R and the number of items clicked by user $u$. Recall@R considers all items ranked within the first R to be equally important; it reaches its maximum of 1 when the model ranks all relevant items in the top positions. The truncated discounted cumulative gain (DCG@R) is
$$\mathrm{DCG@}R(u, w) = \sum_{r=1}^{R} \frac{2^{\mathbb{I}[w(r) \in I_u]} - 1}{\log(r + 1)}$$
DCG@R assigns higher scores to hits at higher ranks versus lower ones. NDCG@R is DCG@R linearly normalized to [0, 1] after dividing by the best possible DCG@R, in which all the held-out items are ranked at the top.
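For reference, both metrics can be computed directly from a ranked item list; this straightforward implementation of ours follows the formulas above (including the natural logarithm, as the formula is written, and assuming at least one held-out item per user):

import math

def recall_at_r(ranked_items, held_out, R):
    hits = sum(1 for item in ranked_items[:R] if item in held_out)
    return hits / min(R, len(held_out))

def dcg_at_r(ranked_items, held_out, R):
    return sum((2 ** int(item in held_out) - 1) / math.log(r + 1)
               for r, item in enumerate(ranked_items[:R], start=1))

def ndcg_at_r(ranked_items, held_out, R):
    # Normalize by the best possible DCG@R: all held-out items ranked on top.
    best = dcg_at_r(list(held_out), held_out, R)
    return dcg_at_r(ranked_items, held_out, R) / best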
Baselines
As previous works (Wang, Wang, and Yeung 2015; Li and She 2017; Zheng, Noroozi, and Yu 2017) have demonstrated, the performance of hybrid recommendation with auxiliary information is significantly better than that of CF-based models, so only hybrid models are used for comparison. The baselines included in our comparison are as follows:
• CDL: Collaborative Deep Learning (Wang, Wang, and Yeung 2015) tightly combines the SDAE with PMF. The middle layer of the neural network acts as a bridge between the SDAE and the PMF.
• CVAE: Collaborative Variational Autoencoder (Li and She 2017) is a probabilistic feedforward model for joint learning of a VAE and collaborative filtering. CVAE is a very strong baseline and achieves the best performance among our baseline methods.
• DeepCoNN: Deep Cooperative Neural Networks (Zheng, Noroozi, and Yu 2017) jointly models users and items from textual reviews for rating prediction. To make it comparable, we revise the model to be suitable for implicit feedback with negative sampling (He et al. 2017).
Experimental setup
We randomly split the interaction data into training, validation, and test sets. For each user, we take 60% of the entire click history as $x_u$ and the review document as $y_u$ to train the models. For evaluation, we use 20% of the click history as the validation set to tune hyper-parameters and the remaining 20% of held-out click history as the test set. We take the click history in the training set (VCM prediction) or the review document (VCM-CD prediction) to learn the necessary user representations, and then compute metrics by looking at how well the model ranks the unseen clicked items from the held-out set.
We select model hyper-parameters and architectures by evaluating NDCG@100 on the validation sets. For VCM, we explore multilayer perceptrons (MLP) with 0, 1, and 2 hidden layers, and we find that the best overall architecture is [I → 600 → K → 600 → I] for VAE_x and [V → 500 → K → V] for VAE_y. Moreover, we find that going deeper does not improve performance. We use tanh as the activation function between layers. Note that since the outputs of $l_{\phi_x}$ and $l_{\phi_y}$ are used as the mean and variance of the Gaussian random variables, we do not apply an activation function to them. We apply dropout at the input layer with probability 0.5 for VAE_x. We do not apply weight decay to any part of the model. We train our model using Adam (Kingma and Ba 2015) with a batch size of 128 users for 200 epochs on all datasets. We save the model with the best validation NDCG@100 and report test set metrics with it. For simplicity, we set $\beta_x$ and $\beta_y$ to the same value and anneal them linearly for 40,000 anneal steps, using the schedule described in Algorithm 1. Figure 2 shows the NDCG@100 on the Clothing validation set during training. We also empirically studied the effects of two important parameters of VCM: the latent dimension and the regularization coefficients $\beta_x$ and $\beta_y$. Figure 3 shows the performance of VCM on the validation set of Clothing with K varying from 50 to 250 and $\beta_x, \beta_y$ varying from 0.2 to 1.0 to investigate its sensitivity. As can be seen, the best regularization coefficient is 0.4, and increasing the dimension of the latent space beyond 100 does not improve performance at β = 0.4. Results on Movies and Yelp show the same trend and are thus omitted due to space limitations.

As can be seen, CVAE is a very strong baseline and outperforms the other baselines in most situations. Compared with CDL, the inference network learns a better probabilistic representation of the latent variable for auxiliary information, leading to better performance, while CDL needs an additional noise criterion in the observation space of the auxiliary information, which makes it less robust. The inferior results of DeepCoNN may be due to the fact that it uses only a single learner to learn user/item representations, with only the auxiliary information as input; compared to hybrid models, it therefore cannot capture the implicit relationships between users stored in the interaction data very well.
To focus more specifically on the comparison between CVAE and VCM: although both CVAE and VCM use deep learning models to extract representations from auxiliary information, the proposed VCM achieves better and more robust recommendation, especially for large R. This is because VCM learns the user probabilistic representation through the two-stream VAE setup, instead of learning user/item latent factors through the point estimates of PMF. Besides, the collaborative learning mechanism allows the model to fully leverage the Bayesian deep representations from the two views of information and lets the two learners be optimized synchronously. On the other hand, due to the point nature of the latent factors learned by PMF and the alternating optimization, CVAE fails to achieve this robust performance. VCM-CD, which uses cross-domain inference to make predictions, can achieve better performance than VCM because the review text used here contains more specific information about user preference when the interaction data is extremely sparse. This improvement is especially obvious on the sparsest dataset, Clothing. (For each subplot, a paired t-test is performed, and † indicates statistical significance at p < 0.01 compared to the best baseline. We could not finish DeepCoNN within a reasonable amount of time on Movies.)
Ablation Study
In this subsection, we perform an ablation study to better understand how the collaborative learning mechanism works. We develop:
• VCM-Se: The collaborative learning mechanism of VCM is removed, and VCM is separated into two independent variational models.
• VCM-OD: We first train VAE_y on the reviews alone, without the influence of VAE_x. We then fix VAE_y and train VAE_x with $\mathrm{KL}(q_{\phi_x}(z_u|x_u) \,\|\, q_{\phi_y}(r_u|y_u))$. This means that information can only flow from VAE_y to VAE_x in one direction, which differs from the bidirectional flow of the collaborative learning mechanism.
• VCM-NV: The bidirectional KL regularization in the collaborative learning mechanism is replaced with the constraint $\|\mu_{\phi_x} - \mu_{\phi_y}\|_2^2$, which does not consider the variances $\sigma_{\phi_x}$ and $\sigma_{\phi_y}$ of the probabilistic representations.

The performance of VCM and its variants on Movies, Yelp, and Clothing is given in Table 2. To demonstrate that the cooperation between the two VAEs can enhance recommendation performance, VCM-Se uses two independent VAEs trained without the collaborative learning mechanism. In this manner, we learn the two variational distributions $q_{\phi_x}$ and $q_{\phi_y}$ without considering the informative signal from each other. As can be seen in Table 2, VCM achieves the best performance. This verifies that modeling user preference from two views does augment the performance of VAE_x. To investigate the importance of the bidirectional information flow in the collaborative learning mechanism, VCM-OD is introduced, which considers only one-directional information flow. The performance gap between VCM-OD and VCM suggests that the collaborative synchronous training scheme is better than only using VAE_y to enhance VAE_x. Furthermore, although VCM-NV can also learn probabilistic representations for the two views of data, its constraint, which ignores the variances, prevents the two learners from leveraging all the information stored in the representations, so the performance of VCM-NV drops for the same reason as CVAE.
The impact of collaborative learning on VAE_x
It is natural to wonder how collaborative learning promotes the performance of VAE_x. Intuitively, by modeling the user latent variable with click behavior and review text collaboratively, VCM can learn a more expressive representation than VCM-Se, and therefore VCM should be more robust when a user's click behavior data is scarce. To study this, we break down users into five groups based on their activity level. The activity level represents the number of items each user has clicked on. According to the complementary slackness KKT condition (Kuhn 1951; Karush 1939), we can use $\mathrm{KL}(q_{\phi_x}(z_u|x_u) \,\|\, p(z_u))$ as the approximation of the capacity limitation $\hat{c}_u$ after optimization. It indicates the amount of information stored in the variational distribution. We compute NDCG@10 and $\hat{c}_u$ for each group using VCM and VCM-Se. Table 3 summarizes how performance differs across users of different activity levels.

Table 3: NDCG@10 and the approximate capacity $\hat{c}_u$ for users with increasing levels of activity, where the activity level is measured by how many items a user clicked on. The larger the value of $\hat{c}_u$, the more information the distribution $q_{\phi_x}$ contains. Although details vary across datasets, VCM consistently improves NDCG@10 and $\hat{c}_u$ for users of all levels. The relative improvement is shown in brackets.
It is interesting to find that, as the activity level increases, the variational distribution capacities of VCM and VCM-Se also monotonically increase. This phenomenon shows that, by using VAE_x to learn the probabilistic representation of the user latent variable, both VCM and VCM-Se can automatically allocate a proper user-level capacity for users of different activity levels to store information.
We can also find that the variational distribution capacity of VCM is greater than that of VCM-Se for users of all levels in the three datasets. This shows that the collaborative learning mechanism allows the information in the review text to flow from $q_{\phi_y}$ to $q_{\phi_x}$, which makes $q_{\phi_x}$ more expressive with more information; VAE_x then automatically allocates more capacity to store the more comprehensive information. The increase in capacity between the two models is particularly prominent for users who click only a small number of items (shown in bold in Table 3).
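The per-user capacity approximation $\hat{c}_u$ used in Table 3 can be read directly off the trained encoder. A sketch reusing gaussian_kl from above; the activity bucketing is our own illustration:

import torch

@torch.no_grad()
def capacity_by_activity(vae_x, X, bucket_edges):
    """X: (U, I) binary click matrix; returns the mean KL(q || N(0, I))
    per user-activity bucket."""
    mu, logvar = vae_x.encode(X)
    zeros = torch.zeros_like(mu)
    c_hat = gaussian_kl(mu, logvar, zeros, zeros)   # one capacity value per user
    activity = X.sum(dim=1)                         # number of items clicked
    buckets = torch.bucketize(activity, bucket_edges)
    return {int(b): c_hat[buckets == b].mean().item()
            for b in buckets.unique()}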
Figure 5: Example reviews from two users in the Yelp dataset, with words highlighted according to their probability $p_{uv}$ under VCM-Se and VCM (e.g., User II's review of a music museum, mentioning the live events, the Apollonia music show, and the museum restaurant).

The impact of collaborative learning on VAE_y
The multinomial distribution $p_u$ models the probability of each word appearing in the review document $y_u$ of user $u$. Without collaborative learning, the likelihood of the review document rewards VAE_y only for putting probability mass on the high-frequency words in $y_u$. However, under the influence of VAE_x through the collaborative learning mechanism, VAE_y should also assign more probability mass to the keywords that represent user preference. We highlight words that have high probability $p_{uv}$ in Figure 5. We randomly sample review examples from two users in the Yelp dataset. Words with the highest probability are colored dark green, high-probability words are light green, and low/medium-probability words are not colored. In Figure 5, we compare the $p_u$ of the VCM-Se and VCM models. For convenient comparison, we use blue and red rectangles to emphasize their differences.
For User I, VAE_y of VCM puts more probability on the words "vegetarian," "healthy," "vegan," and "sauce," which suggests that the user may be a vegetarian who pays attention to healthy eating. In contrast, without the collaborative learning mechanism, VAE_y of VCM-Se puts more probability on less meaningful words such as "helpful," "wrong," and "large." A similar result is observed for User II, where the words "music" and "museum" show an obvious preference. This demonstrates that the collaborative learning mechanism has a beneficial influence on both learners: it not only enhances the recommendation performance of VAE_x but also makes VAE_y capture more representative words.
Conclusion
This paper proposes the variational collaborative model, which jointly models the generation of auxiliary information and interaction data collaboratively. It is a deep generative probabilistic model that learns a probabilistic representation of the user latent variable through VAEs, leading to robust recommendation performance. To the best of our knowledge, VCM is the first pure deep learning model that can fully leverage the probabilistic representations learned from different sources of data, thanks to its synchronous collaborative learning mechanism. Experiments have shown that the proposed VCM can significantly outperform state-of-the-art methods for hybrid recommendation, with more robust performance.
| 5,178 |
1809.08400
|
2892371805
|
Collaborative filtering (CF) has been successfully employed by many modern recommender systems. Conventional CF-based methods use the user-item interaction data as the sole information source to recommend items to users. However, CF-based methods are known to suffer from cold start and data sparsity problems. Hybrid models that utilize auxiliary information on top of interaction data have increasingly gained attention. A few "collaborative learning"-based models, which tightly bridge two heterogeneous learners through mutual regularization, have recently been proposed for hybrid recommendation. However, the "collaboration" in the existing methods is actually asynchronous due to the alternating optimization of the two learners. Leveraging recent advances in variational autoencoders (VAE), we propose a model consisting of two streams of mutually linked VAEs, named the variational collaborative model (VCM). Unlike the mutual regularization used in previous works, where the two learners are optimized asynchronously, VCM enables a synchronous collaborative learning mechanism. Besides, the two-stream VAE setup allows VCM to fully leverage the Bayesian probabilistic representations in collaborative learning. Extensive experiments on three real-life datasets have shown that VCM outperforms several state-of-the-art methods.
|
There is also another line of research that utilizes a single learner with only auxiliary information, such as review text, as input for rating regression @cite_9 @cite_17 . DeepCoNN @cite_1 , which models users and items using review text for rating prediction, has shown promising results. Although these methods utilize the word-embedding technique @cite_18 and convolutional neural networks (CNN) @cite_22 to learn good representations of text data, compared to hybrid models they use only a single learner to learn user/item representations, with only the auxiliary information as input, so they cannot capture well the implicit relationships between users stored in the interaction data.
|
{
"abstract": [
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including: part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.",
"Reviews information is dominant for users to make online purchasing decisions in e-commerces. However, the usefulness of reviews is varied. We argue that less-useful reviews hurt model’s performance, and are also less meaningful for user’s reference. While some existing models utilize reviews for improving the performance of recommender systems, few of them consider the usefulness of reviews for recommendation quality. In this paper, we introduce a novel attention mechanism to explore the usefulness of reviews, and propose a Neural Attentional Regression model with Review-level Explanations (NARRE) for recommendation. Specifically, NARRE can not only predict precise ratings, but also learn the usefulness of each review simultaneously. Therefore, the highly-useful reviews are obtained which provide review-level explanations to help users make better and faster decisions. Extensive experiments on benchmark datasets of Amazon and Yelp on different domains show that the proposed NARRE model consistently outperforms the state-of-the-art recommendation approaches, including PMF, NMF, SVD++, HFT, and DeepCoNN in terms of rating prediction, by the proposed attention model that takes review usefulness into consideration. Furthermore, the selected reviews are shown to be effective when taking existing review-usefulness ratings in the system as ground truth. Besides, crowd-sourcing based evaluations reveal that in most cases, NARRE achieves equal or even better performances than system’s usefulness rating method in selecting reviews. And it is flexible to offer great help on the dominant cases in real e-commerce scenarios when the ratings on review-usefulness are not available in the system.",
"A large amount of information exists in reviews written by users. This source of information has been ignored by most of the current recommender systems while it can potentially alleviate the sparsity problem and improve the quality of recommendations. In this paper, we present a deep model to learn item properties and user behaviors jointly from review text. The proposed model, named Deep Cooperative Neural Networks (DeepCoNN), consists of two parallel neural networks coupled in the last layers. One of the networks focuses on learning user behaviors exploiting reviews written by the user, and the other one learns item properties from the reviews written for the item. A shared layer is introduced on the top to couple these two networks together. The shared layer enables latent factors learned for users and items to interact with each other in a manner similar to factorization machine techniques. Experimental results demonstrate that DeepCoNN significantly outperforms all baseline recommender systems on a variety of datasets.",
"Recently, many e-commerce websites have encouraged their users to rate shopping items and write review texts. This review information has been very useful for understanding user preferences and item properties, as well as enhancing the capability to make personalized recommendations of these websites. In this paper, we propose to model user preferences and item properties using convolutional neural networks (CNNs) with dual local and global attention, motivated by the superiority of CNNs to extract complex features. By using aggregated review texts from a user and aggregated review text for an item, our model can learn the unique features (embedding) of each user and each item. These features are then used to predict ratings. We train these user and item networks jointly which enable the interaction between users and items in a similar way as matrix factorization. The local attention provides us insight on a user's preferences or an item's properties. The global attention helps CNNs focus on the semantic meaning of the whole review text. Thus, the combined local and global attentions enable an interpretable and better-learned representation of users and items. We validate the proposed models by testing on popular review datasets in Yelp and Amazon and compare the results with matrix factorization (MF), the hidden factor and topical (HFT) model, and the recently proposed convolutional matrix factorization (ConvMF+). Our proposed CNNs with dual attention model outperforms HFT and ConvMF+ in terms of mean square errors (MSE). In addition, we compare the user item embeddings learned from these models for classification and recommendation. These results also confirm the superior quality of user item embeddings learned from our model."
],
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_9",
"@cite_1",
"@cite_17"
],
"mid": [
"2950133940",
"2952230511",
"2788376297",
"2575006718",
"2749348810"
]
}
|
Variational Collaborative Learning for User Probabilistic Representation
|
With the rapid growth of information online, recommender systems have been playing an increasingly important role in alleviating information overload. Existing models for recommender systems can be broadly classified into three categories (Adomavicius and Tuzhilin 2005): content-based models, CF-based models, and hybrid models. Content-based models (Lang 1995; Pazzani and Billsus 1997) recommend items similar to what the user liked in the past, utilizing user profiles or item descriptions. CF-based methods (Mnih and Salakhutdinov 2008; He et al. 2017; Liang et al. 2018) model user preferences based on historic user-item interactions and recommend what people with similar preferences have liked. Although CF-based models generally achieve higher recommendation accuracy than content-based methods, their accuracy drops significantly in the case of sparse interaction data. Therefore, hybrid methods (Li, Yeung, and Zhang 2011; Wang and Blei 2011), utilizing both interaction data and auxiliary information, have been largely adopted in real-world recommender systems.
Collaborative Deep Learning (CDL) (Wang, Wang, and Yeung 2015) and the Collaborative Variational Autoencoder (CVAE) (Li and She 2017) have recently been proposed as unified models that integrate interaction data and auxiliary information, and have shown promising results. Both methods leverage probabilistic matrix factorization (PMF) (Mnih and Salakhutdinov 2008) to learn user/item latent factors from interaction data through point estimation. Meanwhile, a stacked denoising autoencoder (SDAE) (Vincent et al. 2010) (or a VAE (Kingma and Welling 2013)) is employed to learn a latent representation of the auxiliary information. The two learners are integrated through mutual regularization, i.e., the latent representation in the SDAE/VAE and the corresponding latent factor in PMF regularize each other. However, the two learners are actually optimized alternately, making the "collaboration" asynchronous: the regularization is one-directional in any given iteration. Besides, due to the point-estimate nature of the latent factors in PMF, the regularization fails to fully leverage the Bayesian representation of the latent variable from the SDAE/VAE.
To address the aforementioned problems, we propose a deep generative probabilistic model under the collaborative learning framework, named the variational collaborative model for user preference (VCM). The overall architecture of the model is illustrated in Figure 1. Two parallel extended VAEs are collaboratively employed to simultaneously learn comprehensive representations of the user latent variable from user interaction data and auxiliary review text data.
Unlike CVAE and CDL, which learn separate user/item latent factors with a point-estimate nature through PMF, VCM uses the VAE for CF (Liang et al. 2018) to efficiently infer the variational distribution from interaction data as the probabilistic representation of the user latent variable (without items). We also provide an alternative interpretation of the Kullback-Leibler (KL) divergence regularization in the VAE for CF: we view it as an upper bound on the amount of information preserved in the variational distribution, which allocates a proper user-level capacity and avoids over-fitting, especially for the sparse signals from inactive users.
Benefiting from the probabilistic representations of both the interaction data and the auxiliary information, we design a synchronous collaborative learning mechanism: unlike the asynchronous "collaboration" of CDL and CVAE, we adopt KL divergence to make the probabilistic representations learned from the two data views match each other at every iteration of the optimization. Compared with previous works, this provides a simple but more effective way to make information flow between user interaction data and auxiliary user information in both directions rather than one. Furthermore, because of the versatility of VAEs, the VCM model is not limited to taking reviews as the auxiliary information; different multimedia modalities, e.g., images and other texts, are unified in the framework. Our contributions can be summarized as follows:
• Unlike previous hybrid models that learn user/item latent factors by attaining maximum a posteriori estimates for interaction data, we propose a two-stream VAE setup to learn a probabilistic representation of the user latent variable that provides user-level capacity.
• Unlike the asynchronous mutual regularization used in previous models, the two components learn from each other under a synchronous collaborative learning mechanism, which allows the model to make full use of the Bayesian probabilistic representations from interaction data and auxiliary information.
• Extensive experiments on three real-world datasets show that VCM significantly outperforms state-of-the-art models. Ablation studies further verify that the improvements come from the specific components.
Methodology
Similar to the work in (Hu, Koren, and Volinsky 2008), the recommendation task addressed in this paper accepts implicit feedback. We use a binary matrix $X \in \mathbb{N}^{U \times I}$ to indicate the click¹ history between users and items. We use $u \in \{1, \dots, U\}$ to index users and $i \in \{1, \dots, I\}$ to index items. The lower-case $x_u = [x_{u1}, \dots, x_{uI}] \in \mathbb{N}^{I}$ is a binary vector indicating the click history over items for user $u$. Each user's reviews are merged into one document; let $Y \in \mathbb{N}^{U \times V}$ be the bag-of-words representation of the review documents of the $U$ users (where $V$ is the size of the vocabulary). We use $v \in \{1, \dots, V\}$ to index words. The lower-case $y_u = [y_{u1}, \dots, y_{uV}] \in \mathbb{N}^{V}$ is a bag-of-words vector with the count of each word in the document of user $u$.
Architecture
The architecture of our proposed model is shown in Figure 1. The model consists of two parallel extended VAEs: one VAE ($\mathrm{VAE}_x$) takes a user's click history $x_u$ as input and outputs a probability distribution over items; the other ($\mathrm{VAE}_y$) takes the user's review text $y_u$ as input and outputs a probability distribution over words. Each VAE uses its encoder to compress the input into a variational distribution, then passes the latent variable sampled from the posterior to its decoder to obtain the generative distribution used for prediction. The KL divergence between the two variational distributions is employed for the cooperation between $\mathrm{VAE}_x$ and $\mathrm{VAE}_y$.

¹ We use the verb "click" for concreteness to indicate any interaction, including "check-in," "purchase," and "watch."

[Figure 1: VCM model architecture.]
Encoders. We assume that the click history $x_u$ can be generated by a user latent variable $z_u \in \mathbb{R}^K$, and that the review document $y_u$ can be generated by another user latent variable $r_u \in \mathbb{R}^K$. We introduce the variational distributions $q_{\phi_x}(z_u|x_u)$ and $q_{\phi_y}(r_u|y_u)$ to approximate the true posteriors $p(z_u|x_u)$ and $p(r_u|y_u)$, which represent the user's click-behavior preference and the semantic content of the review document, respectively. We employ the parameterized diagonal Gaussian $\mathcal{N}(\mu_{\phi_x}, \mathrm{diag}\{\sigma^2_{\phi_x}\})$ as $q_{\phi_x}(z_u|x_u)$ and $\mathcal{N}(\mu_{\phi_y}, \mathrm{diag}\{\sigma^2_{\phi_y}\})$ as $q_{\phi_y}(r_u|y_u)$. The inference process of the probabilistic encoders is defined as follows:

1. Construct vector representations of the observed data for user $u$:
$$j_u = f^{\mathrm{DNN}}_{\phi_x}(x_u), \quad e_u = f^{\mathrm{DNN}}_{\phi_y}(y_u).$$
2. Parameterize the variational distributions over the user latent variables $z_u$ and $r_u$:
$$[\mu_{\phi_x}(x_u), \sigma_{\phi_x}(x_u)] = l_{\phi_x}(j_u) \in \mathbb{R}^{2K}, \quad [\mu_{\phi_y}(y_u), \sigma_{\phi_y}(y_u)] = l_{\phi_y}(e_u) \in \mathbb{R}^{2K}.$$

$f^{\mathrm{DNN}}_{\phi_x}(\cdot)$ and $f^{\mathrm{DNN}}_{\phi_y}(\cdot)$ can be any type of deep neural network (DNN) suitable for the observed data. $l_{\phi_x}(\cdot)$ and $l_{\phi_y}(\cdot)$ are linear transformations computing the parameters of the variational distributions. $\phi_x$ consists of the parameters of $f^{\mathrm{DNN}}_{\phi_x}(\cdot)$ and $l_{\phi_x}$, whereas $\phi_y$ consists of the parameters of $f^{\mathrm{DNN}}_{\phi_y}(\cdot)$ and $l_{\phi_y}$.

Decoders. The generation process of the two softmax decoders is defined as follows:

1. Draw samples $z_u \in \mathbb{R}^K$ and $r_u \in \mathbb{R}^K$ from the variational posteriors $q_{\phi_x}(z_u|x_u)$ and $q_{\phi_y}(r_u|y_u)$, respectively.
2. Produce the probability distributions over the $I$ items and $V$ words for each user through a DNN and the softmax function:
$$\pi_{ui} = \frac{\exp(f^{\mathrm{DNN}}_{\theta_x}(z_u)_i)}{\sum_{i'}^{I} \exp(f^{\mathrm{DNN}}_{\theta_x}(z_u)_{i'})}, \quad p_{uv} = \frac{\exp(f^{\mathrm{DNN}}_{\theta_y}(r_u)_v)}{\sum_{v'}^{V} \exp(f^{\mathrm{DNN}}_{\theta_y}(r_u)_{v'})}.$$
3. Reconstruct the data from the two multinomial distributions, respectively:
$$x_u \sim \mathrm{Mult}(N_u, \pi_u), \quad y_u \sim \mathrm{Mult}(W_u, p_u),$$

where $f^{\mathrm{DNN}}_{\theta_x}$ and $f^{\mathrm{DNN}}_{\theta_y}$ are two DNNs with parameters $\theta_x$ and $\theta_y$, $N_u = \sum_i^I x_{ui}$ is the total number of clicks, and $W_u = \sum_v^V y_{uv}$ is the total number of words in the review document of user $u$. A suitable goal for learning the distribution of the latent variable $z_u$ is therefore to maximize the marginal log-likelihood of the click-behavior data in expectation over the distribution of $z_u$:
$$\max_{\theta_x, \phi_x} \mathbb{E}_{q_{\phi_x}(z_u|x_u)}[\log p_{\theta_x}(x_u|z_u)], \quad \log p_{\theta_x}(x_u|z_u) = \sum_i^I x_{ui} \log \pi_{ui}.$$
A similar likelihood function can be derived for the review document; we omit the analogous derivation due to space limitations.
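To make the likelihood concrete: since $\log p_{\theta_x}(x_u|z_u)$ is a masked sum of log-softmax outputs, it is a one-liner in practice. Below is a minimal PyTorch-style sketch (not the authors' code; all names are illustrative):

```python
import torch
import torch.nn.functional as F

def multinomial_log_likelihood(logits, x):
    """log p(x_u | z_u) = sum_i x_ui * log pi_ui.

    logits: (batch, I) unnormalized decoder outputs f_theta(z_u)
    x:      (batch, I) binary click vectors
    """
    log_pi = F.log_softmax(logits, dim=-1)  # log of the softmax probabilities pi
    return (x * log_pi).sum(dim=-1)         # (batch,) per-user log-likelihood
```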
User-level Capacity. We introduce a constraint over $q_{\phi}(z_u|x_u)$ to control the capacity allotted to different users. This can be achieved by matching $q_{\phi}(z_u|x_u)$ with an uninformative prior, such as the isotropic unit Gaussian used in (Higgins et al. 2016; Higgins et al. 2017). Hence, we obtain the constrained optimization problem for the marginal log-likelihood of the click-behavior data:
$$\max_{\theta_x, \phi_x} \mathbb{E}_{q_{\phi_x}(z_u|x_u)}[\log p_{\theta_x}(x_u|z_u)] \quad \text{subject to} \quad \mathrm{KL}(q_{\phi_x}(z_u|x_u) \,\|\, p(z_u)) < c_u.$$
$\mathrm{KL}(q_{\phi_x}(z_u|x_u) \,\|\, p(z_u))$ has the property of being zero if the posterior distribution equals the uninformative prior, which means the model has learned nothing from the data. Thus, the hidden variable $c_u$ can be seen as an upper bound on the amount of information preserved in the variational distribution of each user's preference. According to the complementary slackness KKT conditions (Kuhn 1951; Karush 1939), solving this optimization problem is equivalent to maximizing the lower bound
$$\mathcal{L}_x(\theta_x, \phi_x; x_u, z_u, \beta_x) = \underbrace{\mathbb{E}_{q_{\phi_x}(z_u|x_u)}[\log p_{\theta_x}(x_u|z_u)]}_{\text{reconstruction loss}} - \beta_x \underbrace{\mathrm{KL}(q_{\phi_x}(z_u|x_u) \,\|\, p(z_u))}_{\text{capacity-limitation regularization}}.$$
This gives the lower bound $\mathcal{L}_x$ for $\mathrm{VAE}_x$; an analogous derivation yields the lower bound $\mathcal{L}_y$ for $\mathrm{VAE}_y$:
$$\mathcal{L}_y(\theta_y, \phi_y; y_u, r_u, \beta_y) = \mathbb{E}_{q_{\phi_y}(r_u|y_u)}[\log p_{\theta_y}(y_u|r_u)] - \beta_y\, \mathrm{KL}(q_{\phi_y}(r_u|y_u) \,\|\, p(r_u)).$$
Varying the KKT multipliers $\beta_x, \beta_y$ changes how strongly the variational distributions are pushed to align with the unit Gaussian prior. A proper choice of $\beta_x, \beta_y$ balances the trade-off between the reconstruction loss and the capacity limitation.
Collaborative Learning Mechanism. To improve the generalization performance of the variational CF model $\mathrm{VAE}_x$, we use $\mathrm{VAE}_y$ as a teacher that provides review semantic content, in the form of the posterior probability $q_{\phi_y}$, to guide the learning process of $\mathrm{VAE}_x$. To measure the match between the two posterior distributions $q_{\phi_x}$ and $q_{\phi_y}$, we adopt the KL divergence. The KL distance from $q_{\phi_y}$ to $q_{\phi_x}$ is computed as
$$\mathrm{KL}(q_{\phi_x}(z_u|x_u) \,\|\, q_{\phi_y}(r_u|y_u)).$$
Algorithm 1: VCM collaborative training with annealed stochastic gradient descent.
Input: click matrix $X \in \mathbb{N}^{U \times I}$, bag-of-words review matrix $Y \in \mathbb{N}^{U \times V}$, $\beta$, anneal steps
1: Randomly initialize $\phi$, $\theta$
2: for iteration in anneal steps do
3:    Sample a batch of users $\mathcal{U}$
4:    for all $u \in \mathcal{U}$ do
5:        Compute $z_u$ and $r_u$ via the reparameterization trick
6:        Compute noisy gradients $\nabla_\phi \mathcal{L}$, $\nabla_\theta \mathcal{L}$ with $z_u$ and $r_u$
7:    end for
8:    Average the noisy gradients over the batch
9:    $\beta_x = \beta_y = \min(\beta, \text{iteration}/\text{anneal steps})$
10:   Update $\phi$ and $\theta$ by taking a gradient step with $\beta_x$, $\beta_y$
11: end for
12: return $\phi$, $\theta$

Similarly, to improve $\mathrm{VAE}_y$'s ability to learn representations of semantic meaning, we use $\mathrm{VAE}_x$ as a teacher that provides click-behavior preference information, in the form of its posterior $q_{\phi_x}$, to guide $\mathrm{VAE}_y$ in capturing the semantic content of the review document; the KL distance from $q_{\phi_x}$ to $q_{\phi_y}$ is computed as
$$\mathrm{KL}(q_{\phi_y}(r_u|y_u) \,\|\, q_{\phi_x}(z_u|x_u)).$$
We adopt this bi-directional KL divergence to make the probabilistic representations learned from the two data views match each other, which allows VCM to fully leverage both probabilistic representations.
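Both posteriors are diagonal Gaussians, so each direction of this bi-directional term has a closed form. A minimal sketch, assuming the encoders output per-dimension means and log-variances (illustrative names, not the authors' code):

```python
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ), summed over dimensions."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=-1)

# The collaborative term is then gaussian_kl(mu_x, logvar_x, mu_y, logvar_y)
# plus the symmetric gaussian_kl(mu_y, logvar_y, mu_x, logvar_x).
```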
Objective Function
We form the objective for user $u$ with the collaborative learning mechanism as follows (the objective over the dataset is obtained by averaging the per-user objectives):
$$\mathcal{L}(\phi, \theta; x_u, z_u, y_u, r_u, \beta_x, \beta_y) = \mathcal{L}_x(\theta_x, \phi_x; x_u, z_u, \beta_x) - \beta_x\, \mathrm{KL}(q_{\phi_x}(z_u|x_u) \,\|\, q_{\phi_y}(r_u|y_u)) + \mathcal{L}_y(\theta_y, \phi_y; y_u, r_u, \beta_y) - \beta_y\, \mathrm{KL}(q_{\phi_y}(r_u|y_u) \,\|\, q_{\phi_x}(z_u|x_u)).$$
Note that the parameters to be optimized are $\phi = \{\phi_x, \phi_y\}$ and $\theta = \{\theta_x, \theta_y\}$.
We can obtain an unbiased estimate of $\mathcal{L}$ by sampling $z_u \sim q_{\phi_x}$ and $r_u \sim q_{\phi_y}$ and then performing stochastic gradient ascent. Using the reparameterization trick (Kingma and Welling 2013), we sample $\varepsilon \sim \mathcal{N}(0, I_K)$ and reparameterize $z_u = \mu_{\phi_x}(x_u) + \varepsilon \odot \sigma_{\phi_x}(x_u)$ and $r_u = \mu_{\phi_y}(y_u) + \varepsilon \odot \sigma_{\phi_y}(y_u)$; the stochasticity of the sampling process is thereby isolated, and the gradient with respect to $\phi$ can be backpropagated through the sampled $z_u$ and $r_u$. With $\mathcal{L}$ as the final lower bound, we train the two VAEs synchronously at each iteration according to Algorithm 1.
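Putting the pieces together, one training step of the joint objective might look like the sketch below, which reuses the hypothetical multinomial_log_likelihood and gaussian_kl helpers from the earlier snippets and assumes encoder modules returning (mean, log-variance) pairs; it mirrors the structure of Algorithm 1 but is not the authors' implementation:

```python
import torch

def reparameterize(mu, logvar):
    eps = torch.randn_like(mu)              # eps ~ N(0, I_K)
    return mu + eps * (0.5 * logvar).exp()  # z = mu + eps (elementwise) sigma

def vcm_step(x, y, enc_x, dec_x, enc_y, dec_y, beta):
    mu_x, logvar_x = enc_x(x)
    mu_y, logvar_y = enc_y(y)
    z = reparameterize(mu_x, logvar_x)
    r = reparameterize(mu_y, logvar_y)

    rec_x = multinomial_log_likelihood(dec_x(z), x)      # E_q[log p(x|z)]
    rec_y = multinomial_log_likelihood(dec_y(r), y)      # E_q[log p(y|r)]

    zeros = torch.zeros_like(mu_x)
    kl_x  = gaussian_kl(mu_x, logvar_x, zeros, zeros)    # KL(q_x || N(0, I))
    kl_y  = gaussian_kl(mu_y, logvar_y, zeros, zeros)    # KL(q_y || N(0, I))
    kl_xy = gaussian_kl(mu_x, logvar_x, mu_y, logvar_y)  # collaborative terms
    kl_yx = gaussian_kl(mu_y, logvar_y, mu_x, logvar_x)

    lower_bound = rec_x + rec_y - beta * (kl_x + kl_y + kl_xy + kl_yx)
    return -lower_bound.mean()                           # minimize the negative bound
```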
Prediction
We now describe how we make predictions given a trained model. Given a user's click history $x_u$, we rank all items based on the predicted multinomial probability $\pi_u$. The latent variable $z_u$ for $x_u$ is constructed by simply taking the mean:
$$z_u = \mu_{\phi_x}(x_u).$$
We denote this prediction method as VCM.
Benefiting from collaborative learning, our model allows bi-directional prediction (review-to-click and click-to-review). To predict the click behavior corresponding to a user's review semantic content, we infer the latent variable $r_u$ by presenting the review $y_u$ to the encoder of $\mathrm{VAE}_y$; we again simply take the mean $r_u = \mu_{\phi_y}(y_u)$ and use the decoder of $\mathrm{VAE}_x$ with $r_u$ as input to generate the predicted multinomial probability $\pi_u$. Thus, given only a user's review document, our model can encode the text into the latent variable and decode it into click behavior. We denote this cross-domain prediction method as VCM-CD.
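The two prediction modes therefore differ only in which encoder feeds the click decoder. A sketch, with the same hypothetical modules as above:

```python
import torch

@torch.no_grad()
def predict_vcm(x_u, enc_x, dec_x):
    mu_x, _ = enc_x(x_u)                       # z_u = mu_phi_x(x_u); no sampling at test time
    return torch.softmax(dec_x(mu_x), dim=-1)  # pi_u: ranking scores over all items

@torch.no_grad()
def predict_vcm_cd(y_u, enc_y, dec_x):
    mu_y, _ = enc_y(y_u)                       # r_u = mu_phi_y(y_u)
    return torch.softmax(dec_x(mu_y), dim=-1)  # cross-domain: review -> click probabilities
```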
Experiments

Datasets
We experimented with three publicly accessible datasets from various domains with different scales and sparsity.
• Amazon Clothing (Clothing): consumption records with reviews from Amazon.com. We use the Clothing, Shoes and Jewelry category 5-core (He and McAuley 2016). We only keep users with at least five products in their shopping record and products bought by at least 5 users.
• Amazon Movies (Movies): this data (He and McAuley 2016) contains user-movie ratings with reviews from the Movies and TV 5-core. We only keep users with at least 5 watching records and movies played by at least 10 users.
For each dataset, we binarize the explicit data by keeping ratings of four or higher and interpret them as implicit feedback. We merge each user's reviews into one document, follow the same procedure as (Miao, Yu, and Blunsom 2016) to remove stop words from each document, and keep the most common $V = 10{,}000$ words across all documents as the vocabulary. Table 1 summarizes the characteristics of all datasets after pre-processing.
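For illustration, the described preprocessing (ratings of four or higher become implicit clicks; merged reviews are reduced to the 10,000 most common non-stop words) could be sketched as follows; the data layouts and names are assumptions, not the authors' pipeline:

```python
from collections import Counter

def binarize_ratings(ratings, threshold=4):
    """ratings: iterable of (user, item, stars) -> set of implicit (user, item) clicks."""
    return {(u, i) for u, i, s in ratings if s >= threshold}

def build_vocab(user_docs, stopwords, V=10_000):
    """user_docs: {user: merged review text}; keep the V most common non-stop words."""
    counts = Counter(
        w for doc in user_docs.values()
        for w in doc.lower().split() if w not in stopwords
    )
    return [w for w, _ in counts.most_common(V)]
```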
Metric
We use two ranking-based metrics: the truncated normalized discounted cumulative gain (NDCG@R) and Recall@R.
² https://www.kaggle.com/c/yelp-recsys-2013/data

For each user, both metrics compare the predicted rank of the held-out items with their true rank. We obtain the predicted rank by sorting the multinomial probability $\pi_u$. Formally, we define $\omega(r)$ as the item at rank $r$, $\mathbb{I}[\cdot]$ as the indicator function, and $I_u$ as the set of held-out items that user $u$ clicked on.
$$\mathrm{Recall@}R(u, \omega) = \frac{\sum_{r=1}^{R} \mathbb{I}[\omega(r) \in I_u]}{\min(R, |I_u|)}$$
The denominator is the minimum of $R$ and the number of items clicked by user $u$. Recall@$R$ considers all items ranked within the first $R$ to be equally important, and it reaches its maximum of 1 when the model ranks all relevant items at the top. The truncated discounted cumulative gain (DCG@$R$) is
$$\mathrm{DCG@}R(u, \omega) = \sum_{r=1}^{R} \frac{2^{\mathbb{I}[\omega(r) \in I_u]} - 1}{\log(r + 1)}$$
DCG@$R$ assigns higher scores to higher ranks versus lower ones. NDCG@$R$ is DCG@$R$ linearly normalized to $[0, 1]$ by dividing by the best possible DCG@$R$, obtained when all held-out items are ranked at the top.
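Both metrics are straightforward to compute from a ranked item list. A sketch (log base 2 is assumed for the DCG discount, as is common; the paper does not state the base):

```python
import numpy as np

def recall_at_r(ranked_items, held_out, R):
    hits = sum(1 for w in ranked_items[:R] if w in held_out)
    return hits / min(R, len(held_out))

def ndcg_at_r(ranked_items, held_out, R):
    dcg = sum((2 ** (ranked_items[r] in held_out) - 1) / np.log2(r + 2)
              for r in range(min(R, len(ranked_items))))
    # Ideal DCG: all held-out items ranked at the very top.
    idcg = sum(1.0 / np.log2(r + 2) for r in range(min(R, len(held_out))))
    return dcg / idcg
```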
Baselines
As previous works (Wang, Wang, and Yeung 2015; Li and She 2017; Zheng, Noroozi, and Yu 2017) have demonstrated, the performance of hybrid recommendation with auxiliary information is significantly better than that of pure CF-based models, so only hybrid models are used for comparison. The baselines included in our comparison are as follows:
• CDL: Collaborative Deep Learning (Wang, Wang, and Yeung 2015) tightly combines an SDAE with PMF. The middle layer of the neural network acts as a bridge between the SDAE and the PMF.
• CVAE: the Collaborative Variational Autoencoder (Li and She 2017) is a probabilistic feedforward model for joint learning of a VAE and collaborative filtering. CVAE is a very strong baseline and achieves the best performance among our baseline methods.
• DeepCoNN: Deep Cooperative Neural Networks (Zheng, Noroozi, and Yu 2017) jointly model users and items from textual reviews for rating prediction. To make it comparable, we adapt the model to implicit feedback with negative sampling (He et al. 2017).
Experimental setup
We randomly split the interaction data into training, validation, and test sets. For each user, we take 60% of the entire click history as $x_u$ and the review document as $y_u$ to train the models. For evaluation, we use 20% of the click history as the validation set to tune hyper-parameters and the remaining 20% of held-out click history as the test set. We can take the click history in the training set (VCM prediction) or the review document (VCM-CD prediction) to infer the necessary user representations, and then compute the metrics by examining how well the model ranks the unseen items from the held-out set.
We select model hyper-parameters and architectures by evaluating NDCG@100 on the validation sets. For VCM, we explore multilayer perceptrons (MLPs) with 0, 1, and 2 hidden layers, and we find the best overall architecture for VCM to be $[I \to 600 \to K \to 600 \to I]$ for $\mathrm{VAE}_x$ and $[V \to 500 \to K \to V]$ for $\mathrm{VAE}_y$; going deeper does not improve performance. We use tanh as the activation function between layers. Note that since the outputs of $l_{\phi_x}$ and $l_{\phi_y}$ are used as the mean and variance of Gaussian random variables, we do not apply an activation function to them. We apply dropout at the input layer with probability 0.5 for $\mathrm{VAE}_x$. We do not apply weight decay to any part. We train our model using Adam (Kingma and Ba 2015) with a batch size of 128 users for 200 epochs on all datasets. We save the model with the best validation NDCG@100 and report test-set metrics with it. For simplicity, we set $\beta_x$ and $\beta_y$ to the same value and anneal them linearly for 40,000 anneal steps, using the schedule described in Algorithm 1. Figure 2 shows the NDCG@100 on the Clothing validation set during training.

We also empirically studied the effects of two important parameters of VCM: the latent dimension and the regularization coefficients $\beta_x$ and $\beta_y$. Figure 3 shows the performance of VCM on the validation set of Clothing with $K$ varying from 50 to 250 and $\beta_x, \beta_y$ from 0.2 to 1.0 to investigate its sensitivity. The best regularization coefficient is 0.4, and performance does not improve when the dimension of the latent space exceeds 100 at $\beta = 0.4$. Results on Movies and Yelp show the same trend and are omitted due to space limitations.

As can be seen, CVAE is a very strong baseline and outperforms the other baselines in most situations. Compared with CDL, its inference network learns a better probabilistic representation of the latent variable for the auxiliary information, leading to better performance, whereas CDL needs an additional noise criterion in the auxiliary-information observation space, which makes it less robust. The inferior results of DeepCoNN may be due to the fact that it uses only a single learner to learn user/item representations, with only auxiliary information as input; unlike hybrid models, it therefore cannot capture well the implicit relationships between users stored in the interaction data.
Focusing on the comparison between CVAE and VCM, we can see that although both use deep learning models to extract representations of the auxiliary information, the proposed VCM achieves better and more robust recommendation, especially for large $R$. This is because VCM learns the user probabilistic representation through the two-stream VAE setup instead of learning user/item latent factors through the point estimates of PMF. Besides, the collaborative learning mechanism allows the model to fully leverage the Bayesian deep representations from the two views of information and lets the two learners be optimized synchronously. Due to the point nature of the latent factors learned by PMF and the alternating optimization, CVAE fails to achieve this robust performance. VCM-CD, which uses cross-domain inference to make predictions, can achieve better performance than VCM because the review text contains more specific information about user preference when the interaction data is extremely sparse; this improvement is especially pronounced on the sparsest dataset, Clothing. (For each subplot, a paired t-test is performed, and † indicates statistical significance at p < 0.01 compared to the best baseline. We could not finish DeepCoNN within a reasonable amount of time on Movies.)
Ablation Study
In this subsection, we conduct an ablation study to better understand how the collaborative learning mechanism works. We develop the following variants:
• VCM-Se: the collaborative learning mechanism of VCM is removed, separating VCM into two independent variational models.
• VCM-OD: we first train $\mathrm{VAE}_y$ on the reviews alone, without the influence of $\mathrm{VAE}_x$. We then fix $\mathrm{VAE}_y$ and train $\mathrm{VAE}_x$ with $\mathrm{KL}(q_{\phi_x}(z_u|x_u) \,\|\, q_{\phi_y}(r_u|y_u))$. Information can thus flow only from $\mathrm{VAE}_y$ to $\mathrm{VAE}_x$, in one direction, in contrast to the bi-directional flow of the collaborative learning mechanism.
• VCM-NV: the bi-directional KL regularization of the collaborative learning mechanism is replaced with the constraint $\|\mu_{\phi_x} - \mu_{\phi_y}\|_2^2$, which does not consider the variances $\sigma_{\phi_x}$ and $\sigma_{\phi_y}$ of the probabilistic representations.

The performance of VCM and its variants on Movies, Yelp, and Clothing is given in Table 2. To demonstrate that the cooperation between the two VAEs enhances recommendation performance, VCM-Se uses two independent VAEs trained without the collaborative learning mechanism; in this manner, the two variational distributions $q_{\phi_x}$ and $q_{\phi_y}$ are learned without the informative signal from each other. As shown in Table 2, VCM achieves the best performance, verifying that modeling user preference from two views does augment the performance of $\mathrm{VAE}_x$. To investigate the importance of the bi-directional information flow in the collaborative learning mechanism, VCM-OD is introduced, which considers only one-directional information flow; the performance gap between VCM-OD and VCM suggests that the collaborative synchronous training scheme is better than merely using $\mathrm{VAE}_y$ to enhance $\mathrm{VAE}_x$. Furthermore, although VCM-NV can also learn probabilistic representations for the two data views, its constraint ignores the variance, so the two learners cannot leverage all the information stored in the representations, and the performance of VCM-NV drops for the same reason as CVAE.
The impact of collaborative learning on $\mathrm{VAE}_x$
It is natural to wonder how collaborative learning promotes the performance of $\mathrm{VAE}_x$. Intuitively, by modeling the user latent variable from click behavior and review text collaboratively, VCM can learn a more expressive representation than VCM-Se, and could therefore be more robust when a user's click-behavior data is scarce. To study this, we break users down into five groups based on their activity level, where the activity level is the number of items each user has clicked on. According to the complementary slackness KKT conditions (Kuhn 1951; Karush 1939), we can use $\mathrm{KL}(q_{\phi_x}(z_u|x_u) \,\|\, p(z_u))$ as the approximation of the capacity limitation $\hat{c}_u$ after optimization; it indicates the amount of information stored in the variational distribution. We compute NDCG@10 and $\hat{c}_u$ for each group using VCM and VCM-Se. Table 3 summarizes how performance differs across users of different activity levels.

[Table 3: NDCG@10 and the capacity approximation $\hat{c}_u$ for users with increasing levels of activity, where activity is measured by how many items a user clicked on. The larger $\hat{c}_u$ is, the more information the distribution $q_{\phi_x}$ contains. Although details vary across datasets, VCM consistently improves NDCG@10 and $\hat{c}_u$ for users of all levels; relative improvements are shown in brackets.]
It is interesting to find that, as the activity level increases, the variational-distribution capacity of both VCM and VCM-Se also monotonically increases. This shows that, by using $\mathrm{VAE}_x$ to learn a probabilistic representation of the user latent variable, both VCM and VCM-Se can automatically allocate a proper user-level capacity to store information for users of different activity levels.
We also find that the variational-distribution capacity of VCM is greater than that of VCM-Se for users of all levels on the three datasets. This shows that the collaborative learning mechanism lets information in the review text flow from $q_{\phi_y}$ to $q_{\phi_x}$, which makes $q_{\phi_x}$ more expressive, and $\mathrm{VAE}_x$ then automatically allocates more capacity to store the more comprehensive information. The gain in capacity is particularly prominent for users who click only a small number of items (shown in bold in Table 3).

[Figure 5 shows an example review ("The live events are excellent. You can spend days in this museum if you love music so be prepared! try to catch the Apollonia music show that happens everyday. The museum restaurant was also really impressive. We had a chestnut soup that was spectacular.") with word probabilities highlighted under VCM-Se and VCM.]

The impact of collaborative learning on $\mathrm{VAE}_y$
The multinomial distribution $p_u$ models the probability of each word appearing in the review document $y_u$ of user $u$. Without collaborative learning, the likelihood of the review document rewards $\mathrm{VAE}_y$ only for putting probability mass on the high-frequency words in $y_u$. With the influence of $\mathrm{VAE}_x$ under the collaborative learning mechanism, however, $\mathrm{VAE}_y$ should also assign probability mass to the keywords that represent user preference. We highlight words that have high probability $p_{uv}$ in Figure 5, using review examples randomly sampled from two users in the Yelp dataset. Words with the highest probability are colored dark green, high-probability words are light green, and low/medium-probability words are not colored. In Figure 5, we compare the $p_u$ of the VCM-Se and VCM models; for convenient comparison, blue and red rectangles emphasize their differences.
For user I, $\mathrm{VAE}_y$ of VCM puts more probability on the words "vegetarian," "healthy," "vegan," and "sauce," which suggest that the user may be a vegetarian who pays attention to healthy eating. Without the collaborative learning mechanism, $\mathrm{VAE}_y$ of VCM-Se puts more probability on less informative words such as "helpful," "wrong," and "large." A similar result is observed for user II, where the words "music" and "museum" show an obvious preference. This demonstrates that the collaborative learning mechanism benefits both learners: it not only enhances the recommendation performance of $\mathrm{VAE}_x$ but also makes $\mathrm{VAE}_y$ capture more representative words.
Conclusion
This paper proposes the variational collaborative model, which jointly models the generation of auxiliary information and interaction data collaboratively. It is a deep generative probabilistic model that learns a probabilistic representation of the user latent variable through VAEs, leading to robust recommendation performance. To the best of our knowledge, VCM is the first pure deep learning model that can fully leverage the probabilistic representations learned from different sources of data, thanks to its synchronous collaborative learning mechanism. Experiments show that the proposed VCM significantly outperforms state-of-the-art methods for hybrid recommendation with more robust performance.
| 5,178 |
1906.06142
|
2949302783
|
This research attempts to construct a network that can convert online and offline handwritten characters to each other. The proposed network consists of two Variational Auto-Encoders (VAEs) with a shared latent space. The VAEs are trained to generate online and offline handwritten Latin characters simultaneously. In this way, we create a cross-modal VAE (Cross-VAE). During training, the proposed Cross-VAE is trained to minimize the reconstruction loss of the two modalities, the distribution loss of the two VAEs, and a novel third loss called the space sharing loss. This third loss, the space sharing loss, is used to encourage the modalities to share the same latent space by calculating the distance between the latent variables. Through the proposed method, mutual conversion of online and offline handwritten characters is possible. In this paper, we demonstrate the performance of the Cross-VAE through qualitative and quantitative analysis.
|
Recently, two approaches that use neural networks to learn latent representations have become popular: Encoder-Decoders and Generative Adversarial Networks (GANs) @cite_4 . Encoder-Decoders, such as the Autoencoder @cite_6 , compress data by encoding the inputs into a latent vector, which is then decompressed by the decoder. The Autoencoder is trained by minimizing the difference between the input and the output of the decoder. GANs take the opposite approach: a generator, similar to a decoder, constructs data, and a discriminator is used to maximize the authenticity of the generated data. Where Encoder-Decoders learn the latent representations directly, GANs learn to construct data from random latent representations.
|
{
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Autoencoders play a fundamental role in unsupervised learning and in deep architectures for transfer learning and other tasks. In spite of their fundamental role, only linear autoencoders over the real numbers have been solved analytically. Here we present a general mathematical framework for the study of both linear and non-linear autoencoders. The framework allows one to derive an analytical treatment for the most non-linear autoencoder, the Boolean autoencoder. Learning in the Boolean autoencoder is equivalent to a clustering problem that can be solved in polynomial time when the number of clusters is small and becomes NP complete when the number of clusters is large. The framework sheds light on the different kinds of autoencoders, their learning complexity, their horizontal and vertical composability in deep architectures, their critical points, and their fundamental connections to clustering, Hebbian learning, and information theory."
],
"cite_N": [
"@cite_4",
"@cite_6"
],
"mid": [
"2099471712",
"2617585083"
]
}
|
Modality Conversion of Handwritten Patterns by Cross Variational Autoencoders
|
Handwritten characters inherently have two modalities: image and temporal trajectory. This is because a handwritten character image is composed of one or more strokes, and each stroke is originally generated as a temporal trajectory along the pen movement. This dual-modality is essential and unique to handwritten characters. Therefore, we can expect unique and more accurate recognition methods and applications by utilizing the dual-modality of handwritten characters. This expectation emphasizes the need for methodologies that convert one modality to the other.
Modality conversion from a temporal trajectory to an image is called inking. For multi-stroke character recognition, inking is a reasonable strategy to remove stroke-order variations. In the past, many hybrid character recognition methods (e.g., [1]) have been proposed, where two recognition engines are used for the original trajectory pattern and its "inked" image, respectively. In other methods (e.g., [2]), the local direction of the temporal trajectory is embedded into the inked image as an extra feature channel.
Modality conversion from a handwritten character image to a temporal trajectory representing the stroke writing order is called stroke recovery [3]. Compared to inking, stroke recovery is far more difficult because it is the inverse problem of inferring the lost temporal information from a handwritten image.
In this paper, we propose a Cross-Variational Autoencoder (Cross-VAE), a neural network-based modality conversion method for handwritten characters. The Cross-VAE has the ability to convert a handwritten character image into its original temporal trajectory and vice versa. In other words, the Cross-VAE realizes stroke recovery as well as inking by itself. This means that the Cross-VAE can manage the dual-modality of handwritten characters.

[Figure 1: Outline of the proposed Cross-VAE for modality conversion of handwritten characters. Two VAEs are prepared for the two modalities, i.e., bitmap image and temporal trajectory, and co-trained so that their latent variables become the same for the same handwritten character in different modalities. The trained Cross-VAE realizes inking and stroke recovery, as indicated by the orange and purple paths, respectively.]
As shown in Fig. 1, the Cross-VAE is composed of two VAEs. Each VAE [4] is a generative model decomposed into two neural networks: an encoder that obtains a latent variable $z$ from data $X$, and a decoder that obtains an output $Y$ close to $X$ from $z$, i.e., $X \sim Y$. In general, the dimensionality of $z$ is lower than that of $X$ and $Y$, so the latent variable $z$ represents the fundamental information of $X$ in a compressed manner. One VAE of the Cross-VAE is trained on handwritten character images (i.e., image $X_b \to z_b \to$ image $Y_b\,(\sim X_b)$) and the other is trained on temporal writing trajectories (i.e., temporal trajectory $X_t \to z_t \to$ temporal trajectory $Y_t\,(\sim X_t)$). Note that the suffixes $b$ and $t$ indicate bitmap image and temporal trajectory, respectively.
The technical highlight of the Cross-VAE is that these two VAEs are trained by considering the dual-modality of handwritten characters. Assume the input image $X_b$ is generated from a temporal trajectory $X_t$ by inking; then we expect their corresponding latent variables to be the same, that is, $z_b = z_t$, because $X_b$ and $X_t$ are the same handwritten character in different modalities and thus their fundamental information should be the same. Consequently, if we can co-train the two VAEs under the condition $z_b = z_t$, we realize, for example, stroke recovery by the following steps:
$$X_b \to z_b = z_t \to Y_t\,(\sim X_t).$$
The main contributions of this paper are summarized as follows:
• A cross-modal VAE is proposed for online and offline handwriting conversion. The Cross-VAE is the combination of two VAEs of different modalities with a shared latent space and a dual-modality training process.
• A novel loss function called the space sharing loss is introduced. The space sharing loss encourages the latent variables of the VAEs to use the same latent space. The shared latent space is what allows an input modality to be represented by both output modalities simultaneously.
• Quantitative and qualitative analyses are performed on the proposed method. We show that the Cross-VAE successfully models both online and offline handwriting and can be used for cross-modal conversion.
III. CROSS-MODAL VARIATIONAL AUTOENCODER (CROSS-VAE)
VAEs [4] are Autoencoders which use a variational Bayesian approach to learn the latent representation. VAEs have been used to generate time series data [19], including speech synthesis [20] and language generation [21]. They have also been used for image data [22] and data augmentation [23], [24].
A. Variational Autoencoder (VAE)
A VAE [4] consists of an encoder and a decoder. Given an input $X \in \mathbb{R}^I$, the encoder estimates the posterior distribution of a latent variable $z \in \mathbb{R}^J$. The decoder, in turn, generates an output $Y \in \mathbb{R}^I$ based on a latent variable sampled from the estimated posterior distribution. The VAE is trained end-to-end using a combination of the reconstruction loss $\mathcal{L}_{\mathrm{RE}}$ and the distribution loss $\mathcal{L}_{\mathrm{KL}}$:
$$\mathcal{L}_{\mathrm{VAE}} = \mathcal{L}_{\mathrm{KL}} + \mathcal{L}_{\mathrm{RE}}. \tag{1}$$
The reconstruction loss $\mathcal{L}_{\mathrm{RE}}$ is the cross-entropy between the input and the output of the decoder. It is determined by:
$$\mathcal{L}_{\mathrm{RE}} = -\sum_{i=1}^{I} X_i \log Y_i + (1 - X_i)\log(1 - Y_i), \tag{2}$$
assuming that $Y$ follows a multivariate Bernoulli distribution. In Eq. (2), $X_i$ and $Y_i$ are the $i$-th elements of $X$ and $Y$, respectively. The difference from a traditional Autoencoder or Encoder-Decoder network is that the VAE models the latent space with a Gaussian model and uses a variational lower bound to infer the posterior distribution of the latent variable. This is done by including a loss between the latent variables and the unit Gaussian distribution. Specifically, the distribution loss $\mathcal{L}_{\mathrm{KL}}$ is based on the Kullback-Leibler (KL) divergence:
$$\mathcal{L}_{\mathrm{KL}} = -\frac{1}{2}\sum_{j=1}^{J}\left(1 + \log(\sigma_j^2) - \mu_j^2 - \sigma_j^2\right), \tag{3}$$
assuming that the prior distribution of the latent variable $z$ follows the multivariate Gaussian distribution $\mathcal{N}(0, I)$. In Eq. (3), $\mu$ and $\sigma^2$ are the mean and variance of the posterior distribution of $z$.
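Eqs. (2) and (3) translate directly into code. A minimal PyTorch-style sketch (illustrative, not the authors' implementation):

```python
import torch.nn.functional as F

def reconstruction_loss(Y, X):
    # Eq. (2): Bernoulli cross-entropy between input X and decoder output Y, per sample.
    return F.binary_cross_entropy(Y, X, reduction="none").flatten(1).sum(dim=1)

def distribution_loss(mu, logvar):
    # Eq. (3): KL( N(mu, diag(sigma^2)) || N(0, I) ), per sample.
    return -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1)
```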
B. Cross-VAE
We propose the use of a cross-modal VAE (Cross-VAE) to perform online and offline handwritten character conversion, as illustrated in Fig. 2. The network in red is a VAE for online handwritten characters and the network in blue is for offline handwritten characters. The Cross-VAE is constructed by joining two different single-modality VAEs into one multi-modal VAE with a shared cross-modal latent space. Furthermore, we use a cross-modal loss function to ensure that the latent space is shared between the modalities.

[Figure 2: Details of the proposed Cross-VAE. $X_t$ is a time series input and $X_b$ is an image input. The illustrations of the time series $X_t$, $Y_{t\to t}$, and $Y_{b\to t}$ are colored from pink to yellow according to their sequence order. $\mathcal{L}_{\mathrm{KL}}$ is the distribution loss, $\mathcal{L}_{\mathrm{RE}}$ is the reconstruction loss, and $\mathcal{L}_{\mathrm{LS}}$ is the space sharing loss. $Y_{t\to t}$ and $Y_{b\to b}$ are the intra-modal outputs and $Y_{t\to b}$ and $Y_{b\to t}$ are the cross-modal outputs.]
During training, the two modalities are trained simultaneously. A time series input $X_t$ and an image input $X_b$ are fed to the encoders, and four outputs are extracted from the decoders. For the inputs $X_t$ and $X_b$, there are respective time series outputs $Y_{t\to t}$ and $Y_{b\to t}$, and respective image outputs $Y_{t\to b}$ and $Y_{b\to b}$. The outputs $Y_{t\to t}$ and $Y_{b\to b}$ are intra-modal and the outputs $Y_{t\to b}$ and $Y_{b\to t}$ are cross-modal.
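A sketch of this four-output forward pass, with hypothetical encoder/decoder modules enc_t, dec_t (time series) and enc_b, dec_b (image); the VAE sampling of the latents is omitted for brevity:

```python
def cross_vae_forward(x_t, x_b, enc_t, dec_t, enc_b, dec_b):
    z_t = enc_t(x_t)         # latent code of the online (trajectory) input
    z_b = enc_b(x_b)         # latent code of the offline (image) input
    outputs = {
        "t->t": dec_t(z_t),  # intra-modal reconstruction of the trajectory
        "b->b": dec_b(z_b),  # intra-modal reconstruction of the image
        "t->b": dec_b(z_t),  # cross-modal: inking
        "b->t": dec_t(z_b),  # cross-modal: stroke recovery
    }
    return outputs, z_t, z_b
```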
The loss function of the Cross-VAE is:
$$\mathcal{L}_{\mathrm{Cross}} = \mathcal{L}_{\mathrm{KL}} + \mathcal{L}_{\mathrm{RE}} + \mathcal{L}_{\mathrm{LS}}, \tag{4}$$
where $\mathcal{L}_{\mathrm{KL}}$ is the distribution loss and $\mathcal{L}_{\mathrm{RE}}$ is the reconstruction loss, as described in Section III-A. The third loss, $\mathcal{L}_{\mathrm{LS}}$, is the proposed space sharing loss. Because training uses the two inputs $X_t$ and $X_b$, two latent representations are created, $z_t$ and $z_b$, respectively; the traditional VAE losses $\mathcal{L}_{\mathrm{KL}}$ and $\mathcal{L}_{\mathrm{RE}}$ therefore need to be modified for the Cross-VAE. Owing to the two latent representations, the total distribution loss $\mathcal{L}_{\mathrm{KL}}$ combines the individual distribution losses $\mathcal{L}_{\mathrm{KL}(t)}$ and $\mathcal{L}_{\mathrm{KL}(b)}$:
$$\mathcal{L}_{\mathrm{KL}} = \alpha \mathcal{L}_{\mathrm{KL}(t)} + \beta \mathcal{L}_{\mathrm{KL}(b)}, \tag{5}$$
where $\alpha$ and $\beta$ are weights. The distribution loss of each input modality is calculated using Eq. (3). Next, the reconstruction loss $\mathcal{L}_{\mathrm{RE}}$ takes into account the reconstructions $Y_{t\to t}$ and $Y_{b\to b}$ as well as the conversions $Y_{t\to b}$ and $Y_{b\to t}$. Thus:
$$\mathcal{L}_{\mathrm{RE}} = \gamma_{t\to t} \mathcal{L}_{\mathrm{RE}(t\to t)} + \gamma_{b\to b} \mathcal{L}_{\mathrm{RE}(b\to b)} + \gamma_{t\to b} \mathcal{L}_{\mathrm{RE}(t\to b)} + \gamma_{b\to t} \mathcal{L}_{\mathrm{RE}(b\to t)}, \tag{6}$$
where $\mathcal{L}_{\mathrm{RE}(t\to t)}$ and $\mathcal{L}_{\mathrm{RE}(b\to t)}$ are the losses computed via Eq. (2) against input $X_t$, and $\mathcal{L}_{\mathrm{RE}(b\to b)}$ and $\mathcal{L}_{\mathrm{RE}(t\to b)}$ against input $X_b$. Also, $\gamma_{t\to t}$, $\gamma_{b\to b}$, $\gamma_{t\to b}$, and $\gamma_{b\to t}$ are the weights of the respective losses.
C. Space Sharing Loss
While the Cross-VAE is trained using the combination of the reconstruction and distribution losses for the different modalities, we propose a space sharing loss function to encourage the latent variables to share the same latent space. The space sharing loss $\mathcal{L}_{\mathrm{LS}}$ is the squared error between the latent variable $z_t$ obtained from the online-character VAE and the latent variable $z_b$ of the offline-character VAE. Specifically:
$$\mathcal{L}_{\mathrm{LS}} = \delta \frac{1}{2} \|z_t - z_b\|^2, \tag{7}$$
where $\delta$ is a weight and $\|\cdot\|$ is the Euclidean norm.
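Eq. (7) is a simple squared-distance penalty between the two latent codes. A minimal PyTorch-style sketch (illustrative names):

```python
def space_sharing_loss(z_t, z_b, delta=1.0):
    # Eq. (7): weighted half squared Euclidean distance between the latent codes.
    return delta * 0.5 * (z_t - z_b).pow(2).sum(dim=-1)
```

The total loss of Eq. (4) is then the sum of this term with the weighted distribution and reconstruction terms of Eqs. (5) and (6).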
IV. ONLINE AND OFFLINE CONVERSION OF HANDWRITTEN CHARACTERS USING CROSS-VAE
A. Dataset
For the experiment, we used handwritten uppercase characters from the Unipen online handwritten character dataset [25]. The online handwritten characters consist of time series of (x, y) coordinates. The online characters were normalized to fit within a square bounded by (0, 0) and (1, 1). To obtain the second modality, the online characters were rendered into images of 32 × 32 pixels, with 0 as the background and 1 as the foreground. Examples of the image renderings can be found in Fig. 3.
B. Architecture Details
The image-based encoder and decoder were constructed from a Convolutional Neural Network (CNN) with a structure similar to a ConvDeconv network [26]. The image encoder consists of four 3 × 3 convolutional layers with Rectified Linear Unit (ReLU) activations and corresponding 2 × 2 max-pooling layers; the numbers of nodes are detailed in Fig. 2. The decoder is a reflection of the encoder which uses unpooling and deconvolutions. Between the convolutional layers there are three fully-connected layers: one each for the encoder and the decoder, and one for the latent variable.
For the time series-based encoder and decoder, two architectures were chosen. The first is a CNN-based approach with 1D convolutions and no pooling. The second is a Recurrent Neural Network (RNN) approach using Long Short-Term Memory (LSTM) [27] layers. Both the CNN-based and the LSTM-based approaches have three fully-connected layers: one for the encoder, one for the latent variable, and one for the decoder. The two layer types were chosen to compare LSTM layers, which were designed specifically for time series, against convolutional layers, which are traditionally used for images.
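As an illustration, the image encoder described above might be sketched as follows; the channel widths are assumptions, since the exact node counts are given only in the paper's Fig. 2:

```python
import torch.nn as nn

class ImageEncoder(nn.Module):
    def __init__(self, latent_dim=64, ch=(32, 64, 128, 256)):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in ch:  # four 3x3 conv + ReLU + 2x2 max-pool stages
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                       nn.MaxPool2d(2)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)  # 32x32 input -> 2x2 feature map
        self.fc = nn.Linear(ch[-1] * 2 * 2, 2 * latent_dim)  # mean and log-variance

    def forward(self, x):
        h = self.features(x).flatten(1)
        mu, logvar = self.fc(h).chunk(2, dim=-1)
        return mu, logvar
```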
C. Conversion Result
The results of the Cross-VAE are shown in Fig. 4. Fig. 4(a) uses LSTM layers for the online encoder and decoder, and Fig. 4(b) uses convolutional layers. The results $Y_{b\to b}$ and $Y_{t\to b}$ are the images generated from the inputs $X_b$ and $X_t$, respectively. The results $Y_{t\to t}$ and $Y_{b\to t}$ are renderings of the time series, colored from pink to yellow in chronological order. Notably, the output $Y_{b\to t}$ is the trajectory prediction based on the image input $X_b$.
Examining Fig. 4, it can be seen that the mutual conversion of the modalities was performed accurately. This shows that the shared latent space learned by the simultaneous encoding of $X_b$ and $X_t$ is able to represent both image data and time series data accurately. In addition, not only was the stroke trajectory inferred; the results also show that the shared latent space was able to encode temporal information about what is expected of the characters. For example, the "B" in Fig. 4(a) is missing information, yet the time series results $Y_{b\to t}$ and $Y_{t\to t}$ were able to restore the character. The results in Fig. 4 qualitatively confirm that the Cross-VAE is able to perform mutual modality conversion between online and offline handwritten characters.

[Figure 5: Multiple example results for the letter "A" using convolutional layers for the online encoder and decoder.]
The letter "A" is another character that would normally be difficult to recover lost time series information due to having multiple variations. In some cases, the left-most stroke is drawn downwards and in some, it is drawn upwards depending on the author. Fig. 5 are examples of many different "A"s generated by the Cross-VAE. The figure shows that the Cross-VAE was able to correctly estimate most of the strokes of the "A"s. In particular, the results from Y b→t was able to not only correctly predict the stroke order but also was able to replicate the stroke velocity. Note the stroke that crosses the center of the "A." This further enforces the success of the proposed Cross-VAE.
D. Quantitative Evaluation of Conversion
In order to evaluate the method quantitatively, we use the following three measures of the quality of the generated characters:
PSNR: Peak signal-to-noise ratio (PSNR) measures the similarity between the input images and the generated output images. PSNR is the ratio between the maximum luminance MAX and the amount of noise:
$$\mathrm{PSNR} = 10 \log_{10} \frac{\mathrm{MAX}^2}{\mathrm{MSE}}, \tag{8}$$
where MSE is the mean squared error between $X_b$ and $Y_{t\to b}$. PSNR is measured in decibels (dB), with larger values being better. SSIM: Structural Similarity (SSIM) predicts the perceived difference between images. Similar to PSNR, it acts as a similarity measure between $X_b$ and $Y_{t\to b}$. The equation for SSIM is:
$$\mathrm{SSIM} = \frac{(2\mu_{X_b}\mu_{Y_{t\to b}} + C_1)(2\sigma_{X_b Y_{t\to b}} + C_2)}{(\mu_{X_b}^2 + \mu_{Y_{t\to b}}^2 + C_1)(\sigma_{X_b}^2 + \sigma_{Y_{t\to b}}^2 + C_2)}, \tag{9}$$
where $C_1$ and $C_2$ are stabilizing constants set to $C_1 = (0.01 \times 255)^2$ and $C_2 = (0.03 \times 255)^2$, $\mu$ is the average luminance, $\sigma^2$ is the variance, and $\sigma_{X_b Y_{t\to b}}$ is the covariance. SSIM ranges from 0 to 1, with larger values meaning more similar. DTW: Dynamic time warping (DTW) was used to evaluate the time series generation, i.e., the stroke trajectory estimation. DTW is a robust distance measure between time series which uses dynamic programming to optimally match sequence elements. Here, we use the average DTW distance between the input time series $X_t$ and the cross-modal output $Y_{b\to t}$. A smaller DTW distance between $X_t$ and $Y_{b\to t}$ means the patterns are more similar and the Cross-VAE was able to replicate the original input time series; thus, a smaller value is better.
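PSNR and DTW are both easy to reproduce. A sketch (illustrative, assuming images in [0, 1] and (T, 2) coordinate sequences):

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    # Eq. (8); images here are in [0, 1], so MAX = 1.
    mse = np.mean((x - y) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW between two (T, 2) coordinate sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```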
The results of the quantitative evaluations are shown in Table I. In the table, we evaluate the difference between using LSTM layers and convolutional layers in the time series encoder and decoder. The results are compared to the image and time series of the average pattern in each respective class. PSNR and SSIM are used for the cross-modal conversion from $X_t$ to $Y_{t\to b}$, and DTW is used for the evaluation of the cross-modal conversion from $X_b$ to $Y_{b\to t}$.
For online-to-offline handwritten character conversion, or inking, the Cross-VAE performed much better than the class average. In addition, the time series encoder and decoder with convolutional layers performed better than the LSTM. This shows that, despite the input being time series data, the convolutional layers were able to encode the information into the latent space better than the LSTM layers.
Similarly, for offline-to-online handwritten character conversion, the Cross-VAE performed better than the average, and the convolutional-layer-based time series encoder and decoder reconstructed the time series better. The DTW results specifically demonstrate that the Cross-VAE is able to predict the trajectories of the strokes. This information is normally lost during rendering; however, the Cross-VAE is able to infer the stroke trajectory from the shared latent space.
Both evaluations found that using convolutional layers was better than using LSTM layers. This is justified for this data because handwritten characters are spatial coordinates in which the relevance of every element depends on its neighbors. Structured data such as this is well suited to convolutional layers, whereas the advantage of maintaining long-term dependencies in LSTMs is lost. We believe this is why the convolutional-layer-based encoder and decoder for the time series modality produces better results.
V. CONCLUSION
In this paper, we proposed a VAE for mutual modality conversion called the Cross-VAE. The Cross-VAE merges two VAEs of different modalities by enforcing a shared latent space. To train the Cross-VAE, we propose combining the reconstruction and distribution losses of the original VAE with an additional space sharing loss. The space sharing loss encourages the different modalities of the Cross-VAE to use the same latent space embedding. In the experiments, we used online and offline handwritten characters to verify the ability of the Cross-VAE. The results show that mutual conversion is possible and that the proposed Cross-VAE can accurately reconstruct both images and time series.
In the future, we will continue to improve the model and apply it to other applications. The Cross-VAE can be used for other types of data and can tackle other tasks. Furthermore, this work opens the way to embedding different modalities into one shared latent space, which can serve as a tool for representing those modalities in a single space.
| 3,147 |
1906.06142
|
2949302783
|
This research attempts to construct a network that can convert online and offline handwritten characters to each other. The proposed network consists of two Variational Auto-Encoders (VAEs) with a shared latent space. The VAEs are trained to generate online and offline handwritten Latin characters simultaneously. In this way, we create a cross-modal VAE (Cross-VAE). During training, the proposed Cross-VAE is trained to minimize the reconstruction loss of the two modalities, the distribution loss of the two VAEs, and a novel third loss called the space sharing loss. This third loss, the space sharing loss, is used to encourage the modalities to share the same latent space by calculating the distance between the latent variables. Through the proposed method, mutual conversion of online and offline handwritten characters is possible. In this paper, we demonstrate the performance of the Cross-VAE through qualitative and quantitative analysis.
|
Conversion between offline and online handwriting has traditionally been performed using classical feature-based methods @cite_17 , but there has been some recent work using neural networks. @cite_9 used a CNN- and RNN-based encoder-decoder network for handwriting trajectory recovery. Attempts have also been made to use neural networks to identify graph features @cite_20 and to predict strokes sequentially with regression CNNs @cite_21 .
|
{
"abstract": [
"In this paper, we introduce a novel technique to recover the pen trajectory of offline characters which is a crucial step for handwritten character recognition. Generally, online acquisition approach has more advantage than its offline counterpart as the online technique keeps track of the pen movement. Hence, pen tip trajectory retrieval from offline text can bridge the gap between online and offline methods. Our proposed framework employs sequence to sequence model which consists of an encoder-decoder LSTM module. The proposed encoder module consists of Convolutional LSTM network, which takes an offline character image as the input and encodes the feature sequence to a hidden representation. The output of the encoder is fed to a decoder LSTM and we get the successive coordinate points from every time step of the decoder LSTM. Although the sequence to sequence model is a popular paradigm in various computer vision and language translation tasks, the main contribution of our work lies in designing an end-to-end network for a decade old popular problem in document image analysis community. Tamil, Telugu and Devanagari characters of LIPI Toolkit dataset are used for our experiments. Our proposed method has achieved superior performance compared to the other conventional approaches.",
"Pen Tip Motion Prediction (PTMP) is the key step for Chinese handwriting order recovery (DOR), which is a challenge topic in the past few decades. We proposed a novel algorithm framework using Convolutional Neural Network (CNN) to predict pen tip movement for human handwriting pictures. The network is a regression CNN model, whose inputs are a series of part-drawn handwriting images and output is a vector that represents the probability of next stroke point position. The predicted output vector is utilized by an iteration framework to generate pen movement sequences. Experiments on public Chinese and English online handwriting database have indicated that the proposed model performs competitively in multi-writer handwriting PTMP and DOR tasks. Furthermore, the experiment demonstrated that characters belong to different languages shares some common writing patterns and the proposed method could learn these laws effectively.",
"Restoration of writing order from a single-stroked handwriting image can be seen as the problem of finding the smoothest path in its graph representation. In this paper, a 3-phase approach to restore a writing order is proposed within the framework of the edge continuity relation (ECR). In the initial, local phase, in order to obtain possible ECRs at an even-degree node, a neural network is used for the node of degree 4 and a theoretical approach is presented for the node of degree higher than 4 by introducing certain reasonable assumptions. In the second phase, we identify double-traced lines by employing maximum weighted matching. This makes it possible to transform the problem of obtaining possible ECRs at odd-degree node to that at even-degree node. In the final, global phase, we find all the candidates of single-stroked paths by depth first search and select the best one by evaluating SLALOM smoothness. Experiments on static images converted from online data in the Unipen database show that our method achieves a restoration rate of 96.0 percent",
""
],
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_20",
"@cite_17"
],
"mid": [
"2964103813",
"2902343479",
"2096252661",
""
]
}
|
Modality Conversion of Handwritten Patterns by Cross Variational Autoencoders
|
Handwritten characters inherently have two modalities: image and temporal trajectory. This is because a handwritten character image is composed of a single stroke or multiple strokes, and each stroke is originally generated as a temporal trajectory along with the pen movement. This dual-modality is essential and unique to handwritten characters. Therefore, we can expect unique and more accurate recognition methods and applications by utilizing the dual-modality of handwritten characters. This expectation emphasizes the need for methodologies that convert one modality to the other.
Modality conversion from a temporal trajectory to an image is so-called inking. For multi-stroke character recognition, inking is a reasonable strategy to remove stroke-order variations. In the past, many hybrid character recognition methods (e.g., [1]) have been proposed, where two recognition engines are used for the original trajectory pattern and its "inked" image, respectively. In other methods (e.g., [2]), the local direction of the temporal trajectory is embedded into the inked image as an extra feature channel.
Modality conversion from a handwritten character image to a temporal trajectory representing the stroke writing order is so-called stroke recovery [3]. Compared to inking, stroke recovery is far more difficult because it is an inverse problem: the lost temporal information must be inferred from a static handwritten image.
In this paper, we propose a Cross-Variational Autoencoder (Cross-VAE), a neural network-based modality conversion method for handwritten characters. The Cross-VAE has the ability to convert a handwritten character image into its original temporal trajectory and vice versa. In other words, the Cross-VAE can realize stroke recovery as well as inking by itself. This means that the Cross-VAE can manage the dual-modality of handwritten characters.
Figure 1. Outline of the proposed Cross-VAE for modality conversion of handwritten characters. Two VAEs are prepared for two modalities, i.e., bitmap image and temporal trajectory, and co-trained so that their latent variables become the same for the same handwritten characters in different modalities. The trained Cross-VAE realizes inking and stroke recovery, as indicated by the orange and purple paths, respectively.
As shown in Fig. 1, the Cross-VAE is compounded from two VAEs. Each VAE [4] is a generative model which is decomposed into two neural networks: an encoder that obtains a latent variable z from data X and a decoder that obtains an output Y close to X from z, i.e., X ∼ Y. In general, the dimensionality of z is lower than that of X and Y, and thus the latent variable z represents the fundamental information of X in a compressed manner. One VAE of the Cross-VAE is trained for a handwritten character image (i.e., image X_b → z_b → image Y_b (∼ X_b)) and the other VAE is trained for a temporal writing trajectory (i.e., temporal trajectory X_t → z_t → temporal trajectory Y_t (∼ X_t)). Note that the suffixes b and t indicate bitmap image and temporal trajectory, respectively.
The technical highlight of the Cross-VAE is that the two VAEs are trained by considering the dual-modality of handwritten characters. Assume that the input image X_b is generated from a temporal trajectory X_t by inking; then we expect that their corresponding latent variables can be the same, that is, z_b = z_t. This is because X_b and X_t are the same handwritten character in different modalities, and thus their fundamental information should be the same. Consequently, if we can co-train the two VAEs under the condition z_b = z_t, we realize, for example, stroke recovery by the following steps: X_b → z_b = z_t → Y_t (∼ X_t).
The main contributions of this paper are summarized as follows:
• A cross-modal VAE is proposed for online and offline handwriting conversion. The Cross-VAE is the combination of two VAEs of different modalities with a shared latent space and a dual-modality training process.
• A novel loss function called the space sharing loss is introduced. The space sharing loss encourages the latent variables of the VAEs to use the same latent space. The shared latent space is what allows an input modality to be represented by both output modalities simultaneously.
• Quantitative and qualitative analyses are performed on the proposed method. We show that the Cross-VAE was able to successfully model both online and offline handwriting as well as be used for cross-modal conversion.
III. CROSS-MODAL VARIATIONAL AUTOENCODER (CROSS-VAE)
VAEs [4] are Autoencoders which use a variational Bayesian approach to learn the latent representation. VAEs have been used to generate time series data [19], including speech synthesis [20] and language generation [21]. They have also been used for image data [22] and data augmentation [23], [24].
A. Variational Autoencoder (VAE)
A VAE [4] consists of an encoder and a decoder. Given an input X ∈ R^I, the encoder estimates the posterior distribution of a latent variable z ∈ R^J. The decoder, in turn, generates an output Y ∈ R^I based on a latent variable sampled from the estimated posterior distribution. The VAE is trained end-to-end using a combination of the reconstruction loss L_RE and the distribution loss L_KL, or:
$L_{VAE} = L_{KL} + L_{RE}.$ (1)
The reconstruction loss L RE is the cross-entropy between the input and the output of the decoder. It is determined by:
$L_{RE} = -\sum_{i=1}^{I} \left( X_i \log Y_i + (1 - X_i) \log(1 - Y_i) \right),$ (2)
assuming that Y follows the multivariate Bernoulli distribution. In Eq. (2), X_i and Y_i are the i-th elements of X and Y, respectively. The difference from a traditional autoencoder or encoder-decoder network is that the VAE models the latent space with a Gaussian and uses a variational lower bound to infer the posterior distribution of the latent variable. This is done by including a loss between the latent variables and the unit Gaussian distribution. Specifically, the distribution loss L_KL is based on the Kullback-Leibler (KL) divergence, or:
$L_{KL} = -\frac{1}{2} \sum_{j=1}^{J} \left( 1 + \log(\sigma_j^2) - \mu_j^2 - \sigma_j^2 \right),$ (3)
assuming that the prior distribution of the latent variable z follows the multivariate Gaussian distribution N(0, I). In Eq. (3), μ_j and σ_j^2 are the mean and variance of the posterior distribution of z.
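To make Eqs. (1)–(3) concrete, the following is a minimal PyTorch sketch of the VAE objective; the reparameterization step and all function names are our own assumptions, not code from the paper:

```python
import torch

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) with the reparameterization trick."""
    return mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)

def vae_loss(x, y, mu, log_var):
    """L_VAE = L_KL + L_RE (Eqs. (1)-(3)) for one batch.

    x, y    : (batch, I) input and reconstruction, values in [0, 1]
    mu      : (batch, J) posterior mean of the latent variable z
    log_var : (batch, J) posterior log-variance of z
    """
    eps = 1e-7  # numerical stability for the logs
    # Eq. (2): Bernoulli cross-entropy between input and reconstruction.
    l_re = -(x * torch.log(y + eps)
             + (1 - x) * torch.log(1 - y + eps)).sum(dim=1)
    # Eq. (3): KL divergence between N(mu, sigma^2) and N(0, I).
    l_kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(dim=1)
    return (l_kl + l_re).mean()  # Eq. (1), averaged over the batch
```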
B. Cross-VAE
We propose a Cross-modal VAE (Cross-VAE) to perform online and offline handwritten character conversion, as illustrated in Fig. 2. The network in red is a VAE for online handwritten characters and the network in blue is for offline handwritten characters. The Cross-VAE is constructed by joining two single-modality VAEs into one multi-modal VAE with a shared cross-modal latent space. Furthermore, we use a cross-modal loss function to ensure that the latent space is shared between the modalities.
Figure 2. Details of the proposed Cross-VAE. X_t is a time series input and X_b is an image input. The illustrations of the time series X_t, Y_{t→t}, and Y_{b→t} are colored from pink to yellow according to their sequence order. L_KL is the distribution loss, L_RE is the reconstruction loss, and L_LS is the space sharing loss. Y_{t→t} and Y_{b→b} are the intra-modal outputs and Y_{t→b} and Y_{b→t} are the cross-modal outputs.
During training, the two modalities are trained simultaneously. A time series input X_t and an image input X_b are entered into the encoders, and four outputs are extracted from the decoders. For the inputs X_t and X_b, there are respective time series outputs Y_{t→t} and Y_{b→t}, and respective image outputs Y_{t→b} and Y_{b→b}. The outputs Y_{t→t} and Y_{b→b} are intra-modal and the outputs Y_{t→b} and Y_{b→t} are cross-modal.
The loss function of the Cross-VAE is:
$L_{Cross} = L_{KL} + L_{RE} + L_{LS},$ (4)
where L_KL is the distribution loss and L_RE is the reconstruction loss as described in Section III-A. The third loss, L_LS, is the proposed space sharing loss. Because the network is trained with the two inputs X_t and X_b, two latent representations, z_t and z_b, are created, so the traditional VAE losses L_KL and L_RE need to be modified for the Cross-VAE. The total distribution loss L_KL is calculated by combining the individual distribution losses L_KL(t) and L_KL(b), or:
$L_{KL} = \alpha L_{KL(t)} + \beta L_{KL(b)},$ (5)
where α and β are weights. The distribution loss of each input modality is calculated using Eq. (3). Next, the reconstruction loss L_RE takes into account the reconstructions Y_{t→t} and Y_{b→b}, as well as the conversions Y_{t→b} and Y_{b→t}. Thus:
$L_{RE} = \gamma_{t \to t} L_{RE(t \to t)} + \gamma_{b \to b} L_{RE(b \to b)} + \gamma_{t \to b} L_{RE(t \to b)} + \gamma_{b \to t} L_{RE(b \to t)},$ (6)
where L_RE(t→t) and L_RE(b→t) are the losses calculated by Eq. (2) with respect to input X_t, and L_RE(b→b) and L_RE(t→b) are calculated with respect to input X_b. Also, γ_{t→t}, γ_{b→b}, γ_{t→b}, and γ_{b→t} are the weights of the respective losses.
C. Space Sharing Loss
While the Cross-VAE is trained using the combination of the reconstruction and distribution losses for the different modalities, we propose the use of a space sharing loss function to encourage the latent variables to share the same latent space. The space sharing loss L_LS is the squared error between the latent variable z_t obtained from the online character VAE and the latent variable z_b obtained from the offline character VAE. Specifically:
$L_{LS} = \delta \, \frac{1}{2} \left\| z_t - z_b \right\|^2,$ (7)
where δ is a weight and ‖·‖ is the Euclidean norm.
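Putting Eqs. (4)–(7) together, a hedged PyTorch sketch of the full Cross-VAE objective might look as follows; the four decoder outputs, the two posteriors, and the weight dictionary `w` are our own naming, and tensors are assumed flattened to (batch, I):

```python
import torch

def kl_term(mu, log_var):
    # Eq. (3) applied to one modality's posterior.
    return -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(dim=1).mean()

def recon_term(x, y, eps=1e-7):
    # Eq. (2): Bernoulli cross-entropy, summed over elements.
    return -(x * torch.log(y + eps)
             + (1 - x) * torch.log(1 - y + eps)).sum(dim=1).mean()

def cross_vae_loss(x_t, x_b, y_tt, y_bb, y_tb, y_bt,
                   z_t, z_b, mu_t, lv_t, mu_b, lv_b, w):
    """Eq. (4): L_Cross = L_KL + L_RE + L_LS. `w` holds the weights
    alpha, beta, gamma_*, and delta from Eqs. (5)-(7)."""
    # Eq. (5): weighted sum of the two distribution losses.
    l_kl = w['alpha'] * kl_term(mu_t, lv_t) + w['beta'] * kl_term(mu_b, lv_b)
    # Eq. (6): intra-modal terms target their own input; cross-modal
    # terms target the other modality's input.
    l_re = (w['g_tt'] * recon_term(x_t, y_tt)
            + w['g_bb'] * recon_term(x_b, y_bb)
            + w['g_tb'] * recon_term(x_b, y_tb)
            + w['g_bt'] * recon_term(x_t, y_bt))
    # Eq. (7): space sharing loss between the two latent codes.
    l_ls = w['delta'] * 0.5 * (z_t - z_b).pow(2).sum(dim=1).mean()
    return l_kl + l_re + l_ls
```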
IV. ONLINE AND OFFLINE CONVERSION OF HANDWRITTEN CHARACTERS USING CROSS-VAE
A. Dataset
For the experiment, we used handwritten uppercase characters from the Unipen online handwritten character dataset [25]. The online handwritten characters consist of time series made of (x, y) coordinates. The online characters were normalized to fit within a square bound by (0, 0) and (1, 1). In order to use a second modality, the online characters were rendered into images. The images were 32 × 32 pixels with 0 as the background and 1 as the foreground. Examples of the image renderings can be found in Fig. 3.
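As a rough illustration of this preprocessing step, a trajectory could be rasterized as in the NumPy sketch below; the authors do not specify the exact rendering (line thickness, interpolation density), so those details are assumptions:

```python
import numpy as np

def render_trajectory(points, size=32):
    """Rasterize a trajectory of (x, y) points in [0, 1]^2 into a
    size x size image (0 = background, 1 = foreground)."""
    img = np.zeros((size, size), dtype=np.float32)
    # Densely interpolate between consecutive points so strokes stay connected.
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        for t in np.linspace(0.0, 1.0, num=size):
            x = x0 + t * (x1 - x0)
            y = y0 + t * (y1 - y0)
            col = min(int(x * (size - 1) + 0.5), size - 1)
            row = min(int(y * (size - 1) + 0.5), size - 1)
            img[row, col] = 1.0
    return img

# Usage: img = render_trajectory([(0.1, 0.9), (0.5, 0.1), (0.9, 0.9)])
```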
B. Architecture Details
The image-based encoder and decoder were constructed as a Convolutional Neural Network (CNN) with a structure similar to a ConvDeconv network [26]. The image encoder consists of four 3 × 3 convolutional layers with Rectified Linear Unit (ReLU) activations, each followed by a 2 × 2 max-pooling layer. The numbers of nodes are detailed in Fig. 2. The decoder is a mirror of the encoder that uses unpooling and deconvolutions. Between the convolutional parts, there are three fully-connected layers: one in the encoder, one in the decoder, and one for the latent variable.
For the time series-based encoder and decoder, two architectures were compared. The first is a CNN-based approach with 1D convolutions and no pooling. The second is a Recurrent Neural Network (RNN) approach using Long Short-Term Memory (LSTM) [27] layers. Both the CNN-based approach and the LSTM-based approach have three fully-connected layers: one for the encoder, one for the latent variable, and one for the decoder. The two layer types were chosen to compare LSTM layers, which were designed specifically for time series, with convolutional layers, which are traditionally used for images.
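For concreteness, a PyTorch sketch of the two encoders described above is given below; the channel widths and latent size are placeholders, since the exact node counts are given only in Fig. 2:

```python
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Four 3x3 conv + ReLU blocks, each followed by 2x2 max-pooling,
    then one fully-connected layer producing (mu, log_var)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        chans = [1, 32, 64, 128, 256]  # assumed channel widths
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv2d(c_in, c_out, 3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d(2)]
        self.conv = nn.Sequential(*blocks)
        self.fc = nn.Linear(256 * 2 * 2, 2 * latent_dim)  # 32 -> 2 after 4 pools

    def forward(self, x):  # x: (batch, 1, 32, 32)
        h = self.conv(x).flatten(1)
        mu, log_var = self.fc(h).chunk(2, dim=1)
        return mu, log_var

class TrajectoryEncoder(nn.Module):
    """1D-convolutional variant for the time series modality (no pooling)."""
    def __init__(self, seq_len=50, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, 3, padding=1), nn.ReLU())
        self.fc = nn.Linear(64 * seq_len, 2 * latent_dim)

    def forward(self, x):  # x: (batch, 2, seq_len) of (x, y) coordinates
        h = self.conv(x).flatten(1)
        mu, log_var = self.fc(h).chunk(2, dim=1)
        return mu, log_var
```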
C. Conversion Result
The results of the Cross-VAE are shown in Fig. 4. Fig. 4 (a) uses LSTM layers for the online encoder and decoder and Fig. 4 (b) uses convolutional layers in the online encoder and decoder. The results Y_{t→b} and Y_{b→b} are the images generated from the inputs X_t and X_b, respectively. The results Y_{t→t} and Y_{b→t} are renderings of the time series, colored from pink to yellow in chronological order. Notably, the output Y_{b→t} is the trajectory prediction based on the image input X_b.
By examining Fig. 4, it can be seen that the mutual conversion of the modalities was performed accurately. This shows that the shared latent space learned by the simultaneous encoding of X_b and X_t is able to accurately represent both image data and time series data. In addition, not only was the stroke trajectory inferred, but the results also show that the shared latent space was able to encode temporal information about what is expected of the characters. For example, the "B" in Fig. 4 (a) is missing information, yet the time series results Y_{b→t} and Y_{t→t} were able to restore the character. The results in Fig. 4 qualitatively confirm that the Cross-VAE is able to perform mutual modality conversion between online and offline handwritten characters.
Figure 5. Multiple example results for the letter "A" using convolutional layers for the online encoder and decoder.
The letter "A" is another character that would normally be difficult to recover lost time series information due to having multiple variations. In some cases, the left-most stroke is drawn downwards and in some, it is drawn upwards depending on the author. Fig. 5 are examples of many different "A"s generated by the Cross-VAE. The figure shows that the Cross-VAE was able to correctly estimate most of the strokes of the "A"s. In particular, the results from Y b→t was able to not only correctly predict the stroke order but also was able to replicate the stroke velocity. Note the stroke that crosses the center of the "A." This further enforces the success of the proposed Cross-VAE.
D. Quantitative Evaluation of Conversion
In order to evaluate the method quantitatively, we constructed the following three measures to determine the quality of the generated characters:
PSNR: Peak signal-to-noise ratio (PSNR) calculates the similarity between the input images and the generated output images. PSNR is the ratio between the maximum luminance MAX and the amount of noise, or:
$\mathrm{PSNR} = 10 \log_{10} \frac{\mathrm{MAX}^2}{\mathrm{MSE}},$ (8)
where MSE is the mean squared error between X_b and Y_{t→b}. PSNR is measured in decibels (dB), with larger values being better. SSIM: Structural Similarity (SSIM) predicts the perceived difference between images. Similar to PSNR, it acts as a similarity measure between X_b and Y_{t→b}. The equation for SSIM is:
$\mathrm{SSIM} = \frac{(2\mu_{X_b}\mu_{Y_{t \to b}} + C_1)(2\sigma_{X_b Y_{t \to b}} + C_2)}{(\mu_{X_b}^2 + \mu_{Y_{t \to b}}^2 + C_1)(\sigma_{X_b}^2 + \sigma_{Y_{t \to b}}^2 + C_2)},$ (9)
where C_1 and C_2 are stabilizing constants set to C_1 = (0.01 × 255)^2 and C_2 = (0.03 × 255)^2, μ is the average luminance, σ^2 is the variance, and σ_{X_b Y_{t→b}} is the covariance. SSIM ranges from 0 to 1, with larger values meaning more similar. DTW: Dynamic time warping (DTW) was used to evaluate the time series generation as a measure of the stroke trajectory estimation. DTW is a robust distance measure between time series which uses dynamic programming to optimally match sequence elements. In this case, we use the average DTW-distance between the input time series X_t and the cross-modal output Y_{b→t}. A smaller DTW-distance between X_t and Y_{b→t} means that the patterns are more similar and that the Cross-VAE was able to replicate the original input time series; thus, a smaller value is better.
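As a minimal NumPy sketch of two of these measures (PSNR from Eq. (8) and the DTW-distance), assuming trajectories are given as (n, 2) arrays and images as arrays in [0, 1]:

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    """Eq. (8): peak signal-to-noise ratio between two images."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def dtw_distance(a, b):
    """DTW distance between trajectories a: (n, 2) and b: (m, 2),
    with Euclidean distance as the local cost (an assumption)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```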
The results of the quantitative evaluations are shown in Table I. The table compares the use of LSTM layers against convolutional layers in the time series encoder and decoder. As a baseline, the results are compared against the image and time series of the average pattern of each class. PSNR and SSIM evaluate the cross-modal conversion from X_t to Y_{t→b}, and DTW evaluates the cross-modal conversion from X_b to Y_{b→t}.
For online-to-offline handwritten character conversion, or inking, the Cross-VAE performed much better than the class average. In addition, the time series encoder and decoder with convolutional layers performed better than the LSTM variant. This shows that, even though the input is time series data, the convolutional layers encoded the information into the latent space better than the LSTM layers.
Similarly, for offline-to-online handwritten character conversion, the Cross-VAE performed better than the class average, and the convolutional time series encoder and decoder reconstructed the time series more accurately. The DTW results specifically demonstrate that the Cross-VAE is able to predict the trajectories of the strokes. This information is normally lost during rendering; however, the Cross-VAE is able to infer the stroke trajectory from the shared latent space.
Both evaluations found that convolutional layers outperformed LSTM layers. This is plausible for this kind of data because handwritten characters are sequences of spatial coordinates in which the relevance of each element depends on its neighbors. Such locally structured data is well suited to convolutional layers, whereas the LSTM's advantage of maintaining long-term dependencies is of little benefit here. We believe this is why the convolutional encoder and decoder for the time series modality produce better results.
V. CONCLUSION
In this paper, we proposed a VAE for mutual modality conversion called the Cross-VAE. The Cross-VAE merges two VAEs of different modalities by enforcing a shared latent space. To train it, we combine the reconstruction loss and distribution loss of the original VAE with an additional space sharing loss, which encourages the two modalities of the Cross-VAE to use the same latent space embedding. In the experiments, we used online and offline handwritten characters to verify the ability of the Cross-VAE. The results show that mutual conversion is possible and that the proposed Cross-VAE can accurately reconstruct both the images and the time series.
In the future, we will continue to improve the model and apply it to other types of data and other tasks. Furthermore, this work opens the way for embedding different modalities into one shared latent space, which can serve as a common representation of those modalities.
| 3,147 |
1906.05506
|
2949479297
|
This paper proposes a novel Recurrent Neural Network (RNN) language model that takes advantage of character information. We focus on character n-grams based on research in the field of word embedding construction ( 2016). Our proposed method constructs word embeddings from character n-gram embeddings and combines them with ordinary word embeddings. We demonstrate that the proposed method achieves the best perplexities on the language modeling datasets: Penn Treebank, WikiText-2, and WikiText-103. Moreover, we conduct experiments on application tasks: machine translation and headline generation. The experimental results indicate that our proposed method also positively affects these tasks.
|
@cite_10 introduced RNNs into language modeling to handle arbitrary-length sequences when computing the conditional probability @math . They demonstrated that the RNN language model outperformed the Kneser-Ney smoothed 5-gram language model @cite_3 , which is a sophisticated @math -gram language model.
|
{
"abstract": [
"A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50 reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18 reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5 on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition",
"We survey the most widely-used algorithms for smoothing models for language n -gram modeling. We then present an extensive empirical comparison of several of these smoothing techniques, including those described by Jelinek and Mercer (1980); Katz (1987); Bell, Cleary and Witten (1990); Ney, Essen and Kneser (1994), and Kneser and Ney (1995). We investigate how factors such as training data size, training corpus (e.g. Brown vs. Wall Street Journal), count cutoffs, and n -gram order (bigram vs. trigram) affect the relative performance of these methods, which is measured through the cross-entropy of test data. We find that these factors can significantly affect the relative performance of models, with the most significant factor being training data size. Since no previous comparisons have examined these factors systematically, this is the first thorough characterization of the relative performance of various algorithms. In addition, we introduce methodologies for analyzing smoothing algorithm efficacy in detail, and using these techniques we motivate a novel variation of Kneser?Ney smoothing that consistently outperforms all other algorithms evaluated. Finally, results showing that improved language model smoothing leads to improved speech recognition performance are presented."
],
"cite_N": [
"@cite_10",
"@cite_3"
],
"mid": [
"179875071",
"2158195707"
]
}
|
Character n-gram Embeddings to Improve RNN Language Models
| 0 |
|
1906.05506
|
2949479297
|
This paper proposes a novel Recurrent Neural Network (RNN) language model that takes advantage of character information. We focus on character n-grams based on research in the field of word embedding construction ( 2016). Our proposed method constructs word embeddings from character n-gram embeddings and combines them with ordinary word embeddings. We demonstrate that the proposed method achieves the best perplexities on the language modeling datasets: Penn Treebank, WikiText-2, and WikiText-103. Moreover, we conduct experiments on application tasks: machine translation and headline generation. The experimental results indicate that our proposed method also positively affects these tasks.
|
@cite_9 drastically improved the performance of language modeling by applying LSTMs and the dropout technique @cite_27 . @cite_9 applied dropout to all the connections except the recurrent ones, whereas @cite_39 proposed variational-inference-based dropout to regularize the recurrent connections as well. @cite_19 demonstrated that the standard LSTM can achieve superior performance when appropriate hyperparameters are selected. Finally, @cite_26 introduced DropConnect @cite_13 and averaged SGD @cite_7 into the LSTM language model and achieved state-of-the-art perplexities on PTB and WT2. For WT103, @cite_17 found that QRNN @cite_40 , which is a faster architecture than LSTM, achieved the best perplexity. Our experimental results show that the proposed char @math -MS-vec improved the performance of these state-of-the-art language models.
|
{
"abstract": [
"In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM, which uses DropConnect on hidden-to-hidden weights, as a form of recurrent regularization. Further, we introduce NT-ASGD, a non-monotonically triggered (NT) variant of the averaged stochastic gradient method (ASGD), wherein the averaging trigger is determined using a NT condition as opposed to being tuned by the user. Using these and other regularization strategies, our ASGD Weight-Dropped LSTM (AWD-LSTM) achieves state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2. We also explore the viability of the proposed regularization and optimization strategies in the context of the quasi-recurrent neural network (QRNN) and demonstrate comparable performance to the AWD-LSTM counterpart. The code for reproducing the results is open sourced and is available at https: github.com salesforce awd-lstm-lm.",
"A new recursive algorithm of stochastic approximation type with the averaging of trajectories is investigated. Convergence with probability one is proved for a variety of classical optimization and identification problems. It is also demonstrated for these problems that the proposed algorithm achieves the highest possible rate of convergence.",
"We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.",
"Recurrent neural networks (RNNs) stand at the forefront of many recent developments in deep learning. Yet a major difficulty with these models is their tendency to overfit, with dropout shown to fail when applied to recurrent layers. Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout. This grounding of dropout in approximate Bayesian inference suggests an extension of the theoretical results, offering insights into the use of dropout with RNN models. We apply this new variational inference based dropout technique in LSTM and GRU models, assessing it on language modelling and sentiment analysis tasks. The new approach outperforms existing techniques, and to the best of our knowledge improves on the single model state-of-the-art in language modelling with the Penn Treebank (73.4 test perplexity). This extends our arsenal of variational tools in deep learning.",
"Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.",
"Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.",
"Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.",
"We introduce DropConnect, a generalization of Dropout (, 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models.",
"Many of the leading approaches in language modeling introduce novel, complex and specialized architectures. We take existing state-of-the-art word level language models based on LSTMs and QRNNs and extend them to both larger vocabularies as well as character-level granularity. When properly tuned, LSTMs and QRNNs achieve state-of-the-art results on character-level (Penn Treebank, enwik8) and word-level (WikiText-103) datasets, respectively. Results are obtained in only 12 hours (WikiText-103) to 2 days (enwik8) using a single modern GPU."
],
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_9",
"@cite_39",
"@cite_19",
"@cite_27",
"@cite_40",
"@cite_13",
"@cite_17"
],
"mid": [
"2962832505",
"2086161653",
"1591801644",
"2963266340",
"2963748792",
"2095705004",
"2952436057",
"4919037",
"2792376130"
]
}
|
Character n-gram Embeddings to Improve RNN Language Models
| 0 |
|
1906.05506
|
2949479297
|
This paper proposes a novel Recurrent Neural Network (RNN) language model that takes advantage of character information. We focus on character n-grams based on research in the field of word embedding construction ( 2016). Our proposed method constructs word embeddings from character n-gram embeddings and combines them with ordinary word embeddings. We demonstrate that the proposed method achieves the best perplexities on the language modeling datasets: Penn Treebank, WikiText-2, and WikiText-103. Moreover, we conduct experiments on application tasks: machine translation and headline generation. The experimental results indicate that our proposed method also positively affects these tasks.
|
@cite_31 introduced character information into RNN language models. They applied a CNN to character embeddings for word embedding construction. Their proposed method achieved perplexity competitive with the basic LSTM language model @cite_9 even though its parameter size is smaller. @cite_35 also applied a CNN to construct word embeddings from character embeddings, and indicated that the CNN also positively affected the LSTM language model on a huge corpus. @cite_33 proposed a method that concatenates character embeddings with a word embedding to use character information. In contrast to these methods, we used character @math -gram embeddings to construct word embeddings. To compare the proposed method to these methods, we combined the CNN proposed by @cite_31 with the state-of-the-art LSTM language model (AWD-LSTM) @cite_26 . Our experimental results indicate that the proposed method outperformed the method using character embeddings (charCNN in Table ).
|
{
"abstract": [
"In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.",
"In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM, which uses DropConnect on hidden-to-hidden weights, as a form of recurrent regularization. Further, we introduce NT-ASGD, a non-monotonically triggered (NT) variant of the averaged stochastic gradient method (ASGD), wherein the averaging trigger is determined using a NT condition as opposed to being tuned by the user. Using these and other regularization strategies, our ASGD Weight-Dropped LSTM (AWD-LSTM) achieves state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2. We also explore the viability of the proposed regularization and optimization strategies in the context of the quasi-recurrent neural network (QRNN) and demonstrate comparable performance to the AWD-LSTM counterpart. The code for reproducing the results is open sourced and is available at https: github.com salesforce awd-lstm-lm.",
"",
"We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.",
"We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60 fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information."
],
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_33",
"@cite_9",
"@cite_31"
],
"mid": [
"2259472270",
"2962832505",
"",
"1591801644",
"1938755728"
]
}
|
Character n-gram Embeddings to Improve RNN Language Models
| 0 |
|
1906.05506
|
2949479297
|
This paper proposes a novel Recurrent Neural Network (RNN) language model that takes advantage of character information. We focus on character n-grams based on research in the field of word embedding construction ( 2016). Our proposed method constructs word embeddings from character n-gram embeddings and combines them with ordinary word embeddings. We demonstrate that the proposed method achieves the best perplexities on the language modeling datasets: Penn Treebank, WikiText-2, and WikiText-103. Moreover, we conduct experiments on application tasks: machine translation and headline generation. The experimental results indicate that our proposed method also positively affects these tasks.
|
Some previous studies focused on boosting the performance of language models at test time @cite_16 @cite_34 . For example, @cite_34 proposed dynamic evaluation, which updates the model parameters based on the given correct sequence during evaluation. Although these methods might further improve our proposed language model, we omitted them because correct outputs cannot be assumed to be available in applications such as machine translation.
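For illustration, dynamic evaluation can be sketched as follows (a simplified, hypothetical PyTorch version; the actual method of @cite_34 uses a more elaborate gradient-descent-based adaptation rule):

```python
import torch

def dynamic_evaluation(model, loss_fn, segments, lr=1e-4):
    """Adapt model parameters to the observed test stream, segment by
    segment. `segments` yields (inputs, targets); `model` is assumed
    to map inputs directly to logits."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    total_loss, n = 0.0, 0
    for inputs, targets in segments:
        logits = model(inputs)
        loss = loss_fn(logits, targets)  # score the segment first...
        total_loss += loss.item()
        n += 1
        opt.zero_grad()
        loss.backward()                  # ...then adapt on the correct sequence
        opt.step()
    return total_loss / max(n, 1)
```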
|
{
"abstract": [
"We propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural network and cache models used with count based language models. We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks.",
"We present methodology for using dynamic evaluation to improve neural sequence models. Models are adapted to recent history via a gradient descent based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns. Dynamic evaluation outperforms existing adaptation approaches in our comparisons. Dynamic evaluation improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies on the text8 and Hutter Prize datasets to 1.19 bits char and 1.08 bits char respectively."
],
"cite_N": [
"@cite_16",
"@cite_34"
],
"mid": [
"2571859396",
"2757047188"
]
}
|
Character n-gram Embeddings to Improve RNN Language Models
| 0 |
|
1906.05506
|
2949479297
|
This paper proposes a novel Recurrent Neural Network (RNN) language model that takes advantage of character information. We focus on character n-grams based on research in the field of word embedding construction ( 2016). Our proposed method constructs word embeddings from character n-gram embeddings and combines them with ordinary word embeddings. We demonstrate that the proposed method achieves the best perplexities on the language modeling datasets: Penn Treebank, WikiText-2, and WikiText-103. Moreover, we conduct experiments on application tasks: machine translation and headline generation. The experimental results indicate that our proposed method also positively affects these tasks.
|
Previous studies proposed various methods to construct word embeddings. @cite_0 applied Recursive Neural Networks to construct word embeddings from morpheme embeddings. @cite_45 applied bidirectional LSTMs to character embeddings for word embedding construction. On the other hand, @cite_12 and @cite_44 focused on character @math -grams. They demonstrated that the sum of character @math -gram embeddings outperformed ordinary word embeddings. In addition, @cite_44 found that the sum of character @math -gram embeddings also outperformed word embeddings constructed from character embeddings with a CNN or an LSTM.
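To illustrate the character n-gram approach of @cite_12, here is a hedged Python sketch of composing a word vector as the sum of its character n-gram vectors; the boundary markers and hash-bucket trick follow common practice and are assumptions, not details from the paper:

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word padded with boundary markers."""
    padded = "<" + word + ">"
    return [padded[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

def word_vector(word, table):
    """Sum the embeddings of the word's character n-grams. Each n-gram
    is hashed into a row of `table` (a deterministic hash would be
    used in practice; Python's built-in hash is salted per run)."""
    buckets, dim = table.shape
    vec = np.zeros(dim)
    for g in char_ngrams(word):
        vec += table[hash(g) % buckets]
    return vec

# Usage: table = 0.01 * np.random.randn(2_000_000, 100)
#        v = word_vector("where", table)
```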
|
{
"abstract": [
"Vector-space word representations have been very successful in recent years at improving performance across a variety of NLP tasks. However, common to most existing work, words are regarded as independent entities without any explicit relationship among morphologically related words being modeled. As a result, rare and complex words are often poorly estimated, and all unknown words are represented in a rather crude way using only one or a few vectors. This paper addresses this shortcoming by proposing a novel model that is capable of building representations for morphologically complex words from their morphemes. We combine recursive neural networks (RNNs), where each morpheme is a basic unit, with neural language models (NLMs) to consider contextual information in learning morphologicallyaware word representations. Our learned models outperform existing word representations by a good margin on word similarity tasks across many datasets, including a new dataset we introduce focused on rare words to complement existing ones in an interesting way.",
"",
"We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form-function relationship in language, our \"composed\" word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).",
"Continuous word representations, trained on large unlabeled corpora are useful for many natural language processing tasks. Popular models to learn such representations ignore the morphology of words, by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character n-grams. A vector representation is associated to each character n-gram, words being represented as the sum of these representations. Our method is fast, allowing to train models on large corpora quickly and allows to compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, both on word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks."
],
"cite_N": [
"@cite_0",
"@cite_44",
"@cite_45",
"@cite_12"
],
"mid": [
"2251012068",
"2463895987",
"2949563612",
"2493916176"
]
}
|
Character n-gram Embeddings to Improve RNN Language Models
| 0 |
|
1906.05506
|
2949479297
|
This paper proposes a novel Recurrent Neural Network (RNN) language model that takes advantage of character information. We focus on character n-grams based on research in the field of word embedding construction ( 2016). Our proposed method constructs word embeddings from character n-gram embeddings and combines them with ordinary word embeddings. We demonstrate that the proposed method achieves the best perplexities on the language modeling datasets: Penn Treebank, WikiText-2, and WikiText-103. Moreover, we conduct experiments on application tasks: machine translation and headline generation. The experimental results indicate that our proposed method also positively affects these tasks.
|
As an encoder, previous studies argued that additive composition, which computes the (weighted) sum of embeddings, is a suitable method both theoretically @cite_32 and empirically @cite_30 @cite_24 . In this paper, we used multi-dimensional self-attention to construct word embeddings because it can be interpreted as an element-wise weighted sum. Through experiments, we showed that multi-dimensional self-attention is superior to summation and to standard self-attention as an encoder.
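A minimal PyTorch sketch of such a multi-dimensional self-attention encoder is given below; the scoring network and layer sizes are our own assumptions, but the key property matches the description: each embedding dimension receives its own attention distribution, so the output is an element-wise weighted sum of the character n-gram embeddings:

```python
import torch
import torch.nn as nn

class MultiDimSelfAttention(nn.Module):
    """Element-wise weighted sum over character n-gram embeddings."""
    def __init__(self, dim):
        super().__init__()
        # Small scoring network producing one score per dimension (assumed).
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                   nn.Linear(dim, dim))

    def forward(self, grams):  # grams: (batch, n_grams, dim)
        # Softmax over the n-gram axis, independently for each dimension.
        a = torch.softmax(self.score(grams), dim=1)
        return (a * grams).sum(dim=1)  # (batch, dim) word embedding

# Usage: enc = MultiDimSelfAttention(dim=100)
#        word_vec = enc(torch.randn(8, 12, 100))  # 12 n-grams per word
```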
|
{
"abstract": [
"The field of distributional-compositional semantics has yielded a range of computational models for composing the vector of a phrase from those of constituent word vectors. Existing models have various ranges of their expressiveness, recursivity, and trainability. However, these models have not been examined closely for their compositionality. We implement and compare these models under the same conditions. The experimentally obtained results demonstrate that the model using different composition matrices for different dependency relations achieved state-ofthe-art performance on a dataset for two-word compositions (Mitchell and Lapata, 2010).",
"Additive composition ( in Discourse Process 15:285---307, 1998; Landauer and Dumais in Psychol Rev 104(2):211, 1997; Mitchell and Lapata in Cognit Sci 34(8):1388---1429, 2010) is a widely used method for computing meanings of phrases, which takes the average of vector representations of the constituent words. In this article, we prove an upper bound for the bias of additive composition, which is the first theoretical analysis on compositional frameworks from a machine learning point of view. The bound is written in terms of collocation strength; we prove that the more exclusively two successive words tend to occur together, the more accurate one can guarantee their additive composition as an approximation to the natural phrase vector. Our proof relies on properties of natural language data that are empirically verified, and can be theoretically derived from an assumption that the data is generated from a Hierarchical Pitman---Yor Process. The theory endorses additive composition as a reasonable operation for calculating meanings of phrases, and suggests ways to improve additive compositionality, including: transforming entries of distributional word vectors by a function that meets a specific condition, constructing a novel type of vector representations to make additive composition sensitive to word order, and utilizing singular value decomposition to train word vectors.",
""
],
"cite_N": [
"@cite_30",
"@cite_32",
"@cite_24"
],
"mid": [
"2250716442",
"2270550389",
"2962828454"
]
}
|
Character n-gram Embeddings to Improve RNN Language Models
| 0 |
|
1906.05374
|
2952193948
|
We present a meta-learning approach based on learning an adaptive, high-dimensional loss function that can generalize across multiple tasks and different model architectures. We develop a fully differentiable pipeline for learning a loss function targeted at maximizing the performance of an optimizee trained using this loss function. We observe that the loss landscape produced by our learned loss significantly improves upon the original task-specific loss. We evaluate our method on supervised and reinforcement learning tasks. Furthermore, we show that our pipeline is able to operate in sparse reward and self-supervised reinforcement learning scenarios.
|
Meta-learning originates in the concept of learning to learn @cite_4 @cite_16 @cite_31 . Recently, there has been wide interest in finding ways to improve learning speeds and generalization to new tasks through meta-learning. The main directions of research in this area can be divided into learning representations that can be easily adapted to new tasks @cite_11 , learning unsupervised rules that can be transferred between tasks @cite_3 @cite_23 , learning optimizer policies that transform policy updates with respect to known loss or reward functions @cite_2 @cite_24 @cite_29 @cite_8 , and learning loss or reward landscapes @cite_13 @cite_22 .
|
{
"abstract": [
"Preface. Part I: Overview Articles. 1. Learning to Learn: Introduction and Overview S. Thrun, L. Pratt. 2. A Survey of Connectionist Network Reuse Through Transfer L. Pratt, B. Jennings. 3. Transfer in Cognition A. Robins. Part II: Prediction. 4. Theoretical Models of Learning to Learn J. Baxter. 5. Multitask Learning R. Caruana. 6. Making a Low-Dimensional Representation Suitable for Diverse Tasks N. Intrator, S. Edelman. 7. The Canonical Distortion Measure for Vector Quantization and Function Approximation J. Baxter. 8. Lifelong Learning Algorithms S. Thrun. Part III: Relatedness. 9. The Parallel Transfer of Task Knowledge Using Dynamic Learning Rates Based on a Measure of Relatedness D.L. Silver, R.E. Mercer. 10. Clustering Learning Tasks and the Selective Cross-Task Transfer of Knowledge S. Thrun, J. O'Sullivan. Part IV: Control. 11. CHILD: A First Step Towards Continual Learning M.B. Ring. 12. Reinforcement Learning with Self-Modifying Policies J. Schmidhuber, et al 13. Creating Advice-Taking Reinforcement Learners R. Maclin, J.W. Shavlik. Contributing Authors. Index.",
"",
"",
"",
"A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this goal is approached by minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect. In this work, we propose instead to directly target a later desired task by meta-learning an unsupervised learning rule, which leads to representations useful for that task. Here, our desired task (meta-objective) is the performance of the representation on semi-supervised classification, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations that perform well under this meta-objective. Additionally, we constrain our unsupervised update rule to a be a biologically-motivated, neuron-local function, which enables it to generalize to novel neural network architectures. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We show that the meta-learned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.",
"",
"",
"The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.",
"",
"",
"We propose a novel and flexible approach to meta-learning for learning-to-learn from only a few examples. Our framework is motivated by actor-critic reinforcement learning, but can be applied to both reinforcement and supervised learning. The key idea is to learn a meta-critic: an action-value function neural network that learns to criticise any actor trying to solve any specified task. For supervised learning, this corresponds to the novel idea of a trainable task-parametrised loss generator. This meta-critic approach provides a route to knowledge transfer that can flexibly deal with few-shot and semi-supervised conditions for both reinforcement and supervised learning. Promising results are shown on both reinforcement and supervised learning problems.",
"We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies."
],
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_29",
"@cite_3",
"@cite_24",
"@cite_23",
"@cite_2",
"@cite_31",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"99485931",
"",
"",
"",
"2796429358",
"",
"",
"2963775850",
"",
"",
"2726717203",
"2604763608"
]
}
|
Meta-Learning via Learned Loss
|
Inspired by the remarkable capability of humans to quickly learn and adapt to new tasks, the concept of learning to learn, or meta-learning, recently became popular within the machine learning community [2,4,5]. When thinking about optimizing a policy for a reinforcement learning agent or learning a classification task, it appears sensible to not approach each individual task from scratch but to learn a learning mechanism that is common across a variety of tasks and can be reused.
[Figure 1: Using a learned meta-loss to update an optimizee model. The meta-loss network consumes the optimizee inputs, the optimizee outputs, and task info (target, goal, reward, ...); its output, the meta-loss, is back-propagated through the optimizee (forward and backward passes shown in the original diagram).]
The purpose of this work is to encode these learning strategies into an adaptive high-dimensional loss function, or a meta-loss, which generalizes across multiple tasks and can be utilized to optimize models with different architectures. Inspired by inverse reinforcement learning [18], our work combines the learning-to-learn paradigm of meta-learning with the generality of learning loss landscapes. We construct a unified, fully differentiable framework that can shape the loss function to provide a strong learning signal for a range of models, such as classifiers, regressors, or control policies. As the loss function is independent of the model being optimized, it is agnostic to the particular model architecture. Furthermore, by training our loss function to optimize different tasks, we can achieve generalization across multiple problems. The meta-learning framework presented in this work involves an inner and an outer loop. In the inner loop, a model or an optimizee is trained with gradient descent using the loss coming from our learned meta-loss function. Fig. 1 shows the pipeline for updating the optimizee with the meta-loss. The outer loop optimizes the meta-loss function by minimizing the task-specific losses of updated optimizees. After training the meta-loss function, the task-specific losses are no longer required, since the training of optimizees can be performed entirely by using the meta-loss function alone. In this way, our meta-loss can find more efficient ways to optimize the original task loss. Furthermore, since we can choose which information to provide to our meta-loss, we can train it to work in scenarios with sparse information by only providing inputs that we expect to have at test time.
The contributions of this work are as follows: we present a framework for learning adaptive, high-dimensional loss functions through back-propagation that shape the loss landscape such that it can be efficiently optimized with gradient descent; we show that our learned meta-loss functions are agnostic to the architecture of optimizee models; and we present a reinforcement learning framework that significantly improves the speed of policy training and enables learning in self-supervised and sparse-reward settings.
Meta-Learning via Learned Loss
In this work, we aim to learn an adaptive loss function, which we call meta-loss, that is used to train an optimizee, e.g. a classifier, a regressor or an agent policy. In the following, we describe the general architecture of our framework, which we call Meta-Learning via Learned Loss (ML 3 ).
ML 3 framework
Let f θ be an optimizee with parameters θ. Let M φ be the meta-loss model with parameters φ. Let x be the inputs of the optimizee, f θ (x) outputs of the optimizee and g information about the task, such as a regression target, a classification target, a reward function, etc. Let p(T ) be a distribution of tasks and L Ti (θ) be the task-specific loss of the optimizee f θ for the task T i ∼ p(T ).
Fig. 2 shows the diagram of our framework architecture for a single step of the optimizee update. The optimizee is connected to the meta-loss network, which allows the gradients from the meta-loss to flow through the optimizee. The meta-loss additionally takes the inputs of the optimizee and the task information variable g. In our framework, we represent the meta-loss function using a neural network, which is subsequently referred to as a meta-loss network. It is worth noting that it is possible to train the meta-loss to perform self-supervised learning by not including g in the meta-loss network inputs. A single update of the optimizee is performed using gradient descent on the meta-loss by back-propagating the output of the meta-loss network through the optimizee while keeping the parameters of the meta-loss network fixed:
$$\theta_j = \theta_{j-1} - \alpha \nabla_{\theta_{j-1}} \mathbb{E}\left[ M_\phi\big(x, f_{\theta_{j-1}}(x), g\big) \right] \quad (1)$$
where α is the learning rate, which can be either fixed or learned jointly with the meta-loss network. The objective of learning the meta-loss network is to minimize the task-specific loss over a distribution of tasks T i ∼ p(T ) and over multiple steps of optimizee training with the meta-loss:
$$\mathcal{L}(\phi, \alpha) = \sum_{i=0}^{N} \sum_{j=1}^{M} \mathcal{L}_{T_i}(\theta_{i,j}) = \sum_{i=0}^{N} \sum_{j=1}^{M} \mathcal{L}_{T_i}\!\left( \theta_{i,j-1} - \alpha \nabla_{\theta_{i,j-1}} \mathbb{E}\left[ M_\phi\big(x_i, f_{\theta_{i,j-1}}(x_i), g_i\big) \right] \right) \quad (2)$$
where N is the number of tasks and M is the number of steps of updating the optimizee using the meta-loss. The task-specific objective L(φ, α) depends on the updated optimizee parameters θ j and hence on the parameters of the meta-loss network φ, making it possible to connect the meta-loss network to the task-specific loss and propagate the error back through the meta-loss network. Another variant of this objective would be to only optimize for the final performance of the optimizee at the last step M of applying the meta-loss:
$$\mathcal{L}(\phi, \alpha) = \sum_{i=0}^{N} \mathcal{L}_{T_i}(\theta_{i,M}).$$
However, this requires relying on back-propagation through a chain of all optimizee update steps. As we noticed in our experiments, including the task loss from each step and avoiding propagating it through the chain of updates by stopping the gradients at each optimizee update step works better in practice.

Algorithm 1: ML 3 meta-train
1: Randomly initialize optimizees f_{θ_0}, ..., f_{θ_N}
2: for unroll k ∈ {0, ..., K} do
3:   x, g ← sample a batch of task samples
4:   update each optimizee with the meta-loss for M steps (Eq. 1)
5:   φ, α ← min_{φ,α} Σ_{i=0}^{N} Σ_{j=1}^{M} L_{T_i}( θ_{i,j-1} − α ∇_{θ_{i,j-1}} E[ M_φ(x_i, f_{θ_{i,j-1}}(x_i), g_i) ] )

Algorithm 2: ML 3 at test time (meta-test)
1: Sample a task T ∼ p(T)
2: θ ← θ − α ∇_θ E[ M_φ(x, f_θ(x), g) ]
In order to facilitate the optimization of the meta-loss network for long optimizee update horizons, we split the optimization of L(φ, α) into several steps with smaller horizons, which we denote unrolls, similar to [2]. Algorithm 1 summarizes the training procedure of the meta-loss network, which we later refer to as meta-train. Algorithm 2 shows the optimizee training with the learned meta-loss at test time, which we call meta-test.
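To make the inner and outer loops concrete, the following is a minimal PyTorch sketch of one meta-train iteration on a toy regression task. The functional optimizee, the network sizes, the single inner step, and the sine target are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

# Meta-loss network M_phi: consumes (x, f_theta(x), g) and emits a scalar loss.
meta_net = nn.Sequential(nn.Linear(3, 40), nn.Tanh(),
                         nn.Linear(40, 40), nn.Tanh(),
                         nn.Linear(40, 1))
meta_opt = torch.optim.Adam(meta_net.parameters(), lr=1e-3)
alpha = 1e-3  # inner learning rate; the paper also allows learning it jointly

def optimizee(theta, x):
    # Functional forward pass so that updated parameters stay differentiable.
    h = torch.tanh(x @ theta[0] + theta[1])
    return h @ theta[2] + theta[3]

for it in range(1000):
    # Randomly initialize the optimizee and sample a task (here: sine regression).
    theta = [p.requires_grad_() for p in (0.1 * torch.randn(1, 40),
                                          torch.zeros(40),
                                          0.1 * torch.randn(40, 1),
                                          torch.zeros(1))]
    x = 2 * torch.rand(128, 1) - 1
    g = torch.sin(4.0 * x)  # task info: regression targets

    # Inner step (Eq. 1): descend the learned meta-loss; create_graph=True keeps
    # the parameter update differentiable w.r.t. the meta-loss parameters phi.
    m = meta_net(torch.cat([x, optimizee(theta, x), g], dim=1)).mean()
    grads = torch.autograd.grad(m, theta, create_graph=True)
    theta_new = [p - alpha * dp for p, dp in zip(theta, grads)]

    # Outer step (Eq. 2): evaluate the task-specific loss at the updated
    # parameters and back-propagate it through theta_new into phi.
    task_loss = ((optimizee(theta_new, x) - g) ** 2).mean()
    meta_opt.zero_grad()
    task_loss.backward()
    meta_opt.step()
```

Note that the task-specific loss is used only at meta-train time; at meta-test time the trained meta_net alone drives the inner updates, as in Algorithm 2.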
ML 3 for Reinforcement Learning
In this section, we introduce several modifications that allow us to apply the ML 3 framework to reinforcement learning problems. Let M = (S, A, P, R, p_0, γ, T) be a finite-horizon Markov Decision Process (MDP), where S and A are state and action spaces, P : S × A × S → R+ is a state-transition probability function or system dynamics, R : S × A → R a reward function, p_0 : S → R+ an initial state distribution, γ a reward discount factor, and T a horizon. Let τ = (s_0, a_0, ..., s_T, a_T) be a trajectory of states and actions and R(τ) = Σ_{t=0}^{T} γ^t R(s_t, a_t) the trajectory reward. The goal of reinforcement learning is to find parameters θ of a policy π_θ(a|s) that maximize the expected discounted reward over trajectories induced by the policy: E_{π_θ}[R(τ)], where s_0 ∼ p_0, s_{t+1} ∼ P(s_{t+1}|s_t, a_t), and a_t ∼ π_θ(a_t|s_t). In what follows, we show how to train a meta-loss network to perform effective policy updates in a reinforcement learning scenario.
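As a quick aid for the definitions above, a self-contained sketch of the trajectory reward R(τ); the list interface and the value of γ are arbitrary choices for illustration:

```python
def discounted_return(rewards, gamma=0.99):
    """R(tau) = sum_{t=0}^{T} gamma^t * R(s_t, a_t) for one trajectory."""
    total, discount = 0.0, 1.0
    for r in rewards:
        total += discount * r
        discount *= gamma
    return total

# e.g. discounted_return([1.0, 0.0, 1.0], gamma=0.9) == 1.0 + 0.0 + 0.81
```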
To apply our ML 3 framework, we replace the optimizee f θ from the previous section with a stochastic policy π θ (a|s). We present two cases for applying ML 3 to RL tasks. In the first case, we assume availability of a differentiable system dynamics model and a reward function. In the second case, we assume a fully model-free scenario with a non-differentiable reward function.
In the case of an available differentiable system dynamics model P and a reward function R, the ML 3 objective derived in Eq. 2 can be applied directly by setting the task loss to L_T(θ) = −E_{π_θ}[R(τ)] and differentiating all the way through the reward function, dynamics model, and the policy that was updated using the meta-loss M_φ.
In many realistic scenarios, we have to assume unknown system dynamics models and nondifferentiable reward functions. In this case, we can define a surrogate objective, which is independent of the dynamics model, as our task-specific loss [27,24,21]:
$$\mathcal{L}_T(\theta) = -\mathbb{E}_{\pi_\theta}\left[ R(\tau) \log \pi_\theta(\tau) \right] = -\mathbb{E}_{\pi_\theta}\left[ R(\tau) \sum_{t=0}^{T} \log \pi_\theta(a_t \mid s_t) \right] \quad (3)$$
Although we are evaluating the task loss on full trajectory rewards, we perform policy updates from Eq. 1 using stochastic gradient descent (SGD) on the meta-loss with mini-batches of experience (s_i, a_i, r_i) for i ∈ {0, ..., B} with batch size B, similar to [9]. The inputs of the meta-loss network are the sampled states, sampled actions, rewards, and policy probabilities of the sampled actions: M_φ(s, a, π_θ(a|s), r). We notice that in practice, including the policy's distribution parameters directly in the meta-loss inputs, e.g. the mean µ and standard deviation σ of a Gaussian policy, works better than including the probability estimate π_θ(a|s), as it provides a direct way to update the distribution parameters using back-propagation through the meta-loss.
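For concreteness, a sketch of the surrogate task loss of Eq. 3 for a Gaussian policy; the `policy` interface returning a mean and standard deviation is an assumption for illustration:

```python
import torch

def surrogate_task_loss(policy, states, actions, traj_return):
    """Eq. 3 for one sampled trajectory:
    -R(tau) * sum_t log pi_theta(a_t | s_t).

    states: (T, state_dim), actions: (T, action_dim),
    traj_return: scalar trajectory reward R(tau).
    """
    mean, std = policy(states)                 # assumed Gaussian policy head
    dist = torch.distributions.Normal(mean, std)
    log_prob = dist.log_prob(actions).sum()    # sum over time and action dims
    return -traj_return * log_prob
```

At meta-train time this surrogate provides the outer-loop gradient for φ, while the inner loop updates θ by SGD on M_φ(s, a, π_θ(a|s), r) over mini-batches.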
As we mentioned before, it is possible to provide different information about the task during meta-train and meta-test times. In our work, we show that by providing additional rewards in the task loss during meta-train time, we can encourage the trained meta-loss to learn exploratory behaviors. This additional information shapes the learned loss function such that the environment does not need to provide this information during meta-test time. It is also possible to train the meta-loss in a fully self-supervised fashion, where the task-related input g is excluded from the meta-loss network input.
Experiments
In this section, we evaluate the applicability and the benefits of the learned meta-loss from several angles. The questions we seek to answer are as follows.
(1) Can we learn a loss model that improves upon the original task-specific loss functions, i.e. can we shape the loss landscape to achieve better optimization performance during test time? With an example of a simple regression task, we demonstrate that our framework can generate convex loss landscapes suitable for fast optimization.
(2) Can we improve the learning speed when using our ML 3 loss function as a learning signal in complex, high-dimensional tasks? We concentrate on reinforcement learning tasks as one of the most challenging benchmarks for learning performance.
(3) Can we learn a loss function that can leverage additional information during meta-train time and can operate in sparse reward or self-supervised settings during meta-test time?
(4) Can we learn a loss function that generalizes over different optimizee model architectures?
Throughout all of our experiments, the meta network is parameterized by a feed-forward neural network with two hidden layers of 40 neurons each with tanh activation function. The learning rate for the optimizee network was learned together with the loss.
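The setup above maps to a small module; below is a sketch, where the log-space parameterization of the jointly learned learning rate is our assumption rather than a detail given in the text:

```python
import torch
import torch.nn as nn

class MetaLossNetwork(nn.Module):
    """Feed-forward meta-loss with two tanh hidden layers of 40 units each.
    in_dim depends on which features are concatenated (optimizee inputs,
    outputs, task info, rewards, policy distribution parameters, ...)."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 40), nn.Tanh(),
                                 nn.Linear(40, 40), nn.Tanh(),
                                 nn.Linear(40, 1))
        # Inner learning rate learned jointly with the loss; exp keeps it positive.
        self.log_alpha = nn.Parameter(torch.tensor(-6.9))  # exp(-6.9) ~ 1e-3

    def forward(self, *features):
        return self.net(torch.cat(features, dim=-1)).mean()

    @property
    def alpha(self):
        return self.log_alpha.exp()
```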
Learned Loss Landscape
For visualization and illustration purposes, this set of experiments shows that our meta-learner is able to learn convex loss functions for tasks with inherently non-convex or difficult-to-optimize loss landscapes. Effectively, the meta-loss allows eliminating local minima for gradient-based optimization and creates well-conditioned loss landscapes. We illustrate this on an example of sine frequency regression, where we fit a single parameter for the purpose of visualization simplicity. Below, we show the landscape of optimization with mean-squared loss on the outputs of the sine function using 1000 samples from the target function. The target frequency ν is indicated by a vertical red line, and the mean-squared loss is computed as $\frac{1}{N}\sum_{i=0}^{N} (\sin(\omega x_i) - \sin(\nu x_i))^2$. As noted in [19], the landscape of this loss is highly non-convex and difficult to optimize with conventional gradient descent. In our work, we can circumvent this problem by introducing additional information about the ground-truth value of the frequency at meta-train time, while only using samples from the sine function as inputs to the meta-loss network. That is, during meta-train time, our task-specific loss is the squared distance to the ground-truth frequency: $(\omega - \nu)^2$. The inputs of the meta-loss network are the target values of the sine function, $\sin(\nu x_i)$, similar to the information available in the mean-squared loss. Effectively, during meta-test time we can use the same samples as in the mean-squared loss, yet achieve convex loss landscapes, as depicted in Fig. 3 at the top.
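Both landscapes from this experiment can be reproduced in a few lines; a sketch, with the concrete target frequency chosen arbitrarily:

```python
import torch

x = torch.linspace(-1.0, 1.0, 1000)
nu = torch.tensor(4.0)  # hypothetical ground-truth frequency

def mse_landscape(omega):
    """(1/N) sum_i (sin(omega x_i) - sin(nu x_i))^2 -- highly non-convex in omega."""
    return ((torch.sin(omega * x) - torch.sin(nu * x)) ** 2).mean()

def meta_train_task_loss(omega):
    """(omega - nu)^2 -- uses the ground-truth frequency, which is available
    only at meta-train time."""
    return (omega - nu) ** 2
```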
Reinforcement Learning
For the remainder of the experimental section, we focus on reinforcement learning tasks. Reinforcement learning remains one of the most challenging problems when it comes to learning performance and learning speed. In this section, we present our experiments on a variety of policy optimization problems. We use ML 3 for model-based and model-free reinforcement learning, thus demonstrating the applicability of our approach in both settings. In the former, as mentioned in Section 3.2, we assume access to a differentiable reward function and dynamics model that could be available either a priori or learned from samples with differentiable function approximators, such as neural networks. This scenario formulates the task loss as a function of differentiable trajectories, enabling direct gradient-based optimization of the policy, similar to trajectory optimization methods such as the iterative Linear-Quadratic Regulator (iLQR) [25].
In the model-free setting, we treat the dynamics of the system as a black box. In this case, the direct differentiation of the task loss is not possible and we formulate the learning signal for the meta-loss network as a surrogate policy gradient objective. See Section 3.2 for the detailed description. The policy π θ (a|s) is represented by a feed-forward neural network in all experiments.
Sample efficiency
We now present our results for continuous control reinforcement learning tasks by comparing the task performance of a policy trained with our meta-loss to a policy optimized with an appropriate comparison method. When a model is available, we compare the performance with a gradient-based optimizer, in this case iLQR [25]. iLQR has widespread application in robotics [12,11] and is therefore a suitable comparison method for approaches that require the knowledge of a model. In the model-free setting, we use a popular policy gradient method, Proximal Policy Optimization (PPO) [22], for comparison. We first evaluate our method on simple, classical continuous control problems where the dynamics are known and then continue with higher-dimensional problems where we do not have full knowledge of the model. In Fig. 4a, we compare a policy optimized with the learning signal coming from the meta-loss network to trajectories optimized with iLQR. The task is a free movement task of a point mass in a 2D space with known dynamics parameters; we call this environment PointmassGoal. The state space is four-dimensional, where (x, y, ẋ, ẏ) are the 2D positions and velocities, and the actions are accelerations (ẍ, ÿ). The task distribution p(T) consists of different target positions that the point mass should reach. The task-specific loss at training time is defined by the distance from the target at the last time step during the rollout. In Fig. 4a, we average the learning performance over ten random goals. We observe that the policies optimized with the learned meta-loss converge faster and can get closer to the targets compared to the trajectories optimized with iLQR. We would like to point out that, on top of the improvement in convergence rates, in contrast to iLQR our trained meta-loss does not require a differentiable dynamics model nor a differentiable reward function as its input at meta-test time, as it updates the policy directly through gradient descent.
In Fig. 4b, we provide a similar comparison on a task that requires swinging up and balancing an inverted pendulum. In this task, the state space is three-dimensional: (sin(θ), cos(θ), θ̇), where θ is the angle of the pendulum. The action is a one-dimensional torque. The task distribution consists of different initial angle configurations the pendulum starts in. The plot shows the averaged result over ten different initial configurations of the pendulum. From the figure we can see that the policy optimized with ML 3 is able to swing up and balance, whereas the iLQR trajectory struggles to keep the pendulum upright after swinging it up, and oscillates around the vertical configuration. In the following, we continue with the model-free evaluation. In Fig. 5, we show the performance of our framework using two continuous control tasks based on OpenAI Gym MuJoCo environments [7]: ReacherGoal and AntGoal. The ReacherGoal environment is a 2-link 2D manipulator that has to reach a specified goal location with its end-effector. The task distribution consists of initial random link configurations and random goal locations. The performance metric for this environment is the mean trajectory sum of negative distances to the goal, averaged over 10 tasks.
The AntGoal environment requires a four-legged agent to run to a goal location. The task distribution consists of random goals initialized on a circle around the initial position. The performance metric for this environment is the mean trajectory sum of differences between the initial and the current distances to the goal, averaged over 10 tasks. Fig. 5a and Fig. 5b show the results of the meta-test time performance for the ReacherGoal and the AntGoal environments respectively. We can see that ML 3 loss significantly improves optimization speed in both scenarios compared to PPO. In our experiments, we observed that on average ML 3 requires 5 times fewer samples to reach 80% of task performance in terms of our metrics for the model-free tasks.
Sparse Rewards and Self-Supervised Learning
By providing additional reward information during meta-train time, as pointed out in Section 3.2, it is possible to shape the learned reward signal such that it improves the optimization during policy training. By having access to additional information during meta-training, the meta-loss network can learn a loss function that provides exploratory strategies to the agent or allows the agent to learn in a self-supervised setting.
In Fig. 6, we show results from the MountainCar environment [17], a classical control problem where an under-actuated car has to drive up a steep hill. The propulsion force generated by the car does not allow steady climbing of the hill. To solve the task, the car has to accumulate energy by repeatedly climbing the hill back and forth. In this environment, greedy minimization of the distance to the goal often results in a failure to solve the task. The state space is two-dimensional, consisting of the position and velocity of the car; the action space consists of a one-dimensional torque. In our experiments, we provide intermediate goal positions during meta-train time, which are not available during meta-test time. The meta-loss network incorporates this behavior into its loss, leading to improved exploration during meta-test time, as can be seen in Fig. 6a. Fig. 6b shows the average distance between the car and the goal at the last rollout time step over several iterations of policy updates with ML 3 and iLQR. As we observe, ML 3 can successfully bring the car to the goal in a small number of updates, whereas iLQR is not able to solve this task.
The meta-loss network can also be trained in a fully self-supervised fashion, by removing the task related input g (i.e. rewards) from the meta-loss input. We successfully apply this setting in our experiments with the continuous control MuJoCo environments: the ReacherGoal and the AntGoal (see Fig. 5). In both cases, during meta-train time, the meta-loss network is still optimized using the rewards provided by the environments. However, during meta-test time, no external reward signal is provided and the meta-loss calculates the loss signal for the policy based solely on its environment state input.
Generalization across different model architectures
One key advantage of learning the loss function is its re-usability across different policy architectures, which is impossible for frameworks aiming to meta-train the policy directly [5,4]. To test the capability of the meta-loss to generalize across different architectures, we first meta-train our meta-loss on an architecture with two layers and meta-test the same meta-loss on architectures with a varied number of layers. Fig. 7a and Fig. 7b show the meta-test time comparison for the ReacherGoal and the AntGoal environments in a model-free setting for four different model architectures. Each curve shows the average and the standard deviation over ten different tasks in each environment. Our comparison clearly indicates that the meta-loss can be effectively re-used across multiple architectures, with a mild variation in performance compared to the overall variance of the corresponding task optimization.
Conclusions
In this work, we presented a framework to meta-learn a loss function entirely from data. We showed how the meta-learned loss can become well-conditioned and suitable for efficient optimization with gradient descent. We observed significant speed improvements in benchmark reinforcement learning tasks on a variety of environments. Furthermore, we showed that by introducing additional guiding rewards during training time, we can train our meta-loss to develop exploratory strategies that can significantly improve performance during meta-test time, even in sparse-reward and self-supervised settings. Finally, we presented experiments that demonstrated that the learned meta-loss transfers well to unseen model architectures and therefore can be applied to new policy classes.
| 3,717 |
1906.05374
|
2952193948
|
We present a meta-learning approach based on learning an adaptive, high-dimensional loss function that can generalize across multiple tasks and different model architectures. We develop a fully differentiable pipeline for learning a loss function targeted at maximizing the performance of an optimizee trained using this loss function. We observe that the loss landscape produced by our learned loss significantly improves upon the original task-specific loss. We evaluate our method on supervised and reinforcement learning tasks. Furthermore, we show that our pipeline is able to operate in sparse reward and self-supervised reinforcement learning scenarios.
|
Our framework falls into the category of learning loss landscapes; similar to @cite_2, we aim at learning a separate optimization procedure that can be applied to various optimizee models. However, in contrast to @cite_2 and @cite_1, our framework does not require a specific recurrent architecture for the optimizer and can operate without an explicit external loss or reward function during test time. Furthermore, as our learned loss functions are independent of the models to be optimized, they can be easily transferred to other optimizee models, in contrast to @cite_11, where the learned representation cannot be separated from the original model of the optimizee.
|
{
"abstract": [
"We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.",
"We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using \"inverse reinforcement learning\" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function.",
"The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art."
],
"cite_N": [
"@cite_11",
"@cite_1",
"@cite_2"
],
"mid": [
"2604763608",
"1999874108",
"2963775850"
]
}
|
Meta-Learning via Learned Loss
|
Inspired by the remarkable capability of humans to quickly learn and adapt to new tasks, the concept of learning to learn, or meta-learning, recently became popular within the machine learning community [2,4,5]. When thinking about optimizing a policy for a reinforcement learning agent or learning a classification task, it appears sensible to not approach each individual task from scratch but to learn a learning mechanism that is common across a variety of tasks and can be reused.
[Figure 1: Using a learned meta-loss to update an optimizee model. The meta-loss network consumes the optimizee inputs, the optimizee outputs, and task info (target, goal, reward, ...); its output, the meta-loss, is back-propagated through the optimizee (forward and backward passes shown in the original diagram).]
The purpose of this work is to encode these learning strategies into an adaptive high-dimensional loss function, or a meta-loss, which generalizes across multiple tasks and can be utilized to optimize models with different architectures. Inspired by inverse reinforcement learning [18], our work combines the learning-to-learn paradigm of meta-learning with the generality of learning loss landscapes. We construct a unified, fully differentiable framework that can shape the loss function to provide a strong learning signal for a range of models, such as classifiers, regressors, or control policies. As the loss function is independent of the model being optimized, it is agnostic to the particular model architecture. Furthermore, by training our loss function to optimize different tasks, we can achieve generalization across multiple problems. The meta-learning framework presented in this work involves an inner and an outer loop. In the inner loop, a model or an optimizee is trained with gradient descent using the loss coming from our learned meta-loss function. Fig. 1 shows the pipeline for updating the optimizee with the meta-loss. The outer loop optimizes the meta-loss function by minimizing the task-specific losses of updated optimizees. After training the meta-loss function, the task-specific losses are no longer required, since the training of optimizees can be performed entirely by using the meta-loss function alone. In this way, our meta-loss can find more efficient ways to optimize the original task loss. Furthermore, since we can choose which information to provide to our meta-loss, we can train it to work in scenarios with sparse information by only providing inputs that we expect to have at test time.
The contributions of this work are as follows: we present a framework for learning adaptive, high-dimensional loss functions through back-propagation that shape the loss landscape such that it can be efficiently optimized with gradient descent; we show that our learned meta-loss functions are agnostic to the architecture of optimizee models; and we present a reinforcement learning framework that significantly improves the speed of policy training and enables learning in self-supervised and sparse-reward settings.
Meta-Learning via Learned Loss
In this work, we aim to learn an adaptive loss function, which we call meta-loss, that is used to train an optimizee, e.g. a classifier, a regressor or an agent policy. In the following, we describe the general architecture of our framework, which we call Meta-Learning via Learned Loss (ML 3 ).
ML 3 framework
Let f θ be an optimizee with parameters θ. Let M φ be the meta-loss model with parameters φ. Let x be the inputs of the optimizee, f θ (x) outputs of the optimizee and g information about the task, such as a regression target, a classification target, a reward function, etc. Let p(T ) be a distribution of tasks and L Ti (θ) be the task-specific loss of the optimizee f θ for the task T i ∼ p(T ).
Fig. 2 shows the diagram of our framework architecture for a single step of the optimizee update. The optimizee is connected to the meta-loss network, which allows the gradients from the meta-loss to flow through the optimizee. The meta-loss additionally takes the inputs of the optimizee and the task information variable g. In our framework, we represent the meta-loss function using a neural network, which is subsequently referred to as a meta-loss network. It is worth noting that it is possible to train the meta-loss to perform self-supervised learning by not including g in the meta-loss network inputs. A single update of the optimizee is performed using gradient descent on the meta-loss by back-propagating the output of the meta-loss network through the optimizee while keeping the parameters of the meta-loss network fixed:
$$\theta_j = \theta_{j-1} - \alpha \nabla_{\theta_{j-1}} \mathbb{E}\left[ M_\phi\big(x, f_{\theta_{j-1}}(x), g\big) \right] \quad (1)$$
where α is the learning rate, which can be either fixed or learned jointly with the meta-loss network. The objective of learning the meta-loss network is to minimize the task-specific loss over a distribution of tasks T i ∼ p(T ) and over multiple steps of optimizee training with the meta-loss:
$$\mathcal{L}(\phi, \alpha) = \sum_{i=0}^{N} \sum_{j=1}^{M} \mathcal{L}_{T_i}(\theta_{i,j}) = \sum_{i=0}^{N} \sum_{j=1}^{M} \mathcal{L}_{T_i}\!\left( \theta_{i,j-1} - \alpha \nabla_{\theta_{i,j-1}} \mathbb{E}\left[ M_\phi\big(x_i, f_{\theta_{i,j-1}}(x_i), g_i\big) \right] \right) \quad (2)$$
where N is the number of tasks and M is the number of steps of updating the optimizee using the meta-loss. The task-specific objective L(φ, α) depends on the updated optimizee parameters θ j and hence on the parameters of the meta-loss network φ, making it possible to connect the meta-loss network to the task-specific loss and propagate the error back through the meta-loss network. Another variant of this objective would be to only optimize for the final performance of the optimizee at the last step M of applying the meta-loss:
$$\mathcal{L}(\phi, \alpha) = \sum_{i=0}^{N} \mathcal{L}_{T_i}(\theta_{i,M}).$$
However, this requires relying on back-propagation through a chain of all optimizee update steps. As we noticed in our experiments, including the task loss from each step and avoiding propagating it through the chain of updates by stopping the gradients at each optimizee update step works better in practice.

Algorithm 1: ML 3 meta-train
1: Randomly initialize optimizees f_{θ_0}, ..., f_{θ_N}
2: for unroll k ∈ {0, ..., K} do
3:   x, g ← sample a batch of task samples
4:   update each optimizee with the meta-loss for M steps (Eq. 1)
5:   φ, α ← min_{φ,α} Σ_{i=0}^{N} Σ_{j=1}^{M} L_{T_i}( θ_{i,j-1} − α ∇_{θ_{i,j-1}} E[ M_φ(x_i, f_{θ_{i,j-1}}(x_i), g_i) ] )

Algorithm 2: ML 3 at test time (meta-test)
1: Sample a task T ∼ p(T)
2: θ ← θ − α ∇_θ E[ M_φ(x, f_θ(x), g) ]
In order to facilitate the optimization of the meta-loss network for long optimizee update horizons, we split the optimization of L(φ, α) into several steps with smaller horizons, which we denote unrolls, similar to [2]. Algorithm 1 summarizes the training procedure of the meta-loss network, which we later refer to as meta-train. Algorithm 2 shows the optimizee training with the learned meta-loss at test time, which we call meta-test.
ML 3 for Reinforcement Learning
In this section, we introduce several modifications that allow us to apply the ML 3 framework to reinforcement learning problems. Let M = (S, A, P, R, p_0, γ, T) be a finite-horizon Markov Decision Process (MDP), where S and A are state and action spaces, P : S × A × S → R+ is a state-transition probability function or system dynamics, R : S × A → R a reward function, p_0 : S → R+ an initial state distribution, γ a reward discount factor, and T a horizon. Let τ = (s_0, a_0, ..., s_T, a_T) be a trajectory of states and actions and R(τ) = Σ_{t=0}^{T} γ^t R(s_t, a_t) the trajectory reward. The goal of reinforcement learning is to find parameters θ of a policy π_θ(a|s) that maximize the expected discounted reward over trajectories induced by the policy: E_{π_θ}[R(τ)], where s_0 ∼ p_0, s_{t+1} ∼ P(s_{t+1}|s_t, a_t), and a_t ∼ π_θ(a_t|s_t). In what follows, we show how to train a meta-loss network to perform effective policy updates in a reinforcement learning scenario.
To apply our ML 3 framework, we replace the optimizee f θ from the previous section with a stochastic policy π θ (a|s). We present two cases for applying ML 3 to RL tasks. In the first case, we assume availability of a differentiable system dynamics model and a reward function. In the second case, we assume a fully model-free scenario with a non-differentiable reward function.
In the case of an available differentiable system dynamics model P and a reward function R, the ML 3 objective derived in Eq. 2 can be applied directly by setting the task loss to L_T(θ) = −E_{π_θ}[R(τ)] and differentiating all the way through the reward function, dynamics model, and the policy that was updated using the meta-loss M_φ.
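A minimal sketch of this model-based task loss, assuming differentiable `policy`, `dynamics`, and `reward` callables; the names, the fixed horizon, and the undiscounted sum are simplifying assumptions:

```python
import torch

def model_based_task_loss(policy, dynamics, reward, s0, horizon=50):
    """-E[R(tau)]: rolls the policy through a differentiable model so the
    gradient flows through states, actions, and rewards into theta."""
    s, total = s0, torch.zeros(())
    for _ in range(horizon):
        a = policy(s)
        total = total + reward(s, a)
        s = dynamics(s, a)
    return -total
```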
In many realistic scenarios, we have to assume unknown system dynamics models and nondifferentiable reward functions. In this case, we can define a surrogate objective, which is independent of the dynamics model, as our task-specific loss [27,24,21]:
$$\mathcal{L}_T(\theta) = -\mathbb{E}_{\pi_\theta}\left[ R(\tau) \log \pi_\theta(\tau) \right] = -\mathbb{E}_{\pi_\theta}\left[ R(\tau) \sum_{t=0}^{T} \log \pi_\theta(a_t \mid s_t) \right] \quad (3)$$
Although we are evaluating the task loss on full trajectory rewards, we perform policy updates from Eq. 1 using stochastic gradient descent (SGD) on the meta-loss with mini-batches of experience (s_i, a_i, r_i) for i ∈ {0, ..., B} with batch size B, similar to [9]. The inputs of the meta-loss network are the sampled states, sampled actions, rewards, and policy probabilities of the sampled actions: M_φ(s, a, π_θ(a|s), r). We notice that in practice, including the policy's distribution parameters directly in the meta-loss inputs, e.g. the mean µ and standard deviation σ of a Gaussian policy, works better than including the probability estimate π_θ(a|s), as it provides a direct way to update the distribution parameters using back-propagation through the meta-loss.
As we mentioned before, it is possible to provide different information about the task during meta-train and meta-test times. In our work, we show that by providing additional rewards in the task loss during meta-train time, we can encourage the trained meta-loss to learn exploratory behaviors. This additional information shapes the learned loss function such that the environment does not need to provide this information during meta-test time. It is also possible to train the meta-loss in a fully self-supervised fashion, where the task-related input g is excluded from the meta-loss network input.
Experiments
In this section, we evaluate the applicability and the benefits of the learned meta-loss from several angles. The questions we seek to answer are as follows.
(1) Can we learn a loss model that improves upon the original task-specific loss functions, i.e. can we shape the loss landscape to achieve better optimization performance during test time? With an example of a simple regression task, we demonstrate that our framework can generate convex loss landscapes suitable for fast optimization.
(2) Can we improve the learning speed when using our ML 3 loss function as a learning signal in complex, high-dimensional tasks? We concentrate on reinforcement learning tasks as one of the most challenging benchmarks for learning performance.
(3) Can we learn a loss function that can leverage additional information during meta-train time and can operate in sparse reward or self-supervised settings during meta-test time?
(4) Can we learn a loss function that generalizes over different optimizee model architectures?
Throughout all of our experiments, the meta network is parameterized by a feed-forward neural network with two hidden layers of 40 neurons each with tanh activation function. The learning rate for the optimizee network was learned together with the loss.
Learned Loss Landscape
For visualization and illustration purposes, this set of experiments shows that our meta-learner is able to learn convex loss functions for tasks with inherently non-convex or difficult-to-optimize loss landscapes. Effectively, the meta-loss allows eliminating local minima for gradient-based optimization and creates well-conditioned loss landscapes. We illustrate this on an example of sine frequency regression, where we fit a single parameter for the purpose of visualization simplicity. Below, we show the landscape of optimization with mean-squared loss on the outputs of the sine function using 1000 samples from the target function. The target frequency ν is indicated by a vertical red line, and the mean-squared loss is computed as $\frac{1}{N}\sum_{i=0}^{N} (\sin(\omega x_i) - \sin(\nu x_i))^2$. As noted in [19], the landscape of this loss is highly non-convex and difficult to optimize with conventional gradient descent. In our work, we can circumvent this problem by introducing additional information about the ground-truth value of the frequency at meta-train time, while only using samples from the sine function as inputs to the meta-loss network. That is, during meta-train time, our task-specific loss is the squared distance to the ground-truth frequency: $(\omega - \nu)^2$. The inputs of the meta-loss network are the target values of the sine function, $\sin(\nu x_i)$, similar to the information available in the mean-squared loss. Effectively, during meta-test time we can use the same samples as in the mean-squared loss, yet achieve convex loss landscapes, as depicted in Fig. 3 at the top.
Reinforcement Learning
For the remainder of the experimental section, we focus on reinforcement learning tasks. Reinforcement learning remains one of the most challenging problems when it comes to learning performance and learning speed. In this section, we present our experiments on a variety of policy optimization problems. We use ML 3 for model-based and model-free reinforcement learning, thus demonstrating the applicability of our approach in both settings. In the former, as mentioned in Section 3.2, we assume access to a differentiable reward function and dynamics model that could be available either a priori or learned from samples with differentiable function approximators, such as neural networks. This scenario formulates the task loss as a function of differentiable trajectories, enabling direct gradient-based optimization of the policy, similar to trajectory optimization methods such as the iterative Linear-Quadratic Regulator (iLQR) [25].
In the model-free setting, we treat the dynamics of the system as a black box. In this case, the direct differentiation of the task loss is not possible and we formulate the learning signal for the meta-loss network as a surrogate policy gradient objective. See Section 3.2 for the detailed description. The policy π θ (a|s) is represented by a feed-forward neural network in all experiments.
Sample efficiency
We now present our results for continuous control reinforcement learning tasks by comparing the task performance of a policy trained with our meta-loss to a policy optimized with an appropriate comparison method. When a model is available, we compare the performance with a gradient-based optimizer, in this case iLQR [25]. iLQR has widespread application in robotics [12,11] and is therefore a suitable comparison method for approaches that require the knowledge of a model. In the model-free setting, we use a popular policy gradient method, Proximal Policy Optimization (PPO) [22], for comparison. We first evaluate our method on simple, classical continuous control problems where the dynamics are known and then continue with higher-dimensional problems where we do not have full knowledge of the model. In Fig. 4a, we compare a policy optimized with the learning signal coming from the meta-loss network to trajectories optimized with iLQR. The task is a free movement task of a point mass in a 2D space with known dynamics parameters; we call this environment PointmassGoal. The state space is four-dimensional, where (x, y, ẋ, ẏ) are the 2D positions and velocities, and the actions are accelerations (ẍ, ÿ). The task distribution p(T) consists of different target positions that the point mass should reach. The task-specific loss at training time is defined by the distance from the target at the last time step during the rollout. In Fig. 4a, we average the learning performance over ten random goals. We observe that the policies optimized with the learned meta-loss converge faster and can get closer to the targets compared to the trajectories optimized with iLQR. We would like to point out that, on top of the improvement in convergence rates, in contrast to iLQR our trained meta-loss does not require a differentiable dynamics model nor a differentiable reward function as its input at meta-test time, as it updates the policy directly through gradient descent.
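For reference, the PointmassGoal dynamics amount to a double integrator; a sketch, with the step size dt an assumed value:

```python
import torch

def pointmass_step(state, accel, dt=0.05):
    """Double-integrator step: state = (x, y, xdot, ydot), action = (xddot, yddot)."""
    pos, vel = state[..., :2], state[..., 2:]
    vel = vel + dt * accel
    pos = pos + dt * vel
    return torch.cat([pos, vel], dim=-1)
```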
In Fig. 4b, we provide a similar comparison on a task that requires swinging up and balancing an inverted pendulum. In this task, the state space is three-dimensional: (sin(θ), cos(θ), θ̇), where θ is the angle of the pendulum. The action is a one-dimensional torque. The task distribution consists of different initial angle configurations the pendulum starts in. The plot shows the averaged result over ten different initial configurations of the pendulum. From the figure we can see that the policy optimized with ML 3 is able to swing up and balance, whereas the iLQR trajectory struggles to keep the pendulum upright after swinging it up, and oscillates around the vertical configuration. In the following, we continue with the model-free evaluation. In Fig. 5, we show the performance of our framework using two continuous control tasks based on OpenAI Gym MuJoCo environments [7]: ReacherGoal and AntGoal. The ReacherGoal environment is a 2-link 2D manipulator that has to reach a specified goal location with its end-effector. The task distribution consists of initial random link configurations and random goal locations. The performance metric for this environment is the mean trajectory sum of negative distances to the goal, averaged over 10 tasks.
The AntGoal environment requires a four-legged agent to run to a goal location. The task distribution consists of random goals initialized on a circle around the initial position. The performance metric for this environment is the mean trajectory sum of differences between the initial and the current distances to the goal, averaged over 10 tasks. Fig. 5a and Fig. 5b show the results of the meta-test time performance for the ReacherGoal and the AntGoal environments respectively. We can see that ML 3 loss significantly improves optimization speed in both scenarios compared to PPO. In our experiments, we observed that on average ML 3 requires 5 times fewer samples to reach 80% of task performance in terms of our metrics for the model-free tasks.
Sparse Rewards and Self-Supervised Learning
By providing additional reward information during meta-train time, as pointed out in Section 3.2, it is possible to shape the learned reward signal such that it improves the optimization during policy training. By having access to additional information during meta-training, the meta-loss network can learn a loss function that provides exploratory strategies to the agent or allows the agent to learn in a self-supervised setting.
In Fig. 6, we show results from the MountainCar environment [17], a classical control problem where an under-actuated car has to drive up a steep hill. The propulsion force generated by the car does not allow steady climbing of the hill. To solve the task, the car has to accumulate energy by repeatedly climbing the hill back and forth. In this environment, greedy minimization of the distance to the goal often results in a failure to solve the task. The state space is two-dimensional, consisting of the position and velocity of the car; the action space consists of a one-dimensional torque. In our experiments, we provide intermediate goal positions during meta-train time, which are not available during meta-test time. The meta-loss network incorporates this behavior into its loss, leading to improved exploration during meta-test time, as can be seen in Fig. 6a. Fig. 6b shows the average distance between the car and the goal at the last rollout time step over several iterations of policy updates with ML 3 and iLQR. As we observe, ML 3 can successfully bring the car to the goal in a small number of updates, whereas iLQR is not able to solve this task.
The meta-loss network can also be trained in a fully self-supervised fashion, by removing the task related input g (i.e. rewards) from the meta-loss input. We successfully apply this setting in our experiments with the continuous control MuJoCo environments: the ReacherGoal and the AntGoal (see Fig. 5). In both cases, during meta-train time, the meta-loss network is still optimized using the rewards provided by the environments. However, during meta-test time, no external reward signal is provided and the meta-loss calculates the loss signal for the policy based solely on its environment state input.
Generalization across different model architectures
One key advantage of learning the loss function is its re-usability across different policy architectures, which is impossible for frameworks aiming to meta-train the policy directly [5,4]. To test the capability of the meta-loss to generalize across different architectures, we first meta-train our meta-loss on an architecture with two layers and meta-test the same meta-loss on architectures with a varied number of layers. Fig. 7a and Fig. 7b show the meta-test time comparison for the ReacherGoal and the AntGoal environments in a model-free setting for four different model architectures. Each curve shows the average and the standard deviation over ten different tasks in each environment. Our comparison clearly indicates that the meta-loss can be effectively re-used across multiple architectures, with a mild variation in performance compared to the overall variance of the corresponding task optimization.
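A sketch of how such an architecture sweep can be set up: the same frozen meta-loss meta-tests policies of different depths. The policy factory below is hypothetical and not the exact networks used in the experiments:

```python
import torch.nn as nn

def make_policy(depth, state_dim=11, action_dim=2, hidden=64):
    """Hypothetical feed-forward Gaussian policy factory for the sweep."""
    layers, in_dim = [], state_dim
    for _ in range(depth):
        layers += [nn.Linear(in_dim, hidden), nn.Tanh()]
        in_dim = hidden
    layers.append(nn.Linear(in_dim, 2 * action_dim))  # mean and log-std
    return nn.Sequential(*layers)

# Meta-test the same frozen meta-loss M_phi on each architecture:
policies = {depth: make_policy(depth) for depth in (1, 2, 3, 4)}
```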
Conclusions
In this work, we presented a framework to meta-learn a loss function entirely from data. We showed how the meta-learned loss can become well-conditioned and suitable for efficient optimization with gradient descent. We observed significant speed improvements in benchmark reinforcement learning tasks on a variety of environments. Furthermore, we showed that by introducing additional guiding rewards during training time, we can train our meta-loss to develop exploratory strategies that can significantly improve performance during meta-test time, even in sparse-reward and self-supervised settings. Finally, we presented experiments that demonstrated that the learned meta-loss transfers well to unseen model architectures and therefore can be applied to new policy classes.
| 3,717 |
1906.05374
|
2952193948
|
We present a meta-learning approach based on learning an adaptive, high-dimensional loss function that can generalize across multiple tasks and different model architectures. We develop a fully differentiable pipeline for learning a loss function targeted at maximizing the performance of an optimizee trained using this loss function. We observe that the loss landscape produced by our learned loss significantly improves upon the original task-specific loss. We evaluate our method on supervised and reinforcement learning tasks. Furthermore, we show that our pipeline is able to operate in sparse reward and self-supervised reinforcement learning scenarios.
|
A range of recent works demonstrate the advantages of meta-learning for improving exploration strategies in RL settings, especially in the presence of sparse rewards. In @cite_27, an agent is trained to mimic expert demonstrations while only having access to a sparse reward signal during test time. In @cite_26 and @cite_21, a structured latent exploration space is learned from prior experience, which enables fast exploration in novel tasks. @cite_5 proposes a method for automatically learning potential-based reward shaping by learning the Q-function parameters during the meta-training phase, such that at meta-test time the Q-function can adapt quickly to new tasks. In our work, we also demonstrate that we can significantly improve RL sample efficiency by training our meta-loss to optimize an actor policy, even when providing only limited or no reward information to the learned loss function at test time.
|
{
"abstract": [
"Reward shaping is one of the most effective methods to tackle the crucial yet challenging problem of credit assignment in Reinforcement Learning (RL). However, designing shaping functions usually requires much expert knowledge and hand-engineering, and the difficulties are further exacerbated given multiple similar tasks to solve. In this paper, we consider reward shaping on a distribution of tasks, and propose a general meta-learning framework to automatically learn the efficient reward shaping on newly sampled tasks, assuming only shared state space but not necessarily action space. We first derive the theoretically optimal reward shaping in terms of credit assignment in model-free RL. We then propose a value-based meta-learning algorithm to extract an effective prior over the optimal reward shaping. The prior can be applied directly to new tasks, or provably adapted to the task-posterior while solving the task within few gradient updates. We demonstrate the effectiveness of our shaping through significantly improved learning efficiency and interpretable visualizations across various settings, including notably a successful transfer from DQN to DDPG.",
"",
"We present a method for reinforcement learning of closely related skills that are parameterized via a skill embedding space. We learn such skills by taking advantage of latent variables and exploiting a connection between reinforcement learning and variational inference. The main contribution of our work is an entropy-regularized policy gradient formulation for hierarchical policies, and an associated, data-efficient and robust off-policy gradient algorithm based on stochastic value gradients. We demonstrate the effectiveness of our method on several simulated robotic manipulation tasks. We find that our method allows for discovery of multiple solutions and is capable of learning the minimum number of distinct skills that are necessary to solve a given set of tasks. In addition, our results indicate that the hereby proposed technique can interpolate and or sequence previously learned skills in order to accomplish more complex tasks, even in the presence of sparse rewards.",
"Exploration is a fundamental challenge in reinforcement learning (RL). Many of the current exploration methods for deep RL use task-agnostic objectives, such as information gain or bonuses based on state visitation. However, many practical applications of RL involve learning more than a single task, and prior tasks can be used to inform how exploration should be performed in new tasks. In this work, we explore how prior tasks can inform an agent about how to explore effectively in new situations. We introduce a novel gradient-based fast adaptation algorithm -- model agnostic exploration with structured noise (MAESN) -- to learn exploration strategies from prior experience. The prior experience is used both to initialize a policy and to acquire a latent exploration space that can inject structured stochasticity into a policy, producing exploration strategies that are informed by prior knowledge and are more effective than random action-space noise. We show that MAESN is more effective at learning exploration strategies when compared to prior meta-RL methods, RL without learned exploration strategies, and task-agnostic exploration methods. We evaluate our method on a variety of simulated tasks: locomotion with a wheeled robot, locomotion with a quadrupedal walker, and object manipulation."
],
"cite_N": [
"@cite_5",
"@cite_27",
"@cite_26",
"@cite_21"
],
"mid": [
"2913350117",
"",
"2785342287",
"2788904251"
]
}
|
Meta-Learning via Learned Loss
|
Inspired by the remarkable capability of humans to quickly learn and adapt to new tasks, the concept of learning to learn, or meta-learning, recently became popular within the machine learning community [2,4,5]. When thinking about optimizing a policy for a reinforcement learning agent or learning a classification task, it appears sensible to not approach each individual task from scratch but to learn a learning mechanism that is common across a variety of tasks and can be reused.
[Figure 1: Using a learned meta-loss to update an optimizee model. The meta-loss network consumes the optimizee inputs, the optimizee outputs, and task info (target, goal, reward, ...); its output, the meta-loss, is back-propagated through the optimizee (forward and backward passes shown in the original diagram).]
The purpose of this work is to encode these learning strategies into an adaptive high-dimensional loss function, or a meta-loss, which generalizes across multiple tasks and can be utilized to optimize models with different architectures. Inspired by inverse reinforcement learning [18], our work combines the learning-to-learn paradigm of meta-learning with the generality of learning loss landscapes. We construct a unified, fully differentiable framework that can shape the loss function to provide a strong learning signal for a range of models, such as classifiers, regressors, or control policies. As the loss function is independent of the model being optimized, it is agnostic to the particular model architecture. Furthermore, by training our loss function to optimize different tasks, we can achieve generalization across multiple problems. The meta-learning framework presented in this work involves an inner and an outer loop. In the inner loop, a model or an optimizee is trained with gradient descent using the loss coming from our learned meta-loss function. Fig. 1 shows the pipeline for updating the optimizee with the meta-loss. The outer loop optimizes the meta-loss function by minimizing the task-specific losses of updated optimizees. After training the meta-loss function, the task-specific losses are no longer required, since the training of optimizees can be performed entirely by using the meta-loss function alone. In this way, our meta-loss can find more efficient ways to optimize the original task loss. Furthermore, since we can choose which information to provide to our meta-loss, we can train it to work in scenarios with sparse information by only providing inputs that we expect to have at test time.
The contributions of this work are as follows: we present a framework for learning adaptive, high-dimensional loss functions through back-propagation that shape the loss landscape such that it can be efficiently optimized with gradient descent; we show that our learned meta-loss functions are agnostic to the architecture of optimizee models; and we present a reinforcement learning framework that significantly improves the speed of policy training and enables learning in self-supervised and sparse-reward settings.
Meta-Learning via Learned Loss
In this work, we aim to learn an adaptive loss function, which we call meta-loss, that is used to train an optimizee, e.g. a classifier, a regressor or an agent policy. In the following, we describe the general architecture of our framework, which we call Meta-Learning via Learned Loss (ML 3 ).
ML³ framework
Let f_θ be an optimizee with parameters θ. Let M_φ be the meta-loss model with parameters φ. Let x be the inputs of the optimizee, f_θ(x) the outputs of the optimizee, and g information about the task, such as a regression target, a classification target, a reward function, etc. Let p(T) be a distribution of tasks and L_{T_i}(θ) be the task-specific loss of the optimizee f_θ for the task T_i ∼ p(T).
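As a concrete illustration of this notation, the sketch below gives one possible neural parameterization of M_φ in Python/PyTorch, using the two-hidden-layer, 40-unit tanh architecture reported in the experiments later; treating g as a fixed-size vector and concatenating (x, f_θ(x), g) are illustrative assumptions, not a prescription from the method itself.

```python
# A minimal sketch of a meta-loss network M_phi (assumptions noted above).
import torch
import torch.nn as nn

class MetaLossNetwork(nn.Module):
    def __init__(self, x_dim, y_dim, g_dim, hidden=40):
        super().__init__()
        # Two hidden layers of 40 tanh units, matching the experimental setup.
        self.net = nn.Sequential(
            nn.Linear(x_dim + y_dim + g_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, f_x, g):
        # Score optimizee inputs, outputs, and task info; Eq. (1) takes the mean.
        return self.net(torch.cat([x, f_x, g], dim=-1))
```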
Fig. 2 shows the diagram of our framework architecture for a single step of the optimizee update. The optimizee is connected to the meta-loss network, which allows the gradients from the meta-loss to flow through the optimizee. The meta-loss network additionally takes the inputs of the optimizee and the task information variable g. In our framework, we represent the meta-loss function using a neural network, which is subsequently referred to as the meta-loss network. It is worth noting that it is possible to train the meta-loss to perform self-supervised learning by not including g in the meta-loss network inputs. A single update of the optimizee is performed using gradient descent on the meta-loss by back-propagating the output of the meta-loss network through the optimizee, keeping the parameters of the meta-loss network fixed:
$$\theta_j = \theta_{j-1} - \alpha \nabla_{\theta_{j-1}} \mathbb{E}\!\left[ M_\phi\!\left(x, f_{\theta_{j-1}}(x), g\right) \right] \tag{1}$$
where α is the learning rate, which can be either fixed or learned jointly with the meta-loss network. The objective of learning the meta-loss network is to minimize the task-specific loss over a distribution of tasks T_i ∼ p(T) and over multiple steps of optimizee training with the meta-loss:
$$\mathcal{L}(\phi, \alpha) = \sum_{i=0}^{N} \sum_{j=1}^{M} \mathcal{L}_{T_i}(\theta_{i,j}) = \sum_{i=0}^{N} \sum_{j=1}^{M} \mathcal{L}_{T_i}\!\left(\theta_{i,j-1} - \alpha \nabla_{\theta_{i,j-1}} \mathbb{E}\!\left[ M_\phi\!\left(x_i, f_{\theta_{i,j-1}}(x_i), g_i\right) \right]\right) \tag{2}$$
where N is the number of tasks and M is the number of steps of updating the optimizee using the meta-loss. The task-specific objective L(φ, α) depends on the updated optimizee parameters θ_j and hence on the parameters of the meta-loss network φ, making it possible to connect the meta-loss network to the task-specific loss and propagate the error back through the meta-loss network. Another variant of this objective would be to only optimize for the final performance of the optimizee at the last step M of applying the meta-loss:
$$\mathcal{L}(\phi, \alpha) = \sum_{i=0}^{N} \mathcal{L}_{T_i}(\theta_{i,M}).$$
However, this requires relying on back-propagation through a chain of all optimizee update steps. As we noticed in our experiments, including the task loss from each step and avoiding propagating it through the chain of updates by stopping the gradients at each optimizee update step works better in practice.

Algorithm 1 ML³ meta-training (meta-train):
1: for unroll k ∈ {0, ..., K} do
2:   Randomly initialize optimizees f_{θ_0}, ..., f_{θ_N}
3:   x, g ← sample a batch of task samples
4:   φ, α ← min_{φ,α} Σ_{i=0}^{N} Σ_{j=1}^{M} L_{T_i}(θ_{i,j−1} − α ∇_{θ_{i,j−1}} E[M_φ(x_i, f_{θ_{i,j−1}}(x_i), g_i)])

Algorithm 2 ML³ at test time (meta-test):
1: T ∼ p(T); sample task samples x, g
2: θ ← θ − α ∇_θ E[M_φ(x, f_θ(x), g)]
In order to facilitate the optimization of the meta-loss network for long optimizee update horizons, we split the optimization of L(φ, α) into several steps with smaller horizons, which we denote unrolls, similar to [2]. Algorithm 1 summarizes the training procedure of the meta-loss network, which we later refer to as meta-train. Algorithm 2 shows the optimizee training with the learned meta-loss at test time, which we call meta-test.
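A rough Python sketch of the meta-train loop under these conventions follows; `make_optimizee`, `task_loss`, and `sample_task_batch` are hypothetical helpers, α is kept fixed for simplicity even though it can be learned jointly, and gradients are stopped between inner steps as described above.

```python
# A sketch of meta-train (Algorithm 1), assuming PyTorch >= 2.0.
import torch
from torch.func import functional_call

def meta_train(meta_loss_net, make_optimizee, task_loss, sample_task_batch,
               n_unrolls=100, n_inner_steps=5, alpha=1e-3):
    meta_opt = torch.optim.Adam(meta_loss_net.parameters(), lr=1e-3)
    for _ in range(n_unrolls):
        optimizee = make_optimizee()                 # random re-initialization
        x, g = sample_task_batch()
        params = dict(optimizee.named_parameters())
        meta_opt.zero_grad()
        for _ in range(n_inner_steps):
            y = functional_call(optimizee, params, (x,))   # f_theta(x)
            learned = meta_loss_net(x, y, g).mean()        # E[M_phi(x, f(x), g)]
            grads = torch.autograd.grad(learned, list(params.values()),
                                        create_graph=True)
            # Differentiable inner step: theta' depends on phi via M_phi (Eq. 1).
            params = {k: p - alpha * dp
                      for (k, p), dp in zip(params.items(), grads)}
            # Task-specific loss of the updated optimizee drives the meta update
            # (hypothetical helper evaluating L_T with the new parameters).
            task_loss(optimizee, params, x, g).backward()
            # Stop gradients before the next inner step, as described above.
            params = {k: p.detach().requires_grad_() for k, p in params.items()}
        meta_opt.step()
```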
ML³ for Reinforcement Learning
In this section, we introduce several modifications that allow us to apply the ML³ framework to reinforcement learning problems. Let M = (S, A, P, R, p_0, γ, T) be a finite-horizon Markov Decision Process (MDP), where S and A are state and action spaces, P : S × A × S → R⁺ is a state-transition probability function or system dynamics, R : S × A → R a reward function, p_0 : S → R⁺ an initial state distribution, γ a reward discount factor, and T a horizon. Let τ = (s_0, a_0, ..., s_T, a_T) be a trajectory of states and actions and R(τ) = Σ_{t=0}^{T} γ^t R(s_t, a_t) the trajectory reward. The goal of reinforcement learning is to find parameters θ of a policy π_θ(a|s) that maximize the expected discounted reward over trajectories induced by the policy: E_{π_θ}[R(τ)], where s_0 ∼ p_0, s_{t+1} ∼ P(s_{t+1}|s_t, a_t), and a_t ∼ π_θ(a_t|s_t). In what follows, we show how to train a meta-loss network to perform effective policy updates in a reinforcement learning scenario.
To apply our ML³ framework, we replace the optimizee f_θ from the previous section with a stochastic policy π_θ(a|s). We present two cases for applying ML³ to RL tasks. In the first case, we assume availability of a differentiable system dynamics model and a reward function. In the second case, we assume a fully model-free scenario with a non-differentiable reward function.
In the case of an available differentiable system dynamics model P and a reward function R, the ML³ objective derived in Eq. 2 can be applied directly by setting the task loss to L_T(θ) = −E_{π_θ}[R(τ)] and differentiating all the way through the reward function, the dynamics model, and the policy that was updated using the meta-loss M_φ.
In many realistic scenarios, we have to assume unknown system dynamics models and non-differentiable reward functions. In this case, we can define a surrogate objective, which is independent of the dynamics model, as our task-specific loss [27,24,21]:
$$\mathcal{L}_T(\theta) = -\mathbb{E}_{\pi_\theta}\!\left[ R(\tau) \log \pi_\theta(\tau) \right] = -\mathbb{E}_{\pi_\theta}\!\left[ R(\tau) \sum_{t=0}^{T} \log \pi_\theta(a_t | s_t) \right] \tag{3}$$
Although we are evaluating the task loss on full trajectory rewards, we perform policy updates from Eq. 1 using stochastic gradient descent (SGD) on the meta-loss with mini-batches of experience (s_i, a_i, r_i) for i ∈ {0, ..., B} with batch size B, similar to [9]. The inputs of the meta-loss network are the sampled states, sampled actions, rewards, and policy probabilities of the sampled actions: M_φ(s, a, π_θ(a|s), r). We notice that in practice, including the policy's distribution parameters directly in the meta-loss inputs, e.g. the mean μ and standard deviation σ of a Gaussian policy, works better than including the probability estimate π_θ(a|s), as it provides a direct way to update the distribution parameters using back-propagation through the meta-loss.
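A minimal sketch of the surrogate task loss in Eq. (3) might look as follows; it assumes per-step log-probabilities and a precomputed trajectory reward R(τ) are available as tensors, and is used only at meta-train time to score updated policies.

```python
# Surrogate task loss of Eq. (3): a dynamics-free, REINFORCE-style objective.
import torch

def surrogate_task_loss(log_probs: torch.Tensor, traj_return: torch.Tensor):
    # -R(tau) * sum_t log pi_theta(a_t|s_t); the return is treated as a
    # constant with respect to the policy parameters theta.
    return -traj_return.detach() * log_probs.sum()
```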
As we mentioned before, it is possible to provide different information about the task during meta-train and meta-test times. In our work, we show that by providing additional rewards in the task loss during meta-train time, we can encourage the trained meta-loss to learn exploratory behaviors. This additional information shapes the learned loss function such that the environment does not need to provide it during meta-test time. It is also possible to train the meta-loss in a fully self-supervised fashion, where the task-related input g is excluded from the meta-loss network input.
Experiments
In this section we evaluate the applicability and the benefits of the learned meta-loss under a variety of aspects. The questions we seek to answer are as follows.
(1) Can we learn a loss model that improves upon the original task-specific loss functions, i.e. can we shape the loss landscape to achieve better optimization performance during test time? With an example of a simple regression task, we demonstrate that our framework can generate convex loss landscapes suitable for fast optimization.
(2) Can we improve the learning speed when using our ML³ loss function as a learning signal in complex, high-dimensional tasks? We concentrate on reinforcement learning tasks as one of the most challenging benchmarks for learning performance.
(3) Can we learn a loss function that can leverage additional information during meta-train time and can operate in sparse reward or self-supervised settings during meta-test time?
(4) Can we learn a loss function that generalizes over different optimizee model architectures?
Throughout all of our experiments, the meta network is parameterized by a feed-forward neural network with two hidden layers of 40 neurons each with tanh activation function. The learning rate for the optimizee network was learned together with the loss.
Learned Loss Landscape
For visualization and illustration purposes, this set of experiments shows that our meta-learner is able to learn convex loss functions for tasks with inherently non-convex or difficult-to-optimize loss landscapes. Effectively, the meta-loss eliminates local minima for gradient-based optimization and creates well-conditioned loss landscapes. We illustrate this on an example of sine frequency regression, where we fit a single parameter for the purpose of visualization simplicity. Below, we show the landscape of optimization with mean-squared loss on the outputs of the sine function using 1000 samples from the target function. The target frequency ν is indicated by a vertical red line, and the mean-squared loss is computed as (1/N) Σ_{i=0}^{N} (sin(ω x_i) − sin(ν x_i))². As noted in [19], the landscape of this loss is highly non-convex and difficult to optimize with conventional gradient descent. In our work, we can circumvent this problem by introducing additional information about the ground-truth value of the frequency at meta-train time, while only using samples from the sine function as inputs to the meta-loss network. That is, during meta-train time, our task-specific loss is the squared distance to the ground-truth frequency: (ω − ν)². The inputs of the meta-loss network are the target values of the sine function, sin(ν x_i), similar to the information available in the mean-squared loss. Effectively, during meta-test time we can use the same samples as in the mean-squared loss, but achieve convex loss landscapes as depicted in Fig. 3 at the top.
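The following small sketch reproduces this setup under assumed values (ν = 2.0, inputs on [-5, 5]); it contrasts the non-convex mean-squared loss in ω with the convex meta-train task loss (ω − ν)².

```python
# Illustrative sketch of the sine-frequency regression example (assumed values).
import torch

x = torch.linspace(-5.0, 5.0, 1000)
nu = torch.tensor(2.0)                         # ground-truth frequency (train only)
omega = torch.nn.Parameter(torch.tensor(0.5))  # single optimizee parameter

targets = torch.sin(nu * x)                    # sin(nu * x_i): meta-loss inputs
mse = ((torch.sin(omega * x) - targets) ** 2).mean()  # non-convex landscape in omega
task_loss = (omega - nu) ** 2                  # convex meta-train signal
```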
Reinforcement Learning
For the remainder of the experimental section, we focus on reinforcement learning tasks. Reinforcement learning still remains one of the most challenging problems when it comes to learning performance and learning speed. In this section, we present our experiments on a variety of policy optimization problems. We use ML³ for model-based and model-free reinforcement learning, thus demonstrating the applicability of our approach in both settings. In the former, as mentioned in Section 3.2, we assume access to a differentiable reward function and dynamics model that could be available either a priori or learned from samples with differentiable function approximators, such as neural networks. This scenario formulates the task loss as a function of differentiable trajectories, enabling direct gradient-based optimization of the policy, similar to trajectory optimization methods such as the iterative Linear-Quadratic Regulator (iLQR) [25].
In the model-free setting, we treat the dynamics of the system as a black box. In this case, the direct differentiation of the task loss is not possible and we formulate the learning signal for the meta-loss network as a surrogate policy gradient objective. See Section 3.2 for the detailed description. The policy π θ (a|s) is represented by a feed-forward neural network in all experiments.
Sample efficiency
We now present our results for continuous control reinforcement learning tasks, comparing the task performance of a policy trained with our meta-loss to a policy optimized with an appropriate comparison method. When a model is available, we compare performance with a gradient-based optimizer, in this case iLQR [25]. iLQR has wide-spread application in robotics [12,11] and is therefore a suitable comparison method for approaches that require knowledge of a model. In the model-free setting, we use a popular policy gradient method, Proximal Policy Optimization (PPO) [22], for comparison. We first evaluate our method on simple, classical continuous control problems where the dynamics are known and then continue with higher-dimensional problems where we do not have full knowledge of the model. In Fig. 4a, we compare a policy optimized with the learning signal coming from the meta-loss network to trajectories optimized with iLQR. The task is a free movement task of a point mass in a 2D space with known dynamics parameters; we call this environment PointmassGoal. The state space is four-dimensional, where (x, y, ẋ, ẏ) are the 2D positions and velocities, and the actions are accelerations (ẍ, ÿ). The task distribution p(T) consists of different target positions that the point mass should reach. The task-specific loss at training time is defined by the distance from the target at the last time step of the rollout. In Fig. 4a, we average the learning performance over ten random goals. We observe that the policies optimized with the learned meta-loss converge faster and get closer to the targets compared to the trajectories optimized with iLQR. We would like to point out that, on top of the improvement in convergence rates and in contrast to iLQR, our trained meta-loss requires neither a differentiable dynamics model nor a differentiable reward function at meta-test time, as it updates the policy directly through gradient descent.
In Fig. 4b, we provide a similar comparison on a task that requires swinging up and balancing an inverted pendulum. In this task, the state space is three-dimensional: (sin(θ), cos(θ), θ̇), where θ is the angle of the pendulum. The action is a one-dimensional torque. The task distribution consists of the different initial angle configurations the pendulum starts in. The plot shows the result averaged over ten different initial configurations of the pendulum. From the figure we can see that the policy optimized with ML³ is able to swing up and balance, whereas the iLQR trajectory struggles to keep the pendulum upright after the swing-up and oscillates around the vertical configuration. In the following, we continue with the model-free evaluation. In Fig. 5, we show the performance of our framework on two continuous control tasks based on OpenAI Gym MuJoCo environments [7]: ReacherGoal and AntGoal. The ReacherGoal environment is a 2-link 2D manipulator that has to reach a specified goal location with its end-effector. The task distribution consists of initial random link configurations and random goal locations. The performance metric for this environment is the mean trajectory sum of negative distances to the goal, averaged over 10 tasks.
The AntGoal environment requires a four-legged agent to run to a goal location. The task distribution consists of random goals initialized on a circle around the initial position. The performance metric for this environment is the mean trajectory sum of differences between the initial and the current distances to the goal, averaged over 10 tasks. Fig. 5a and Fig. 5b show the meta-test time performance for the ReacherGoal and the AntGoal environments, respectively. We can see that the ML³ loss significantly improves optimization speed in both scenarios compared to PPO. In our experiments, we observed that on average ML³ requires 5 times fewer samples to reach 80% of task performance in terms of our metrics for the model-free tasks.
Sparse Rewards and Self-Supervised Learning
By providing additional reward information during meta-train time, as pointed out in Section 3.2, it is possible to shape the learned reward signal such that it improves the optimization during policy training. By having access to additional information during meta-training, the meta-loss network can learn a loss function that provides exploratory strategies to the agent or allows the agent to learn in a self-supervised setting.
In Fig. 6, we show results from the MountainCar environment [17], a classical control problem where an under-actuated car has to drive up a steep hill. The propulsion force generated by the car does not allow steady climbing of the hill. To solve the task, the car has to accumulate energy by repeatedly climbing the hill back and forth. In this environment, greedy minimization of the distance to the goal often results in a failure to solve the task. The state space is two-dimensional, consisting of the position and velocity of the car; the action space consists of a one-dimensional torque. In our experiments, we provide intermediate goal positions during meta-train time, which are not available during meta-test time. The meta-loss network incorporates this behavior into its loss, leading to improved exploration during meta-test time, as can be seen in Fig. 6a. Fig. 6b shows the average distance between the car and the goal at the last rollout time step over several iterations of policy updates with ML³ and iLQR. As we observe, ML³ can successfully bring the car to the goal in a small number of updates, whereas iLQR is not able to solve this task.
The meta-loss network can also be trained in a fully self-supervised fashion, by removing the task related input g (i.e. rewards) from the meta-loss input. We successfully apply this setting in our experiments with the continuous control MuJoCo environments: the ReacherGoal and the AntGoal (see Fig. 5). In both cases, during meta-train time, the meta-loss network is still optimized using the rewards provided by the environments. However, during meta-test time, no external reward signal is provided and the meta-loss calculates the loss signal for the policy based solely on its environment state input.
Generalization across different model architectures
One key advantage of learning the loss function is its re-usability across different policy architectures, which is impossible for frameworks that meta-train the policy directly [5,4]. To test the capability of the meta-loss to generalize across different architectures, we first meta-train our meta-loss on an architecture with two layers and meta-test the same meta-loss on architectures with a varied number of layers. Fig. 7a and Fig. 7b show the meta-test time comparison for the ReacherGoal and the AntGoal environments in a model-free setting for four different model architectures. Each curve shows the average and the standard deviation over ten different tasks in each environment. Our comparison clearly indicates that the meta-loss can be effectively re-used across multiple architectures, with a mild variation in performance compared to the overall variance of the corresponding task optimization.
Conclusions
In this work we presented a framework to meta-learn a loss function entirely from data. We showed how the meta-learned loss can become well-conditioned and suitable for efficient optimization with gradient descent. We observed significant speed improvements on benchmark reinforcement learning tasks across a variety of environments. Furthermore, we showed that by introducing additional guiding rewards during training time, we can train our meta-loss to develop exploratory strategies that significantly improve performance during meta-test time, even in sparse reward and self-supervised settings. Finally, we presented experiments demonstrating that the learned meta-loss transfers well to unseen model architectures and can therefore be applied to new policy classes.
| 3,717 |
1906.05374
|
2952193948
|
We present a meta-learning approach based on learning an adaptive, high-dimensional loss function that can generalize across multiple tasks and different model architectures. We develop a fully differentiable pipeline for learning a loss function targeted at maximizing the performance of an optimizee trained using this loss function. We observe that the loss landscape produced by our learned loss significantly improves upon the original task-specific loss. We evaluate our method on supervised and reinforcement learning tasks. Furthermore, we show that our pipeline is able to operate in sparse reward and self-supervised reinforcement learning scenarios.
|
Closest to our method are the works of @cite_30 , @cite_14 and @cite_13 . In contrast to the evolutionary approach of @cite_30 , we design a differentiable framework and describe a way to optimize the loss function with gradient descent in both supervised and reinforcement learning settings. In @cite_14 , instead of learning a differentiable loss function directly, a teacher network is trained to predict parameters of a manually designed loss function, where each new loss function class requires a new teacher network design and training. Our method does not require manual design of the loss function parameterization, as our loss functions are learned entirely from data. Finally, in @cite_13 a meta-critic is learned to provide a value function conditioned on a task, which is used to train an actor policy. Although training a meta-critic in the supervised setting reduces to learning a loss function similar to our work, in the reinforcement learning setting we show that it is possible to use learned loss functions to optimize policies directly with gradient descent.
|
{
"abstract": [
"We propose a met alearning approach for learning gradient-based reinforcement learning (RL) algorithms. The idea is to evolve a differentiable loss function, such that an agent, which optimizes its policy to minimize this loss, will achieve high rewards. The loss is parametrized via temporal convolutions over the agent's experience. Because this loss is highly flexible in its ability to take into account the agent's history, it enables fast task learning. Empirical results show that our evolved policy gradient algorithm (EPG) achieves faster learning on several randomized environments compared to an off-the-shelf policy gradient method. We also demonstrate that EPG's learned loss can generalize to out-of-distribution test time tasks, and exhibits qualitatively different behavior from other popular met alearning algorithms.",
"Teaching is critical to human society: it is with teaching that prospective students are educated and human civilization can be inherited and advanced. A good teacher not only provides his her students with qualified teaching materials (e.g., textbooks), but also sets up appropriate learning objectives (e.g., course projects and exams) considering different situations of a student. When it comes to artificial intelligence, treating machine learning models as students, the loss functions that are optimized act as perfect counterparts of the learning objective set by the teacher. In this work, we explore the possibility of imitating human teaching behaviors by dynamically and automatically outputting appropriate loss functions to train machine learning models. Different from typical learning settings in which the loss function of a machine learning model is predefined and fixed, in our framework, the loss function of a machine learning model (we call it student) is defined by another machine learning model (we call it teacher). The ultimate goal of teacher model is cultivating the student to have better performance measured on development dataset. Towards that end, similar to human teaching, the teacher, a parametric model, dynamically outputs different loss functions that will be used and optimized by its student model at different training stages. We develop an efficient learning method for the teacher model that makes gradient based optimization possible, exempt of the ineffective solutions such as policy optimization. We name our method as learning to teach with dynamic loss functions'' (L2T-DLF for short). Extensive experiments on real world tasks including image classification and neural machine translation demonstrate that our method significantly improves the quality of various student models.",
"We propose a novel and flexible approach to meta-learning for learning-to-learn from only a few examples. Our framework is motivated by actor-critic reinforcement learning, but can be applied to both reinforcement and supervised learning. The key idea is to learn a meta-critic: an action-value function neural network that learns to criticise any actor trying to solve any specified task. For supervised learning, this corresponds to the novel idea of a trainable task-parametrised loss generator. This meta-critic approach provides a route to knowledge transfer that can flexibly deal with few-shot and semi-supervised conditions for both reinforcement and supervised learning. Promising results are shown on both reinforcement and supervised learning problems."
],
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_13"
],
"mid": [
"2964227899",
"2963780286",
"2726717203"
]
}
|
Meta-Learning via Learned Loss
|
Inspired by the remarkable capability of humans to quickly learn and adapt to new tasks, the concept of learning to learn, or meta-learning, recently became popular within the machine learning community [2,4,5]. When thinking about optimizing a policy for a reinforcement learning agent or learning a classification task, it appears sensible to not approach each individual task from scratch but to learn a learning mechanism that is common across a variety of tasks and can be reused.
[Figure 1: Using a learned meta-loss to update an optimizee model. The diagram shows the optimizee with its inputs and outputs, the task info (target, goal, reward, ...), and the Meta-Loss Network producing the meta-loss, with forward and backward passes.]
The purpose of this work is to encode these learning strategies into an adaptive high-dimensional loss function, or a meta-loss, which generalizes across multiple tasks and can be utilized to optimize models with different architectures. Inspired by inverse reinforcement learning [18], our work combines the learning-to-learn paradigm of meta-learning with the generality of learning loss landscapes. We construct a unified, fully differentiable framework that can shape the loss function to provide a strong learning signal for a range of models, such as classifiers, regressors, or control policies. As the loss function is independent of the model being optimized, it is agnostic to the particular model architecture. Furthermore, by training our loss function to optimize different tasks, we can achieve generalization across multiple problems. The meta-learning framework presented in this work involves an inner and an outer loop. In the inner loop, a model or optimizee is trained with gradient descent using the loss coming from our learned meta-loss function. Fig. 1 shows the pipeline for updating the optimizee with the meta-loss. The outer loop optimizes the meta-loss function by minimizing the task-specific losses of updated optimizees. After training the meta-loss function, the task-specific losses are no longer required, since the training of optimizees can be performed entirely by using the meta-loss function alone. In this way, our meta-loss can find more efficient ways to optimize the original task loss. Furthermore, since we can choose which information to provide to our meta-loss, we can train it to work in scenarios with sparse information by only providing inputs that we expect to have at test time.
The contributions of this work are as follows: we present a framework for learning adaptive, high-dimensional loss functions through back-propagation that shape the loss landscape such that it can be efficiently optimized with gradient descent; we show that our learned meta-loss functions are agnostic to the architecture of optimizee models; and we present a reinforcement learning framework that significantly improves the speed of policy training and enables learning in self-supervised and sparse reward settings.
Meta-Learning via Learned Loss
In this work, we aim to learn an adaptive loss function, which we call meta-loss, that is used to train an optimizee, e.g. a classifier, a regressor, or an agent policy. In the following, we describe the general architecture of our framework, which we call Meta-Learning via Learned Loss (ML³).
ML³ framework
Let f_θ be an optimizee with parameters θ. Let M_φ be the meta-loss model with parameters φ. Let x be the inputs of the optimizee, f_θ(x) the outputs of the optimizee, and g information about the task, such as a regression target, a classification target, a reward function, etc. Let p(T) be a distribution of tasks and L_{T_i}(θ) be the task-specific loss of the optimizee f_θ for the task T_i ∼ p(T).
Fig. 2 shows the diagram of our framework architecture for a single step of the optimizee update. The optimizee is connected to the meta-loss network, which allows the gradients from the meta-loss to flow through the optimizee. The meta-loss network additionally takes the inputs of the optimizee and the task information variable g. In our framework, we represent the meta-loss function using a neural network, which is subsequently referred to as the meta-loss network. It is worth noting that it is possible to train the meta-loss to perform self-supervised learning by not including g in the meta-loss network inputs. A single update of the optimizee is performed using gradient descent on the meta-loss by back-propagating the output of the meta-loss network through the optimizee, keeping the parameters of the meta-loss network fixed:
$$\theta_j = \theta_{j-1} - \alpha \nabla_{\theta_{j-1}} \mathbb{E}\!\left[ M_\phi\!\left(x, f_{\theta_{j-1}}(x), g\right) \right] \tag{1}$$
where α is the learning rate, which can be either fixed or learned jointly with the meta-loss network. The objective of learning the meta-loss network is to minimize the task-specific loss over a distribution of tasks T_i ∼ p(T) and over multiple steps of optimizee training with the meta-loss:
$$\mathcal{L}(\phi, \alpha) = \sum_{i=0}^{N} \sum_{j=1}^{M} \mathcal{L}_{T_i}(\theta_{i,j}) = \sum_{i=0}^{N} \sum_{j=1}^{M} \mathcal{L}_{T_i}\!\left(\theta_{i,j-1} - \alpha \nabla_{\theta_{i,j-1}} \mathbb{E}\!\left[ M_\phi\!\left(x_i, f_{\theta_{i,j-1}}(x_i), g_i\right) \right]\right) \tag{2}$$
where N is the number of tasks and M is the number of steps of updating the optimizee using the meta-loss. The task-specific objective L(φ, α) depends on the updated optimizee parameters θ_j and hence on the parameters of the meta-loss network φ, making it possible to connect the meta-loss network to the task-specific loss and propagate the error back through the meta-loss network. Another variant of this objective would be to only optimize for the final performance of the optimizee at the last step M of applying the meta-loss:
$$\mathcal{L}(\phi, \alpha) = \sum_{i=0}^{N} \mathcal{L}_{T_i}(\theta_{i,M}).$$
However, this requires relying on back-propagation through a chain of all optimizee update steps. As we noticed in our experiments, including the task loss from each step and avoiding propagating it through the chain of updates by stopping the gradients at each optimizee update step works better in practice.

Algorithm 1 ML³ meta-training (meta-train):
1: for unroll k ∈ {0, ..., K} do
2:   Randomly initialize optimizees f_{θ_0}, ..., f_{θ_N}
3:   x, g ← sample a batch of task samples
4:   φ, α ← min_{φ,α} Σ_{i=0}^{N} Σ_{j=1}^{M} L_{T_i}(θ_{i,j−1} − α ∇_{θ_{i,j−1}} E[M_φ(x_i, f_{θ_{i,j−1}}(x_i), g_i)])

Algorithm 2 ML³ at test time (meta-test):
1: T ∼ p(T); sample task samples x, g
2: θ ← θ − α ∇_θ E[M_φ(x, f_θ(x), g)]
In order to facilitate the optimization of the meta-loss network for long optimizee update horizons, we split the optimization of L(φ, α) into several steps with smaller horizons, which we denote unrolls, similar to [2]. Algorithm 1 summarizes the training procedure of the meta-loss network, which we later refer to as meta-train. Algorithm 2 shows the optimizee training with the learned meta-loss at test time, which we call meta-test.
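A small sketch of meta-test (Algorithm 2) is given below: once M_φ is trained, a new optimizee is trained with the meta-loss alone, with no task-specific loss required; the `optimizee`, `meta_loss_net`, and `sample_task_data` handles are assumptions about the surrounding code.

```python
# A sketch of meta-test (Algorithm 2): train an optimizee with a frozen meta-loss.
import torch

def meta_test(optimizee, meta_loss_net, sample_task_data, steps=100, alpha=1e-3):
    for _ in range(steps):
        x, g = sample_task_data()                      # data from a task T ~ p(T)
        loss = meta_loss_net(x, optimizee(x), g).mean()
        grads = torch.autograd.grad(loss, list(optimizee.parameters()))
        with torch.no_grad():                          # theta <- theta - alpha*grad
            for p, dp in zip(optimizee.parameters(), grads):
                p.sub_(alpha * dp)
```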
ML³ for Reinforcement Learning
In this section, we introduce several modifications that allow us to apply the ML³ framework to reinforcement learning problems. Let M = (S, A, P, R, p_0, γ, T) be a finite-horizon Markov Decision Process (MDP), where S and A are state and action spaces, P : S × A × S → R⁺ is a state-transition probability function or system dynamics, R : S × A → R a reward function, p_0 : S → R⁺ an initial state distribution, γ a reward discount factor, and T a horizon. Let τ = (s_0, a_0, ..., s_T, a_T) be a trajectory of states and actions and R(τ) = Σ_{t=0}^{T} γ^t R(s_t, a_t) the trajectory reward. The goal of reinforcement learning is to find parameters θ of a policy π_θ(a|s) that maximize the expected discounted reward over trajectories induced by the policy: E_{π_θ}[R(τ)], where s_0 ∼ p_0, s_{t+1} ∼ P(s_{t+1}|s_t, a_t), and a_t ∼ π_θ(a_t|s_t). In what follows, we show how to train a meta-loss network to perform effective policy updates in a reinforcement learning scenario.
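As a small worked illustration of the trajectory reward R(τ) = Σ_{t=0}^{T} γ^t R(s_t, a_t), assuming per-step rewards are available as a 1-D tensor:

```python
# Discounted trajectory reward R(tau) from a vector of per-step rewards.
import torch

def trajectory_reward(rewards: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    t = torch.arange(rewards.shape[0], dtype=rewards.dtype)
    return torch.sum((gamma ** t) * rewards)   # sum_t gamma^t * R(s_t, a_t)
```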
To apply our ML³ framework, we replace the optimizee f_θ from the previous section with a stochastic policy π_θ(a|s). We present two cases for applying ML³ to RL tasks. In the first case, we assume availability of a differentiable system dynamics model and a reward function. In the second case, we assume a fully model-free scenario with a non-differentiable reward function.
In the case of an available differentiable system dynamics model P and a reward function R, the ML³ objective derived in Eq. 2 can be applied directly by setting the task loss to L_T(θ) = −E_{π_θ}[R(τ)] and differentiating all the way through the reward function, the dynamics model, and the policy that was updated using the meta-loss M_φ.
In many realistic scenarios, we have to assume unknown system dynamics models and non-differentiable reward functions. In this case, we can define a surrogate objective, which is independent of the dynamics model, as our task-specific loss [27,24,21]:
$$\mathcal{L}_T(\theta) = -\mathbb{E}_{\pi_\theta}\!\left[ R(\tau) \log \pi_\theta(\tau) \right] = -\mathbb{E}_{\pi_\theta}\!\left[ R(\tau) \sum_{t=0}^{T} \log \pi_\theta(a_t | s_t) \right] \tag{3}$$
Although we are evaluating the task loss on full trajectory rewards, we perform policy updates from Eq. 1 using stochastic gradient descent (SGD) on the meta-loss with mini-batches of experience (s_i, a_i, r_i) for i ∈ {0, ..., B} with batch size B, similar to [9]. The inputs of the meta-loss network are the sampled states, sampled actions, rewards, and policy probabilities of the sampled actions: M_φ(s, a, π_θ(a|s), r). We notice that in practice, including the policy's distribution parameters directly in the meta-loss inputs, e.g. the mean μ and standard deviation σ of a Gaussian policy, works better than including the probability estimate π_θ(a|s), as it provides a direct way to update the distribution parameters using back-propagation through the meta-loss.
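A hedged sketch of one such mini-batch policy update follows, feeding the Gaussian policy's (μ, σ) to the meta-loss network as discussed; the exact `policy` and `meta_loss_net` interfaces are assumptions for illustration.

```python
# One SGD policy update on the meta-loss over a mini-batch of experience.
import torch

def policy_update(policy, meta_loss_net, states, actions, rewards, alpha=1e-3):
    mu, sigma = policy(states)                          # Gaussian policy parameters
    loss = meta_loss_net(states, actions, mu, sigma, rewards).mean()
    grads = torch.autograd.grad(loss, list(policy.parameters()))
    with torch.no_grad():
        for p, dp in zip(policy.parameters(), grads):
            p.sub_(alpha * dp)                          # gradient step on the meta-loss
```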
As we mentioned before, it is possible to provide different information about the task during meta-train and meta-test times. In our work, we show that by providing additional rewards in the task loss during meta-train time, we can encourage the trained meta-loss to learn exploratory behaviors. This additional information shapes the learned loss function such that the environment does not need to provide it during meta-test time. It is also possible to train the meta-loss in a fully self-supervised fashion, where the task-related input g is excluded from the meta-loss network input.
Experiments
In this section we evaluate the applicability and the benefits of the learned meta-loss under a variety of aspects. The questions we seek to answer are as follows.
(1) Can we learn a loss model that improves upon the original task-specific loss functions, i.e. can we shape the loss landscape to achieve better optimization performance during test time? With an example of a simple regression task, we demonstrate that our framework can generate convex loss landscapes suitable for fast optimization.
(2) Can we improve the learning speed when using our ML³ loss function as a learning signal in complex, high-dimensional tasks? We concentrate on reinforcement learning tasks as one of the most challenging benchmarks for learning performance.
(3) Can we learn a loss function that can leverage additional information during meta-train time and can operate in sparse reward or self-supervised settings during meta-test time?
(4) Can we learn a loss function that generalizes over different optimizee model architectures?
Throughout all of our experiments, the meta network is parameterized by a feed-forward neural network with two hidden layers of 40 neurons each with tanh activation function. The learning rate for the optimizee network was learned together with the loss.
Learned Loss Landscape
For visualization and illustration purposes, this set of experiments shows that our meta-learner is able to learn convex loss functions for tasks with inherently non-convex or difficult-to-optimize loss landscapes. Effectively, the meta-loss eliminates local minima for gradient-based optimization and creates well-conditioned loss landscapes. We illustrate this on an example of sine frequency regression, where we fit a single parameter for the purpose of visualization simplicity. Below, we show the landscape of optimization with mean-squared loss on the outputs of the sine function using 1000 samples from the target function. The target frequency ν is indicated by a vertical red line, and the mean-squared loss is computed as (1/N) Σ_{i=0}^{N} (sin(ω x_i) − sin(ν x_i))². As noted in [19], the landscape of this loss is highly non-convex and difficult to optimize with conventional gradient descent. In our work, we can circumvent this problem by introducing additional information about the ground-truth value of the frequency at meta-train time, while only using samples from the sine function as inputs to the meta-loss network. That is, during meta-train time, our task-specific loss is the squared distance to the ground-truth frequency: (ω − ν)². The inputs of the meta-loss network are the target values of the sine function, sin(ν x_i), similar to the information available in the mean-squared loss. Effectively, during meta-test time we can use the same samples as in the mean-squared loss, but achieve convex loss landscapes as depicted in Fig. 3 at the top.
Reinforcement Learning
For the remainder of the experimental section, we focus on reinforcement learning tasks. Reinforcement learning still remains one of the most challenging problems when it comes to learning performance and learning speed. In this section, we present our experiments on a variety of policy optimization problems. We use ML³ for model-based and model-free reinforcement learning, thus demonstrating the applicability of our approach in both settings. In the former, as mentioned in Section 3.2, we assume access to a differentiable reward function and dynamics model that could be available either a priori or learned from samples with differentiable function approximators, such as neural networks. This scenario formulates the task loss as a function of differentiable trajectories, enabling direct gradient-based optimization of the policy, similar to trajectory optimization methods such as the iterative Linear-Quadratic Regulator (iLQR) [25].
In the model-free setting, we treat the dynamics of the system as a black box. In this case, the direct differentiation of the task loss is not possible and we formulate the learning signal for the meta-loss network as a surrogate policy gradient objective. See Section 3.2 for the detailed description. The policy π θ (a|s) is represented by a feed-forward neural network in all experiments.
Sample efficiency
We now present our results for continuous control reinforcement learning tasks, comparing the task performance of a policy trained with our meta-loss to a policy optimized with an appropriate comparison method. When a model is available, we compare performance with a gradient-based optimizer, in this case iLQR [25]. iLQR has wide-spread application in robotics [12,11] and is therefore a suitable comparison method for approaches that require knowledge of a model. In the model-free setting, we use a popular policy gradient method, Proximal Policy Optimization (PPO) [22], for comparison. We first evaluate our method on simple, classical continuous control problems where the dynamics are known and then continue with higher-dimensional problems where we do not have full knowledge of the model. In Fig. 4a, we compare a policy optimized with the learning signal coming from the meta-loss network to trajectories optimized with iLQR. The task is a free movement task of a point mass in a 2D space with known dynamics parameters; we call this environment PointmassGoal. The state space is four-dimensional, where (x, y, ẋ, ẏ) are the 2D positions and velocities, and the actions are accelerations (ẍ, ÿ). The task distribution p(T) consists of different target positions that the point mass should reach. The task-specific loss at training time is defined by the distance from the target at the last time step of the rollout. In Fig. 4a, we average the learning performance over ten random goals. We observe that the policies optimized with the learned meta-loss converge faster and get closer to the targets compared to the trajectories optimized with iLQR. We would like to point out that, on top of the improvement in convergence rates and in contrast to iLQR, our trained meta-loss requires neither a differentiable dynamics model nor a differentiable reward function at meta-test time, as it updates the policy directly through gradient descent.
In Fig. 4b, we provide a similar comparison on a task that requires swinging up and balancing an inverted pendulum. In this task, the state space is three-dimensional: (sin(θ), cos(θ), θ̇), where θ is the angle of the pendulum. The action is a one-dimensional torque. The task distribution consists of the different initial angle configurations the pendulum starts in. The plot shows the result averaged over ten different initial configurations of the pendulum. From the figure we can see that the policy optimized with ML³ is able to swing up and balance, whereas the iLQR trajectory struggles to keep the pendulum upright after the swing-up and oscillates around the vertical configuration. In the following, we continue with the model-free evaluation. In Fig. 5, we show the performance of our framework on two continuous control tasks based on OpenAI Gym MuJoCo environments [7]: ReacherGoal and AntGoal. The ReacherGoal environment is a 2-link 2D manipulator that has to reach a specified goal location with its end-effector. The task distribution consists of initial random link configurations and random goal locations. The performance metric for this environment is the mean trajectory sum of negative distances to the goal, averaged over 10 tasks.
The AntGoal environment requires a four-legged agent to run to a goal location. The task distribution consists of random goals initialized on a circle around the initial position. The performance metric for this environment is the mean trajectory sum of differences between the initial and the current distances to the goal, averaged over 10 tasks. Fig. 5a and Fig. 5b show the meta-test time performance for the ReacherGoal and the AntGoal environments, respectively. We can see that the ML³ loss significantly improves optimization speed in both scenarios compared to PPO. In our experiments, we observed that on average ML³ requires 5 times fewer samples to reach 80% of task performance in terms of our metrics for the model-free tasks.
Sparse Rewards and Self-Supervised Learning
By providing additional reward information during meta-train time, as pointed out in Section 3.2, it is possible to shape the learned reward signal such that it improves the optimization during policy training. By having access to additional information during meta-training, the meta-loss network can learn a loss function that provides exploratory strategies to the agent or allows the agent to learn in a self-supervised setting.
In Fig. 6, we show results from the MountainCar environment [17], a classical control problem where an under-actuated car has to drive up a steep hill. The propulsion force generated by the car does not allow steady climbing of the hill. To solve the task, the car has to accumulate energy by repeatedly climbing the hill back and forth. In this environment, greedy minimization of the distance to the goal often results in a failure to solve the task. The state space is two-dimensional, consisting of the position and velocity of the car; the action space consists of a one-dimensional torque. In our experiments, we provide intermediate goal positions during meta-train time, which are not available during meta-test time. The meta-loss network incorporates this behavior into its loss, leading to improved exploration during meta-test time, as can be seen in Fig. 6a. Fig. 6b shows the average distance between the car and the goal at the last rollout time step over several iterations of policy updates with ML³ and iLQR. As we observe, ML³ can successfully bring the car to the goal in a small number of updates, whereas iLQR is not able to solve this task.
The meta-loss network can also be trained in a fully self-supervised fashion, by removing the task related input g (i.e. rewards) from the meta-loss input. We successfully apply this setting in our experiments with the continuous control MuJoCo environments: the ReacherGoal and the AntGoal (see Fig. 5). In both cases, during meta-train time, the meta-loss network is still optimized using the rewards provided by the environments. However, during meta-test time, no external reward signal is provided and the meta-loss calculates the loss signal for the policy based solely on its environment state input.
Generalization across different model architectures
One key advantage of learning the loss function is its re-usability across different policy architectures, which is impossible for frameworks that meta-train the policy directly [5,4]. To test the capability of the meta-loss to generalize across different architectures, we first meta-train our meta-loss on an architecture with two layers and meta-test the same meta-loss on architectures with a varied number of layers. Fig. 7a and Fig. 7b show the meta-test time comparison for the ReacherGoal and the AntGoal environments in a model-free setting for four different model architectures. Each curve shows the average and the standard deviation over ten different tasks in each environment. Our comparison clearly indicates that the meta-loss can be effectively re-used across multiple architectures, with a mild variation in performance compared to the overall variance of the corresponding task optimization.
Conclusions
In this work we presented a framework to meta-learn a loss function entirely from data. We showed how the meta-learned loss can become well-conditioned and suitable for efficient optimization with gradient descent. We observed significant speed improvements on benchmark reinforcement learning tasks across a variety of environments. Furthermore, we showed that by introducing additional guiding rewards during training time, we can train our meta-loss to develop exploratory strategies that significantly improve performance during meta-test time, even in sparse reward and self-supervised settings. Finally, we presented experiments demonstrating that the learned meta-loss transfers well to unseen model architectures and can therefore be applied to new policy classes.
| 3,717 |
1809.07282
|
2952771416
|
In this paper, we propose a deep, globally normalized topic model that incorporates structural relationships connecting documents in socially generated corpora, such as online forums. Our model (1) captures discursive interactions along observed reply links in addition to traditional topic information, and (2) incorporates latent distributed representations arranged in a deep architecture, which enables a GPU-based mean-field inference procedure that scales efficiently to large data. We apply our model to a new social media dataset consisting of 13M comments mined from the popular internet forum Reddit, a domain that poses significant challenges to models that do not account for relationships connecting user comments. We evaluate against existing methods across multiple metrics including perplexity and metadata prediction, and qualitatively analyze the learned interaction patterns.
|
Many topic models such as LDA @cite_7 treat documents as independent mixtures, yet this approach fails to model how comments interact with one another throughout a larger discourse if such connections exist in the data. Other work has considered modeling hierarchy in topics @cite_2 . These models form hierarchical representations of topics themselves, but still treat documents as independent. While this approach can succeed in learning topics of various granularities, it does not explicitly track how topics interact in the context of a nested conversation.
|
{
"abstract": [
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.",
"We address the problem of learning topic hierarchies from data. The model selection problem in this domain is daunting—which of the large collection of possible trees to use? We take a Bayesian approach, generating an appropriate prior via a distribution on partitions that we refer to as the nested Chinese restaurant process. This nonparametric prior allows arbitrarily large branching factors and readily accommodates growing data collections. We build a hierarchical topic model by combining this prior with a likelihood that is based on a hierarchical variant of latent Dirichlet allocation. We illustrate our approach on simulated data and with an application to the modeling of NIPS abstracts."
],
"cite_N": [
"@cite_7",
"@cite_2"
],
"mid": [
"1880262756",
"2132827946"
]
}
|
Modeling Online Discourse with Coupled Distributed Topics
|
Topic models have become one of the most common unsupervised methods for uncovering latent semantic information in natural language data, and have found a wide variety of applications across the sciences. However, many common models, such as Latent Dirichlet Allocation (Blei, Ng, and Jordan, 2003), make an explicit exchangeability assumption that treats documents as independent samples from a generative prior, thereby ignoring important aspects of text corpora that are generated by non-ergodic, interconnected social systems. While the direct application of such models to datasets such as transcripts of The French Revolution (Barron et al., 2017) and discussions on Twitter (Zhao et al., 2011) has yielded sensible topics and exciting insights, their exclusion of document-to-document interactions imposes limitations on the scope of their applicability and the analyses they support.
For instance, on many social media platforms, comments are short (the average Reddit comment is 10 words long), making them difficult to treat as full documents, yet they do cohere as a collection, suggesting that contextual relationships should be considered. Moreover, analysis of social data is often principally concerned with understanding relationships between documents (such as question-asking and -answering), so a model able to capture such features is of direct scientific relevance.
To address these issues, we propose a design that models representations of comments jointly along observed reply links. Specifically, we attach a vector of latent binary variables to each comment in a collection of social data, which in turn connect to each other according to the observed reply-link structure of the dataset. The inferred representations can provide information about the rhetorical moves and linguistic elements that characterize an evolving discourse. An added benefit is that while previous work such as Sequential LDA (Du et al., 2012) has focused on modeling a linear progression, the model we present applies to a more general class of acyclic graphs such as tree-structured comment threads ubiquitous on the web.
Online data can be massive, which presents a scalability issue for traditional methods. Our approach uses latent binary variables similar to a Restricted Boltzmann Machine (RBM); related models such as Replicated Softmax (RS) (Salakhutdinov and Hinton, 2009) have previously seen success in capturing latent properties of language, and found substantial speedups over previous methods due to their GPU-amenable training procedure. RS was also shown to deal well with documents of significantly different lengths, another key characteristic of online data. While RBMs permit exact inference, the additional coupling potentials present in our model make inference intractable. However, the choice of bilinear potentials and latent features admits a mean-field inference procedure that takes the form of a series of dense matrix multiplications followed by nonlinearities, which is particularly amenable to GPU computation and lets us scale efficiently to large data. Our model outperforms LDA and RS baselines on perplexity and downstream tasks, including metadata prediction and document retrieval, when evaluated on a new dataset mined from Reddit. We also qualitatively analyze the learned topics and discuss the social phenomena uncovered.
Model
We now present an overview of our model. Specifically, it will take the probabilistic form of an undirected graphical model whose architecture mirrors the tree structure of the threads in our data.
Motivating Dataset
We evaluate on a corpus mined from Reddit, an internet forum which ranks as the fourth most trafficked site in the US (Alexa, 2018) and sees millions of daily comments (Reddit, 2015). Discourse on Reddit follows a branching pattern, shown in Figure 1. The largest unit of discourse is a thread, beginning with a link to external content or a natural language prompt, posted to a relevant subreddit based on its subject matter. Users comment in response to the original post (OP), or to any other comment. The result is a structure which splits at many points into more specific or tangential discussions that, while locally coherent, may differ substantially from each other. The data reflect features of the underlying memory and network structure of the generating process; comments are serially correlated and highly cross-referential. We treat individual comments as "documents" under the standard topic modeling paradigm, but use the observed reply structure to induce a tree of documents for every thread.
Description of Discursive Distributed Topic Model
We now introduce the Discursive Distributed Topic Model (DDTM) (illustrated in Figure 1). For each comment in the thread, DDTM assigns a latent vector of binary random variables (or bits) that collectively form a distributed embedding of the topical content of that comment; for instance, one bit might represent sarcastic language while another might track usage of specific acronyms -- a given comment could have any combination of those features. These representations are tied to those of parent and child comments via coupling potentials (see Section 2.3), which allow them to learn discursive properties by inducing a deep undirected network over the thread. In order to encourage the model to use these comment-level representations to learn discursive and stylistic patterns as opposed to simply topics of discussion, we incorporate a single additional latent vector for the entire thread that interacts with each comment, explaining word choices that are mainly topical rather than discursive or stylistic. As we demonstrate in our experiments (see Section 6), the thread-level embedding learns distributions more reminiscent of what a traditional topic model would uncover, while the comment-level embeddings model styles of speaking and mannerisms that do not directly indicate specific subjects of conversation. The joint probability is defined in terms of an energy function that scores latent embeddings and observed word counts across the tree of comments within a thread using log-bilinear potentials, and is globally normalized over all word count and embedding combinations.
Probability Model
More formally, consider a thread containing N comments, each of size D_n, with a vocabulary of size K. As depicted in Figure 1, each comment is viewed as a bag-of-words, densely connected via a log-bilinear potential to a latent embedding of size F. Let each comment be represented as an integer vector x_n ∈ Z^K, where x_nk is the number of times word k was observed in comment n; let h_n ∈ {0, 1}^F be the topic embedding for each comment, and let h_0 ∈ {0, 1}^F be the embedding for the entire thread. To model topic transitions, we score the embeddings of parent-child pairs with a separate coupling potential as shown in Figure 1 (comments with no parents or children receive additional start/stop biases respectively). Let replies be represented with sets R, P_n, and C_n, where (n, m) ∈ R, n ∈ P_m, and m ∈ C_n if comment m is a reply to comment n. DDTM assigns probability to a specific configuration of x, h with an energy function scored by the emission (π_e) and coupling (π_c) potentials.
E(x, h; \theta) = \underbrace{\sum_{n=1}^{N} \pi_e(h, x, n)}_{\text{Emission Potentials}} + \underbrace{\sum_{(n,m) \in R} \pi_c(h, n, m)}_{\text{Coupling Potentials}}
\pi_e(h, x, n) = h_n^\top U x_n + x_n^\top a + D_n h_n^\top b + h_0^\top V x_n + D_n h_0^\top c
\pi_c(h, n, m) = h_n^\top W h_m    (1)
Note that the bias on embeddings is scaled by the number of words in the comment, which controls for their highly variable length. The joint probability is computed by exponentiating the energy and dividing by a normalizing constant.
p(x, h; \theta) = \frac{\exp(E(x, h; \theta))}{Z(\theta)}, \qquad Z(\theta) = \sum_{x', h'} \exp(E(x', h'; \theta))    (2)
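To make the energy function concrete, the following minimal sketch in Python/NumPy evaluates Equation 1 for a toy thread. The shapes, toy values, and variable names are our own illustration, not the authors' released code.

import numpy as np

rng = np.random.default_rng(0)
N, K, F = 4, 50, 8                     # comments, vocab size, bits per embedding
R = [(0, 1), (0, 2), (1, 3)]           # reply links: 1 and 2 reply to 0; 3 replies to 1

# model parameters theta
U = rng.normal(0, 0.01, (F, K))        # comment-level emission weights
V = rng.normal(0, 0.01, (F, K))        # thread-level emission weights
W = rng.normal(0, 0.01, (F, F))        # parent-child coupling weights
a = np.zeros(K)                        # word bias
b = np.zeros(F)                        # comment-bit bias (scaled by comment length D_n)
c = np.zeros(F)                        # thread-bit bias (scaled by comment length D_n)

# a toy configuration of observed counts and latent bits
x = rng.integers(0, 3, (N, K)).astype(float)   # word counts x_n
D = x.sum(axis=1)                              # comment lengths D_n
h = rng.integers(0, 2, (N, F)).astype(float)   # comment embeddings h_n
h0 = rng.integers(0, 2, F).astype(float)       # thread embedding h_0

def energy(x, h, h0):
    """Equation 1: emission potentials for every comment plus coupling potentials."""
    emit = sum(h[n] @ U @ x[n] + x[n] @ a + D[n] * (h[n] @ b)
               + h0 @ V @ x[n] + D[n] * (h0 @ c) for n in range(N))
    couple = sum(h[n] @ W @ h[m] for n, m in R)
    return emit + couple

print(energy(x, h, h0))   # unnormalized log-probability; Z(theta) is intractable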
This architecture encourages the model to learn discursive maneuvers via the coupling potentials while separating within-thread variance and across-thread variance through the comment-level and thread-level embeddings, respectively. The coupling of latent variables makes factored inference impossible, meaning that even the exact computation of the partition function is no longer tractable. This necessitates approximating the gradients for learning, which we will now address.
Learning and Inference
Inference in this model class is intractable, so, as has been done in previous work on topic modeling (Ng and Jordan, 2003), we rely on variational methods to approximate the gradients needed during training as well as the posteriors over the topic bit vectors. Specifically, we will need the gradients of the normalizer and of the sum of the energy function over the hidden variables,
E(x; \theta) = \log \sum_{h} \exp(E(x, h; \theta))    (3)
which we refer to as the marginal energy. Following the approach described for undirected models by Eisner (2011), we approximate these quantities and their gradients with respect to the model parameters θ as we will now describe (thread-level embeddings are omitted in this section for clarity).
Normalizer Approximation
We aim to train our model to maximize the marginal likelihood of the observed comment word counts, conditioned on the reply links. To do this we must compute the gradient of the normalizer Z(θ). However, this quantity is computationally intractable, as it contains a summation over the exponentially many joint configurations of the words and embeddings in the thread. Therefore, we must approximate Z(θ). Observe that under Jensen's inequality, we can form the following lower bound on the normalizer using an approximate joint distribution q^(Z):
\log Z(\theta) = \log \sum_{x, h} \exp(E(x, h; \theta)) \geq \mathbb{E}_{q^{(Z)}}\left[E(x, h; \theta)\right] - \mathbb{E}_{q^{(Z)}}\left[\log q^{(Z)}(x, h; \phi, \gamma)\right]    (4)
We now define q^(Z), as depicted in Figure 2, as a mean-field approximation that treats all variables as independent.
Figure 2: Factor graph of the full joint compared to mean-field approximations to the joint and posterior.
We parameterize q^(Z) with independent Bernoulli parameters φ_{nf} ∈ [0, 1] representing the probability of h_{nf} being equal to 1, and with replicated softmaxes γ_{nk} representing the probability of a word in x_n taking the value k. Note that all words in x_n are modeled as samples from this single distribution. The approximation then factors as follows:
q^{(Z)}(x, h; \phi, \gamma) = q^{(Z)}(x; \gamma) \cdot q^{(Z)}(h; \phi)
q^{(Z)}(x; \gamma) = \prod_{n=1}^{N} \prod_{k=1}^{K} (\gamma_{nk})^{x_{nk}}
q^{(Z)}(h; \phi) = \prod_{n=1}^{N} \prod_{f=1}^{F} (\phi_{nf})^{h_{nf}} (1 - \phi_{nf})^{1 - h_{nf}}    (5)
We optimize the parameters of q^(Z) to maximize its variational lower bound via iterative mean-field updates, which amount to coordinate ascent over those parameters. Maximizing the lower bound with respect to a particular φ_{nf} or γ_{nk} while holding all other parameters frozen yields the following mean-field update equations (biases omitted for clarity):
\phi_{n\cdot} = \sigma\left(U \gamma_n + \sum_{m \in C_n} W \phi_m + \phi_{P_n} W\right), \qquad \gamma_{n\cdot} = \sigma\left(\phi_n^\top U\right)    (6)
We iterate over the parameters of q^(Z) in an "upward-downward" manner: first updating φ for all comments with no children, then for all comments whose children have been updated, and so on up to the root of the thread. Then we perform the same updates in reverse order. After updating all φ, we then update γ simultaneously (the components of γ are independent conditioned on φ). We iterate these upward-downward passes until convergence.
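A minimal sketch of this upward-downward schedule on a toy thread follows (our own Python/NumPy rendering of Equation 6; the thread layout, iteration count, and toy parameters are assumptions, not the authors' code):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
N, K, F = 4, 50, 8
U = rng.normal(0, 0.01, (F, K))
W = rng.normal(0, 0.01, (F, F))

parents = [None, 0, 0, 1]                      # comment 0 is the OP
children = [[m for m, p in enumerate(parents) if p == n] for n in range(N)]
depth = [0] * N
for n in range(1, N):
    depth[n] = depth[parents[n]] + 1
upward = sorted(range(N), key=lambda n: -depth[n])   # leaves before their parents

phi = np.full((N, F), 0.5)                     # Bernoulli parameters for h_n
gamma = np.full((N, K), 1.0 / K)               # word distributions for x_n
for _ in range(10):                            # iterate passes until convergence
    for order in (upward, upward[::-1]):       # upward pass, then downward pass
        for n in order:
            msg = U @ gamma[n]
            for m in children[n]:
                msg = msg + W @ phi[m]             # messages from children
            if parents[n] is not None:
                msg = msg + phi[parents[n]] @ W    # message from the parent
            phi[n] = sigmoid(msg)                  # Equation 6, bit update
    for n in range(N):
        gamma[n] = softmax(phi[n] @ U)             # Equation 6, word update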
Marginal Energy Approximation
We can now approximate the normalizer, but still need the marginal data likelihood in order to take gradient steps on it and train our model. In order to recover the marginal likelihood, we must next approximate the marginal energy E(x; θ) as it too is intractable. This is due to the coupling potentials, which make the topics across comments dependent even when conditioned on the word counts. To do this, we form an additional variational approximation (see Figure 2) to the marginal energy, which we optimize similarly.
E(x; \theta) = \log \sum_{h} \exp(E(x, h; \theta)) \geq \mathbb{E}_{q^{(E)}}\left[E(x, h; \theta)\right] - \mathbb{E}_{q^{(E)}}\left[\log q^{(E)}(h; \psi)\right]    (7)
Since q^(E)(h; ψ) need only model the hidden units h, we can parameterize it in the same manner as q^(Z)(h; φ). Note that while these distributions factor similarly, they do not share parameters, although we find that in practice, initializing φ ← ψ improves our approximation. We optimize the lower bound on E(x; θ) via a similar coordinate ascent strategy, where the mean-field updates take the following form (biases omitted for clarity):
\psi_{n\cdot} = \sigma\left(U x_n + \sum_{m \in C_n} W \psi_m + \psi_{P_n} W\right)    (8)
We can use q^(E) to perform inference at test time in our model, as its parameters ψ directly correspond to the expected values of the hidden topic embeddings under our approximation.
Learning via Gradient Ascent
We train the parameters of our true model p(x, h; θ) via stochastic updates wherein we optimize both approximations on a single datum (i.e., a thread) to compute the approximate gradient of its log-likelihood, and take a single gradient step on the model parameters (repeating on all training instances until convergence). That gradient is given by the difference in feature expectations under the two approximations (entropy terms from the lower bounds are dropped as they do not depend on θ):
\nabla \log p(x; \theta) \approx \mathbb{E}_{q^{(E)}(h; \psi)}\left[\nabla E(x, h; \theta)\right] - \mathbb{E}_{q^{(Z)}(x', h; \phi, \gamma)}\left[\nabla E(x', h; \theta)\right]    (9)
In summary, we use two separate mean-field approximations to compute lower bounds on the marginal energy E(x; θ) and its normalizer Z(θ), which lets us approximate the marginal likelihood p(x; θ). Note that as our estimate of the marginal likelihood is the difference between two lower bounds, it is not a lower bound itself, although in practice it works well for training.
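Under the mean-field approximations, the expectations in Equation 9 reduce to outer products of the variational parameters. The sketch below shows the resulting approximate gradient for the emission matrix U; the stand-in arrays for ψ, φ, and γ replace converged mean-field parameters, and the learning rate is an arbitrary placeholder of ours:

import numpy as np

rng = np.random.default_rng(0)
N, K, F = 4, 50, 8
x = rng.integers(0, 3, (N, K)).astype(float)   # observed word counts
psi = rng.random((N, F))                       # q^(E) bit marginals (words clamped to x)
phi = rng.random((N, F))                       # q^(Z) bit marginals
gamma = rng.dirichlet(np.ones(K), size=N)      # q^(Z) word distributions

def grad_U(x, psi, phi, gamma):
    """Approximate d log p(x) / dU: positive phase minus negative phase (Equation 9)."""
    D = x.sum(axis=1)                          # comment lengths D_n
    pos = psi.T @ x                            # sum_n psi_n x_n^T under q^(E)
    neg = phi.T @ (D[:, None] * gamma)         # sum_n phi_n E[x_n]^T, with E[x_n] = D_n gamma_n
    return pos - neg

U_step = 1e-3 * grad_U(x, psi, phi, gamma)     # one stochastic gradient-ascent step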
Scalability and GPU Implementation
Given the magnitude of our dataset, it is essential to be able to train efficiently at scale. Many commonly used topic models such as LDA (Ng and Jordan, 2003) have difficulty scaling, particularly if trained via MCMC methods. Improvements have been shown from online training (Hoffman et al., 2010), but extending such techniques to model comment-to-comment connections and leverage GPU compute is nontrivial.
In contrast, our proposed model and mean-field procedure can be scaled efficiently to large data because they are amenable to GPU implementation. Specifically, the described inference procedure can be viewed as the output of a neural network. This is because DDTM is globally normalized with edges parameterized as log-bilinear weights, which results in the mean-field updates taking the form of matrix operations followed by nonlinearities. Therefore, a single iteration of mean-field is equivalent to a forward pass through a recursive neural network, whose architecture is defined by the tree structure of the thread. Multiple iterations are equivalent to feeding the output of the network back into itself in a recurrent manner, and optimizing for T iterations is achieved by unrolling the network over T timesteps. This property makes DDTM highly amenable to efficient training on a GPU, and allowed us to scale experiments to a dataset of over 13M total Reddit comments.
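To sketch why this is GPU-friendly, one mean-field "timestep" over a whole thread can be written as a few dense matrix products once the reply tree is encoded as an adjacency matrix. This batched formulation is our own reading of the description above, not the authors' implementation:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
N, K, F, T = 4, 50, 8, 3
U = rng.normal(0, 0.01, (F, K))
W = rng.normal(0, 0.01, (F, F))
A = np.zeros((N, N))                    # adjacency: A[n, m] = 1 iff m replies to n
for n, m in [(0, 1), (0, 2), (1, 3)]:
    A[n, m] = 1.0

phi = np.full((N, F), 0.5)
gamma = np.full((N, K), 1.0 / K)
for _ in range(T):                      # T unrolled mean-field "timesteps"
    child_msgs = A @ (phi @ W.T)        # row n: sum over children of W phi_m
    parent_msgs = A.T @ (phi @ W)       # row n: phi_{P_n} W from the parent
    phi = sigmoid(gamma @ U.T + child_msgs + parent_msgs)

On a GPU, the same handful of matrix products updates every comment in a thread (or a batch of threads) in parallel, which is what makes unrolling over T timesteps cheap.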
Experimental Setup
Data
We mined a corpus of Reddit threads pulled through the platform's API. Focusing on the twenty most popular subreddits (gifs, todayilearned, CFB, funny, aww, AskReddit, BlackPeopleTwitter, videos, pics, politics, The_Donald, soccer, leagueoflegends, nba, nfl, worldnews, movies, mildlyinteresting, news, gaming) over a one-month period yielded 200,000 threads consisting of 13,276,455 comments in total. The data was preprocessed by removing special characters, replacing URLs with a domain-specific token, stemming English words using a Snowball English Stemmer (Porter, 2001), removing stopwords, and truncating the vocabulary to only include the 10,000 most common words. OPs are modeled as a comment at the root of each thread to which all top-level comments respond. This dataset will be made available for public use after publication.
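A plausible rendering of this preprocessing pipeline in Python with NLTK; the exact regexes and the URL-token scheme are our assumptions, inferred from the description and from tokens such as url_youtu that appear later in the paper:

import re
from collections import Counter
from nltk.corpus import stopwords              # requires nltk.download("stopwords")
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")
stop = set(stopwords.words("english"))
url_re = re.compile(r"https?://(?:www\.)?([\w-]+)[\w./-]*")

def preprocess(comment):
    # replace URLs with a domain-specific token, e.g. "url_youtu", "url_twitter"
    comment = url_re.sub(lambda m: " url_" + m.group(1) + " ", comment.lower())
    comment = re.sub(r"[^a-z0-9_\s]", " ", comment)   # remove special characters
    return [stemmer.stem(t) for t in comment.split() if t not in stop]

counts = Counter()
# for thread in corpus:                        # corpus loading is omitted here
#     for comment in thread:
#         counts.update(preprocess(comment))
vocab = [w for w, _ in counts.most_common(10000)]    # truncate the vocabulary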
Baselines and Comparisons
We compare to baselines of Replicated Softmax (RS) (Salakhutdinov and Hinton, 2009) and Latent Dirichlet Allocation (LDA) (Ng and Jordan, 2003). RS is a distributed topic model similar to our own, albeit without any coupling potentials. LDA is a locally normalized topic model which defines topics as non-overlapping distributions over words. To ensure that DDTM does not gain an unfair advantage purely by having a larger embedding space, we divide the dimensions equally between comment- and thread-level embeddings. Unless otherwise specified, 64 bits/topics were used. We experiment with RS and LDA treating either comments or full threads as documents.
Training and Initialization
SGD was performed using the Adam optimizer (Kingma and Ba, 2015). When running inference, we found convergence was reached in an average of 2 iterations of updates. Using a single NVIDIA Titan X (Pascal) card, we were able to train our model to convergence on the training set of 10M comments in less than 30 hours. It is worth noting that we found DDTM to be fairly sensitive to initialization. We found the best results from Gaussian noise, with comment-level emissions at a variance of 0.01, thread-level emissions at 0.0001, and transitions at 0. We initialized all biases to 0 except for the bias on word counts, which we set to the unigram log-probabilities from the train set.
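In code, that initialization might look as follows (a sketch under the stated variances; the 32/32 split of the 64 bits and the stand-in count vector are our assumptions):

import numpy as np

rng = np.random.default_rng(0)
K, F_c, F_t = 10000, 32, 32                        # vocab, comment bits, thread bits

U = rng.normal(0.0, np.sqrt(0.01), (F_c, K))       # comment-level emissions, variance 0.01
V = rng.normal(0.0, np.sqrt(0.0001), (F_t, K))     # thread-level emissions, variance 0.0001
W = np.zeros((F_c, F_c))                           # transitions (couplings) start at 0
b = np.zeros(F_c)                                  # comment-bit bias
c = np.zeros(F_t)                                  # thread-bit bias

train_counts = np.ones(K)                          # stand-in for real training counts
a = np.log(train_counts / train_counts.sum())      # unigram log-probabilities as word bias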
Results
Evaluating Perplexity
We compare models by perplexity on a held-out test set, a standard evaluation for generative and latent-variable models.
Setup: Due to the use of mean-field approximations for both the marginal energy and the normalizer, we lose any guarantees regarding the accuracy of our likelihood estimate (both approximations are lower bounds, and therefore their difference is neither a strict lower bound nor guaranteed to be unbiased). To evaluate perplexity in a more principled way, we use Annealed Importance Sampling (AIS) to estimate the ratio between our model's normalizer and the tractable normalizer of a base model from which we can draw true independent samples, as described by Salakhutdinov and Murray (2008). Note that since the marginal energy is intractable in our model, unlike in a standard RBM, we must sample the joint, and not the marginal, intermediate distributions. This yields an unbiased estimate of the normalizer. The marginal energy must still be approximated via a lower bound, but given that AIS is unbiased and empirically low in variance, we can treat the overall estimate as a lower bound on likelihood for evaluation. Using 2000 intermediate distributions and averaging over 20 runs, we evaluated per-word perplexity over a set of 50 unseen threads. Results are shown in Table 1.
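To illustrate the estimator itself, the toy sketch below runs AIS on a tiny, fully enumerable binary energy model (our own example, not DDTM): it anneals from a uniform base distribution to the target energy with Metropolis bit-flip sweeps, accumulates importance weights, and checks the estimate against the exact normalizer.

import numpy as np
from itertools import product

rng = np.random.default_rng(0)
F = 10                                           # small enough to enumerate exactly
J = 0.1 * rng.standard_normal((F, F)); J = (J + J.T) / 2
bias = 0.1 * rng.standard_normal(F)

def energy(H):                                   # batched energy of binary states
    return np.einsum("nf,fg,ng->n", H, J, H) + H @ bias

states = np.array(list(product([0, 1], repeat=F)), dtype=float)
logZ_exact = np.logaddexp.reduce(energy(states))

T, S = 500, 200                                  # annealing steps, parallel chains
betas = np.linspace(0.0, 1.0, T + 1)
H = rng.integers(0, 2, (S, F)).astype(float)     # samples from the uniform base
logw = np.zeros(S)
for t in range(1, T + 1):
    logw += (betas[t] - betas[t - 1]) * energy(H)
    for f in range(F):                           # Metropolis bit-flip sweep at beta_t
        prop = H.copy(); prop[:, f] = 1.0 - prop[:, f]
        accept = np.log(rng.random(S)) < betas[t] * (energy(prop) - energy(H))
        H[accept] = prop[accept]

logZ_ais = F * np.log(2) + np.logaddexp.reduce(logw) - np.log(S)
print(logZ_exact, logZ_ais)                      # the two estimates should roughly agree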
Results: DDTM achieves the lowest perplexity at all dimensionalities. Note that our ablation with the coupling potentials removed (-cpl) increases perplexity noticeably, indicating that modeling replies helps beyond simply modeling threads and comments jointly, particularly at larger embedding sizes. For reference, a unigram model achieves 2644. We find that LDA's approximate perplexity is even worse, likely due to slackness in its lower bound.
Upvote Regression
To measure how well embeddings capture comment-level characteristics, we feed them into a linear regression model that predicts the number of upvotes the comment received. Upvotes provide a loose human-annotated measure of likability. We expect that context matters in determining how well received a comment is; the same comment posted in response to different parents may receive a very different number of upvotes. Hence, we expect comment-level embeddings to be more informative for this task when connected via our model's coupling potentials.
Setup: We trained a standard linear regressor for each model. The regressor was trained using ordinary least squares on the entire training set of comments, using the model's computed topic embeddings as input and the number of upvotes on the comment as the output to predict. As a preprocessing step, we took the log of the absolute number of votes before training. We compared models by mean squared error (MSE) on our test set. Results are shown in Table 2.
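A minimal sketch of this setup with scikit-learn; the stand-in arrays replace real inferred embeddings and vote counts, which are not available here:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
emb = rng.integers(0, 2, (1000, 64)).astype(float)   # stand-in topic embeddings
votes = rng.integers(1, 500, 1000)                   # stand-in upvote counts

y = np.log(np.abs(votes))        # log of the absolute vote count (votes can be negative;
                                 # zero-vote comments would need special handling)
reg = LinearRegression().fit(emb, y)                 # ordinary least squares
print(mean_squared_error(y, reg.predict(emb)))       # in practice, score on the test set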
Results: DDTM achieves the lowest MSE. To assess statistical significance, we performed a 500-sample bootstrap of our training set. The standard errors of these replications are small, and a two-sample t-test rejects the null hypothesis that DDTM has an average MSE equal to that of the next best method (p < .001). Note that our model outperforms both comment- and thread-level embeddings, suggesting that modeling these jointly, and modeling the effect of neighboring representations in the comment graph, more accurately learns information relevant to a comment's social impact.
Figure 3: Precision vs. recall for document retrieval based on subreddit, comparing various models for 1000 randomly selected held-out query comments.
Deletion Prediction
Comments that are excessively provocative or in violation of site rules are often deleted, either by the author or a moderator. We can measure whether DDTM captures discursive interactions that lead to such intervention by training a logistic classifier that predicts whether any of a given comment's children have been deleted.
Setup: For each model, a logistic regression classifier was trained stochastically with the Adam optimizer on the entire training set of comments using the model's computed topic embeddings as input, and a binary label for whether the comment had any deleted children as the output to predict. We compared models by accuracy on our test set. Results are shown in Table 2.
Results: DDTM gets the highest accuracy. Interestingly, thread-level models do better than comment-level ones, which suggests that certain topics or even subreddits may correlate with comments being deleted. This makes sense given that subreddits vary in severity of moderation. DDTM's performance also demonstrates that modeling comment-to-comment interaction patterns is helpful in predicting when a comment will spawn a deleted future response, which strongly matches our intuition.
Document Retrieval
Finally, while DDTM is not designed to better capture topical structure, we evaluate the extent to which it can still capture this information by performing document retrieval, a standard evaluation, for which we treat the subreddit to which a thread was posted as a label for relevance. Note that every comment within the same thread belongs to the same subreddit, which gives thread-level models an inherent advantage at this task. We include this task purely for the purpose of demonstrating that by capturing discursive patterns, DDTM does not lose the ability to model thread-level topics as well.
Table 3: Word stems associated with individual bits, by emission weight (higher score → lower score).
Comment-Level
Bit 1: faq tldrs pms 165 til keyword questions feedback chat pm
Bit 2: irl riamverysmart legend omfg riski aboard favr madman skillset tunnel
Bit 3: lotta brah ouch spici oof bummer buildup viewership hd uncanni
Bit 4: funniest mah tfw teleport fav hoo plz bah whyd dumbest
Bit 5: handsom hipster texan hottest whore norwegian shittier scandinavian jealousi douch
Thread-Level
Bit 1: btc gameplay tutori cyclist dev currenc kitti bitcoin rpg crypto
Bit 2: url_youtu url_leagueoflegends url_businessinsider url_twitter url_redd url_snopes
Bit 3: comey pede macron pg13 maga globalist ucf committe cuck distributor
Bit 4: maduro venezuelan ballot puerto catalonia rican quak skateboard venezuela quebec
Bit 5: nra scotus opioid cheney nevada metallica marijuana vermont colorado xanax
Setup: Given a query comment from our held-out test set, we rank the training set by the Dice similarity of the hidden embeddings computed by the model. We consider a retrieved comment relevant to the query if they both originate from the same subreddit, which loosely categorizes the semantic content. Tuning the number of documents we return allows us to form precision-recall curves, which we show in Figure 3.
Results: DDTM outperforms both comment-level baselines and is competitive with thread-level models, even beating LDA at high levels of recall. This indicates that despite using half of its dimensions to model comment-to-comment interactions, DDTM can still do almost as good a job of modeling thread-level semantics as a model using its entire capacity to do so. The gap between comment-level RS and LDA is also consistent with LDA's known issues dealing with sparse data (Sridhar, 2015), and lends credence to our theory that distributed topic representations are better suited to such domains.
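For concreteness, a sketch of the Dice-similarity ranking used in this evaluation; it assumes the embeddings have been binarized (e.g., by thresholding the inferred ψ at 0.5), which is our assumption rather than a stated detail:

import numpy as np

def dice(query, corpus):
    """Dice similarity between one binary embedding and every row of a corpus."""
    inter = corpus @ query                        # |a AND b| for each corpus row
    sizes = corpus.sum(axis=1) + query.sum()      # |a| + |b|
    return 2.0 * inter / np.maximum(sizes, 1e-12)

rng = np.random.default_rng(0)
corpus = rng.integers(0, 2, (5000, 64)).astype(float)   # training-set embeddings
query = rng.integers(0, 2, 64).astype(float)            # held-out query embedding
ranking = np.argsort(-dice(query, corpus))              # most similar documents first
# precision/recall at k follow from the subreddit labels of ranking[:k]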
Qualitative Analysis of Topics
We now offer qualitative analysis of the topic embeddings learned by our model. Note that since we use distributed embeddings, our bits are more akin to filters than to complete distributions over words, and we typically observe as many as half of them active for a single comment. In a sense, we have an exponential number of topics, whose parameterization simply factors over the bits. Therefore, it can be difficult to interpret them as one would interpret topics learned by a model such as LDA. Furthermore, we find that in practice this effect is correlated with the topic embedding size; the more bits our model has, the less sparse and consequently less individually meaningful the bits become. Therefore, for this analysis, we specifically focus on DDTM trained with 64 bits total.
Bits in Isolation
Directly inspecting the emission parameters reveals that the comment-level and thread-level halves of our embeddings capture substantially different aspects of the data (shown in Table 3), akin to vertical (within-thread) and horizontal (across-thread) sources of variance, respectively. The comment-level topic bits tend to reflect styles of speaking, lingo, and memes that are not unique to a particular subject of discourse or even subreddit. For example, comment-level Bit 2 captures many words typical of taunting Reddit comments; replying with "/r/iamverysmart" (a subreddit dedicated to mocking people who make grandiose claims about their intellect) is a common way of jokingly implying that the author of the parent comment takes themselves too seriously, and thus corresponds to a certain kind of rhetorical move. Further, it is grouped with other words that indicate related rhetorical moves; calling a user "risky" or a "madman" is a common means of suggesting that they are engaging in a pointless act of rebellion. Comment-level bits also cluster at the coarsest level by length (see Figure 5), which we find to correlate with writing style. By contrast, the thread-level bits are more indicative of specific topics of discussion, and unsurprisingly they cluster by subreddit (see Figure 4). For example, thread-level Bit 3 captures lexicon used almost exclusively by alt-right Donald Trump supporters as well as the names of various political figures. Bit 4 highlights words related to civil unrest in Spanish-speaking parts of the world.
Bits in Combination
While these distributions over words (particularly for comment-level bits) can seem vague, when multiple bits are active, their effects compound to produce much more specific topics. One can think of the bits as defining soft filters over the space of words that, when stacked together, carve out patterns not apparent in any of them individually. We now analyze a few sample topic embeddings. To do this, we perform inference as described on a held-out thread, and pass the comment-level topic embedding for a single sampled comment through our emission matrix, inspecting the words with the highest corresponding weight (shown in Table 4). In generative terminology, these can be thought of as reconstructions of comments.
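Both the per-bit inspection above and these comment reconstructions reduce to sorting projections of the emission matrix; a small sketch with a stand-in vocabulary and parameters, since the trained model is not available here:

import numpy as np

def top_stems(weights, vocab, k=10):
    """The k vocabulary stems with the largest emission weight."""
    return [vocab[i] for i in np.argsort(-weights)[:k]]

rng = np.random.default_rng(0)
K, F = 10000, 32
vocab = [f"stem{i}" for i in range(K)]        # stand-in stemmed vocabulary
U = rng.normal(0, 0.1, (F, K))                # stand-in learned emission matrix

print(top_stems(U[2], vocab))                 # Table 3 style: stems for a single bit
psi = rng.integers(0, 2, F).astype(float)     # inferred embedding of one comment
print(top_stems(psi @ U, vocab))              # Table 4 style: comment reconstruction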
Table 4: Words with the highest emission weight for sample held-out comment reconstructions (higher score → lower score).
Comment-Level
Sample 1: grade grader math age 5th 9th 10th till mayb 7th
Sample 2: repost damn dope bamboozl shitload imagin cutest sad legendari awhil
Sample 3: heh dawg hmm spooki buddi aye m8 aww fam woah
Sample 4: hug merci bless tfw prayer pleas dear bear banana satan
Sample 5: chuckl cutest funniest yall bummer oooh mustv coolest ok oop
Sample 6: cutest heard coolest funniest havent seen ive craziest stupidest weirdest
Sample 7: reev keanu christoph murphi walken vincent chris til wick roger
Sample 8: moron douchebag stupid dipshit snitch jackass dickhead idioci hypocrit riddanc
Sample 9: technic actual realiz happen escal werent citat practic memo cba
Sample 10: reddit shill question background user subreddit answer relev discord guild
These topic embeddings capture more specific conversational and rhetorical moves. For example,
Sample 6 displays supportive and interested reactionary language, which one might expect to see used in response to a post or comment linking to media or describing something intriguing. This is of note given that one of the primary aims of including coupling potentials was to encourage DDTM to learn "topics" that correspond to responses and interactive behavior, something existing methods are largely not designed for. By contrast, Sample 9 captures a variety of hostile language and insults, which, unlike those discussed previously, do not denote membership in a particular online community. As patterns of toxic and hateful behavior on Reddit are well studied (Chandrasekharan et al., 2017), it could be useful to have a tool to analyze precipitous contexts and parent comments, something which we hope systems based on coupling of comment embeddings have the capacity to provide. Sample 10 is of particular interest, as it consists largely of Reddit terminology. Conversations about the meta of the site can manifest, for example, in users accusing each other of being "shills" (i.e., accounts paid to astroturf on behalf of external interests) or requesting/responding to "guilding", a feature which lets users purchase premium access for each other, often in response to a particularly well-made comment.
Conclusion
In this paper we introduce a novel way to learn topic interactions in observed discourse trees, and describe GPU-amenable learning techniques to train on large-scale data mined from Reddit. We demonstrate improvements over previous models on perplexity and downstream tasks, and offer qualitative analysis of learned discursive patterns. The dichotomy between the two levels of embeddings hints at applications in style-transfer.
| 4,838 |
1809.07282
|
2952771416
|
In this paper, we propose a deep, globally normalized topic model that incorporates structural relationships connecting documents in socially generated corpora, such as online forums. Our model (1) captures discursive interactions along observed reply links in addition to traditional topic information, and (2) incorporates latent distributed representations arranged in a deep architecture, which enables a GPU-based mean-field inference procedure that scales efficiently to large data. We apply our model to a new social media dataset consisting of 13M comments mined from the popular internet forum Reddit, a domain that poses significant challenges to models that do not account for relationships connecting user comments. We evaluate against existing methods across multiple metrics including perplexity and metadata prediction, and qualitatively analyze the learned interaction patterns.
|
Some approaches such as Pairwise-Link-LDA and Link-PLSA-LDA @cite_6 attempt to model interactions among documents in an arbitrary graph, albeit with important drawbacks. The former models every possible pairwise link between comments, and the latter models links as a bipartite graph, limiting its ability to scale to large tree-structured threads. Similar work on Topic-Link LDA @cite_9 models link probabilities conditioned on both topic similarity and an authorship model, yet this approach is poorly suited to high-volume, semi-anonymous online domains. Other studies have leveraged reply structures on Reddit in the context of predicting persuasion, but DDTM differs in its generative, unsupervised approach.
|
{
"abstract": [
"Given a large-scale linked document collection, such as a collection of blog posts or a research literature archive, there are two fundamental problems that have generated a lot of interest in the research community. One is to identify a set of high-level topics covered by the documents in the collection; the other is to uncover and analyze the social network of the authors of the documents. So far these problems have been viewed as separate problems and considered independently from each other. In this paper we argue that these two problems are in fact inter-dependent and should be addressed together. We develop a Bayesian hierarchical approach that performs topic modeling and author community discovery in one unified framework. The effectiveness of our model is demonstrated on two blog data sets in different domains and one research paper citation data from CiteSeer.",
"In this work, we address the problem of joint modeling of text and citations in the topic modeling framework. We present two different models called the Pairwise-Link-LDA and the Link-PLSA-LDA models. The Pairwise-Link-LDA model combines the ideas of LDA [4] and Mixed Membership Block Stochastic Models [1] and allows modeling arbitrary link structure. However, the model is computationally expensive, since it involves modeling the presence or absence of a citation (link) between every pair of documents. The second model solves this problem by assuming that the link structure is a bipartite graph. As the name indicates, Link-PLSA-LDA model combines the LDA and PLSA models into a single graphical model. Our experiments on a subset of Citeseer data show that both these models are able to predict unseen data better than the baseline model of Erosheva and Lafferty [8], by capturing the notion of topical similarity between the contents of the cited and citing documents. Our experiments on two different data sets on the link prediction task show that the Link-PLSA-LDA model performs the best on the citation prediction task, while also remaining highly scalable. In addition, we also present some interesting visualizations generated by each of the models."
],
"cite_N": [
"@cite_9",
"@cite_6"
],
"mid": [
"2130978632",
"2165636119"
]
}
|
Modeling Online Discourse with Coupled Distributed Topics
|
Topic models have become one of the most common unsupervised methods for uncovering latent semantic information in natural language data, and have found a wide variety of applications across the sciences. However, many common models, such as Latent Dirichlet Allocation (Ng and Jordan, 2003), make an explicit exchangeability assumption that treats documents as independent samples from a generative prior, thereby ignoring important aspects of text corpora which are generated by nonergodic, interconnected social systems. While direct applications of such models to datasets such as transcripts of The French Revolution (Barron et al., 2017) and discussions on Twitter (Zhao et al., 2011) have yielded sensible topics and exciting insights, their exclusion of document-to-document interactions imposes limitations on the scope of their applicability and the analyses they support.
For instance, on many social media platforms, comments are short (the average Reddit comment is 10 words long), making them difficult to treat as full documents, yet they do cohere as a collection, suggesting that contextual relationships should be considered. Moreover, analysis of social data is often principally concerned with understanding relationships between documents (such as question-asking and -answering), so a model able to capture such features is of direct scientific relevance.
To address these issues, we propose a design that models representations of comments jointly along observed reply links. Specifically, we attach a vector of latent binary variables to each comment in a collection of social data, which in turn connect to each other according to the observed reply-link structure of the dataset. The inferred representations can provide information about the rhetorical moves and linguistic elements that characterize an evolving discourse. An added benefit is that while previous work such as Sequential LDA (Du et al., 2012) has focused on modeling a linear progression, the model we present applies to a more general class of acyclic graphs such as tree-structured comment threads ubiquitous on the web.
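As a concrete picture of this structure, the following sketch (ours; names are illustrative) builds the reply-link set R and the parent/child maps over which the model's coupling potentials are defined:

def reply_structure(parent_of):
    """Build the link set R and parent/child maps from a parent-index list.

    parent_of[n] is the index of the comment that comment n replies to
    (None for the original post at the root of the thread).
    """
    R = [(p, n) for n, p in enumerate(parent_of) if p is not None]
    parents = dict(enumerate(parent_of))
    children = {n: [] for n in range(len(parent_of))}
    for p, n in R:
        children[p].append(n)
    return R, parents, children

# a four-comment thread: comments 1 and 2 reply to the OP, comment 3 replies to 1
R, parents, children = reply_structure([None, 0, 0, 1])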
Online data can be massive, which presents a scalability issue for traditional methods. Our approach uses latent binary variables similar to a Restricted Boltzmann Machine (RBM); related models such as Replicated Softmax (RS) (Salakhutdinov and Hinton, 2009) have previously seen success in capturing latent properties of language, and found substantial speedups over previous methods due to their GPU amenable training procedure. RS was also shown to deal well with documents of significantly different length, another key characteristic of online data. While RBMs permit exact inference, the additional coupling potentials present in our model make inference intractable. However, the choice of bilinear potentials and latent features admits a mean-field inference procedure which takes the form of a series of dense matrix multiplications followed by nonlinearities, which is particularly amenable to GPU computation and lets us scale efficiently to large data. Our model outperforms LDA and RS baselines on perplexity and downstream tasks including metadata prediction and document retrieval when evaluated on a new dataset mined from Reddit. We also qualitatively analyze the learned topics and discuss the social phenomena uncovered.
| 4,838 |
1809.07282
|
2952771416
|
In this paper, we propose a deep, globally normalized topic model that incorporates structural relationships connecting documents in socially generated corpora, such as online forums. Our model (1) captures discursive interactions along observed reply links in addition to traditional topic information, and (2) incorporates latent distributed representations arranged in a deep architecture, which enables a GPU-based mean-field inference procedure that scales efficiently to large data. We apply our model to a new social media dataset consisting of 13M comments mined from the popular internet forum Reddit, a domain that poses significant challenges to models that do not account for relationships connecting user comments. We evaluate against existing methods across multiple metrics including perplexity and metadata prediction, and qualitatively analyze the learned interaction patterns.
|
DDTM's emission potentials are similar to those of Replicated Softmax @cite_5, an undirected model based on a Restricted Boltzmann Machine. Unlike LDA-style models, RS does not assign a topic to each word, but instead builds a distributed representation. In this setting, a single word can be likely under two different topics, both of which are present and lend probability mass to that word. LDA-style models, by contrast, would require the topics to compete for the word.
|
{
"abstract": [
"We introduce a two-layer undirected graphical model, called a \"Replicated Softmax\", that can be used to model and automatically extract low-dimensional latent semantic representations from a large unstructured collection of documents. We present efficient learning and inference algorithms for this model, and show how a Monte-Carlo based method, Annealed Importance Sampling, can be used to produce an accurate estimate of the log-probability the model assigns to test data. This allows us to demonstrate that the proposed model is able to generalize much better compared to Latent Dirichlet Allocation in terms of both the log-probability of held-out documents and the retrieval accuracy."
],
"cite_N": [
"@cite_5"
],
"mid": [
"2100002341"
]
}
|
Modeling Online Discourse with Coupled Distributed Topics
|
Topic models have become one of the most common unsupervised methods for uncovering latent semantic information in natural language data, and have found a wide variety of applications across the sciences. However, many common models, such as Latent Dirichlet Allocation (Blei et al., 2003), make an explicit exchangeability assumption that treats documents as independent samples from a generative prior, thereby ignoring important aspects of text corpora which are generated by nonergodic, interconnected social systems. While the direct application of such models to datasets such as transcripts of the French Revolution (Barron et al., 2017) and discussions on Twitter (Zhao et al., 2011) has yielded sensible topics and exciting insights, their exclusion of document-to-document interactions imposes limitations on the scope of their applicability and the analyses they support.
For instance, on many social media platforms, comments are short (the average Reddit comment is 10 words long), making them difficult to treat as full documents, yet they do cohere as a collection, suggesting that contextual relationships should be considered. Moreover, analysis of social data is often principally concerned with understanding relationships between documents (such as question-asking and -answering), so a model able to capture such features is of direct scientific relevance.
To address these issues, we propose a design that models representations of comments jointly along observed reply links. Specifically, we attach a vector of latent binary variables to each comment in a collection of social data, which in turn connect to each other according to the observed reply-link structure of the dataset. The inferred representations can provide information about the rhetorical moves and linguistic elements that characterize an evolving discourse. An added benefit is that while previous work such as Sequential LDA (Du et al., 2012) has focused on modeling a linear progression, the model we present applies to a more general class of acyclic graphs such as tree-structured comment threads ubiquitous on the web.
Online data can be massive, which presents a scalability issue for traditional methods. Our approach uses latent binary variables similar to a Restricted Boltzmann Machine (RBM); related models such as Replicated Softmax (RS) (Salakhutdinov and Hinton, 2009) have previously seen success in capturing latent properties of language, and found substantial speedups over previous methods due to their GPU-amenable training procedure. RS was also shown to deal well with documents of significantly different lengths, another key characteristic of online data. While RBMs permit exact inference, the additional coupling potentials present in our model make inference intractable. However, the choice of bilinear potentials and latent features admits a mean-field inference procedure which takes the form of a series of dense matrix multiplications followed by nonlinearities; this is particularly amenable to GPU computation and lets us scale efficiently to large data. Our model outperforms LDA and RS baselines on perplexity and downstream tasks including metadata prediction and document retrieval when evaluated on a new dataset mined from Reddit. We also qualitatively analyze the learned topics and discuss the social phenomena uncovered.
Model
We now present an overview of our model. Specifically, it will take the probabilistic form of an undirected graphical model whose architecture mirrors the tree structure of the threads in our data.
Motivating Dataset
We evaluate on a corpus mined from Reddit, an internet forum which ranks as the fourth most trafficked site in the US (Alexa, 2018) and sees millions of daily comments (Reddit, 2015). Discourse on Reddit follows a branching pattern, shown in Figure 1. The largest unit of discourse is a thread, beginning with a link to external content or a natural language prompt, posted to a relevant subreddit based on its subject matter. Users comment in response to the original post (OP), or to any other comment. The result is a structure which splits at many points into more specific or tangential discussions that while locally coherent may differ substantially from each other. The data reflect features of the underlying memory and network structure of the generating process; comments are serially correlated and highly cross-referential. We treat individual comments as "documents" under the standard topic modeling paradigm, but use observed reply structure to induce a tree of documents for every thread.
Description of Discursive Distributed Topic Model
We now introduce the Discursive Distributed Topic Model (DDTM) (illustrated in Figure 1). For each comment in the thread, DDTM assigns a latent vector of binary random variables (or bits) that collectively form a distributed embedding of the topical content of that comment; for instance, one bit might represent sarcastic language while another might track usage of specific acronyms, and a given comment could have any combination of those features. These representations are tied to those of parent and child comments via coupling potentials (see Section 2.3), which allow them to learn discursive properties by inducing a deep undirected network over the thread. In order to encourage the model to use these comment-level representations to learn discursive and stylistic patterns as opposed to simply topics of discussion, we incorporate a single additional latent vector for the entire thread that interacts with each comment, explaining word choices that are mainly topical rather than discursive or stylistic. As we demonstrate in our experiments (see Section 6), the thread-level embedding learns distributions more reminiscent of what a traditional topic model would uncover, while the comment-level embeddings model styles of speaking and mannerisms that do not directly indicate specific subjects of conversation. The joint probability is defined in terms of an energy function that scores latent embeddings and observed word counts across the tree of comments within a thread using log-bilinear potentials, and is globally normalized over all word count and embedding combinations.
Probability Model
More formally, consider a thread containing $N$ comments, each of size $D_n$, with a vocabulary of size $K$. As depicted in Figure 1, each comment is viewed as a bag-of-words, densely connected via a log-bilinear potential to a latent embedding of size $F$. Let each comment be represented as an integer vector $x_n \in \mathbb{Z}^K$, where $x_{nk}$ is the number of times word $k$ was observed in comment $n$; let $h_n \in \{0, 1\}^F$ be the topic embedding for each comment; and let $h_0 \in \{0, 1\}^F$ be the embedding for the entire thread. To model topic transitions, we score the embeddings of parent-child pairs with a separate coupling potential as shown in Figure 1 (comments with no parents or children receive additional start/stop biases respectively). Let replies be represented with sets $R$, $P_n$, and $C_n$, where $(n, m) \in R$, $n \in P_m$, and $m \in C_n$ if comment $m$ is a reply to comment $n$. DDTM assigns probability to a specific configuration of $x, h$ with an energy function scored by the emission ($\pi_e$) and coupling ($\pi_c$) potentials.
$$E(x, h; \theta) = \underbrace{\sum_{n=1}^{N} \pi_e(h, x, n)}_{\text{Emission Potentials}} + \underbrace{\sum_{(n,m)\in R} \pi_c(h, n, m)}_{\text{Coupling Potentials}}$$
$$\pi_e(h, x, n) = h_n^\top U x_n + x_n^\top a + D_n h_n^\top b + h_0^\top V x_n + D_n h_0^\top c$$
$$\pi_c(h, n, m) = h_n^\top W h_m \tag{1}$$
Note that the bias on embeddings is scaled by the number of words in the comment, which controls for their highly variable length. The joint probability is computed by exponentiating the energy and dividing by a normalizing constant.
$$p(x, h; \theta) = \frac{\exp(E(x, h; \theta))}{Z(\theta)}, \qquad Z(\theta) = \sum_{x', h'} \exp(E(x', h'; \theta)) \tag{2}$$
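As a concreteness check, the energy of Eqs. (1)-(2) reduces to a handful of matrix operations. Below is a minimal sketch assuming PyTorch tensors; the function name, argument layout, and shapes are our illustrative choices, not the authors' code.

```python
import torch

def energy(x, h, h0, U, V, W, a, b, c, replies):
    """x: (N, K) word counts; h: (N, F) comment bits; h0: (F,) thread bits;
    U, V: (F, K); W: (F, F); a: (K,); b, c: (F,); replies: (parent, child) pairs."""
    D = x.sum(dim=1)                      # comment lengths D_n
    e = ((h @ U) * x).sum()               # sum_n h_n^T U x_n
    e = e + (x @ a).sum()                 # sum_n x_n^T a
    e = e + (D * (h @ b)).sum()           # sum_n D_n h_n^T b
    e = e + h0 @ (V @ x.sum(dim=0))       # sum_n h0^T V x_n
    e = e + D.sum() * (h0 @ c)            # sum_n D_n h0^T c
    for p, ch in replies:                 # coupling potentials over reply links
        e = e + h[p] @ W @ h[ch]
    return e
```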
This architecture encourages the model to learn discursive maneuvers via the coupling potentials while separating within-thread variance and across-thread variance through the comment-level and thread-level embeddings, respectively. The coupling of latent variables makes factored inference impossible, meaning that even the exact computation of the partition function is no longer tractable. This necessitates approximating the gradients for learning, which we will now address.
Learning and Inference
Inference in this model class is intractable, so as has been done in previous work on topic modeling (Blei et al., 2003), we rely on variational methods to approximate the gradients needed during training as well as the posteriors over the topic bit vectors. Specifically, we will need the gradients of the normalizer and of the sum of the energy function over the hidden variables,
$$E(x; \theta) = \log \sum_{h} \exp(E(x, h; \theta)) \tag{3}$$
which we refer to as the marginal energy. Following the approach described for undirected models by Eisner (2011), we approximate these quantities and their gradients with respect to the model parameters θ as we will now describe (thread-level embeddings are omitted in this section for clarity).
Normalizer Approximation
We aim to train our model to maximize the marginal likelihood of the observed comment word counts, conditioned on the reply links. To do this, we must compute the gradient of the normalizer $Z(\theta)$. However, this quantity is computationally intractable, as it contains a summation over all exponential choices for every word in the thread. Therefore, we must approximate $Z(\theta)$. Observe that under Jensen's inequality, we can form the following lower bound on the normalizer using an approximate joint distribution $q^{(Z)}$:
$$\log Z(\theta) = \log \sum_{x,h} \exp(E(x, h; \theta)) \geq \mathbb{E}_{q^{(Z)}}\left[E(x, h; \theta)\right] - \mathbb{E}_{q^{(Z)}}\left[\log q^{(Z)}(x, h; \phi, \gamma)\right] \tag{4}$$
We now define $q^{(Z)}$, as depicted in Figure 2, as a mean-field approximation that treats all variables as independent.

Figure 2: Factor graph of the full joint compared to mean-field approximations to the joint and posterior.

We parameterize $q^{(Z)}$ with $\phi_{nf} \in [0, 1]$, independent Bernoulli parameters representing the probability of $h_{nf}$ being equal to 1, and $\gamma_{nk}$, replicated softmaxes representing the probability of a word in $x_n$ taking the value $k$. Note that all words in $x_n$ are modeled as samples from this single distribution. The approximation then factors as follows:
$$q^{(Z)}(x, h; \phi, \gamma) = q^{(Z)}(x; \gamma) \cdot q^{(Z)}(h; \phi)$$
$$q^{(Z)}(x; \gamma) = \prod_{n=1}^{N} \prod_{k=1}^{K} (\gamma_{nk})^{x_{nk}}$$
$$q^{(Z)}(h; \phi) = \prod_{n=1}^{N} \prod_{f=1}^{F} (\phi_{nf})^{h_{nf}} (1 - \phi_{nf})^{(1 - h_{nf})} \tag{5}$$
We optimize the parameters of $q^{(Z)}$ to maximize its variational lower bound via iterative mean-field updates, which allow us to perform coordinate ascent over the parameters of $q^{(Z)}$. Maximizing the lower bound with respect to a particular $\phi_{nf}$ and $\gamma_{nk}$ while holding all other parameters frozen yields the following mean-field update equations (biases omitted for clarity):
$$\phi_{n\cdot} = \sigma\left(U \gamma_n + \sum_{m \in C_n} W \phi_m + \phi_{P_n}^\top W\right), \qquad \gamma_{n\cdot} = \sigma\left(\phi_n^\top U\right) \tag{6}$$
We iterate over the parameters of $q^{(Z)}$ in an "upward-downward" manner: first updating $\phi$ for all comments with no children, then all comments whose children have been updated, and so on up to the root of the thread. Then we perform the same updates in reverse order. After updating all $\phi$, we then update $\gamma$ simultaneously (the components of $\gamma$ are independent conditioned on $\phi$). We iterate these upward-downward passes until convergence.
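For intuition, one such pass over the tree can be sketched as follows (assuming PyTorch; the names mean_field_sweep, children, parent, and order are our hypothetical choices, and biases are omitted as in the equations above). Because all comments at one tree depth can be batched into a single matrix multiplication, this is what makes the procedure GPU-friendly.

```python
import torch

def mean_field_sweep(g, phi, U, W, children, parent, order):
    """One pass of the updates in Eq. (6). g: (N, K) word distributions gamma;
    phi: (N, F) Bernoulli means; children[n]: reply indices; parent[n]: index
    or None; order: node indices, e.g. leaves-to-root, then reversed."""
    for n in order:
        msg = U @ g[n]                            # emission term U gamma_n
        for m in children[n]:
            msg = msg + W @ phi[m]                # messages from replies
        if parent[n] is not None:
            msg = msg + phi[parent[n]] @ W        # message from the parent
        phi[n] = torch.sigmoid(msg)
    return phi
```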
Marginal Energy Approximation
We can now approximate the normalizer, but still need the marginal data likelihood in order to take gradient steps on it and train our model. In order to recover the marginal likelihood, we must next approximate the marginal energy E(x; θ) as it too is intractable. This is due to the coupling potentials, which make the topics across comments dependent even when conditioned on the word counts. To do this, we form an additional variational approximation (see Figure 2) to the marginal energy, which we optimize similarly.
$$E(x; \theta) = \log \sum_{h} \exp(E(x, h; \theta)) \geq \mathbb{E}_{q^{(E)}}\left[E(x, h; \theta)\right] - \mathbb{E}_{q^{(E)}}\left[\log q^{(E)}(h; \psi)\right] \tag{7}$$
Since $q^{(E)}(h; \psi)$ need only model the hidden units $h$, we can parameterize it in the same manner as $q^{(Z)}(h; \phi)$. Note that while these distributions factor similarly, they do not share parameters, although we find that in practice, initializing $\phi \leftarrow \psi$ improves our approximation. We optimize the lower bound on $E(x; \theta)$ via a similar coordinate ascent strategy, where the mean-field updates take the following form (biases omitted for clarity):
$$\psi_{n\cdot} = \sigma\left(U x_n + \sum_{m \in C_n} W \psi_m + \psi_{P_n}^\top W\right) \tag{8}$$
We can use $q^{(E)}$ to perform inference at test time in our model, as its parameters $\psi$ directly correspond to the expected values of the hidden topic embeddings under our approximation.
Learning via Gradient Ascent
We train the parameters of our true model $p(x, h; \theta)$ via stochastic updates wherein we optimize both approximations on a single datum (i.e., a thread) to compute the approximate gradient of its log-likelihood, and take a single gradient step on the model parameters (repeating on all training instances until convergence). That gradient is given by the difference in feature expectations under the two approximations (entropy terms from the lower bounds are dropped as they do not depend on $\theta$).
$$\nabla \log p(x; \theta) \approx \mathbb{E}_{q^{(E)}(h; \psi)}\left[\nabla E(x, h; \theta)\right] - \mathbb{E}_{q^{(Z)}(x', h; \phi, \gamma)}\left[\nabla E(x', h; \theta)\right] \tag{9}$$
In summary, we use two separate mean-field approximations to compute lower bounds on the marginal energy $E(x; \theta)$ and its normalizer $Z(\theta)$, which lets us approximate the marginal likelihood $p(x; \theta)$. Note that as our estimate of the marginal likelihood is the difference between two lower bounds, it is not a lower bound itself, although in practice it works well for training.
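A sketch of the stochastic update implied by Eq. (9), assuming an `energy(x, h)` callable (with the model parameters closed over) and PyTorch autograd. Because the energy is bilinear and both approximations fully factorize, plugging the mean-field means into the energy recovers the required expectations; all names here are illustrative, not the authors' implementation.

```python
import torch

def update_step(x, psi, phi, gamma, opt, energy):
    """psi: posterior means of q^(E); phi, gamma: means of q^(Z);
    energy(x, h): differentiable energy with model parameters inside."""
    D = x.sum(dim=1, keepdim=True)          # comment lengths D_n
    x_model = D * gamma                     # expected word counts under q^(Z)
    objective = energy(x, psi) - energy(x_model, phi)
    opt.zero_grad()
    (-objective).backward()                 # ascend the approximate log-likelihood
    opt.step()
```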
Scalability and GPU Implementation
Given the magnitude of our dataset, it is essential to be able to train efficiently at scale. Many commonly used topic models such as LDA (Blei et al., 2003) have difficulty scaling, particularly if trained via MCMC methods. Improvements have been shown from online training (Hoffman et al., 2010), but extending such techniques to model comment-to-comment connections and leverage GPU compute is nontrivial.
In contrast, our proposed model and mean-field procedure can be scaled efficiently to large data because they are amenable to GPU implementation. Specifically, the described inference procedure can be viewed as the output of a neural network. This is because DDTM is globally normalized with edges parameterized as log-bilinear weights, which results in the mean-field updates taking the form of matrix operations followed by nonlinearities. Therefore, a single iteration of mean-field is equivalent to a forward pass through a recursive neural network, whose architecture is defined by the tree structure of the thread. Multiple iterations are equivalent to feeding the output of the network back into itself in a recurrent manner, and optimizing for T iterations is achieved by unrolling the network over T timesteps. This property makes DDTM highly amenable to efficient training on a GPU, and allowed us to scale experiments to a dataset of over 13M total Reddit comments.
Experimental Setup
Data
We mined a corpus of Reddit threads pulled through the platform's API. Focusing on the twenty most popular subreddits (gifs, todayilearned, CFB, funny, aww, AskReddit, BlackPeopleTwitter, videos, pics, politics, The_Donald, soccer, leagueoflegends, nba, nfl, worldnews, movies, mildlyinteresting, news, gaming) over a one-month period yielded 200,000 threads consisting of 13,276,455 comments in total. The data was preprocessed by removing special characters, replacing URLs with a domain-specific token, stemming English words using a Snowball English stemmer (Porter, 2001), removing stopwords, and truncating the vocabulary to only include the top 10,000 most common words. OPs are modeled as a comment at the root of each thread to which all top-level comments respond. This dataset will be made available for public use after publication.
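A minimal sketch of this preprocessing pipeline, assuming NLTK is available (requires nltk.download("stopwords")); the URL-token regex and helper names are our illustrative choices, not the authors' exact implementation.

```python
import re
from collections import Counter
from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")
stops = set(stopwords.words("english"))

def preprocess(comment):
    text = comment.lower()
    # replace URLs with a domain-specific token such as "url_youtu"
    text = re.sub(r"https?://(?:www\.)?([a-z0-9]+)\S*", r"url_\1", text)
    text = re.sub(r"[^a-z0-9_\s]", " ", text)        # strip special characters
    return [stemmer.stem(t) for t in text.split() if t not in stops]

def build_vocab(tokenized_docs, size=10_000):
    counts = Counter(t for doc in tokenized_docs for t in doc)
    return {w for w, _ in counts.most_common(size)}  # top-10k most common stems
```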
Baselines and Comparisons
We compare to baselines of Replicated Softmax (RS) (Salakhutdinov and Hinton, 2009) and Latent Dirichlet Allocation (LDA) (Blei et al., 2003). RS is a distributed topic model similar to our own, albeit without any coupling potentials. LDA is a locally normalized topic model which defines topics as non-overlapping distributions over words. To ensure that DDTM does not gain an unfair advantage purely by having a larger embedding space, we divide the dimensions equally between comment- and thread-level. Unless otherwise specified, 64 bits/topics were used. We experiment with RS and LDA treating either comments or full threads as documents.
Training and Initialization
SGD was performed using the Adam optimizer (Kingma and Ba, 2015). When running inference, we found that convergence was reached in an average of 2 update iterations. Using a single NVIDIA Titan X (Pascal) card, we were able to train our model to convergence on the training set of 10M comments in less than 30 hours. It is worth noting that we found DDTM to be fairly sensitive to initialization. We found the best results using Gaussian noise, with comment-level emissions at a variance of 0.01, thread-level emissions at 0.0001, and transitions at 0. We initialized all biases to 0 except for the bias on word counts, which we set to the unigram log-probabilities from the train set.
Results
Evaluating Perplexity
We compare models by perplexity on a held-out test set, a standard evaluation for generative and latent variable models.
Setup: Due to the use of mean-field approximations for both the marginal energy and the normalizer, we lose any guarantees regarding the accuracy of our likelihood estimate (both approximations are lower bounds, and therefore their difference is neither a strict lower bound nor guaranteed to be unbiased). To evaluate perplexity in a more principled way, we use Annealed Importance Sampling (AIS) to estimate the ratio between our model's normalizer and the tractable normalizer of a base model from which we can draw true independent samples, as described by Salakhutdinov and Murray (2008). Note that since the marginal energy is intractable in our model, unlike in a standard RBM, we must sample the joint, and not the marginal, intermediate distributions. This yields an unbiased estimate of the normalizer. The marginal energy must still be approximated via a lower bound, but given that AIS is unbiased and empirically low in variance, we can treat the overall estimate as a lower bound on likelihood for evaluation. Using 2000 intermediate distributions, and averaging over 20 runs, we evaluated per-word perplexity over a set of 50 unseen threads. Results are shown in Table 1.
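For clarity, per-word perplexity then follows from the AIS-based likelihood estimates in the standard way; a small sketch with illustrative names:

```python
import numpy as np

def per_word_perplexity(log_likelihoods, word_counts):
    """log_likelihoods: approximate log p(thread) per held-out thread
    (using AIS-estimated normalizers); word_counts: words per thread."""
    return float(np.exp(-np.sum(log_likelihoods) / np.sum(word_counts)))
```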
Results: DDTM achieves the lowest perplexity at all dimensionalities. Note that our ablation with the coupling potentials removed (-cpl) increases perplexity noticeably, indicating that modeling replies helps beyond simply modeling threads and comments jointly, particularly at larger embedding sizes. For reference, a unigram model achieves 2644. We find that LDA's approximate perplexity is even worse, likely due to slackness in its lower bound.
Upvote Regression
To measure how well embeddings capture comment-level characteristics, we feed them into a linear regression model that predicts the number of upvotes the comment received. Upvotes provide a loose human-annotated measure of likability. We expect that context matters in determining how well received a comment is; the same comment posted in response to different parents may receive a very different number of upvotes. Hence, we expect comment-level embeddings to be more informative for this task when connected via our model's coupling potentials. Setup: We trained a standard linear regressor for each model. The regressor was trained using ordinary least squares on the entire training set of comments using the model's computed topic embeddings as input, and the number of upvotes on the comment as the output to predict. As a preprocessing step, we took the log of the absolute number of votes before training. We compared models by mean squared error (MSE) on our test set. Results are shown in Table 2.
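A sketch of this evaluation, assuming scikit-learn and precomputed topic embeddings; the +1 inside the log is our guard against zero-vote comments, not a detail from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def upvote_regression_mse(train_emb, train_votes, test_emb, test_votes):
    # log of the absolute vote count, as in the preprocessing described above
    y_train = np.log(np.abs(train_votes) + 1)
    y_test = np.log(np.abs(test_votes) + 1)
    reg = LinearRegression().fit(train_emb, y_train)   # ordinary least squares
    return mean_squared_error(y_test, reg.predict(test_emb))
```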
Results: DDTM achieves the lowest MSE. To assess statistical significance, we performed a 500-sample bootstrap of our training set. The standard errors of these replications are small, and a two-sample t-test rejects the null hypothesis that DDTM has an average MSE equal to that of the next best method (p < .001). Note that our model outperforms both comment- and thread-level embeddings, suggesting that modeling these jointly, and modeling the effect of neighboring representations in the comment graph, more accurately learns information relevant to a comment's social impact.

Figure 3: Precision vs. recall for document retrieval based on subreddit, comparing various models for 1000 randomly selected held-out query comments.
Deletion Prediction
Comments that are excessively provocative or in violation of site rules are often deleted, either by the author or a moderator. We can measure whether DDTM captures discursive interactions that lead to such intervention by training a logistic classifier that predicts whether any of a given comment's children have been deleted.
Setup: For each model, a logistic regression classifier was trained stochastically with the Adam optimizer on the entire training set of comments using the model's computed topic embeddings as input, and a binary label for whether the comment had any deleted children as the output to predict. We compared models by accuracy on our test set. Results are shown in Table 2.
Results: DDTM gets the highest accuracy. Interestingly, thread-level models do better than comment-level ones, which suggests that certain topics or even subreddits may correlate with comments being deleted. This makes sense given that subreddits vary in severity of moderation. DDTM's performance also demonstrates that modeling comment-to-comment interaction patterns is helpful in predicting when a comment will spawn a deleted future response, which strongly matches our intuition.
Document Retrieval
Finally, while DDTM is not designed to better capture topical structure, we evaluate the extent to which it can still capture this information by performing document retrieval, a standard evaluation, for which we treat the subreddit to which a thread was posted as a label for relevance. Note that every comment within the same thread belongs to the same subreddit, which gives thread-level models an inherent advantage at this task. We include this task purely to demonstrate that, by capturing discursive patterns, DDTM does not lose the ability to model thread-level topics as well.

Setup: Given a query comment from our held-out test set, we rank the training set by the Dice similarity of the hidden embeddings computed by the model. We consider a retrieved comment relevant to the query if they both originate from the same subreddit, which loosely categorizes the semantic content. Tuning the number of documents we return allows us to form precision-recall curves, which we show in Figure 3.

Results: DDTM outperforms both comment-level baselines and is competitive with thread-level models, even beating LDA at high levels of recall. This indicates that, despite using half of its dimensions to model comment-to-comment interactions, DDTM can still do almost as good a job of modeling thread-level semantics as a model using its entire capacity to do so. The gap between comment-level RS and LDA is also consistent with LDA's known issues dealing with sparse data (Sridhar, 2015), and lends credence to our theory that distributed topic representations are better suited to such domains.
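The ranking step can be sketched as follows; all names are our illustrative choices, and embeddings are assumed to be binarized, e.g. by thresholding the mean-field means at 0.5.

```python
import numpy as np

def dice_similarity(a, b):
    """a, b: binarized topic embeddings (0/1 vectors)."""
    return 2.0 * np.sum(a * b) / (np.sum(a) + np.sum(b) + 1e-12)

def retrieve(query_emb, train_embs, k=100):
    scores = np.array([dice_similarity(query_emb, e) for e in train_embs])
    return np.argsort(-scores)[:k]        # indices of the k most similar comments
```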
Table 3: Associated word stems by emission weight for individual bits (higher score → lower score).

Comment-Level
• Bit 1: faq tldrs pms 165 til keyword questions feedback chat pm
• Bit 2: irl riamverysmart legend omfg riski aboard favr madman skillset tunnel
• Bit 3: lotta brah ouch spici oof bummer buildup viewership hd uncanni
• Bit 4: funniest mah tfw teleport fav hoo plz bah whyd dumbest
• Bit 5: handsom hipster texan hottest whore norwegian shittier scandinavian jealousi douch

Thread-Level
• Bit 1: btc gameplay tutori cyclist dev currenc kitti bitcoin rpg crypto
• Bit 2: url_youtu url_leagueoflegends url_businessinsider url_twitter url_redd url_snopes
• Bit 3: comey pede macron pg13 maga globalist ucf committe cuck distributor
• Bit 4: maduro venezuelan ballot puerto catalonia rican quak skateboard venezuela quebec
• Bit 5: nra scotus opioid cheney nevada metallica marijuana vermont colorado xanax
Qualitative Analysis of Topics
We now offer a qualitative analysis of the topic embeddings learned by our model. Note that since we use distributed embeddings, our bits are more akin to filters than complete distributions over words, and we typically observe as many as half of them active for a single comment. In a sense, we have an exponential number of topics, whose parameterization simply factors over the bits. Therefore, it can be difficult to interpret them as one would interpret topics learned by a model such as LDA. Furthermore, we find that in practice this effect is correlated with the topic embedding size; the more bits our model has, the less sparse and consequently less individually meaningful the bits become. Therefore, for this analysis, we specifically focus on DDTM trained with 64 bits total.
Bits in Isolation
Directly inspecting the emission parameters reveals that the comment-level and thread-level halves of our embeddings capture substantially different aspects of the data (shown in Table 3), akin to vertical (within-thread) and horizontal (across-thread) sources of variance, respectively. The comment-level topic bits tend to reflect styles of speaking, lingo, and memes that are not unique to a particular subject of discourse or even subreddit. For example, comment-level Bit 2 captures many words typical of taunting Reddit comments; replying with "/r/iamverysmart" (a subreddit dedicated to mocking people who make grandiose claims about their intellect) is a common way of jokingly implying that the author of the parent comment takes themselves too seriously, and thus corresponds to a certain kind of rhetorical move. Further, it is grouped with other words that indicate related rhetorical moves; calling a user "risky" or a "madman" is a common means of suggesting that they are engaging in a pointless act of rebellion. The comment-level bits also cluster at the coarsest level by length (see Figure 5), which we find to correlate with writing style. By contrast, the thread-level bits are more indicative of specific topics of discussion, and unsurprisingly they cluster by subreddit (see Figure 4). For example, thread-level Bit 3 captures lexicon used almost exclusively by alt-right Donald Trump supporters as well as the names of various political figures. Bit 4 highlights words related to civil unrest in Spanish-speaking parts of the world.
Bits in Combination
While these distributions over words (particularly for comment-level bits) can seem vague, when multiple bits are active their effects compound to produce much more specific topics. One can think of the bits as defining soft filters over the space of words that, when stacked together, carve out patterns not apparent in any of them individually. We now analyze a few sample topic embeddings. To do this, we perform inference as described on a held-out thread, and pass the comment-level topic embedding for a single sampled comment through our emission matrix to inspect the words with the highest corresponding weight (shown in Table 4). In generative terminology, these can be thought of as reconstructions of comments.
These topic embeddings capture more specific conversational and rhetorical moves.

Table 4: Words with the highest emission weight for sample held-out comment reconstructions (higher score → lower score), comment-level.
• Sample 1: grade grader math age 5th 9th 10th till mayb 7th
• Sample 2: repost damn dope bamboozl shitload imagin cutest sad legendari awhil
• Sample 3: heh dawg hmm spooki buddi aye m8 aww fam woah
• Sample 4: hug merci bless tfw prayer pleas dear bear banana satan
• Sample 5: chuckl cutest funniest yall bummer oooh mustv coolest ok oop
• Sample 6: cutest heard coolest funniest havent seen ive craziest stupidest weirdest
• Sample 7: reev keanu christoph murphi walken vincent chris til wick roger
• Sample 8: moron douchebag stupid dipshit snitch jackass dickhead idioci hypocrit riddanc
• Sample 9: technic actual realiz happen escal werent citat practic memo cba
• Sample 10: reddit shill question background user subreddit answer relev discord guild
For example, Sample 6 displays supportive and interested reactionary language, which one might expect to see used in response to a post or comment linking to media or describing something intriguing. This is of note given that one of the primary aims of including coupling potentials was to encourage DDTM to learn "topics" that correspond to responses and interactive behavior, something existing methods are largely not designed for. By contrast, Sample 9 captures a variety of hostile language and insults, which, unlike those discussed previously, do not denote membership in a particular online community. As patterns of toxic and hateful behavior on Reddit are comparatively well studied (Chandrasekharan et al., 2017), it could be useful to have a tool to analyze precipitating contexts and parent comments, something which we hope systems based on coupling of comment embeddings have the capacity to provide. Sample 10 is of particular interest as it consists largely of Reddit terminology. Conversations about the meta of the site can manifest, for example, in users accusing each other of being "shills" (i.e., accounts paid to astroturf on behalf of external interests) or requesting/responding to "gilding", a feature which lets users purchase premium access for each other, often in response to a particularly well-made comment.
Conclusion
In this paper we introduce a novel way to learn topic interactions in observed discourse trees, and describe GPU-amenable learning techniques to train on large-scale data mined from Reddit. We demonstrate improvements over previous models on perplexity and downstream tasks, and offer qualitative analysis of learned discursive patterns. The dichotomy between the two levels of embeddings hints at applications in style-transfer.
| 4,838 |
1809.07058
|
2950546634
|
Navigating in search and rescue environments is challenging, since a variety of terrains has to be considered. Hybrid driving-stepping locomotion, as provided by our robot Momaro, is a promising approach. Similar to other locomotion methods, it incorporates many degrees of freedom---offering high flexibility but making planning computationally expensive for larger environments. We propose a navigation planning method, which unifies different levels of representation in a single planner. In the vicinity of the robot, it provides plans with a fine resolution and a high robot state dimensionality. With increasing distance from the robot, plans become coarser and the robot state dimensionality decreases. We compensate this loss of information by enriching coarser representations with additional semantics. Experiments show that the proposed planner provides plans for large, challenging scenarios in feasible time.
|
Planning for systems with high-dimensional motion flexibility quickly reaches its limits for larger environments since the search space grows exponentially. Similar to multiresolution planning, several approaches utilize multiple representations with different planning dimensionalities to decrease planning complexity. @cite_5 generate an initial plan in a low-dimensional search space and replan in the high-dimensional search space by only considering those states that are part of the low-dimensional plan. @cite_4 plan a path in a low-dimensional search space and only switch to high-dimensional planning in those areas where low-dimensional planning cannot find a solution. Similarly, @cite_9 plan in 2D and switch to high-dimensional planning in the robot vicinity and at key points. As described for multiresolution planning, planning with multiple robot configuration dimensionalities might lead to wrong or bad plans, since a low-dimensional robot representation might assess challenging situations wrongly.
|
{
"abstract": [
"The manufacturing industry today is still focused on the maximization of production. A possible development able to support the global achievement of this goal is the implementation of a new support system for trajectory-planning, specific for industrial robots. This paper describes the trajectory-planning algorithm, able to generate trajectories manageable by human operators, consisting of linear and circular movement primitives. First, the world model and a topology preserving roadmap are stored in a probabilistic occupancy octree by applying a cell extension based algorithm. Successively, the roadmap is constructed within the free reachable joint space maximizing the clearance to the obstacles. A search algorithm is applied on robot configuration positions within the roadmap to identify a path avoiding static obstacles. Finally, the resulting path is converted through an elastic net algorithm into a robot trajectory, which consists of canonical ordered linear and circular movement primitives. The algorithm is demonstrated in a real industrial manipulator context.",
"Planning with kinodynamic constraints is often required for mobile robots operating in cluttered, complex environments. A common approach is to use a two-dimensional (2-D) global planner for long range planning, and a short range higher dimensional planner or controller capable of satisfying all of the constraints on motion. However, this approach is incomplete and can result in oscillations and the inability to find a path to the goal. In this paper we present an approach to solving this problem by combining the global and local path planning problem into a single search using a combined 2-D and higher dimensional state-space.",
"Path planning quickly becomes computationally hard as the dimensionality of the state-space increases. In this paper, we present a planning algorithm intended to speed up path planning for high-dimensional state-spaces such as robotic arms. The idea behind this work is that while planning in a high-dimensional state-space is often necessary to ensure the feasibilityof the resulting path, large portions of the path have a lower-dimensional structure. Based on this observation, our algorithm iteratively constructs a state-space of an adaptive dimensionality--a state-space that is high-dimensional only where the higher dimensionality is absolutely necessary for finding a feasible path. This often reduces drastically the size of the state-space, and as a result, the planning time and memory requirements. Analytically, we show that our method is complete and is guaranteed to find a solution if one exists, within a specified suboptimality bound. Experimentally, we apply the approach to 3D vehicle navigation (x, y, heading), and to a 7 DOF robotic arm on the Willow Garage’s PR2 robot. The results from our experiments suggest that ourmethod can be substantially faster than some of the state-of-the-art planning algorithms optimized for those tasks."
],
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_4"
],
"mid": [
"2052718016",
"2005784077",
"2115970849"
]
}
|
Planning Hybrid Driving-Stepping Locomotion on Multiple Levels of Abstraction
|
Hybrid driving-stepping locomotion is a flexible approach to traverse many types of terrain, since it combines the advantages of both wheeled and legged locomotion. However, due to its high robot state dimensionality, planning respective paths is challenging.
In our previous work [1], we presented an approach to plan hybrid driving-stepping locomotion paths for our robot Momaro [2], even for very challenging terrain such as staircases with additional obstacles on them. The planner prefers omnidirectional driving whenever possible and considers individual steps in situations where driving is not possible. The individual configuration of ground contact points (robot footprint) is considered at any time. During planning, steps are represented as abstract manoeuvres which are expanded to detailed motion sequences before executing them. For small scenarios, this method generates high-quality paths in feasible time with bounded suboptimality. Due to the high dimensionality of the robot configuration, the explored state space increases rapidly for larger scenarios and makes planning expensive. This effect is not unique to hybrid driving-stepping locomotion but affects high-dimensional planning in many applications, such as locomotion planning for robots with tracked flippers or manipulation planning.
The search space can be reduced by choosing a coarser resolution or by describing the robot and its manoeuvres in a more abstract way with fewer degrees of freedom (DoF). However, a fine resolution is key to navigate the robot precisely through challenging terrain. Moreover, only using a more abstract robot description is difficult, since the planning result must be a path which can be executed by the robot with its given number of DoF.
Coarse-to-fine planning approaches [3,4] address this problem by generating a rough plan first and refining the resulting path to the desired resolution and number of DoF in a second step. Especially in challenging, cluttered terrain, this procedure bears the risk of only finding expensive paths due to the lack of detail in the initial search.
We present a method which plans hybrid driving-stepping locomotion on three different levels of representation (see Fig. 1). In the vicinity of the robot, a representation with a high resolution and a high number of DoF is used to find paths which can be executed by the robot. With increasing distance from the robot, the resolution gets coarser and the robot is described with fewer DoF. These path segments are situated further in the future, which comes along with a higher degree of uncertainty and less accurate sensor information.
We compensate this loss of information for higher levels of representation by enriching the representation with additional semantics. All levels of representation are unified in a single planner. We further present methods to refine path segments into more detailed levels of representation. This decreases the number of necessary replanning steps. Replanning is only initiated if costs indicate that a situation is wrongly assessed in the coarser representation. In addition, we introduce a heuristic, based on the most abstract level of representation.
Experiments show that, compared to our previous work, this approach can handle much larger scenarios in feasible planning time while the path quality stays comparable.
III. HARDWARE
We use our mobile manipulation robot Momaro [2] (see Fig. 2). It offers omnidirectional driving through its four articulated legs, which end in directly driven, 360° steerable pairs of wheels. The unique design enables manoeuvres which are realizable neither by purely driving nor by purely walking robots, such as shifting a single foot while maintaining ground contact and thus changing the robot footprint under load. Active leg movements are restricted to the sagittal plane, since each leg consists of three pitch joints.
Sensor inputs come from an IMU and a continuously rotating Velodyne Puck 3D laser scanner at the robot head, which provides a spherical field of view. The laser-range measurements are registered and aggregated to a 3D environment map using the method of Droeschel et al. [16].
IV. APPROACH
Input to our method is a height map with a resolution of 2.5 cm which is generated from the 3D environment map. In the vicinity of the robot, height information is very precise. With increasing distance from the robot, the accuracy decreases due to measurement errors. Planning is based on foot and body costs. The ground contact costs $C_{GC}$ describe the costs to place an individual ground contact element (e.g., a foot or a foot pair) in a given configuration on the map. $C_{GC}$ includes information about the terrain surface and obstacles in the vicinity. The body costs $C_B(\vec{r}_b)$ describe the costs to place the robot base $\vec{r}_b = (r_x, r_y, r_\theta)$, with its center position $r_x, r_y$ and its orientation $r_\theta$, on the map. $C_B(\vec{r}_b)$ includes information about obstacles under the robot base and about the terrain slope under the robot. The generation of $C_{GC}$ and $C_B$ from the height map varies between the different levels of representation and may contain several steps. Ground contact costs and body costs are combined into pose costs $C(\vec{r})$, which describe the costs to place the robot in a given configuration $\vec{r}$ on the map. Path planning is realized through an A* search with anytime characteristics (ARA* [17]) on pose costs. For the current search pose, feasible neighbour poses are generated during the search. They can either be reached by omnidirectional driving or by stepping-related motions. Stepping-related motions are only considered in the vicinity of obstacles where driving is infeasible. Steps are described as abstract steps, i.e., the direct transition from a pre-step pose to an after-step pose. The detailed motion sequence for steps is not considered during planning but generated before execution.
The environment and the robot are described in three different levels of representation with different sizes. In the vicinity of the robot, we use a fine resolution and a high robot configuration dimensionality for planning. We call this Level 1 representation. With increasing distance from the current robot position, the environment and the robot are represented on higher levels with a coarser resolution and a robot representation with lower dimensionality. This is reasonable, since those parts of the plan are reached in the further future and thus are more uncertain. Moreover, sensor measurements become less precise with increasing distance from the robot. At the same time, we compensate this loss of detail by enriching the environment representation with additional features, which increase the understanding of the situation. Pose costs and robot actions use these semantic features. Higher levels of representation can be derived from lower levels of representation. The approach is visualized in Fig. 3. Level sizes and positions are shown in Fig. 4. For a planning task, the planner only performs a single planning run while including all three levels of representation. Hence, it is important that the same action carries the same costs in different levels of representation to make planning consistent over all levels. Moreover, the transition between the different levels of representation is challenging. All three levels of representation and the transition between them are described in detail in the following sections.
The resulting path consists of segments in multiple levels of representation. As described before, the contained steps are abstract manoeuvres. Abstract steps in the initial path segment are expanded to detailed motion sequences before executing them. Roll and pitch motions of the robot base as well as single foot shifts stabilize the robot to perform each step safely. In addition, foot heights are derived. See our previous work [1] for more details.
Steps are only expanded for path segments in Level 1, which is based on our previous work. For higher levels, representations are not detailed enough to derive concrete robot motions. As the robot executes the initial path segment, more measurements are made and a more detailed environment representation becomes available for path segments which had been represented in higher levels before. The path is updated with these more detailed representations. This can either be done by replanning the whole path or by transforming the respective path segments into more detailed representations, as described in Section IV-F. We call this coarse-to-fine transformation refinement.
A. Representation Level 1

Level 1 is based on the approach which we presented in our previous work [1]. Input is a height map with a resolution of 2.5 cm. We derive local unsigned height differences between neighbouring cells from this height map to generate ground contact costs for each individual foot. Base costs are derived from the height map itself. A height map and the derived foot costs can be seen in Fig. 5. Feasible driving neighbour poses can be found within a 20-position neighbourhood and by turning on the spot to the next discrete orientation, as shown in Fig. 6. If the robot is close to an obstacle, additional stepping-related manoeuvres are considered, which are visualized in Fig. 7. Those can be a discrete step, a longitudinal base shift manoeuvre, shifting individual feet forward, or shifting individual feet towards their neutral position. We define the neutral robot pose as the pose visualized in Fig. 7 a, top. The costs for the presented manoeuvres are based on the foot and body costs the individual robot elements induce during the manoeuvre.
As an extension of the previous work, we want the robot to align its orientation with the stair orientation when climbing stairs. This is desirable, since the kinematics only allow for leg movements in the sagittal plane, and since this behavior can also be observed when humans climb stairs themselves or teleoperate robots to do so. If, after a stepping manoeuvre, the two front/rear feet have the same longitudinal position but stand on different heights, this indicates that the robot is not aligned with the stairs it climbs. By punishing such a configuration with an additional cost term, we achieve the desired behavior.
B. Representation Level 2
We use the input height map with a resolution of 2.5 cm to compute the Level 2 representation, consisting of a height map and a height difference map with a resolution of 5 cm (see Fig. 9 a,b). According to the Nyquist-Shannon sampling theorem, subsampling must be accompanied by smoothing. To satisfy this theorem, we subsample the Level 1 height map as shown in Fig. 8. Each Level 2 height value is computed from the normalized, weighted sum of a 4×4 region of Level 1 height values. We use a binomial distribution for weighting. A Level 2 height difference map is generated in the same manner: we generate a Level 1 height difference map by computing local height differences on the Level 1 height map. This height difference map is then subsampled to a Level 2 height difference map.
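A minimal sketch of this subsampling scheme, assuming a NumPy height map; the 4×4 binomial weighting is as described, while the exact window alignment and border handling are our assumptions.

```python
import numpy as np

def subsample(height_map):
    """Halve the resolution of a 2D height map with 4x4 binomial smoothing."""
    b = np.array([1.0, 3.0, 3.0, 1.0])              # 1D binomial weights
    kernel = np.outer(b, b) / 64.0                  # normalized 4x4 kernel
    h, w = height_map.shape
    out = np.empty((h // 2, w // 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # 4x4 window around the 2x2 block being merged
            y0, x0 = max(2 * i - 1, 0), max(2 * j - 1, 0)
            patch = height_map[y0:2 * i + 3, x0:2 * j + 3]
            k = kernel[:patch.shape[0], :patch.shape[1]]
            out[i, j] = (patch * k).sum() / k.sum() # renormalize at the borders
    return out
```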
To decrease the robot configuration space dimensionality, we accumulate individual feet into pairs. This is intuitive, since we observe a tendency to pairwise foot movement in Level 1 paths. Moreover, instead of describing each foot position precisely, we use foot areas as a more abstract description. We know that a foot will be placed somewhere in the respective area, but since the representation contains some time-related and measurement-related imprecision, knowledge of the accurate foot position is not necessary. A Level 2 robot pose $\vec{r} = (\vec{r}_b, f_f, f_r)$ is consequently represented by its robot base pose $\vec{r}_b$ and its relative longitudinal front and rear foot area pair coordinates $f_f$ and $f_r$. Note that our platform and planner only allow sagittal leg movement. Lateral foot coordinates are fixed, and thus a single variable is sufficient to describe each foot area pair.
We use the generated Level 2 representation to compute ground contact and body costs. Body cost computation is similar to Level 1 and only relies on height information. Ground contact costs
$$C_{GC,2} = 1 + k_1 \cdot H_{avg}, \tag{1}$$
where $k_1 = 107$, are the costs to place foot area pairs on the map and are generated from the average height differences $H_{avg}$ in the respective area. A Level 2 foot area pair cost map can be seen in Fig. 9 c. Again, a punishing cost term is introduced for after-step poses with different average heights under neighbouring foot areas.
The robot actions are defined accordingly. Driving neighbours can be found similarly to Level 1, but with a doubled action resolution of 5 cm and 32 discrete robot orientations at each position. Additional stepping-related manoeuvres differ from Level 1, since the robot is only able to move foot pairs instead of individual feet. If the robot is close to an obstacle, it may step with a foot pair or perform another stepping-related manoeuvre, as visualized in Fig. 10. To motivate stepping manoeuvres, we define a maximum height difference $H_{max,drive}$ for the foot area center coordinate which can be overcome by driving. Larger height differences can only be traversed by stepping.
The costs for such a foot pair manoeuvre are the concatenated costs of each individual foot action, as described for Level 1. If, for example, the robot steps with its front foot pair as visualized in Fig. 10 a, the costs for this manoeuvre are the sum of the costs for a step with the front left foot and a step with the front right foot. Since Level 2 foot pair area costs differ from Level 1 foot costs, we reparametrized the manoeuvre cost computation. We do this by performing foot pair manoeuvres in a variety of basic scenarios (e.g., drive/turn on a patch of flat/rough underground, step up different height differences, do a base shift) in both representation levels and manually tuning the Level 2 cost parameters until the costs for those manoeuvres in both levels vary by ≤ 5%.
During planning and execution, it is an important feature to refine Level 2 path segments into Level 1. To refine a Level 2 path segment between two successive poses $\vec{r}_{2,i}$ and $\vec{r}_{2,i+1}$, we transform both poses into Level 1 and generate a set $S$ of feasible robot base poses by interpolating between $\vec{r}_{1,i}$ and $\vec{r}_{1,i+1}$. $S$ is then inflated with a radius of two position steps and one orientation step, as visualized in Fig. 11. A local planner, which is restricted to $S$, searches for a Level 1 path between $\vec{r}_{1,i}$ and $\vec{r}_{1,i+1}$. If
• either one of the two poses becomes infeasible when transformed to Level 1 because Level 2 assessed the given situation wrongly, or
• the costs for the refined Level 1 path differ by > 25% from the original costs for the path segment,
we call this path segment not refineable.
C. Representation Level 3
Fig. 11: Generating a set of feasible robot base poses for path refinement: a) For a given start ($\vec{r}_{1,i}$, red arrow) and goal ($\vec{r}_{1,i+1}$, green arrow) robot base pose, we generate a set of feasible robot base poses (black lines) by interpolating between the two. b) Inflation by two position steps and one orientation step.

We apply the described subsampling process (see Section IV-B) to generate a Level 3 height map and height difference map with a resolution of 10 cm from the Level 2 height map and height difference map. To increase the semantics of the environment representation, we categorize each Level 2 map cell into one of the following terrain classes:
• flat: easily traversable by driving,
• rough: traversable by driving with high effort,
• step: includes height differences which are too large to be traversed by driving but can be traversed by stepping,
• wall: occurring height differences are too large to be traversed by stepping, and
• unknown: the cell cannot be classified.

First, we search for cells of the terrain type step. This is done by searching for cell pairs $c_i$ and $c_j$ that fulfill the following criteria:
• $H(c_i) < H_{max,drive}$: $c_i$ is on a drivable surface,
• $H(c_j) < H_{max,drive}$: $c_j$ is on a drivable surface,
• $\|c_i - c_j\| < 0.45\,\mathrm{m}$: the distance between $c_i$ and $c_j$ is within a maximum step length, and
• for the set $T$ of cells $c_k$ on the straight line between $c_i$ and $c_j$, $C_{GC}(c_k) = \infty$ holds for all cells $c_k \in T$: a direct foot movement from $c_i$ to $c_j$ requires a step.

For all pairs of $c_i$ and $c_j$ which fulfill these criteria, each cell $c_s \in \{c_i, c_j\} \cup T$ is assigned the terrain class step. In addition, we compute the angle $\alpha_{i,j}$ between $c_i$ and $c_j$ and save it for $c_s$. Since most step cells are detected several times, we collect several angles for each cell; $\alpha_{avg,s}$, the mean of circular quantities of these angles, describes the estimated step orientation in $c_s$. Second, we classify the remaining cells by their Level 2 height difference value $H$:
• flat if $H(c_i) \in [0\,\mathrm{m}, 2 \cdot 10^{-4}\,\mathrm{m}]$,
• rough if $H(c_i) \in [2 \cdot 10^{-4}\,\mathrm{m}, 0.05\,\mathrm{m}]$,
• wall if $H(c_i) \in [0.05\,\mathrm{m}, \infty]$, and
• unknown if $H(c_i)$ is unknown.
The height difference intervals are tuned manually with respect to a maximum terrain height difference of 4 cm, which can be overcome by driving, and a maximum terrain height difference of 30 cm, which can be overcome by stepping. The terrain class of a Level 3 map cell is generated from the respective four Level 2 cells by either choosing the terrain class with the most members or, if this cannot be identified, the least difficult occurring terrain class. A sketch of this rule follows below.
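A compact sketch of the per-cell classification, using the thresholds quoted above; step cells are assumed to have been detected first via the cell-pair search, and the enum and function names are ours, not the authors' code.

```python
from enum import Enum

class Terrain(Enum):
    FLAT = "flat"
    ROUGH = "rough"
    STEP = "step"
    WALL = "wall"
    UNKNOWN = "unknown"

def classify_cell(h_diff, is_step_cell):
    """h_diff: local Level 2 height difference in meters (None if unmeasured)."""
    if is_step_cell:                 # detected beforehand via cell pairs c_i, c_j
        return Terrain.STEP
    if h_diff is None:
        return Terrain.UNKNOWN
    if h_diff <= 2e-4:               # flat/rough boundary from the text above
        return Terrain.FLAT
    if h_diff <= 0.05:               # rough/wall boundary from the text above
        return Terrain.ROUGH
    return Terrain.WALL
```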
Another source for terrain class segmentation can be camera images as shown in [18]. Fig. 12 a,b gives an example for a Level 3 height map and terrain class map.
Individual feet are no longer considered; instead, we assume that the feet are somewhere in a ground contact area $a_r$ around the robot (see Fig. 3). Hence, the robot is not able to perform foot or foot pair movements in this representation. The whole robot is rather moved over the terrain and traverses different terrain classes with different costs. Path search neighbour poses can be found similarly to the driving neighbours described for Level 1.
In this level of representation, the action resolution is 10 cm and the robot may have 16 different orientations at each position. When moving over step cells, a robot state is only feasible if the difference between the robot orientation and the step orientation of each step cell $c_r$ is less than one discrete orientation step: $|\alpha_{avg,r} - r_\theta| < \frac{1}{16} \cdot 2\pi$. Moreover, the robot is only allowed to move parallel and orthogonal to step orientations. These restrictions are required to enforce a behavior which is induced by the robot kinematics in lower representation levels but is not otherwise represented in Level 3.
Regarding cost generation, each cell $c_i$ is assigned a cost value $C_c(c_i)$ depending on its terrain class. The pose cost $C(\vec{r})$ does not combine individual ground contact and body costs but averages the cost values of all cells in $a_r$. The described terrain-class-specific cell costs are manually tuned by comparing the costs of Level 1 and Level 3 manoeuvres for the same set of basic scenarios, as mentioned in Section IV-B. While constant values were sufficient for flat and rough cells, costs for stepping manoeuvres depend on the height difference to overcome. The presented computation method for step cells is required to keep cost differences for these basic manoeuvres ≤ 5%. A resulting robot area cost map can be seen in Fig. 12 c.
Level 3 paths can be refined to Level 2 paths in the following way: As described for Level 2, we generate a set S of feasible robot base poses. In contrast to Level 2, we do not only consider two successive poses but the whole path segment r_3,s, ..., r_3,g that needs to be refined at once. The first and last robot poses r_3,s and r_3,g of this Level 3 path segment are transformed to a Level 2 start and goal pose, and a local Level 2 planner, which is restricted to S, searches for a path between r_2,s and r_2,g. If a Level 3 path needs to be refined to Level 1, Level 2 is taken as an intermediate refinement step.
D. Level Transition
All three levels of representation are combined in a single planner, which chooses the lowest available level for each pose to provide the most detailed planning. Since planning in a low level of representation is slower, we provide Level 1 data only in a small area around the robot position which is sufficiently large to plan the next manoeuvres. Level 2 data is provided for a medium-sized region around the current robot position while Level 3 covers the whole map.
The planner checks for each manoeuvre (e.g., driving in one direction, doing a step, ...) whether both the start and the goal pose of this manoeuvre are part of the same level of representation. If the goal pose is not part of the start pose's level of representation, the start pose is transformed to the next higher level of representation and the same manoeuvre is replanned in that level, provided it is still available there. Note that the transformation of the start pose to the next higher level of representation might induce costs. Due to different map resolutions, the robot might be shifted to fit into the next level's map cell and discrete orientation. Due to increasing foot restrictions, feet might be shifted to fit the next level's robot representation (e.g., individual feet have to align within foot area pairs). We check each transformation for feasibility and generate costs from the occurring manoeuvre costs.
E. Heuristic
In our previous work, a combination of the Euclidean distance and the orientation difference was used as an admissible A* heuristic (Euclidean heuristic). However, this heuristic does not consider the terrain, which has a large influence on the path costs. We propose a Level 3-based heuristic which includes such terrain features (Dijkstra heuristic).
After the goal pose r_i,G is set, it is transformed to Level 3. We then start a one-to-any 3D Dijkstra search in Level 3 from r_3,G. Hence, we get for each Level 3 pose a cost estimate for reaching the goal pose. During path planning, we can estimate the costs from any robot pose to the goal by transforming it to Level 3 and looking up the respective cost value.
Note that the quality of this heuristic strongly depends on the quality of the Level 3 cost model in comparison to the costs for the same manoeuvres in other levels of representation. Further note that we cannot prove that this heuristic always underestimates costs, which would be necessary to prove admissibility and thus the generation of optimal paths. However, since we also utilize the suboptimal ARA* algorithm, we do not aim to generate optimal paths for a given problem. Instead, we focus on generating paths of satisfactory quality in feasible time. The performance of this heuristic is evaluated in Section V.
F. Continuous Refinement
Fig. 13. As the robot moves along the path, the Level 1 and Level 2 representations move with it. Consequently, those path segments which are represented in a higher level and for which a more detailed representation becomes available can be refined to this more detailed representation.
As the robot moves along the initial path, the sensors provide new measurements and highly detailed environment representations are generated in the vicinity of the current robot position. We include these updated representations in the path by continuously refining the respective path segments, as shown in Fig. 13. If a cost difference > 25% between the original and the refined path segments indicates that the higher-level planning assessed a situation wrongly, we initiate a new planner run. With this approach, we can guarantee that path segments in the vicinity of the robot are always represented in Level 1 and thus included steps can be expanded and the result can be executed by the controller.
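A compact sketch of this refinement trigger follows; refine() and replan() are hypothetical helpers standing in for the planner's actual interfaces, while the 25% threshold is the one stated above:

```python
# A sketch of the continuous-refinement rule under the assumptions above.
def update_segment(segment, refine, replan):
    refined = refine(segment)      # try to express the segment in a lower level
    if refined is None:            # segment turned out not to be refinable
        return replan()
    if abs(refined.cost - segment.cost) / segment.cost > 0.25:
        return replan()            # the higher level assessed the situation wrongly
    return refined                 # keep the refined, more detailed segment
```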
V. EXPERIMENTS
We evaluate the proposed approach in two experiments. Both are done on one core of a 2.6 GHz Intel i7-6700HQ processor using 16 GB of memory. An additional video is available online (https://www.ais.uni-bonn.de/videos/ICRA_2018_Klamt/), which also contains a Gazebo experiment to demonstrate the continuous refinement strategy.
A first experiment evaluates the planning performance of the different levels of representation individually and combined, as shown in Fig. 4. For this, we choose the Level 1 size to be 3×3 m. This is sufficiently large to plan the next robot manoeuvres in high detail, but still small enough to avoid long high-dimensional planning. The Level 2 size is chosen to be 9×9 m so that the Level 2 path segment is about twice as long as the Level 1 path segment. We utilized the Euclidean heuristic to compare the results to our previous work. The height map and a resulting path are shown in Fig. 14. Since we use an ARA* algorithm which works with several heuristic weights W, we evaluate the influence of these. Fig. 15 shows the planner performance.
It can be seen that planning on levels of representation >1 and with combined levels is faster by at least one order of magnitude compared to pure Level 1 planning. The Level 1 path for W = 1.0 could not be computed due to memory limitations. We distinguish between the path costs in the respective levels of representation (estimated costs) and the costs each path carries when refined to Level 1. Comparing the estimated costs to the refined Level 1 costs gives an assessment of the quality of cost generation in each level of representation. The comparison of the refined Level 1 costs to the original Level 1 costs indicates the quality of the resulting path. It can be seen that the estimated costs always underestimate the refined Level 1 costs. Especially for W ≤ 1.5 the estimation is close, with a difference ≤ 7.7%. Furthermore, the results show that for W ≤ 1.5 the refined Level 1 costs differ from the original Level 1 costs by ≤ 15%.
In a second experiment, we compare the presented Dijkstra heuristic to the Euclidean heuristic. The scenario shown in Fig. 16 is larger and more challenging compared to the first scenario. The starting pose is pose a. Planning is performed on combined levels of representation. A resulting path is shown in Fig. 17. Planning times and resulting costs are shown in Fig. 18. Preprocessing the Dijkstra heuristic took 0.52 s of the presented planning times. It can be seen that the Dijkstra heuristic further accelerates planning while the resulting costs stay comparable, at least for W ≤ 1.5. E.g., for W = 1.25, planning is accelerated by more than two orders of magnitude while the refined path costs only differ by 3.3%. Moreover, the resulting path illustrates how the robot aligns with the stairs and only moves parallel and orthogonal to them. We finally compare the planner performance when started from different poses, as shown in Fig. 16. The results in Fig. 19 indicate that an important factor for the planner performance is the complexity of the planning within Level 1, but higher values of W lead to feasible performance in any case.
VI. CONCLUSION
In this paper, we presented a hybrid locomotion planning approach which is able to provide plans for large scenarios with high detailing in the vicinity of the robot. We achieve this by introducing three levels of representation with decreasing resolution and robot configuration dimensionality but increasing semantics of the situation. The most abstract level of representation can be used as a heuristic, which poses a second acceleration strategy. Experiments show that the presented approach significantly accelerates planning while the result quality stays feasible and, hence, significantly larger scenarios can be handled in comparison to our previous work.
Fig. 17. Resulting path for planning with the Dijkstra heuristic and combined levels with W = 1.25.
| 4,734 |
1809.07058
|
2950546634
|
Navigating in search and rescue environments is challenging, since a variety of terrains has to be considered. Hybrid driving-stepping locomotion, as provided by our robot Momaro, is a promising approach. Similar to other locomotion methods, it incorporates many degrees of freedom---offering high flexibility but making planning computationally expensive for larger environments. We propose a navigation planning method, which unifies different levels of representation in a single planner. In the vicinity of the robot, it provides plans with a fine resolution and a high robot state dimensionality. With increasing distance from the robot, plans become coarser and the robot state dimensionality decreases. We compensate this loss of information by enriching coarser representations with additional semantics. Experiments show that the proposed planner provides plans for large, challenging scenarios in feasible time.
|
To achieve further planning acceleration, it is an obvious idea to combine multiresolution and multidimensional planning. However, only a few works, such as @cite_17, address this. Different planning dimensionalities and resolutions are applied by using different sets of motion primitives. A fine resolution is only considered close to the start and goal pose and close to obstacles. A high planning dimensionality is considered for states which will be reached within a given time interval. This allows the planner to provide detailed plans close to the robot while planning times stay feasible. The drawbacks of both multiresolution and multidimensional planning also apply to this work.
|
{
"abstract": [
"Abstract Safe and efficient path planning for mobile robots in large dynamic environments is still a challenging research topic. In order to plan collision-free trajectories, the time component of the path must be explicitly considered during the search. Furthermore, a precise planning near obstacles and in the vicinity of the robot is important. This results in a high computational burden of the trajectory planning algorithms. However, in large open areas and in the far future of the path, the planning can be performed more coarsely. In this paper, we present a novel algorithm that uses a hybrid-dimensional multi-resolution state × time lattice to efficiently compute trajectories with an adaptive fidelity according to the environmental requirements. We show how to construct this lattice in a consistent way and define the transitions between regions of different granularity. Finally, we provide some experimental results, which prove the real-time capability of our approach and show its advantages over single-dimensional single-resolution approaches."
],
"cite_N": [
"@cite_17"
],
"mid": [
"2020532707"
]
}
|
Planning Hybrid Driving-Stepping Locomotion on Multiple Levels of Abstraction
|
Hybrid driving-stepping locomotion is a flexible approach to traverse many types of terrain, since it combines the advantages of both wheeled and legged locomotion. However, due to its high robot state dimensionality, planning respective paths is challenging.
In our previous work [1] we presented an approach to plan hybrid driving-stepping locomotion paths for our robot Momaro [2], even for very challenging terrain such as staircases with additional obstacles on them. The planner prefers omnidirectional driving whenever possible and considers individual steps in situations where driving is not possible. The individual configuration of ground contact points (robot footprint) is considered at all times. During planning, steps are represented as abstract manoeuvres which are expanded to detailed motion sequences before executing them. For small scenarios, this method generates high-quality paths in feasible time with bounded suboptimality. Due to the high dimensionality of the robot configuration, the explored state space increases rapidly for larger scenarios and makes planning expensive. This effect is not unique to hybrid driving-stepping locomotion but affects high-dimensional planning in many applications, such as locomotion planning for robots with tracked flippers or manipulation planning.
The search space can be reduced by choosing a coarser resolution or by describing the robot and its manoeuvres in a more abstract way with fewer degrees of freedom (DoF). However, a fine resolution is key to navigating the robot precisely through challenging terrain. Moreover, using only a more abstract robot description is problematic, since the planning result should be a path which the robot can execute with its given number of DoF.
Coarse-to-fine planning approaches [3,4] address this problem by generating a rough plan first and refining the resulting path to the desired resolution and number of DoF in a second step. Especially in challenging, cluttered terrain, this procedure bears the risk of finding only expensive paths due to the lack of detail in the initial search.
We present a method which plans hybrid driving-stepping locomotion on three different levels of representation (see Fig. 1). In the vicinity of the robot, a representation with a high resolution and a high number of DoF is used to find paths which can be executed by the robot. With increasing distance from the robot, the resolution gets coarser and the robot is described with fewer DoF. These path segments lie further in the future, which comes along with a higher degree of uncertainty and less accurate sensor information.
We compensate this loss of information for higher levels of representation by enriching the representation with additional semantics. All levels of representation are unified in a single planner. We further present methods to refine path segments into more detailed levels of representation. This decreases the number of necessary replanning steps. Replanning is only initiated if costs indicate that a situation is wrongly assessed in the coarser representation. In addition, we introduce a heuristic, based on the most abstract level of representation.
Experiments show that, compared to our previous work, this approach can handle much larger scenarios in feasible planning time while the path quality stays comparable.
III. HARDWARE
We use our mobile manipulation robot Momaro [2] (see Fig. 2). It offers omnidirectional driving through its four articulated legs ending in directly driven, 360° steerable pairs of wheels. The unique design enables manoeuvres which are realizable neither by pure driving nor by pure walking robots, such as shifting a single foot while maintaining ground contact and thus changing the robot footprint under load. Active leg movements are restricted to the sagittal plane, since each leg consists of three pitch joints.
Sensor inputs come from an IMU and a continuously rotating Velodyne Puck 3D laser scanner at the robot head which provides a spherical field-of-view. The laser-range measurements are registered and aggregated to a 3D environment map using the method of Droeschel et al. [16].
IV. APPROACH
Input to our method is a height map with a resolution of 2.5 cm which is generated from the 3D environment map. In the vicinity of the robot, height information is very precise. With increasing distance from the robot, the accuracy decreases due to measurement errors. Planning is done on foot and body costs. The ground contact costs C_GC describe the costs to place an individual ground contact element (e.g., a foot or a foot pair) in a given configuration on the map. C_GC includes information about the terrain surface and obstacles in the vicinity. The body costs C_B(r_b) describe the costs to place the robot base r_b = (r_x, r_y, r_θ), with its center position r_x, r_y and its orientation r_θ, on the map. C_B(r_b) includes information about obstacles under the robot base and about the terrain slope under the robot. The generation of C_GC and C_B from the height map varies between the different levels of representation and may contain several steps. Ground contact costs and body costs are combined to pose costs C(r), which describe the costs to place the robot in a given configuration r on the map. Path planning is realized through an A* search with anytime characteristics (ARA* [17]) on pose costs. For the current search pose, feasible neighbour poses are generated during the search. They can either be reached by omnidirectional driving or by stepping-related motions. Stepping-related motions are only considered in the vicinity of obstacles, where driving is infeasible. Steps are described as abstract steps, the direct transition between a pre-step and an after-step pose. The detailed motion sequence for steps is not considered during planning but generated before execution.
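As an illustration of the search loop, the following Python sketch implements a plain weighted A* over discrete poses; ARA* repeats such searches with a decreasing heuristic weight W and reuses previous search effort, which is omitted here. The callables neighbours() and heuristic() are assumptions of this sketch, not the authors' API:

```python
# A minimal weighted-A* sketch over discrete robot poses, built on the pose
# costs described above; neighbours(pose) yields (next_pose, manoeuvre_cost).
import heapq
from itertools import count

def weighted_astar(start, goal, neighbours, heuristic, W=1.5):
    tie = count()                     # tie-breaker so poses are never compared
    open_list = [(W * heuristic(start), next(tie), 0.0, start)]
    g = {start: 0.0}
    parent = {start: None}
    while open_list:
        _, _, g_cur, pose = heapq.heappop(open_list)
        if pose == goal:              # reconstruct the path back to the start
            path = []
            while pose is not None:
                path.append(pose)
                pose = parent[pose]
            return path[::-1]
        if g_cur > g.get(pose, float("inf")):
            continue                  # stale queue entry
        for nxt, cost in neighbours(pose):
            g_new = g_cur + cost
            if g_new < g.get(nxt, float("inf")):
                g[nxt] = g_new
                parent[nxt] = pose
                heapq.heappush(open_list,
                               (g_new + W * heuristic(nxt), next(tie), g_new, nxt))
    return None                       # no feasible path found
```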
The environment and the robot are described in three different levels of representation with different sizes. In the vicinity of the robot, we use a fine resolution and a high robot configuration dimensionality for planning. We call this the Level 1 representation. With increasing distance from the current robot position, the environment and the robot are represented on higher levels with a coarser resolution and a robot representation with lower dimensionality. This is reasonable, since those parts of the plan are reached in the more distant future and thus are more uncertain. Moreover, sensor measurements become less precise with increasing distance from the robot. At the same time, we compensate this loss of detail by enriching the environment representation with additional features, which increase the understanding of the situation. Pose costs and robot actions use these semantic features. Higher levels of representation can be derived from lower levels of representation. The approach is visualized in Fig. 3. Level sizes and positions are shown in Fig. 4. For a planning task, the planner only performs a single planning run while including all three levels of representation. Hence, it is important that the same action carries the same costs in different levels of representation to make planning consistent over all levels. Moreover, the transition between the different levels of representation is challenging. All three levels of representation and the transitions between them are described in detail in the following sections.
The resulting path consists of segments in multiple levels of representation. As described before, the contained steps are abstract manoeuvres. Abstract steps in the initial path segment are expanded to detailed motion sequences before executing them. Roll and pitch motions of the robot base as well as single foot shifts stabilize the robot to perform each step safely. In addition, foot heights are derived. See our previous work [1] for more details.
Steps are only expanded for path segments in Level 1 which is based on our previous work. For higher levels, representations are not detailed enough to derive concrete robot motions. As the robot executes the initial path segment, more measurements are made and a more detailed environment representation becomes available for path segments which have been represented in higher levels before. The path is updated with these updated representations. This can either be done by replanning the whole path or by transforming the respective path segments into more detailed representations, as described in Section IV-F. We call this coarse-to-fine transformation refinement.
A. Representation Level 1
Level 1 is based on the approach which we presented in our previous work [1]. Input is a height map with a resolution of 2.5 cm. We derive local unsigned height differences between neighbour cells from this height map to generate ground contact costs for each individual foot. Base costs are derived from the height map itself. A height map and the derived foot costs can be seen in Fig. 5. Feasible driving neighbour poses can be found within a 20-position neighbourhood and by turning on the spot to the next discrete orientation, as shown in Fig. 6. If the robot is close to an obstacle, additional stepping-related manoeuvres are considered, which are visualized in Fig. 7. Those can be a discrete step, a longitudinal base shift manoeuvre, shifting individual feet forward, or shifting individual feet towards their neutral position. We define the neutral robot pose as the pose visualized in Fig. 7 a, top. The costs for the presented manoeuvres are based on the foot and body costs the individual robot elements induce during the manoeuvre.
As an extension of the previous work, we want the robot to align its orientation with the stair orientation when climbing stairs. This is desirable, since the kinematics only allow for leg movements in the sagittal plane and since this behavior can also be observed when humans climb stairs themselves or teleoperate robots to do so. If, after a stepping manoeuvre, the two front/rear feet have the same longitudinal position but stand on different heights, this indicates that the robot is not aligned with the stairs it climbs. By punishing such a configuration with an additional cost term, we achieve the desired behavior.
B. Representation Level 2
We use the input height map with a resolution of 2.5 cm to compute the Level 2 representation, consisting of a height map and a height difference map with a resolution of 5 cm (see Fig. 9 a,b). According to the Nyquist-Shannon sampling theorem, subsampling has to come along with smoothing. To satisfy this theorem, we subsample the Level 1 height map as shown in Fig. 8. Each Level 2 height value is computed from the normalized, weighted sum of a 4×4 region of Level 1 height values. We use a binomial distribution for weighting. A Level 2 height difference map is generated in the same manner: We generate a Level 1 height difference map by computing local height differences on the Level 1 height map. This height difference map is then subsampled to a Level 2 height difference map.
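A minimal sketch of this subsampling follows, assuming 4×4 windows with stride 2 (2.5 cm to 5 cm) and separable binomial weights (1, 3, 3, 1); the exact window placement and boundary handling of the original implementation are not specified in the text:

```python
# A sketch of the binomial subsampling step under the assumptions above.
import numpy as np

w = np.array([1.0, 3.0, 3.0, 1.0])   # separable binomial weights
k = np.outer(w, w)
kernel = k / k.sum()                  # normalized 4x4 binomial kernel

def subsample(height_map):
    H, W = height_map.shape
    out = np.empty(((H - 2) // 2, (W - 2) // 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = height_map[2 * i:2 * i + 4, 2 * j:2 * j + 4]
            out[i, j] = np.sum(window * kernel)   # weighted, normalized sum
    return out
```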
To decrease the robot configuration space dimensionality, we accumulate individual feet into pairs. This is intuitive, since we observe a tendency towards pairwise foot movement in Level 1 paths. Moreover, instead of describing each foot position precisely, we use foot areas as a more abstract description. We know that a foot will be placed somewhere in the respective area, but since the representation contains some time-related and measurement-related imprecision, knowledge of the accurate foot position is not necessary. A Level 2 robot pose r = (r_b, f_f, f_r) is consequently represented by its robot base pose r_b and its relative longitudinal front and rear foot area pair coordinates f_f and f_r. Note that our platform and planner only allow sagittal leg movement. Lateral foot coordinates are fixed and thus a single variable is sufficient to describe each foot area pair.
We use the generated Level 2 representation to compute ground contact and body costs. Body cost computation is similar to Level 1 and relies only on height information. Ground contact costs
C_GC,2 = 1 + k_1 · H_avg,    (1)
where k_1 = 107, are the costs to place foot area pairs on the map and are generated from the average height differences H_avg in the respective area. A Level 2 foot area pair cost map can be seen in Fig. 9 c. Again, a punishing cost term is introduced for after-step poses with different average heights under neighbouring foot areas.
The robot actions are defined accordingly. Driving neighbours can be found similarly to Level 1, but with a doubled action resolution of 5 cm and 32 discrete robot orientations at each position. Additional stepping-related manoeuvres differ from Level 1, since the robot is only able to move foot pairs instead of individual feet. If the robot is close to an obstacle, it may step with a foot pair or perform another stepping-related manoeuvre, as visualized in Fig. 10. To motivate stepping manoeuvres, we define a maximum height difference H_max,drive for the foot area center coordinate which can be overcome by driving. Larger height differences can only be traversed by stepping.
The costs for such a foot pair manoeuvre are the concatenated costs of each individual foot action, as described for Level 1. If, for example, the robot steps with its front foot pair as visualized in Fig. 10 a, the costs for this manoeuvre are the sum of the costs for a step with the front left foot and a step with the front right foot. Since Level 2 foot pair area costs differ from Level 1 foot costs, we reparametrized the manoeuvre cost computation. We do this by performing foot pair manoeuvres in a variety of basic scenarios (e.g., drive/turn on a patch of flat/rough underground, step up different height differences, do a base shift) in both representation levels and manually tuning the Level 2 cost parameters until the costs for those manoeuvres in both levels vary by ≤ 5%.
During planning and execution, it is an important feature to refine Level 2 path segments into Level 1. To refine a Level 2 path segment between two successive poses r_2,i and r_2,i+1, we transform both poses into Level 1 and generate a set S of feasible robot base poses by interpolating between r_1,i and r_1,i+1. S is then inflated with a radius of two position steps and one orientation step, as visualized in Fig. 11. A local planner, which is restricted to S, searches for a Level 1 path between r_1,i and r_1,i+1. We call this path segment not refinable if
• either one of the two poses becomes infeasible when transformed to Level 1, because Level 2 assessed the given situation wrongly, or
• the costs for the refined Level 1 path differ by > 25% from the original costs for the path segment.
C. Representation Level 3
Fig. 11. Generating a set of feasible robot base poses for path refinement: a) For a given start (r_1,i, red arrow) and goal (r_1,i+1, green arrow) robot base pose, we generate a set of feasible robot base poses (black lines) by interpolating between the two. b) Inflation by two position steps and one orientation step.
We apply the described subsampling process (see Section IV-B) to generate a Level 3 height map and height difference map with a resolution of 10 cm from the Level 2 height map and height difference map. To increase the semantics of the environment representation, we categorize each Level 2 map cell into one of the following terrain classes:
• flat: easily traversable by driving,
• rough: traversable by driving with high effort,
• step: includes height differences which are too large to be traversed by driving but can be traversed by stepping,
• wall: occurring height differences are too large to be traversed by stepping, and
• unknown: the cell cannot be classified.
First, we search for cells of the terrain type step. This is done by searching for cell pairs c_i and c_j that fulfill the following criteria:
• H(c_i) < H_max,drive: c_i is on a drivable surface,
• H(c_j) < H_max,drive: c_j is on a drivable surface,
• ‖c_i − c_j‖ < 0.45 m: the distance between c_i and c_j is within a maximum step length, and
• C_GC(c_k) = ∞ holds for all cells c_k in the set T of cells on the straight line between c_i and c_j: a direct foot movement from c_i to c_j requires a step.
For all pairs c_i and c_j which fulfill these criteria, each cell c_s ∈ {c_i, c_j} ∪ T is assigned the terrain class step. In addition, we compute the angle α_i,j between c_i and c_j and save it for c_s. Since most step cells are detected several times, we collect several angles for each cell. Their mean of circular quantities, α_avg,s, describes the estimated step orientation in c_s. Second, we classify the remaining cells by their Level 2 height difference value H(c_i):
• flat if H(c_i) ∈ [0 m, 2·10⁻⁴ m],
• rough if H(c_i) ∈ [2·10⁻⁴ m, 0.05 m],
• wall if H(c_i) ∈ [0.05 m, ∞), and
• unknown if H(c_i) is unknown.
The height difference intervals are tuned manually with respect to a maximum terrain height difference of 4 cm, which can be overcome by driving, and a maximum terrain height difference of 30 cm, which can be overcome by stepping. The terrain class of a Level 3 map cell is generated from the respective four Level 2 cells by either choosing the terrain class with the most members or, if this cannot be identified, the least difficult occurring terrain class.
Camera images can serve as another source for terrain class segmentation, as shown in [18]. Fig. 12 a,b gives an example of a Level 3 height map and terrain class map.
In Level 3, the robot representation no longer considers individual feet; instead, we assume that the feet are somewhere in a ground contact area a_r around the robot (see Fig. 3). Hence, the robot is not able to perform foot or foot pair movements in this representation. Rather, the whole robot is moved over the terrain and traverses different terrain classes with different costs. Path search neighbour poses can be found similarly to the driving neighbours described for Level 1.
In this level of representation, the action resolution is 10 cm and the robot may have 16 different orientations at each position. When moving over step cells, a robot state is only feasible if the difference between the robot orientation and the step orientation of each step cell c_r is less than one discrete orientation step: |α_avg,r − r_θ| < (1/16)·2π. Moreover, the robot is only allowed to move parallel and orthogonal to step orientations. These restrictions are required to enforce a behavior which is induced by the robot kinematics in lower representation levels but is not otherwise represented in Level 3.
Regarding cost generation, each cell c_i is assigned a cost value C_c(c_i) depending on its terrain class. The pose cost C(r) does not combine individual ground contact and body costs but averages the cost values of all cells in a_r. The described terrain-class-specific cell costs are manually tuned by comparing the cost of Level 1 and Level 3 manoeuvres for the same set of basic scenarios, as mentioned in Section IV-B. While constant values were sufficient for flat and rough cells, costs for stepping manoeuvres depend on the height difference to overcome. The presented computation method for step cells is required to keep cost differences for these basic manoeuvres ≤ 5%. A resulting robot area cost map can be seen in Fig. 12 c.
Level 3 paths can be refined to Level 2 paths in the following way: As described for Level 2, we generate a set S of feasible robot base poses. In contrast to Level 2, we do not only consider two successive poses but the whole path segment r_3,s, ..., r_3,g that needs to be refined at once. The first and last robot poses r_3,s and r_3,g of this Level 3 path segment are transformed to a Level 2 start and goal pose, and a local Level 2 planner, which is restricted to S, searches for a path between r_2,s and r_2,g. If a Level 3 path needs to be refined to Level 1, Level 2 is taken as an intermediate refinement step.
D. Level Transition
All three levels of representation are combined in a single planner, which chooses the lowest available level for each pose to provide the most detailed planning. Since planning in a low level of representation is slower, we provide Level 1 data only in a small area around the robot position which is sufficiently large to plan the next manoeuvres. Level 2 data is provided for a medium-sized region around the current robot position while Level 3 covers the whole map.
The planner checks for each manoeuvre (e.g., driving in one direction, doing a step, ...) whether both the start and the goal pose of this manoeuvre are part of the same level of representation. If the goal pose is not part of the start pose's level of representation, the start pose is transformed to the next higher level of representation and the same manoeuvre is replanned in that level, provided it is still available there. Note that the transformation of the start pose to the next higher level of representation might induce costs. Due to different map resolutions, the robot might be shifted to fit into the next level's map cell and discrete orientation. Due to increasing foot restrictions, feet might be shifted to fit the next level's robot representation (e.g., individual feet have to align within foot area pairs). We check each transformation for feasibility and generate costs from the occurring manoeuvre costs.
E. Heuristic
In our previous work, a combination of the Euclidean distance and the orientation difference was used as an admissible A* heuristic (Euclidean heuristic). However, this heuristic does not consider the terrain, which has a large influence on the path costs. We propose a Level 3-based heuristic which includes such terrain features (Dijkstra heuristic).
After the goal pose r_i,G is set, it is transformed to Level 3. We then start a one-to-any 3D Dijkstra search in Level 3 from r_3,G. Hence, we get for each Level 3 pose a cost estimate for reaching the goal pose. During path planning, we can estimate the costs from any robot pose to the goal by transforming it to Level 3 and looking up the respective cost value.
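A sketch of this precomputation follows; level3_neighbours() is an assumed callable yielding (neighbour, cost) pairs, and poses are assumed to be hashable, orderable tuples such as (x, y, theta_index):

```python
# One-to-any Dijkstra from the Level 3 goal pose, under the assumptions above.
import heapq

def dijkstra_cost_to_go(goal_pose_l3, level3_neighbours):
    """Returns a table mapping each reachable Level 3 pose to its
    estimated cost-to-go, usable as heuristic lookup during planning."""
    cost_to_go = {goal_pose_l3: 0.0}
    queue = [(0.0, goal_pose_l3)]
    while queue:
        c, pose = heapq.heappop(queue)
        if c > cost_to_go.get(pose, float("inf")):
            continue                  # stale queue entry
        for nxt, edge_cost in level3_neighbours(pose):
            c_new = c + edge_cost
            if c_new < cost_to_go.get(nxt, float("inf")):
                cost_to_go[nxt] = c_new
                heapq.heappush(queue, (c_new, nxt))
    return cost_to_go
```

During planning, a pose in any level would then be transformed to Level 3 and the heuristic value looked up in this table, with unreachable poses treated as infinitely expensive.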
Note that the quality of this heuristic strongly depends on the quality of the Level 3 cost model in comparison to the costs for the same manoeuvres in other levels of representation. Further note that we cannot prove that this heuristic always underestimates costs, which would be necessary to prove admissibility and thus the generation of optimal paths. However, since we also utilize the suboptimal ARA* algorithm, we do not aim to generate optimal paths for a given problem. Instead, we focus on generating paths of satisfactory quality in feasible time. The performance of this heuristic is evaluated in Section V.
F. Continuous Refinement
Fig. 13. As the robot moves along the path, the Level 1 and Level 2 representations move with it. Consequently, those path segments which are represented in a higher level and for which a more detailed representation becomes available can be refined to this more detailed representation.
As the robot moves along the initial path, the sensors provide new measurements and highly detailed environment representations are generated in the vicinity of the current robot position. We include these updated representations in the path by continuously refining the respective path segments, as shown in Fig. 13. If a cost difference > 25% between the original and the refined path segments indicates that the higher-level planning assessed a situation wrongly, we initiate a new planner run. With this approach, we can guarantee that path segments in the vicinity of the robot are always represented in Level 1 and thus included steps can be expanded and the result can be executed by the controller.
V. EXPERIMENTS
We evaluate the proposed approach in two experiments. Both are done on one core of a 2.6 GHz Intel i7-6700HQ processor using 16 GB of memory. An additional video is available online (https://www.ais.uni-bonn.de/videos/ICRA_2018_Klamt/), which also contains a Gazebo experiment to demonstrate the continuous refinement strategy.
A first experiment evaluates the planning performance of the different levels of representation individually and combined, as shown in Fig. 4. For this, we choose the Level 1 size to be 3×3 m. This is sufficiently large to plan the next robot manoeuvres in high detail, but still small enough to avoid long high-dimensional planning. The Level 2 size is chosen to be 9×9 m so that the Level 2 path segment is about twice as long as the Level 1 path segment. We utilized the Euclidean heuristic to compare the results to our previous work. The height map and a resulting path are shown in Fig. 14. Since we use an ARA* algorithm which works with several heuristic weights W, we evaluate the influence of these. Fig. 15 shows the planner performance.
It can be seen that planning on levels of representation >1 and with combined levels is faster by at least one order of magnitude compared to pure Level 1 planning. The Level 1 path for W = 1.0 could not be computed due to memory limitations. We distinguish between the path costs in the respective levels of representation (estimated costs) and the costs each path carries when refined to Level 1. Comparing the estimated costs to the refined Level 1 costs gives an assessment of the quality of cost generation in each level of representation. The comparison of the refined Level 1 costs to the original Level 1 costs indicates the quality of the resulting path. It can be seen that the estimated costs always underestimate the refined Level 1 costs. Especially for W ≤ 1.5 the estimation is close, with a difference ≤ 7.7%. Furthermore, the results show that for W ≤ 1.5 the refined Level 1 costs differ from the original Level 1 costs by ≤ 15%.
In a second experiment, we compare the presented Dijkstra heuristic to the Euclidean heuristic. The scenario shown in Fig. 16 is larger and more challenging compared to the first scenario. The starting pose is pose a. Planning is performed on combined levels of representation. A resulting path is shown in Fig. 17. Planning times and resulting costs are shown in Fig. 18. Preprocessing the Dijkstra heuristic took 0.52 s of the presented planning times. It can be seen that the Dijkstra heuristic further accelerates planning while the resulting costs stay comparable, at least for W ≤ 1.5. E.g., for W = 1.25, planning is accelerated by more than two orders of magnitude while the refined path costs only differ by 3.3%. Moreover, the resulting path illustrates how the robot aligns with the stairs and only moves parallel and orthogonal to them. We finally compare the planner performance when started from different poses, as shown in Fig. 16. The results in Fig. 19 indicate that an important factor for the planner performance is the complexity of the planning within Level 1, but higher values of W lead to feasible performance in any case.
VI. CONCLUSION
In this paper, we presented a hybrid locomotion planning approach which is able to provide plans for large scenarios with high detailing in the vicinity of the robot. We achieve this by introducing three levels of representation with decreasing resolution and robot configuration dimensionality but increasing semantics of the situation. The most abstract level of representation can be used as a heuristic, which poses a second acceleration strategy. Experiments show that the presented approach significantly accelerates planning while the result quality stays feasible and, hence, significantly larger scenarios can be handled in comparison to our previous work.
Fig. 17. Resulting path for planning with the Dijkstra heuristic and combined levels with W = 1.25.
| 4,734 |
1809.06911
|
2889976821
|
Abstract This paper introduces SensoGraph, a novel approach for fast sensory evaluation using two-dimensional geometric techniques. In the tasting sessions, the assessors follow their own criteria to place samples on a tablecloth, according to the similarity between samples. In order to analyse the data collected, first a geometric clustering is performed to each tablecloth, extracting connections between the samples. Then, these connections are used to construct a global similarity matrix. Finally, a graph drawing algorithm is used to obtain a 2D consensus graphic, which reflects the global opinion of the panel by (1) positioning closer those samples that have been globally perceived as similar and (2) showing the strength of the connections between samples. The proposal is validated by performing four tasting sessions, with three types of panels tasting different wines, and by developing a new software to implement the proposed techniques. The results obtained show that the graphics provide similar positionings of the samples as the consensus maps obtained by multiple factor analysis (MFA), further providing extra information about connections between samples, not present in any previous method. The main conclusion is that the use of geometric techniques provides information complementary to MFA, and of a different type. Finally, the method proposed is computationally able to manage a significantly larger number of assessors than MFA, which can be useful for the comparison of pictures by a huge number of consumers, via the Internet.
|
Thus, several alternative methods have arisen in the last years, aiming to provide a fast sensory positioning of a set of products by assessors who are not necessarily trained. Skipping the need to train the panellists avoids having to wait a long time before obtaining results, as well as the need to agree on particular attributes, which may become difficult when working with experts like wine professionals or chefs. Introduced by @cite_2 @cite_10, Projective Mapping asks the assessors to position the presented samples on a two-dimensional space, usually a blank sheet of paper as tablecloth, following their own criteria: The more similar they perceive two samples, the closer they should position them, and vice versa. In those seminal works, the data were analysed by generalized procrustes analysis (GPA) and principal component analysis (PCA), using the RV coefficient to compare the method with conventional profiling.
|
{
"abstract": [
"Abstract This paper deals with the method of projective mapping and its use in sensory analysis. Projective mapping is a method which allows naive consumers to map products on a two dimension space, based on similarities and differences in the products. An experiment was carried out on seven blueberry soups, whereby sensory profiling was undertaken using a trained panel and projective mapping. Preference assessment was undertaken using a small group of consumers. Results indicated that the three replicates of the mapping exercise produced visually very similar maps, at least on the first two dimensions. However, it was found that consumers perceived the samples in somewhat different ways as high-lighted by RV coefficients. The consensus mapping dimensions were compared to those from the profile data, and it was apparent that the best similarity was found when comparing the first dimension, thus suggesting good agreement on the obvious aspects of the product. The internal preference map also revealed major product contrasts along this dimension with some weak evidence of segmentation. The overall conclusion of this paper raises further fundamental questions about consensus spaces with consumers, and the question of dimensionality of consumer perception as compared to trained panels.",
"Abstract Studies have indicated that profiling and (dis)similarity scaling yield different perceptual product maps. Conceptually, these two procedures are different. This paper looks at a third and alternative method of producing a two-dimensional, perceptual map utilizing a projective-type method whereby individual assessors themselves are required to place products on the space according to the similarities and differences they perceive. However, visual comparison of the final results provided by each assessor is difficult and, hence, generalized Procrustes analysis is applied to compare each assessor's map for similarity with the others. In this study it was found that the perceptual map derived from projective mapping was as similar to the map derived from profiling as from dissimilarity scaling. However, consistency over repeated trials was greater for projective mapping than for the other two methods. It is suggested that projective mapping could be a potentially useful technique for linking sensory analysis and consumer research data."
],
"cite_N": [
"@cite_10",
"@cite_2"
],
"mid": [
"1979883873",
"2065540379"
]
}
|
Testing SensoGraph, a geometric approach for fast sensory evaluation
|
Material and methods
Data collection
In order to validate this proposal, a total of four tasting sessions using Projective Mapping (Risvik et al., 1994; Pagès, 2005) were performed, with three types of panels tasting different wines. (B) Panel receiving one training session in Projective Mapping: This panel, composed of twelve assessors with experience in wine tasting, performed two sessions of Projective Mapping: a first session without any experience in the method, and a repetition. The same eight red wines were used both for the training and for the final test, all of them elaborated at the winery of the School of Agricultural Engineering of the University of Valladolid in Palencia (Spain) using cv. Tempranillo from Toro appellation (Spain) and the same vintage. These eight wines were different from those tasted by the previous panel. This panel was composed of students of the Enology degree at the University of Valladolid, who had studied three academic years of Enology, including a course in Sensory Analysis.
(C) Panel of habitual wine consumers tasting commercial wines: A final panel, composed of twenty-four habitual consumers of wine, performed one session of Projective Mapping. They tasted nine commercial wines, one of them duplicated. Seven of the wines used only one variety: Three of them were cv. Mencía, three more were cv. Tempranillo (one of them from Toro appellation, Spain), and another one was cv. Monastrell. The other two wines were a blend of varieties: The duplicated wine used mainly cv. Cabernet Franc, together with cv. Merlot, Garnacha, and Monastrell. The other wine was mainly cv. Tempranillo, blended with cv. Garnacha and Graciano.
For all the sessions, the number of samples followed the recommendations of Valentin et al. (2016). The samples were presented simultaneously to each assessor. The panellists were requested to position the wine samples on an A2 paper (60 × 40 cm) in such a way that two wine samples were placed close to each other if they seemed sensorially similar, and distant from one another if they seemed sensorially different, all according to each assessor's own criteria for what close or far mean.
In all the sessions, the samples were served as 25 mL aliquots in standardised wineglasses (ISO 3591, 1977), which were coded with 3-digit numbers, and all the samples were presented simultaneously using a randomized complete block design. The serving temperature was 14±1 • C. All these sensory evaluations were carried out at the Sensory Science Laboratory of the School of Agricultural Engineering, at the University of Valladolid, Palencia (Spain), in individual booths designed in accordance with ISO 8589 (2007).
Data analysis
The x- and y-coordinates of each sample on the paper were measured from the bottom-left corner of the sheet. These data were then stored in a table with S rows, one for each sample, and 2A columns, A being the number of assessors.
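For illustration, a short numpy sketch of this data layout; the values of S and A and the random coordinates are placeholders, with the 60 × 40 cm scale mimicking the A2 tablecloth:

```python
# A sketch of the S x 2A coordinate table and its per-assessor view.
import numpy as np

S, A = 8, 12                              # e.g., eight wines, twelve assessors
scale = np.array([60.0, 40.0] * A)        # (x, y) bounds repeated per assessor
table = np.random.rand(S, 2 * A) * scale  # S rows, 2A columns, as in the text

points = table.reshape(S, A, 2)           # points[s, a] = (x, y) of sample s by assessor a
third_assessor = points[:, 2, :]          # one assessor's tablecloth configuration
```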
Statistical techniques
On one hand, these data were analysed by statistical techniques with MFA, as proposed by Pagès (2005), using the R language (R Development Core Team, 2007) and the FactoMineR package. MFA has become a common choice for the analysis of Projective Mapping data, and it has been shown to be equal to or better than other models like individual differences scaling (INDSCAL) for estimating the consensus configuration (Naes et al., 2017). Finally, confidence ellipses were constructed using truncated total bootstrapping (Cadoret and Husson, 2013) with the SensoMineR package.
Geometric techniques
On the other hand, in order to analyse the data by geometric techniques, we have developed and applied the following method:
Step 1: Geometric clustering (Capoyleas et al., 1991) allows grouping data using basic operations from two-dimensional geometry, like drawing circles or segments. With the goal of analysing each tablecloth to extract connections between the samples, and after exploring a large number of alternatives (de Miguel et al., 2013), the Gabriel graph (Gabriel and Sokal, 1969) was chosen, because of its good behavior and its widely validated clustering abilities (Matula and Sokal, 1980; Urquhart, 1982; Choo et al., 2007).
For the construction of the Gabriel graph, two samples P, Q get connected if, and only if, there is no other sample inside the closed disk having the straight segment P − Q as diameter. Figure 1 shows how to construct a Gabriel graph. Figure 2 shows another example, with four tablecloths (first row) and their corresponding Gabriel graphs (second row). Recall that the assessors position the samples on the tablecloth without a common metric criterion, according to their own understanding of close and far. For an example, look at the two leftmost tablecloths in the top row of Figure 2. The square 1-2-3-4 shows different distances in the two tablecloths, with samples 1-2 being much closer in the second picture than in the first one. However, at a glance we would say that both tablecloths provide similar information, namely a group 1-2-3-4 together with the samples 5-6-7 getting further from that group.
This is the kind of information extracted by the Gabriel graph, which therefore leads to the same graph for those two cases. See the two leftmost pictures of the second row in Figure 2, which both show the same connections.
Step 2: A global similarity matrix (Abdi et al., 2007) was constructed. Each entry of the matrix stores, for a pair of samples P, Q, how many tablecloths show a connection P − Q after the clustering step (e.g., entry 1, 2 will equal the number of tablecloths in which the samples 1 and 2 are connected). Figure 2 illustrates (third row, left) the global similarity matrix from four tablecloths for which the clustering Gabriel graph has already been constructed (second row). For the seven samples shown there, the matrix of connection counts reads:
      1  2  3  4  5  6  7
  1   •  3  2  4  2  0  2
  2   3  •  3  1  1  0  0
  3   2  3  •  4  1  0  1
  4   4  1  4  •  1  1  0
  5   2  1  1  1  •  3  0
  6   0  0  0  1  3  •  4
  7   2  0  1  0  0  4  •
Figure 2: First row (Step 1): Four hypothetical tablecloths from Projective Mapping. Second row (Step 2): Geometric clustering obtained by the Gabriel graph. Third row, left (Step 3): Global similarity matrix of connection counts. For example, entry 1, 2 has value 3 because the samples 1 and 2 are connected in 3 of the tablecloths in the second row. Third row, right (Step 4): The resulting SensoGraph graphic from the Kamada-Kawai graph layout, where distances between samples represent the global similarity perceived. Two clear groups 1-2-3-4 and 5-6-7 appear, consistently with the tablecloths on top. In addition, connections between samples can be checked, with the connection forces represented by thickness and opacity. For example, the edge 1−2, having force 3, is thicker and more opaque than the edge 2−5, whose force is 1, but thinner and more transparent than the edge 1−4, of force 4.
This global similarity matrix can alternatively be seen as encoding a graph, in which entry i, j stores the weight of the connection between vertices i and j.
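To make Step 2 concrete, a minimal Python sketch of the counting follows; the edge-set input format is an assumption of this sketch:

```python
# Build the global similarity matrix from per-tablecloth Gabriel-graph edges,
# each given as an iterable of 0-based index pairs (i, j).
import numpy as np

def global_similarity(edge_sets, n_samples):
    M = np.zeros((n_samples, n_samples), dtype=int)
    for edges in edge_sets:        # one edge set per tablecloth
        for i, j in edges:
            M[i, j] += 1           # count this connection ...
            M[j, i] += 1           # ... and keep the matrix symmetric
    return M
```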
Step 3: A graph drawing algorithm (Eades et al., 2010) was applied to the graph encoded by the global similarity matrix. Graph drawing algorithms have been used in social and behavioral sciences as a geometric alternative (DeJordy et al., 2007) to non-metric multidimensional scaling (Chollet et al., 2014). Among the different kinds of graph drawing algorithms, the particular class of force-directed drawing algorithms (Fruchterman and Reingold, 1991; Hu, 2005) was chosen because it provides good results and is easy to understand.
In this class of algorithms, each entry P, Q of the global similarity matrix models the force of a spring which connects P − Q and pulls those samples together with that prescribed force. The particular algorithm chosen was the Kamada-Kawai algorithm, in which the resulting system of forces is allowed to evolve until an equilibrium position of the samples is reached. Technical details can be checked in the paper by Kamada and Kawai (1989), but for a better understanding of this third step, the reader can imagine that the samples are (1) pinned at arbitrary positions on a table, (2) joined by springs with the forces specified in the matrix, and (3) finally unpinned all at the same time so that they evolve to an equilibrium position. Figure 2 shows a graphical sketch of these three steps. The equilibrium position reached provides a consensus graphic, here named SensoGraph, which reflects the global opinion of the panel by positioning closer those samples that have been globally perceived as similar. In addition, the graphic shows the connections and represents their forces by the thickness and opacity of the corresponding segments (the actual values of the forces being attached as a matrix). This information allows one to know how similar or different two products have been perceived, playing the role of the confidence areas used by other methods in the literature (Cadoret and Husson, 2013).
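The following sketch obtains such a layout with networkx's Kamada-Kawai implementation rather than the authors' own code; since that routine interprets edge weights as target distances, the similarities are inverted first, which is a design choice of this sketch and not necessarily of the original implementation:

```python
# Force-directed layout of the global similarity matrix M via Kamada-Kawai.
import networkx as nx
import numpy as np

def sensograph_layout(M):
    D = np.zeros(M.shape, dtype=float)
    D[M > 0] = 1.0 / M[M > 0]              # strong connections -> short springs
    G = nx.from_numpy_array(D)             # edge attribute 'weight' holds D[i, j]
    return nx.kamada_kawai_layout(G, weight="weight")  # dict: node -> (x, y)
```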
Software
In order to perform the three steps detailed above, new software was implemented. For convenience, Microsoft Visual Studio together with the programming language C# was used to create an executable file for Windows, which allows the user to visually open the data spreadsheet and click a button to obtain the consensus graphic. This makes it possible to start using the software with a negligible learning curve.
The implementation of Step 1 above followed a standard scheme for the construction of the Gabriel graph (Gabriel and Sokal, 1969), computing first the Delaunay triangulation (de Berg et al., 2008) and then traversing its edges to check which of them fulfill the Gabriel graph defining condition (that there is no other sample inside the closed disk having that edge as diameter, as stated in Step 1). Note that this is an exact algorithm, and hence there are no parameters to be chosen.
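The same scheme can be sketched in Python with scipy's Delaunay triangulation; the original implementation is in C#, so this is a reconstruction of the stated algorithm, not the authors' code:

```python
# Gabriel graph via Delaunay: keep each Delaunay edge whose diametral
# closed disk contains no other sample (Gabriel edges are a subset of
# Delaunay edges, so only these need checking).
import numpy as np
from scipy.spatial import Delaunay

def gabriel_edges(points):
    pts = np.asarray(points, dtype=float)
    tri = Delaunay(pts)
    edges = {tuple(sorted((s[a], s[b])))
             for s in tri.simplices for a, b in ((0, 1), (1, 2), (0, 2))}
    gabriel = set()
    for i, j in edges:
        centre = (pts[i] + pts[j]) / 2.0
        r2 = np.sum((pts[i] - centre) ** 2)        # squared disk radius
        d2 = np.sum((pts - centre) ** 2, axis=1)   # squared distances to centre
        d2[[i, j]] = np.inf                        # the endpoints do not count
        if np.all(d2 > r2):                        # closed disk is empty
            gabriel.add((i, j))
    return gabriel
```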
Implementing Step 2 was straightforward, just needing to run through the Gabriel graphs obtained, updating the counters for the appearances of each edge, and storing the results as a matrix. Finally, for Step 3 the algorithm in the seminal paper by Kamada and Kawai (1989) was used. This algorithm does need the following parameter choices: the desirable length L of an edge in the outcome, for which the diameter of the tablecloths was used, as suggested in Eq. (3) of the reference, and a constant K = 100 used to define the strengths of the springs as in Eq. (4) of the reference, which determines how strongly the edges tend to the desirable length.
Finally, a maximum number C = 1000 of iterations and a threshold ε = 0.1 were chosen for the stopping condition of the algorithm. All these choices are rather standard, since our tests did not show large variability across different choices.
A video showing the software in use has been broadcast (Orden, 2018), and readers interested in the software can contact the corresponding author. Moreover, a Python version and an R package are planned for the future.
(A) Panel trained in QDA of wine: The data obtained by performing Projective Mapping with the panel trained in QDA of wine were processed both by MFA (Fig. 3, left) and by SensoGraph (Fig. 3, right). For MFA, the first two dimensions accounted for 66.53% of the explained variance. At first glance, the positionings provided by MFA and SensoGraph look similar. Going into details, both graphics show a clear group 2-3-4: the corresponding ellipses in MFA superimpose, meaning that the assessors did not perceive a significant difference among these three samples, while in SensoGraph connections among 2-3-4 have arisen in between 55% and 64% of the tablecloths. (Readers interested in the actual numbers can find them in Table 1.)
In addition, the graphic from MFA suggests a group 5-6-8, with non-empty intersection of their ellipses, and a further group 6-7. This connection 6-7 has appeared in 55% of the tablecloths in SensoGraph. Further, in SensoGraph the connections in the group 5-6-8 have appeared in between 27% and 45% of the tablecloths. On the contrary, the group 5-7-8 is more apparent, its connections having arisen in between 36% and 55% of the tablecloths. This is because the geometric clustering has joined sample 5 to sample 7 in more tablecloths, 55%, than to sample 6, only 27%. It is interesting to note that this is compatible with the confidence ellipses of samples 5, 6, and 7 in the MFA graphic.
(B) Panel receiving one training session in Projective Mapping: With the aim of studying how experience with the Projective Mapping methodology affects the results, MFA and SensoGraph were used to process both the data obtained from a first Projective Mapping session with panellists having experience in tasting wines, Figure 4, and the data the same panel generated in a second session, Figure 5. For MFA, the first two dimensions accounted respectively for 54.54% (Fig. 4, left) and 61.48% (Fig. 5, left) of the explained variance, reflecting the effect of the experience achieved. For this panel, the comparison between the MFA and SensoGraph positionings shows a higher coincidence when the percentage of explained variance is higher, i.e., in the second session. The graphics in Figure 4 are difficult to analyse even for a trained eye, since neither MFA nor SensoGraph shows clear groups. The MFA plot shows the ellipses of samples 2-3-4-5-7 superimposed, while all of the ellipses of the remaining samples 1, 6, and 8 do, in turn, superimpose with some of the ellipses in the previous group. In SensoGraph such a group of samples 2-3-4-5-7 does indeed appear, at the lower-right corner, with connections ranging from the 8% of tablecloths joining 3-4 to the 58% joining 4-7. Interestingly enough, SensoGraph allows us to distinguish the behavior of sample 7, which turns out to be strongly connected to samples 2, 3, 4, and 5 in between 42% and 58% of the tablecloths, from the behavior of sample 4, which is poorly connected with samples 2 and 3, since these connections arise, respectively, in only 25% and 8% of the tablecloths.
The situation for the repetition session is shown in Figure 5, where the same groups 2-3-5-8 and 4-6-7 are apparent for MFA (left) and SensoGraph (right), with sample 1 clearly isolated. In MFA the corresponding confidence ellipses do actually intersect, while in SensoGraph connections in the group 2-3-5-8 have appeared in between 33% and 58% of the tablecloths and those in the group 4-6-7 range between 42% and 58%. Here, it is interesting to note that sample 1 is better connected to the group 4-6-7, these connections appearing in between 25% and 42% of the tablecloths, than to the group 2-3-5-8, in between 8% and 33%. (See Table 2 for the global similarity matrices.)
(C) Panel of habitual wine consumers tasting commercial wines:
Again, the data were analyzed by MFA and SensoGraph, see Figure 6. For MFA, the first two dimensions accounted for 50.62% of the explained variance. The positionings provided by MFA and SensoGraph are similar, with samples 1-2 and 10 clearly separated from the others. Sample 10 used only cv. Monastrell and samples 1-2 correspond to wines elaborated with only cv. Tempranillo. It is interesting to observe that the pairs of samples 1-2 and 5-7 are quite similar in the MFA map, both according to the distances d(1, 2) and d(5, 7) between the two samples and according to their corresponding ellipses. However, the pair 5-7 appears farther apart in SensoGraph, as can be checked in the global similarity matrix (Table 3), where the connection 1-2 arises in 88% of the tablecloths, while the connection 5-7 arises in 50% of them. This is consistent with samples 1-2 being elaborated with only cv. Tempranillo, while samples 5-7 differ in the grape variety, one of them using only cv. Mencía and the other being a blend of cv. Tempranillo, Garnacha, and Graciano.
General discussion
As a summary of these four experiments, it can be observed that the positionings provided by SensoGraph are similar to those obtained by MFA.
Furthermore, the more trained the panel, the clearer and more similar are the groups in the graphics given by SensoGraph and MFA. This is consistent with the behavior previously observed for statistical techniques, since Liu et al. (2016) reported that, for Projective Mapping, conducting training on either the method or the product leads to more robust results. Actually, observing the percentages of explained variance leads to the conclusion that, the higher the total inertia, the more similar the positionings for MFA and SensoGraph.
Note that, for Step 3, it would be possible to use the Kamada-Kawai energy (Gansner et al., 2004), which is indeed analogous to the stress introduced by Kruskal (1964a,b), as an index of how well the graph drawing algorithm has drawn the data in the global similarity matrix. However, this would miss the effect of the geometric clustering in Step 1, for which an index of fit is not available.
In addition, it is interesting that the graphic for the SensoGraph method introduced in this paper provides not only the positions of the samples, but also a graphical representation of the forces of the connections, as well as a global similarity matrix. These connections and forces provide a better understanding of the interactions between groups, as already checked in different research fields (Beck et al., 2017; Conover et al., 2011; Junghanns et al., 2015). Further, these connections and forces help to calibrate the significance of the positioning (Cadoret and Husson, 2013), with the help of the global similarity matrices.
For example, they make it possible to contrast the distances in the map with the information of the tablecloths, as discussed for the last panel above. It is also interesting that the matrix in Table 1 has only one entry equal to 0, while those in Tables 2 and 3 have no zero entries. This means that almost all samples have been connected at least once. Moreover, the connections appearing in the maximum number of tablecloths do so, respectively, in 58%, 64%, and 88% of the tablecloths. These two observations show a large amount of individual variation in the data, which deserves further study.
Finally, concerning the usability of the SensoGraph method, on one hand the Projective Mapping methodology for data collection has already been recognized as natural and intuitive for the assessors (Ares et al., 2011; Varela and Ares, 2012; Carrillo et al., 2012). On the other hand, the geometric techniques used for data analysis have been explained using basic geometric objects in 2D, aiming to be readily understood by researchers without any previous experience in the method.
Computational efficiency
Furthermore, all the geometric techniques used in this work are known to be extremely efficient from a computational point of view (Cardinal et al., 2009). In the following, the efficiency of the previous methods is analysed, using the standard big-O notation from algorithmic complexity. Readers unfamiliar with this notion may consult, e.g., the book by Arora and Barak (2009). For the sake of an easier reading, a simplified explanation is also provided after the analysis.
First, the time complexity of SensoGraph is in O(AS log S + AS + S²), where S is the number of samples and A the number of assessors, as before. Each summand comes from one of the three steps detailed in Subsection 2.2.2, as follows: From the first step, the computation, for each of the A tablecloths, of the Delaunay triangulation (de Berg et al., 2008) of S samples, in O(S log S), together with checking which of its edges actually fulfill the condition to appear in the Gabriel graph. From the second step, counting the number of appearances of each of the O(S) edges over the A tablecloths. From the third step, the algorithm by Kamada and Kawai (1989) takes O(S²) per each of the constant number of iterations stated in Subsection 2.3.
With the number S of samples bounded in the order of tens, it is the number of assessors that can grow to the order of hundreds or beyond. Hence, the complexity is dominated by the number A of assessors, and therefore the time complexity of SensoGraph is in O(A), i.e., linear in the number of assessors. In contrast, the time complexity of MFA is in O(A³), i.e., cubic in the number of assessors, since it needs two rounds of PCA (Abdi et al., 2013), whose time complexity is cubic (Feng et al., 2000).
In short, these two complexities can be explained as follows: Multiplying the number of assessors by a factor X, the number of operations needed by SensoGraph gets multiplied by X as well, while the number needed by MFA gets multiplied by X³. For example, doubling the number of assessors (i.e., X = 2) doubles the number of operations needed by SensoGraph, while that of MFA gets multiplied by 2³ = 8. Likewise, if the number of assessors gets multiplied by 10, so does the number of operations needed by SensoGraph, while the number of operations needed by MFA gets multiplied by 10³ = 1000. The difference between these two growth rates is small for a number of assessors around 100, but apparent already for 200 and crucial when intending to work with a larger number of assessors, see Figure 7. Working with a greater number of assessors is likely to become more relevant, since sensory analysis moves towards the use of untrained consumers to evaluate products (Valentin et al., 2016). Thanks to its linear time complexity, SensoGraph would be able to handle even millions of tablecloths (de Berg et al., 2008), and this opens an interesting door towards massive sensory analysis, using the Internet to collect large datasets (Beck et al., 2017; Conover et al., 2011; Junghanns et al., 2015). This feature can be particularly suitable for the comparison of pictures like, e.g., the one performed by Mielby et al. (2014). The use of photographs as surrogates of samples has been suggested by Maughan et al. (2016) after proper validation of the photographs.
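To illustrate the linear growth, the following sketch (not taken from the SensoGraph software; the function names and the use of SciPy are our own choices) builds the Gabriel-graph clustering for A simulated tablecloths and times it; the total time grows roughly proportionally to A, since the per-tablecloth work does not depend on the panel size.

# Timing sketch (illustrative only, not the SensoGraph implementation):
# the work per tablecloth is independent of A, so total time is linear in A.
import time
import numpy as np
from scipy.spatial import Delaunay

def gabriel_edges(points):
    """Keep a Delaunay edge (p, q) iff no other point lies inside the
    closed disk having the segment p-q as diameter (Gabriel condition)."""
    tri = Delaunay(points)
    edges = {tuple(sorted((s[i], s[j])))
             for s in tri.simplices for i in range(3) for j in range(i + 1, 3)}
    kept = set()
    for p, q in edges:
        center = (points[p] + points[q]) / 2.0
        radius2 = np.sum((points[p] - center) ** 2)
        rest = np.delete(np.arange(len(points)), [p, q])
        if np.all(np.sum((points[rest] - center) ** 2, axis=1) > radius2):
            kept.add((p, q))
    return kept

S = 8  # samples per tablecloth
for A in (100, 200, 400, 800):  # panel sizes
    cloths = [np.random.rand(S, 2) for _ in range(A)]
    t0 = time.perf_counter()
    for cloth in cloths:
        gabriel_edges(cloth)
    print(A, round(time.perf_counter() - t0, 3))  # roughly doubles with A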
Conclusions
The main conclusion is that the use of geometric techniques can be an interesting complement to the use of statistics. SensoGraph does not aim to replace the use of statistics for the analysis of Projective Mapping data, but to provide an additional point of view for an enriched vision.
The results obtained by SensoGraph are comparable to those given by the consensus maps obtained by MFA, further providing information about the connections between samples. This extra information, not provided by any of the previous methods in the literature, helps towards a better understanding of the relations inside and between groups.
In addition, we obtain a global similarity matrix storing the information about how many tablecloths show a connection between two samples. This is useful, for instance, when in the MFA map the distance d(P₁, Q₁) between a pair of samples P₁, Q₁ is very similar to the distance d(P₂, Q₂) between a different pair of samples P₂, Q₂. Comparing the two entries in the global similarity matrix makes it possible to check whether the connections P₁−Q₁ and P₂−Q₂ do actually arise in a similar number of tablecloths or not.
Finally, the time complexity of SensoGraph is significantly lower than that of MFA. This makes it possible to efficiently manage a number of tablecloths several orders of magnitude above the one handled by MFA. This feature is of particular interest given the increasing importance of consumers for the evaluation of existing and new products, opening a door to the analysis of massive sensory data. A good example is the comparison of pictures as surrogates of samples, via the Internet, by a huge number of assessors.
Acknowledgements
The authors want to gratefully thank professor Ferran Hurtado, in memoriam, for suggesting that proximity graphs could be used for the analysis of tablecloths. They also want to thank David N. de Miguel and Lucas Fox for implementing the methods used in software. We are thankful for the very helpful comments and input from two anonymous reviewers.
All the authors have been supported by the University of Alcalá grant CCGP2017-EXP/015. In addition, David Orden has been partially supported by MINECO Projects MTM2014-54207 and MTM2017-83750-P, as well as by H2020-MSCA-RISE project 734922 -CONNECT.
Tables
• 4 3 1 6 3 1 2
4 • 7 6 1 0 4 4
3 7 • 7 2 2 3 4
1 6 7 • 4 7 3 1
6 1 2 4 • 3 6 5
3 0 2 7 3 • 6 4
1 4 3 3 6 6 • 4
2 4 4 1 5 4 4 •
Table 1: Global similarity matrix for Figure 3.
• 2 3 5 6 3 2 4
2 • 5 3 2 4 6 3
3 5 • 1 5 5 6 5
5 3 1 • 6 3 7 5
6 2 5 6 • 2 5 2
3 4 5 3 2 • 3 3
2 6 6 7 5 3 • 3
4 3 5 5 2 3 3 •
• 3 1 5 3 4 3 4
3 • 5 5 7 4 6 4
1 5 • 5 7 1 3 6
5 5 5 • 1 5 7 1
3 7 7 1 • 2 4 7
4 4 1 5 2 • 7 5
3 6 3 7 4 7 • 1
4 4 6 1 7 5 1 •
Table 2: Global similarity matrices for Figure 4 (top) and Figure 5 (bottom).

• 10 12 10 11 7 5 8 9 9
10 • 8 7 7 4 7 7 8 9
12 8 • 8 7 2 4 4 10 10
10 7 8 • 10 8 11 5 10 7
11 7 7 10 • 8 3 4 9 6
7 4 2 8 8 • ? ? 6 8
5 7 4 11 3 ? • ? 2 4
8 7 4 5 4 ? ? • 10 9
9 8 10 10 9 6 2 10 • 9
9 9 10 7 6 8 4 9 9 •
Table 3: Global similarity matrix for Figure 6 (entries marked with ? are missing).
| 4,492 |
1809.06911
|
2889976821
|
Abstract This paper introduces SensoGraph, a novel approach for fast sensory evaluation using two-dimensional geometric techniques. In the tasting sessions, the assessors follow their own criteria to place samples on a tablecloth, according to the similarity between samples. In order to analyse the data collected, first a geometric clustering is performed to each tablecloth, extracting connections between the samples. Then, these connections are used to construct a global similarity matrix. Finally, a graph drawing algorithm is used to obtain a 2D consensus graphic, which reflects the global opinion of the panel by (1) positioning closer those samples that have been globally perceived as similar and (2) showing the strength of the connections between samples. The proposal is validated by performing four tasting sessions, with three types of panels tasting different wines, and by developing a new software to implement the proposed techniques. The results obtained show that the graphics provide similar positionings of the samples as the consensus maps obtained by multiple factor analysis (MFA), further providing extra information about connections between samples, not present in any previous method. The main conclusion is that the use of geometric techniques provides information complementary to MFA, and of a different type. Finally, the method proposed is computationally able to manage a significantly larger number of assessors than MFA, which can be useful for the comparison of pictures by a huge number of consumers, via the Internet.
|
Projective Mapping has been successfully used with many different kinds of products, among which the application to wine stands out. Other examples of beverages analysed by these methods are beers, citrus juices, drinking waters, high-alcohol products, hot beverages, lemon iced teas, powdered juices, or smoothies. The book by @cite_6 details more products to which consumer-based descriptive methodologies have been applied.
|
{
"abstract": [
"\"Sensory characterization is one of the most powerful, sophisticated and extensively applied tools in sensory science. This book focuses on sensory characterization of food and non-food products, providing an overview of classical and novel alternative methodologies. A complete description of the methodologies would be provided. Each description would be accompanied by detailed information for their implementation, discussion of examples of applications and case-studies. The implementation of the majority of the methodologies would be performed in the statistical free software R, which would make the book very useful for people non-familiar with complex statistical software\"--"
],
"cite_N": [
"@cite_6"
],
"mid": [
"2506076252"
]
}
|
Testing SensoGraph, a geometric approach for fast sensory evaluation
|
Material and methods
Data collection
In order to validate this proposal, a total of four tasting sessions using Projective Mapping (Risvik et al., 1994; Pagès, 2005) have been performed, with three types of panels tasting different wines. One of the panels, composed of twelve assessors with experience in wine tasting, performed two sessions of Projective Mapping: a first session without any experience in the method and a repetition. The same eight red wines were used both for the training and for the final test, all of them elaborated at the winery of the School of Agricultural Engineering of the University of Valladolid in Palencia (Spain) using cv. Tempranillo from Toro appellation (Spain) and the same vintage. These eight wines were different from those tasted by the previous panel. This panel was composed of students of the Enology degree at the University of Valladolid, who had studied three academic years of Enology, including a course in Sensory Analysis.
(C) Panel of habitual wine consumers tasting commercial wines: A final panel, composed of twenty-four habitual consumers of wine, performed one session of Projective Mapping. They tasted nine commercial wines, one of them duplicated. Seven of the wines used only one variety: Three of them were cv. Mencía, three more were cv. Tempranillo (one of them from Toro appellation, Spain), and another one was cv. Monastrell. The other two wines were a blend of varieties: The duplicated wine used mainly cv. Cabernet Franc, together with cv. Merlot, Garnacha, and Monastrell. The other wine was mainly cv. Tempranillo, blended with cv. Garnacha and Graciano.
For all the sessions, the number of samples followed the recommendations of Valentin et al. (2016). The samples were simultaneously presented to each assessor. The panellists were requested to position the wine samples on an A2 paper (60 × 40 cm), in such a way that two wine samples were to be placed close to each other if they seemed sensorially similar, and that two wines were to be distant from one another if they seemed sensorially different. All of this according to the assessor's own criteria for what close or far mean.
In all the sessions, the samples were served as 25 mL aliquots in standardised wineglasses (ISO 3591, 1977), which were coded with 3-digit numbers, and all the samples were presented simultaneously using a randomized complete block design. The serving temperature was 14 ± 1 °C. All these sensory evaluations were carried out at the Sensory Science Laboratory of the School of Agricultural Engineering, at the University of Valladolid, Palencia (Spain), in individual booths designed in accordance with ISO 8589 (2007).
Data analysis
The x- and y-coordinates of each sample on the paper were measured from the bottom-left corner of the sheet. These data were then stored in a table with S rows, one for each sample, and 2A columns, with A being the number of assessors.
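As a minimal illustration (the file name and delimiter are our assumptions, not part of the original protocol), the stored table can be split into one S × 2 array of positions per assessor as follows:

# Minimal sketch: split the S x 2A coordinate table into one (S x 2)
# array of positions per assessor.
import numpy as np

data = np.loadtxt("tablecloths.csv", delimiter=",")  # S rows, 2A columns
S, twoA = data.shape
A = twoA // 2
# columns (2k, 2k+1) hold the x- and y-coordinates given by assessor k
tablecloths = [data[:, 2 * k:2 * k + 2] for k in range(A)]
assert all(cloth.shape == (S, 2) for cloth in tablecloths)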
Statistical techniques
On one hand, these data were analysed by statistical techniques with MFA, as proposed by Pagès (2005), using the R language (R Development Core Team, 2007) and the FactoMineR package. MFA has become a common choice for the analysis of Projective Mapping data, and it has been proved to be equal to or better than other models, like individual differences scaling (INDSCAL), for estimating the consensus configuration (Naes et al., 2017). Finally, confidence ellipses were constructed using truncated total bootstrapping (Cadoret and Husson, 2013) with the SensoMineR package.
Geometric techniques
On the other hand, in order to analyse the data by geometric techniques, we have developed and applied the following method:
Step 1: Geometric clustering (Capoyleas et al., 1991) makes it possible to group data using basic operations from two-dimensional geometry, like drawing circles or segments. With the goal of analyzing each tablecloth to extract connections between the samples, and after exploring a large number of alternatives (de Miguel et al., 2013), the Gabriel graph (Gabriel and Sokal, 1969) was chosen because of its good behavior, its clustering abilities having been widely checked (Matula and Sokal, 1980; Urquhart, 1982; Choo et al., 2007).
For the construction of the Gabriel graph, two samples P, Q get connected if, and only if, there is no other sample inside the closed disk having the straight segment P−Q as diameter. Figure 1 shows how to construct a Gabriel graph, and Figure 2 shows another example, with four tablecloths (first row) and their corresponding Gabriel graphs (second row). Recall that the assessors position the samples on the tablecloth without a common metric criterion, according to their own understanding of close and far. For example, look at the two leftmost tablecloths in the top row of Figure 2. The square 1-2-3-4 shows different distances in the two tablecloths, with samples 1-2 being much closer in the second picture than in the first one. However, at a glance we would say that both tablecloths provide similar information, namely a group 1-2-3-4 together with the samples 5-6-7 getting further from that group.
This is the kind of information extracted by the Gabriel graph, which therefore leads to the same graph for those two cases. See the two leftmost pictures of the second row in Figure 2, which both show the same connections.
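A direct translation of this definition into code may help fix ideas. The following sketch is our own illustration (O(S³) per tablecloth; the actual implementation is described in the Software subsection below): two samples are connected exactly when no third sample lies in the closed disk having their segment as diameter.

# Brute-force Gabriel graph from the definition: P and Q are connected iff
# |M - R|^2 > |P - Q|^2 / 4 for every other sample R, where M is the
# midpoint of P and Q (closed disk, so boundary points also block an edge).
import numpy as np

def gabriel_graph(points):
    S = len(points)
    edges = []
    for p in range(S):
        for q in range(p + 1, S):
            mid = (points[p] + points[q]) / 2.0
            r2 = np.sum((points[p] - points[q]) ** 2) / 4.0
            blocked = any(np.sum((points[r] - mid) ** 2) <= r2
                          for r in range(S) if r not in (p, q))
            if not blocked:
                edges.append((p, q))
    return edges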
Step 2: A global similarity matrix (Abdi et al., 2007) was constructed. Each entry of the matrix stores, for a pair of samples P, Q, how many tablecloths show a connection P−Q after the clustering step (e.g., entry 1, 2 will equal the number of tablecloths in which samples 1 and 2 are connected). Figure 2 illustrates (third row, left) the global similarity matrix from four tablecloths for which the clustering Gabriel graph has already been constructed (second row). For those four tablecloths, the matrix is:

• 3 2 4 2 0 2
3 • 3 1 1 0 0
2 3 • 4 1 0 1
4 1 4 • 1 1 0
2 1 1 1 • 3 0
0 0 0 1 3 • 4
2 0 1 0 0 4 •

Figure 2: First row: Four hypothetical tablecloths from Projective Mapping. Second row: Geometric clustering obtained by the Gabriel graph. Third row, left: Global similarity matrix. For example, entry 1, 2 has value 3 because samples 1 and 2 are connected in 3 of the tablecloths in the second row. Third row, right: The resulting SensoGraph graphic, where distances between samples represent the global similarity perceived. Two clear groups 1-2-3-4 and 5-6-7 appear, consistently with the tablecloths on top. In addition, connections between samples can be checked, with the connection forces represented by thickness and opacity. For example, the edge 1−2, having force 3, is thicker and more opaque than the edge 2−5, whose force is 1, but thinner and more transparent than the edge 1−4, of force 4.

This global similarity matrix can alternatively be seen as encoding a graph, in which entry i, j stores the weight of the connection between vertices i and j.
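A sketch of this counting step (function names are ours) could look as follows, taking one edge list per tablecloth, e.g., as produced by the Gabriel-graph sketch above:

# Step 2 sketch: count, for each pair of samples, in how many tablecloths
# the geometric clustering produced a connection between them.
import numpy as np

def global_similarity(edge_lists, n_samples):
    M = np.zeros((n_samples, n_samples), dtype=int)
    for edges in edge_lists:        # one edge list per tablecloth
        for p, q in edges:
            M[p, q] += 1
            M[q, p] += 1            # the matrix is symmetric
    return M

# e.g. edge_lists = [gabriel_graph(cloth) for cloth in tablecloths]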
Step 3: A graph drawing algorithm (Eades et al., 2010) was applied to the graph encoded by the global similarity matrix. Graph drawing algorithms have been used in social and behavioral sciences as a geometric alternative (DeJordy et al., 2007) to non-metric multidimensional scaling (Chollet et al., 2014). Among the different kinds of graph drawing algorithms, the particular class of force-directed drawing algorithms (Fruchterman and Reingold, 1991; Hu, 2005) has been chosen, because it provides good results and is easy to understand.
In this class of algorithms, each entry P, Q of the global similarity matrix models the force of a spring, which connects P−Q and pulls those samples together with that prescribed force. The particular algorithm chosen has been the Kamada-Kawai algorithm, where the resulting system of forces is let evolve until an equilibrium position of the samples is reached. Technical details can be checked in the paper by Kamada and Kawai (1989), but for a better understanding of this third step, the reader can imagine that the samples are (1) pinned at arbitrary positions on a table, (2) joined by springs with the forces specified in the matrix, and (3) finally unpinned all at the same time, so that they evolve to an equilibrium position. Figure 2 shows a graphical sketch of these three steps. The equilibrium position reached provides a consensus graphic, here named SensoGraph, which reflects the global opinion of the panel by positioning closer those samples that have been globally perceived as similar. In addition, the graphic shows the connections and represents their forces by the thickness and opacity of the corresponding segments (the actual values of the forces being attached as a matrix). This information allows one to know how similar or different two products have been perceived, playing the role of the confidence areas used by other methods in the literature (Cadoret and Husson, 2013).
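For readers who want to reproduce this step without implementing the algorithm themselves, the following sketch uses the Kamada-Kawai layout shipped with the networkx library. Note that networkx interprets edge weights as target distances, so the inverse of the connection force is used here; this transform is our choice and not necessarily the one of the original software.

# Step 3 sketch using networkx's Kamada-Kawai layout.
import networkx as nx

def sensograph_layout(M):
    G = nx.Graph()
    S = M.shape[0]
    G.add_nodes_from(range(S))
    for p in range(S):
        for q in range(p + 1, S):
            if M[p, q] > 0:
                # stronger connections should end up shorter, so use the
                # inverse of the count as the target distance
                G.add_edge(p, q, distance=1.0 / M[p, q])
    return nx.kamada_kawai_layout(G, weight="distance")

# pos = sensograph_layout(global_similarity(edge_lists, S))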
Software
In order to perform the three steps detailed above, a new software was implemented. For convenience, Microsoft Visual Studio together with the programming language C# were used to create an executable file for Windows, which makes it possible to visually open the data spreadsheet and click a button to obtain the consensus graphic. This allows users to start using the software with a negligible learning curve.
The implementation of Step 1 above followed a standard scheme for the construction of the Gabriel graph (Gabriel and Sokal, 1969), computing first the Delaunay triangulation (de Berg et al., 2008) and then traversing its edges to check which of them fulfill the Gabriel graph defining condition (that there is no other sample inside the closed disk having that edge as diameter, as stated in Step 1). Note that this is an exact algorithm, and hence there are no parameters to be chosen.
Implementing Step 2 was straightforward, just needing to run through the Gabriel graphs obtained, updating the counters for the appearances of each edge, and storing the results as a matrix. Finally, for Step 3 the algorithm in the seminal paper by Kamada and Kawai (1989) was used. This algorithm does need the following choices of parameters: the desirable length L of an edge in the outcome, for which the diameter of the tablecloths was used, as suggested in Eq. (3) of the reference, and a constant K = 100 used to define the strengths of the springs as in Eq. (4) of the reference, which determines how strongly the edges tend to the desirable length.
Finally, a maximum number C = 1000 of iterations and a threshold ε = 0.1 were chosen for the stopping condition of the algorithm. All these choices are rather standard; our tests showed little variability across different choices.
A video showing the software in use is available (Orden, 2018), and readers interested in the software can contact the corresponding author. Moreover, a Python version and an R package are planned for the future.
(A) Panel trained in QDA of wine: The data obtained by performing Projective Mapping with the panel trained in QDA of wine were processed both by MFA (Fig. 3, left) and by SensoGraph (Fig. 3, right). For MFA, the first two dimensions accounted for 66.53% of the explained variance. At first glance, the positionings provided by MFA and SensoGraph look similar. Going into detail, both graphics show a clear group 2-3-4: the corresponding ellipses in MFA superimpose, meaning that the assessors did not perceive a significant difference among these three samples, while in SensoGraph connections among 2-3-4 have arisen in between 55% and 64% of the tablecloths. (Readers interested in the actual numbers can find them in Table 1.)
In addition, the graphic from MFA suggests a group 5-6-8, with non-empty intersection of their ellipses, and a further group 6-7. This connection 6-7 has appeared in 55% of the tablecloths in SensoGraph. Further, in SensoGraph the connections in the group 5-6-8 have appeared in between 27% and 45% of the tablecloths. In contrast, the group 5-7-8 is more apparent, its connections having arisen in between 36% and 55% of the tablecloths. This is because the geometric clustering has joined sample 5 to sample 7 in more tablecloths, 55%, than to sample 6, only 27%. It is interesting to note that this is compatible with the confidence ellipses of samples 5, 6, and 7 in the MFA graphic.
(B) Panel receiving one training session in Projective Mapping: With the aim of studying how experience with the Projective Mapping methodology affects the results, MFA and SensoGraph were used to process both the data obtained from a first Projective Mapping session with panellists having experience in tasting wines (Figure 4) and the data the same panel generated in a second session (Figure 5). For MFA, the first two dimensions accounted respectively for 54.54% (Fig. 4, left) and 61.48% (Fig. 5, left) of the explained variance, reflecting the effect of the experience achieved. For this panel, the comparison between MFA and SensoGraph positionings shows a higher coincidence when the percentage of explained variance is higher, i.e., in the second session. The graphics in Figure 4 are difficult to analyse even for a trained eye, since neither MFA nor SensoGraph shows clear groups. The MFA plot shows the ellipses of samples 2-3-4-5-7 superimposed, while all of the ellipses of the remaining samples 1, 6, and 8 do, in turn, superimpose with some of the ellipses in the previous group. In SensoGraph such a group of samples 2-3-4-5-7 does indeed appear, at the lower-right corner, with connections ranging from the 8% of tablecloths joining 3-4 to the 58% joining 4-7. Interestingly enough, SensoGraph makes it possible to distinguish the behavior of sample 7, which turns out to be strongly connected to samples 2, 3, 4, and 5 in between 42% and 58% of the tablecloths, from the behavior of sample 4, which is poorly connected with samples 2 and 3, these connections arising, respectively, in only 25% and 8% of the tablecloths.
The situation for the repetition session is shown in Figure 5, where the same groups 2-3-5-8 and 4-6-7 are apparent for MFA (left) and SensoGraph (right), with sample 1 clearly isolated. In MFA the corresponding confidence ellipses do actually intersect, while in SensoGraph connections in the group 2-3-5-8 have appeared in between 33% and 58% of the tablecloths and those in the group 4-6-7 range between 42% and 58%. Here, it is interesting to note that sample 1 is better connected to the group 4-6-7, these connections appearing in between 25% and 42% of the tablecloths, than to the group 2-3-5-8, in between 8% and 33%. (See Table 2 for the global similarity matrices.)
(C) Panel of habitual wine consumers tasting commercial wines:
Again, the data were analyzed by MFA and SensoGraph, see Figure 6. For MFA, the first two dimensions accounted for 50.62% of the explained variance. The positionings provided by MFA and SensoGraph are similar, with samples 1-2 and 10 clearly separated from the others. Sample 10 used only cv. Monastrell and samples 1-2 correspond to wines elaborated with only cv. Tempranillo. It is interesting to observe that the pairs of samples 1-2 and 5-7 are quite similar in the MFA map, both according to the distances d(1, 2) and d(5, 7) between the two samples and according to their corresponding ellipses. However, the pair 5-7 appears farther apart in SensoGraph, as can be checked in the global similarity matrix (Table 3), where the connection 1-2 arises in 88% of the tablecloths, while the connection 5-7 arises in 50% of them. This is consistent with samples 1-2 being elaborated with only cv. Tempranillo, while samples 5-7 differ in the grape variety, one of them using only cv. Mencía and the other being a blend of cv. Tempranillo, Garnacha, and Graciano.
General discussion
As a summary of these four experiments, it can be observed that the positionings provided by SensoGraph are similar to those obtained by MFA.
Furthermore, the more trained the panel, the clearer and more similar are the groups in the graphics given by SensoGraph and MFA. This is consistent with the behavior previously observed for statistical techniques, since Liu et al. (2016) reported that, for Projective Mapping, conducting training on either the method or the product leads to more robust results. Actually, observing the percentages of explained variance leads to the conclusion that, the higher the total inertia, the more similar the positionings for MFA and SensoGraph.
Note that, for Step 3, it would be possible to use the Kamada-Kawai energy (Gansner et al., 2004), which is indeed analogous to the stress introduced by Kruskal (1964a,b), as an index of how well the graph drawing algorithm has drawn the data in the global similarity matrix. However, this would miss the effect of the geometric clustering in Step 1, for which an index of fit is not available.
In addition, it is interesting that the graphic for the SensoGraph method introduced in this paper provides not only the positions of the samples, but also a graphical representation of the forces of the connections, as well as a global similarity matrix. These connections and forces provide a better understanding of the interactions between groups, as already checked in different research fields (Beck et al., 2017; Conover et al., 2011; Junghanns et al., 2015). Further, these connections and forces help to calibrate the significance of the positioning (Cadoret and Husson, 2013), with the help of the global similarity matrices.
For example, they make it possible to contrast the distances in the map with the information of the tablecloths, as discussed for the last panel above. It is also interesting that the matrix in Table 1 has only one entry equal to 0, while those in Tables 2 and 3 have no zero entries. This means that almost all samples have been connected at least once. Moreover, the connections appearing in the maximum number of tablecloths do so, respectively, in 58%, 64%, and 88% of the tablecloths. These two observations show a large amount of individual variation in the data, which deserves further study.
Finally, concerning the usability of the SensoGraph method, on one hand the Projective Mapping methodology for data collection has already been recognized as natural and intuitive for the assessors (Ares et al., 2011; Varela and Ares, 2012; Carrillo et al., 2012). On the other hand, the geometric techniques used for data analysis have been explained using basic geometric objects in 2D, aiming to be readily understood by researchers without any previous experience in the method.
Computational efficiency
Furthermore, all the geometric techniques used in this work are known to be extremely efficient from a computational point of view (Cardinal et al., 2009). In the following, the efficiency of the previous methods is analysed, using the standard big-O notation from algorithmic complexity. Readers unfamiliar with this notion may consult, e.g., the book by Arora and Barak (2009). For the sake of an easier reading, a simplified explanation is also provided after the analysis.
First, the time complexity of SensoGraph is in O(AS log S + AS + S²), where S is the number of samples and A the number of assessors, as before. Each summand comes from one of the three steps detailed in Subsection 2.2.2, as follows: From the first step, the computation, for each of the A tablecloths, of the Delaunay triangulation (de Berg et al., 2008) of S samples, in O(S log S), together with checking which of its edges actually fulfill the condition to appear in the Gabriel graph. From the second step, counting the number of appearances of each of the O(S) edges over the A tablecloths. From the third step, the algorithm by Kamada and Kawai (1989) takes O(S²) per each of the constant number of iterations stated in Subsection 2.3.
With the number S of samples bounded in the order of tens, it is the number of assessors that can grow to the order of hundreds or beyond. Hence, the complexity is dominated by the number A of assessors, and therefore the time complexity of SensoGraph is in O(A), i.e., linear in the number of assessors. In contrast, the time complexity of MFA is in O(A³), i.e., cubic in the number of assessors, since it needs two rounds of PCA (Abdi et al., 2013), whose time complexity is cubic (Feng et al., 2000).
In short, these two complexities can be explained as follows: Multiplying the number of assessors by a factor X, the number of operations needed by SensoGraph gets multiplied by X as well, while the number needed by MFA gets multiplied by X³. For example, doubling the number of assessors (i.e., X = 2) doubles the number of operations needed by SensoGraph, while that of MFA gets multiplied by 2³ = 8. Likewise, if the number of assessors gets multiplied by 10, so does the number of operations needed by SensoGraph, while the number of operations needed by MFA gets multiplied by 10³ = 1000. The difference between these two growth rates is small for a number of assessors around 100, but apparent already for 200 and crucial when intending to work with a larger number of assessors, see Figure 7. Working with a greater number of assessors is likely to become more relevant, since sensory analysis moves towards the use of untrained consumers to evaluate products (Valentin et al., 2016). Thanks to its linear time complexity, SensoGraph would be able to handle even millions of tablecloths (de Berg et al., 2008), and this opens an interesting door towards massive sensory analysis, using the Internet to collect large datasets (Beck et al., 2017; Conover et al., 2011; Junghanns et al., 2015). This feature can be particularly suitable for the comparison of pictures like, e.g., the one performed by Mielby et al. (2014). The use of photographs as surrogates of samples has been suggested by Maughan et al. (2016) after proper validation of the photographs.
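To illustrate the linear growth, the following sketch (not taken from the SensoGraph software; the function names and the use of SciPy are our own choices) builds the Gabriel-graph clustering for A simulated tablecloths and times it; the total time grows roughly proportionally to A, since the per-tablecloth work does not depend on the panel size.

# Timing sketch (illustrative only, not the SensoGraph implementation):
# the work per tablecloth is independent of A, so total time is linear in A.
import time
import numpy as np
from scipy.spatial import Delaunay

def gabriel_edges(points):
    """Keep a Delaunay edge (p, q) iff no other point lies inside the
    closed disk having the segment p-q as diameter (Gabriel condition)."""
    tri = Delaunay(points)
    edges = {tuple(sorted((s[i], s[j])))
             for s in tri.simplices for i in range(3) for j in range(i + 1, 3)}
    kept = set()
    for p, q in edges:
        center = (points[p] + points[q]) / 2.0
        radius2 = np.sum((points[p] - center) ** 2)
        rest = np.delete(np.arange(len(points)), [p, q])
        if np.all(np.sum((points[rest] - center) ** 2, axis=1) > radius2):
            kept.add((p, q))
    return kept

S = 8  # samples per tablecloth
for A in (100, 200, 400, 800):  # panel sizes
    cloths = [np.random.rand(S, 2) for _ in range(A)]
    t0 = time.perf_counter()
    for cloth in cloths:
        gabriel_edges(cloth)
    print(A, round(time.perf_counter() - t0, 3))  # roughly doubles with A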
Conclusions
The main conclusion is that the use of geometric techniques can be an interesting complement to the use of statistics. SensoGraph does not aim to replace the use of statistics for the analysis of Projective Mapping data, but to provide an additional point of view for an enriched vision.
The results obtained by SensoGraph are comparable to those given by the consensus maps obtained by MFA, further providing information about the connections between samples. This extra information, not provided by any of the previous methods in the literature, helps towards a better understanding of the relations inside and between groups.
In addition, we obtain a global similarity matrix storing the information about how many tablecloths show a connection between two samples. This is useful, for instance, when in the MFA map the distance d(P₁, Q₁) between a pair of samples P₁, Q₁ is very similar to the distance d(P₂, Q₂) between a different pair of samples P₂, Q₂. Comparing the two entries in the global similarity matrix makes it possible to check whether the connections P₁−Q₁ and P₂−Q₂ do actually arise in a similar number of tablecloths or not.
Finally, the time complexity of SensoGraph is significantly lower than that of MFA. This makes it possible to efficiently manage a number of tablecloths several orders of magnitude above the one handled by MFA. This feature is of particular interest given the increasing importance of consumers for the evaluation of existing and new products, opening a door to the analysis of massive sensory data. A good example is the comparison of pictures as surrogates of samples, via the Internet, by a huge number of assessors.
Acknowledgements
The authors want to gratefully thank professor Ferran Hurtado, in memoriam, for suggesting that proximity graphs could be used for the analysis of tablecloths. They also want to thank David N. de Miguel and Lucas Fox for implementing the methods used in software. We are thankful for the very helpful comments and input from two anonymous reviewers.
All the authors have been supported by the University of Alcalá grant CCGP2017-EXP/015. In addition, David Orden has been partially supported by MINECO Projects MTM2014-54207 and MTM2017-83750-P, as well as by H2020-MSCA-RISE project 734922 -CONNECT.
Tables
• 4 3 1 6 3 1 2
4 • 7 6 1 0 4 4
3 7 • 7 2 2 3 4
1 6 7 • 4 7 3 1
6 1 2 4 • 3 6 5
3 0 2 7 3 • 6 4
1 4 3 3 6 6 • 4
2 4 4 1 5 4 4 •
Table 1: Global similarity matrix for Figure 3.
• 2 3 5 6 3 2 4
2 • 5 3 2 4 6 3
3 5 • 1 5 5 6 5
5 3 1 • 6 3 7 5
6 2 5 6 • 2 5 2
3 4 5 3 2 • 3 3
2 6 6 7 5 3 • 3
4 3 5 5 2 3 3 •
• 3 1 5 3 4 3 4
3 • 5 5 7 4 6 4
1 5 • 5 7 1 3 6
5 5 5 • 1 5 7 1
3 7 7 1 • 2 4 7
4 4 1 5 2 • 7 5
3 6 3 7 4 7 • 1
4 4 6 1 7 5 1 •
Table 2: Global similarity matrices for Figure 4 (top) and Figure 5 (bottom).

• 10 12 10 11 7 5 8 9 9
10 • 8 7 7 4 7 7 8 9
12 8 • 8 7 2 4 4 10 10
10 7 8 • 10 8 11 5 10 7
11 7 7 10 • 8 3 4 9 6
7 4 2 8 8 • ? ? 6 8
5 7 4 11 3 ? • ? 2 4
8 7 4 5 4 ? ? • 10 9
9 8 10 10 9 6 2 10 • 9
9 9 10 7 6 8 4 9 9 •
Table 3: Global similarity matrix for Figure 6 (entries marked with ? are missing).
| 4,492 |
1809.06502
|
2889818188
|
Despite its simplicity, bag-of-n-grams sentence representation has been found to excel in some NLP tasks. However, it has not received much attention in recent years and further analysis on its properties is necessary. We propose a framework to investigate the amount and type of information captured in a general-purposed bag-of-n-grams sentence representation. We first use sentence reconstruction as a tool to obtain a bag-of-n-grams representation that contains general information of the sentence. We then run prediction tasks (sentence length, word content, phrase content and word order) using the obtained representation to look into the specific type of information captured in the representation. Our analysis demonstrates that bag-of-n-grams representation does contain sentence structure level information. However, incorporating n-grams with higher order n empirically helps little with encoding more information in general, except for phrase content information.
|
Studies such as @cite_0 show that the bag-of-n-grams model is usually considered deficient in dealing with data sparsity and poor generalization. In particular, Le shows that CBOW and bigram models perform poorly in encoding paragraph information, and that bigram representation generally outperforms unigram. This raises the question of whether this result can be generalized to sentence representation and whether a bigger n leads to even better performance. However, a more detailed and systematic analysis has not been done on the properties of bag-of-n-grams embeddings.
|
{
"abstract": [
"Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks."
],
"cite_N": [
"@cite_0"
],
"mid": [
"2949547296"
]
}
|
Analysis of Bag-of-n-grams Representation's Properties Based on Textual Reconstruction
|
Simple as it appears, bag-of-n-grams representation of textual data has been found to excel in many natural language processing (NLP) tasks, in particular sentiment analysis (Cho, 2017). This suggests that it may encode most information of a sentence. This paper aims to investigate the properties of a general-purposed bag-of-n-grams representation to reveal the amount and type of information it captures.
A good sentence representation paves the way for better performance in various NLP tasks, and various methods have been developed to generate good sentence representations. The continuous-bag-of-words (CBOW) model (Mikolov et al., 2013) is efficient to train and performs well in many downstream tasks. However, it may discard word order information and some semantics. Recently, neural-network-based sentence representation models, including Recursive Neural Networks (RecNN) (Socher et al., 2012), Convolutional Neural Networks (CNN) (Kim, 2014) and Recurrent Neural Networks (RNN) (Sutskever et al., 2014), have shown advantages in generating general-purpose sentence representations. They capture more syntactic and semantic structures of the sentences, but are computationally heavy.
On the other hand, a bag-of-n-grams embedding represents a sentence with a vector obtained by summing over the n-gram embeddings; in theory it is richer in local order and syntactic information than CBOW and still computationally cheaper than neural-network-based sentence representations. Its simplicity can come useful in certain situations, and a more thorough analysis of its properties is needed.
We propose a framework to perform a detailed and systematic analysis of the properties of bag-of-n-grams sentence representations. To make our analysis more meaningful in a realistic sense, we analyze general-purposed bag-of-n-grams representations. Sentence reconstruction gives us general-purposed embeddings, and we then test the obtained embeddings on prediction tasks including length, word content, phrase content, and word order prediction, each reflecting bag-of-n-grams' capacity to capture a particular type of sentence information. We also report a word-level encoder-decoder model's performance on these tasks as a baseline.
Approach
We aim to propose a framework to analyze the properties of general-purpose bag-of-n-grams sentence representations, including the amount and type of information captured by the representation. In order to obtain general-purposed bag-of-n-grams representations, we use sentence reconstruction as a tool to learn the representations. The intuition is that if a bag-of-n-grams sentence representation performs well in sentence reconstruction, it must contain most of the useful information in the original sentence. Therefore, it is simple and logical to use the bag-of-n-grams embeddings obtained from sentence reconstruction as the embeddings for the prediction tasks.
We then feed the bag-of-n-grams sentence representation as raw input to prediction tasks to further investigate what specific types of information, and what amount of that information, are encoded in the representation. We vary the choice of n from 1 to 5, and compare the results with those achieved by word-level RNN-encoder based sentence embeddings, which serve as the baseline. Finally, as an extra experiment, we test our bag-of-n-grams representation on Finnish and Turkish, both of which are morphologically more complex languages.
Notation
Let $S$ denote a sentence, and let $w_i^j$ represent the $j$-th $i$-gram of the sentence. There are in total $N_i$ $i$-grams in the sentence, and the $i$-gram representation of the sentence is $S_i = \{w_1^1, w_1^2, \cdots, w_1^{N_1}, \cdots, w_i^1, w_i^2, \cdots, w_i^{N_i}\}$. The $i$-gram $w_i^j$ is associated with a $K$-dimensional embedding $e_i^j$, so the bag-of-$i$-grams vector of a sentence is $E_i = \sum_{j=1}^{N_i} e_i^j$. After summation over the orders, the bag-of-n-grams vector representation of the sentence, $\bar{E}_n$, is given by $\bar{E}_n = \sum_{i=1}^{n} E_i$.
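In code, the construction of $\bar{E}_n$ amounts to enumerating all $i$-grams for $i = 1, \dots, n$ and summing their embedding vectors; a minimal sketch (the embedding table and the UNK handling are illustrative assumptions) is:

# Build the bag-of-n-grams vector of a tokenized sentence by summing the
# embeddings of all its i-grams, for i = 1..n.
import numpy as np

def ngrams(tokens, i):
    return [tuple(tokens[k:k + i]) for k in range(len(tokens) - i + 1)]

def bag_of_ngrams_vector(tokens, embeddings, n, K):
    """embeddings: dict mapping an n-gram tuple to a K-dim vector;
    out-of-vocabulary n-grams map to a shared UNK vector (our convention)."""
    unk = embeddings.get(("UNK",), np.zeros(K))
    vec = np.zeros(K)
    for i in range(1, n + 1):
        for g in ngrams(tokens, i):
            vec += embeddings.get(g, unk)
    return vec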
Sentence Reconstruction
Inspired by the RNN Encoder-Decoder model, we replace the encoder in our framework with a simple embedder that transforms a sentence $S$ into its bag-of-n-grams vector $\bar{E}_n$. We maintain the general structure of the decoder, which is an RNN trained to generate a sequence of words by predicting the next word $y_t$ given the hidden state $h_t$ and the previous word $y_{t-1}$. The initial hidden state of the decoder $h_0$ is the bag-of-n-grams vector output by the embedder, and the initial input $y_0$ is the start-of-sentence (SOS) token. We then train the entire model end-to-end to reconstruct the original sentence from the embedding vector, obtaining a general-purpose bag-of-n-grams sentence representation, which we feed as input to the prediction tasks.
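A rough PyTorch sketch of this architecture is given below; the hidden size (256) follows the experiment settings, but the class and variable names are ours and details such as batching and the training loop are omitted:

# Sketch (our illustration, not the authors' code): a bag-of-n-grams embedder
# whose output vector initialises the hidden state of a GRU decoder.
import torch
import torch.nn as nn

class BagOfNgramsEmbedder(nn.Module):
    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def forward(self, ngram_ids):              # (batch, num_ngrams)
        return self.emb(ngram_ids).sum(dim=1)  # (batch, dim)

class Decoder(nn.Module):
    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, prev_word, hidden):      # one decoding step
        x = self.emb(prev_word).unsqueeze(1)   # (batch, 1, dim)
        y, hidden = self.gru(x, hidden)
        return self.out(y.squeeze(1)), hidden

# h0 = embedder(ngram_ids).unsqueeze(0)  # initial hidden state, (1, batch, dim)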
Prediction Tasks
Sentence Length
With the bag-of-n-grams vector $\bar{E}_n \in \mathbb{R}^K$ of a sentence as input, we use a multi-layer perceptron (MLP) classifier to predict the length of the sentence. We formulate it as multi-class classification, with several output classes according to preset length ranges. Grouping lengths into classes avoids unnecessarily many classes while still maintaining the core goal of the task. A good performance on this task would suggest that the length of the original sentence is reasonably encoded in the bag-of-n-grams representation.
Word Content
This task serves to determine whether the information of each individual word from the original sentence can still be identified after the summation, over all n-grams of the original sentence, that forms the resulting bag-of-n-grams sentence embedding. With the bag-of-n-grams vector $\bar{E}_n \in \mathbb{R}^K$ and the vector of a word $w \in \mathbb{R}^K$, directly extracted from the word's corresponding n-gram embedding, as input, the MLP classifier's job is to determine whether the corresponding word is contained in the original sentence represented by $\bar{E}_n$. We formulate this task as a binary classification problem.
Phrase Content
Similar to word content, this task serves to determine whether the information of a phrase from the original sentence can still be identified from the original sentence's bag-of-n-grams embedding. A fundamental difference between word-based and n-gram-based representations is how neighboring words, or phrases, are embedded. In theory, n-gram-based representations embed a phrase as a singleton, while word-based representations do not. Therefore, for commonly seen phrases, an n-gram-based representation should be able to capture the information of a phrase more effectively. With the bag-of-n-grams embedding vector of a sentence $\bar{E}_n \in \mathbb{R}^K$ and the bag-of-n-grams embedding vector of a phrase $p \in \mathbb{R}^K$ as input, the MLP classifier's job is to determine whether the corresponding phrase is contained in the original sentence represented by $\bar{E}_n$.
Word Order
This task evaluates how well the bag-of-n-grams representation preserves the syntactic information about the order of words in the original sentence. We feed the bag-of-n-grams vector $\bar{E}_n \in \mathbb{R}^K$ and two word representations $e_1, e_2 \in \mathbb{R}^K$; the classifier's job is to determine which word comes first in the sentence. We formulate this task as a binary classification as well.
Experiment Settings
Dataset
We perform experiments on the Tab-delimited Bilingual Sentence Pairs (English-French, English-Finnish) dataset (ManyThings.org, 2018). The dataset is cleaned to remove repeated sentences and lowercased in order to avoid data sparsity issues. The dataset is then randomly shuffled and split into a training set (80%) and a test set (20%). We trim the dataset to the first 20,000 pairs of the training set for training and the first 5,000 pairs of the test set for testing. SpaCy (Honnibal and Montani, 2017) is used for automatic tokenization. We keep the 50,000 most frequent n-grams in a dictionary, which is used to train our models, to mitigate the sparsity issue. Any word not in the dictionary is mapped to a special token (UNK). We make sure that the dictionary contains a reasonable number of higher-order n-grams to avoid having them underrepresented.
Sentence Reconstruction
We train the proposed bag-of-n-grams based model. We also train an RNN autoencoder, which serves as a baseline.
All the encoder and decoder networks are Gated Recurrent Units (GRU) networks with 256 hidden units each. In all cases, we use a multilayer network with a softmax activation layer to compute the conditional probability of each target word.
We use a stochastic gradient descent (SGD) algorithm to train each model. Teacher forcing, enabled randomly with probability 0.5, is used to make the models converge faster. Each model is trained for 20 epochs with a learning rate of 0.01.
To measure the closeness of our reconstructed sentences to the original sentences from which the n-grams are drawn, we adopt BLEU-clip (Papineni et al., 2002) as our metric.
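The core of this metric is the clipped (modified) n-gram precision of Papineni et al. (2002); a minimal sketch for a single sentence pair and a single order n is shown below (the aggregation across orders and sentences is left out):

# Clipped n-gram precision: each candidate n-gram counts at most as many
# times as it appears in the reference.
from collections import Counter

def clipped_precision(candidate, reference, n):
    cand = Counter(tuple(candidate[k:k + n]) for k in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[k:k + n]) for k in range(len(reference) - n + 1))
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

print(clipped_precision("the cat sat".split(), "the cat sat down".split(), 2))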
Prediction Tasks
A simple MLP with two hidden layers of sizes 128 and 64, respectively, is used for classification in the length, word content, phrase content and word order tasks. Sentence representations obtained from the sentence reconstruction task are directly fed as raw input to the MLP. The representations are kept fixed during training. After hyperparameter tuning, we use a learning rate of 0.001 for all the prediction tasks. Performance is reported as percentage prediction accuracy.
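A sketch of this classifier in PyTorch is shown below; the hidden sizes (128, 64) follow the description above, while the input dimension and the number of length bins are illustrative assumptions:

# Two-hidden-layer MLP prediction head over the fixed sentence representation.
import torch.nn as nn

def mlp(in_dim, n_classes):
    return nn.Sequential(
        nn.Linear(in_dim, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, n_classes),
    )

K = 256
length_clf = mlp(K, 8)          # multi-class over length bins (bin count is ours)
word_order_clf = mlp(3 * K, 2)  # input: [sentence ; word1 ; word2] concatenated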
For phrase content prediction, since bigrams and trigrams are the most numerous in the clipped dictionary, we choose to use only them (phrases of lengths 2 and 3) as phrase content task input.
Results
In this section we provide a detailed description of our experimental results along with their analysis.
Sentence Reconstruction
We use sentence reconstruction on English to train our bag-of-n-grams sentence representations, with n from 1 to 5, as well as an RNN encoder. The sentence reconstruction BLEU-clip score can be used as a rough indicator of the overall amount of information encoded in the representation. As Figure 1 shows, the performance of unigram rivals that of the encoder, with BLEU-clip scores of around 0.6 on English, while all other choices of n yield similar results that are worse. More specifically, performance drops as n increases. There is a 0.1 to 0.2 gap between the score on short sentences (length ≤ 6) and on long sentences for all models, indicating that our bag-of-n-grams representation suffers when encoding longer sentences to a similar extent as the baseline RNN encoder. This suggests that when embedding longer sentences, bag-of-n-grams representations with bigger n may not offer more information and may actually add more noise. We examine this further in the following prediction tasks.
Prediction Task Results
Sentence Length
As Figure 2 shows, the RNN encoder has by far the best performance on the sentence length task, correctly predicting 90% of the test samples; unigram follows at 66%, while the other bag-of-n-grams models with n from 2 to 5 exhibit an accuracy of around 57%.
One explanation of why any sentence representation may capture sentence length information is that, as indicated by Adi et al. (2016), the norm of the representation vector plays an important role in encoding sentence length. We perform a similar experiment on bag-of-n-grams based embeddings. As shown in Figure 3, in general, the norm of the bag-of-n-grams based embedding vectors increases as the sentence length increases. Furthermore, RNN-encoder based embedding vectors display a similar trend. This is quite remarkable, as the RNN encoder's result reinforces our proposition about bag-of-n-grams representation, and it further suggests that both models could possibly embed sentence length in the vectors' norm. However, the overall accuracy actually drops as n increases, and remains relatively constant across n higher than 1. Except for the remarkable performance of unigram, the choice of n does not seem to affect the amount of length information encoded in the bag-of-n-grams representation. This indicates that, at least in encoding sentence length information, higher order n may not help.
Word Content
The unigram model has the best word content result, with accuracy over 80%, while the other models yield accuracies from 70% to 78% with no obvious correlation with n. We then divide the words for prediction into five categories by their frequency of appearance. We use the vocabulary index to represent the frequency of the word (i.e., the word with index 1 is the most frequent word). By evaluating the prediction accuracy of each model on these groups of words separately, we study whether the bag-of-n-grams representation's word content performance for each choice of n is related to word frequency. We observe that for each choice of n, the model is able to predict the occurrence of the most frequent words accurately. In addition, the model for each n struggles to predict the occurrence of words in the range [500, 1000). An interesting phenomenon is that the ability to predict the presence of unknown words deteriorates as n increases, presumably due to vocabulary clipping.
Phrase Content
We notice that the accuracy of predicting the presence of a phrase increases as n increases from 1 to 4. There is a decrease in performance when using bag-of-5-grams, probably due to vocabulary clipping. However, bag-of-5-grams still outperforms the bag-of-unigram and bag-of-bigram representations. The bag-of-unigram representation turns out not to be able to predict whether a two-word phrase occurs in the sentence. This indicates that bag-of-n-grams representations with bigger n do encode more information about a phrase's presence. This may be caused by the fact that bag-of-n-grams treats an n-gram (n ≥ 2) as a singleton instead of treating it as a combination of n words. The co-occurrence of a bigram and those n-grams containing it seems to help inscribe its occurrence. This also applies to 3-grams, as observed.
Word Order
The RNN encoder achieves an accuracy of 78%, outperforming all bag-of-n-grams models. To investigate how the distance between two words affects the prediction, we divide the word pairs into 5 categories, where the distance is the number of words between the two words. We notice that when the distance is smaller than 4, the prediction accuracy generally increases as distance increases. We also notice that, within the same distance category, the prediction accuracy does not increase as n increases. This suggests that when treated as a singleton, an n-gram (n ≥ 2) does not encode extra information about the order of the words it contains.
Bag-of-n-gram-based representation of Morphologically Complex Languages
As shown in Figure 5, all models have poor BLEU-clip scores in sentence reconstruction on Finnish and Turkish, with scores hovering around 0.40, while all models achieve above 0.46 on English. This meets our expectation, as both Finnish and Turkish have higher morphological complexity than English. However, we observe a similar pattern in the sentence reconstruction of all three languages: the RNN encoder and the unigram model outperform all other higher-order bag-of-n-grams models, whose performance stays the same or slightly drops. This observation reinforces our previous conclusion that higher-order n-grams may not offer more useful information about the sentence and may actually add more noise. It further suggests that our findings on the properties of n-grams have the potential to transfer across languages of different morphological complexity.
Conclusion and Future Work
We present a systematic set of experiments to perform an analysis of bag-of-n-grams sentence representation, specifically to answer the question of what kind of information it contains and how it may vary as n varies. Our results lead to the following conclusions.
• Bag-of-n-grams sentence representation, which is capable of encoding general sentence information, contains a non-trivial amount of sentence length, word presence, phrase presence and word order information.
• General-purpose bag-of-n-grams representations with higher-order n do not necessarily encode more useful information. Unigram outperforms all other choices of n on nearly all tasks, while higher n's performance stays relatively the same or even decreases, except on the phrase content task.
• Phrase occurrence is better encoded in bag-of-n-grams representations with higher-order n. This also suggests that, when treated as a singleton, phrase information does not correlate with other structure-level information such as word order and word content.
• Finally, though reconstruction scores drop overall, the pattern observed above persists in our extra experiments on morphologically more complex languages, i.e., Finnish and Turkish. This further reinforces that the above conclusions hold across languages of different levels of complexity.
In our research, there are certainly interesting phenomena that await future exploration, and some aspects of our experiments could be improved. We could design a synthetic dataset that better accounts for the sparsity issue, or incorporate an attention mechanism in the training/analysis to obtain more insightful results. Due to the scope of this project, we leave these as future work.
| 2,718 |
1906.04950
|
2951229489
|
We propose an efficient transfer learning method for adapting an ImageNet pre-trained Convolutional Neural Network (CNN) to a fine-grained image classification task. Conventional transfer learning methods typically face a trade-off between training time and accuracy. By adding an "attention module" to each convolutional filter of the pre-trained network, we are able to rank and adjust the importance of each convolutional signal in an end-to-end pipeline. In this report, we show our method can adapt a pre-trained ResNet50 to a fine-grained transfer learning task within a few epochs and achieve accuracy above conventional transfer learning methods and close to models trained from scratch. Our model also offers interpretable results, because the ranking of the convolutional signals shows which convolution channels are utilized and amplified to achieve better classification results, as well as which signals should be treated as noise for the specific transfer learning task and could be pruned to reduce model size.
|
Although almost a consensus in the deep learning community, we found interesting theoretical and experimental results in @cite_3 , which explores the so-called "Lottery Ticket Hypothesis" and finds that the success of many large networks can be attributed to their large number of layers and parameters, which makes it possible for successful sub-networks to appear; such sub-networks are usually discovered via pruning.
|
{
"abstract": [
"Recent work on neural network pruning indicates that, at training time, neural networks need to be significantly larger in size than is necessary to represent the eventual functions that they learn. This paper articulates a new hypothesis to explain this phenomenon. This conjecture, which we term the \"lottery ticket hypothesis,\" proposes that successful training depends on lucky random initialization of a smaller subcomponent of the network. Larger networks have more of these \"lottery tickets,\" meaning they are more likely to luck out with a subcomponent initialized in a configuration amenable to successful optimization. This paper conducts a series of experiments with XOR and MNIST that support the lottery ticket hypothesis. In particular, we identify these fortuitously-initialized subcomponents by pruning low-magnitude weights from trained networks. We then demonstrate that these subcomponents can be successfully retrained in isolation so long as the subnetworks are given the same initializations as they had at the beginning of the training process. Initialized as such, these small networks reliably converge successfully, often faster than the original network at the same level of accuracy. However, when these subcomponents are randomly reinitialized or rearranged, they perform worse than the original network. In other words, large networks that train successfully contain small subnetworks with initializations conducive to optimization. The lottery ticket hypothesis and its connection to pruning are a step toward developing architectures, initializations, and training strategies that make it possible to solve the same problems with much smaller networks."
],
"cite_N": [
"@cite_3"
],
"mid": [
"2792760996"
]
}
|
Pay Attention to Convolution Filters: Towards Fast and Accurate Fine-Grained Transfer Learning *
|
Our work combines three different topics in deep learning (transfer learning, fine-grained image classification, and attention methods) to solve a practical problem. In this section, we will first introduce the problem we are trying to solve and its implications. We will then introduce the three separate topics that inspired our work. Lastly, we will briefly summarize our contribution and results.
Problem and Implication
Most image classification problems in real life are fine-grained classification problems. For example, self-driving cars need classification models that correctly classify different kinds of street signs; medical imaging applications require models to distinguish different types of cells. However, most pre-trained image classification models were trained on the ImageNet dataset.* The dataset is very coarse-grained; e.g., models were tasked to distinguish between trucks and cats. It is widely believed that these models serve as common feature extractors that can be adapted for specific tasks. How can we efficiently take advantage of pre-trained networks and adapt them for fine-grained image classification problems?

* CS 194 Final Project Report. Team name: "compressed r-cnn"

Our objective is two-fold: (1) We want the training process to be as fast as possible. Ideally, a training process should take fewer than 10 epochs. This can be done by taking advantage of the learned weights in the pre-trained network.
(2) We want the model accuracy to be as high as possible. It should perform reasonably well compared to a model that's trained from scratch.
We hope our work can benefit machine learning practitioners in industry. Using our methods, a small network that learned to classify fine-grained objects well can be trained and iterated quickly. For example, an "expert network" that is able to distinguish very similar classes can serve as part of a larger ensemble [21] or inference cascade [23] to improve overall accuracy.
Transfer Learning
High-accuracy image classification models like ResNet [8] and Inception-V3 [22] that are trained on ImageNet [4] have been made widely available across deep learning frameworks [20]. Transfer learning attempts to use these pre-trained models as a starting point and adapt the network to a specific task. A general approach [11] is to freeze the weights of some layers, typically those capturing low-level features, and retrain the high-level features and fully-connected layers. If the new dataset's distribution is similar to that of ImageNet, it is recommended to re-train only the fully connected layer. If the new dataset's distribution differs from ImageNet, it is recommended to keep only the low-level features and re-train the layers that correspond to high-level features.
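As a rough illustration of this general recipe (not the method proposed in this report), here is a minimal PyTorch sketch that freezes everything except the last residual block and a new classifier head of a torchvision ResNet50; which layers to unfreeze is an assumption that depends on the dataset:

import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet50(pretrained=True)

# Freeze all pre-trained weights first.
for p in model.parameters():
    p.requires_grad = False

# Re-train only the last residual block and a fresh classifier head.
for p in model.layer4.parameters():
    p.requires_grad = True
model.fc = nn.Linear(model.fc.in_features, 144)  # e.g., 144 amphibian classes

# Optimize only the parameters that are left trainable.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9)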
Fine-Grained Image Classification
Fine-grained image classification tasks are tasks where the objects in the source images are very similar and require fine-grained features: for example, classifying animal species [9] requires the network to pick up features like the color patterns of frog skins or the specific shapes of bird beaks. To solve this problem, the research community has moved away from training models from scratch. Many researchers have started working on adapting learned features efficiently to identify important details. Methods like subset feature learning [7], mixtures of convolutional networks [6], or adding visual attention to the image [15] have been proven to be effective. However, many of these methods are very domain-specific, and, most importantly, their training still takes a long time.
Attention Methods
Attention was first introduced in the setting of Neural Machine Translation [3]. Bahdanau et al. applied attention to each state of an RNN to jointly produce a weighted vector that captures the latent semantic meaning of the sentence being translated. Xu et al. borrow the same idea and apply it to the task of image captioning [24]. By using an attention mechanism, they achieved state-of-the-art performance while proposing a novel way of visualizing the regions in the image that are most heavily weighted when the network generates each word of the caption.
Our Contribution
Our work ties together all three ideas to attack the problem of transfer learning for fine-grained image classification. In particular, we draw inspiration from attention methods in text and image models to rank the convolution filters of a pre-trained network with respect to a specific fine-grained transfer learning task. Our goal is to explore a method that strikes a good balance between speed and validation accuracy.

Our final model follows from the intuition that pre-trained models are over-parameterized. By adding trainable "attention" weights to each convolution channel and optimizing for the classification loss, these "attention" weights diverge from their initial states (which are just 1, leaving the filters unchanged). After a few iterations, an attention weight can either amplify a convolution channel or lower its activation. For example, if a convolution channel detecting the vertical stripe pattern of species A contributes greatly to correctly classifying species A, we expect the network to learn to "pay more attention" to this channel by amplifying the attention weight corresponding to it.

We also expect the network to learn to "pay less attention" to features that are irrelevant or less important for the fine-grained dataset. For example, high-level features (like the floppy ears of dogs or the shape of a semi-trailer truck) in an ImageNet-trained model (like Inception-V3) [18] are not really useful for a fine-grained dataset.
Structure of the Report
This report is structured as follows. Section 2 will identify several key works on which we ground our assumptions and methods. Section 3 will discuss our experimental setup and baseline models. Section 4 will discuss the detailed implementation of our models and training scheme. Section 5 will identify the set of experiments we ran in order to explore many facets of efficient fine-grained transfer learning. Finally, in sections 6 and 7, we will summarize our results and explore future work.
Key Assumption 1: Large Networks are Over-Parametrized
Although this is almost a consensus in the deep learning community, we found interesting theoretical and experimental results from Frankle et al. [5], who explore the so-called "Lottery Ticket Hypothesis" and find that the success of many large networks can be attributed to the large number of layers and parameters, which makes it possible for successful sub-networks to appear; such sub-networks are usually discovered via pruning.
Key Assumption 2: ImageNet Pre-Trained Convolution Channels are Good Feature Extractors
In particular, we need to assume that the base network that we adapt for transfer learning purposes is a high-quality network in terms of its convolution channels. We assume the features learned from training well-designed networks like ResNet and Inception on the ImageNet dataset are generally transferable. This assumption is confirmed by the work "What makes ImageNet good for transfer learning?" from Huh et al. [10]. ImageNet is confirmed to be a good dataset for producing general convolution feature extractors.
Key Assumption 3: Convolution Channels Can be Ranked and Pruned
In particular, we are assuming that not all the convolution channels are useful. Work on model compression and network pruning provides solid experimental evidence for this assumption.

Most of the work on convolution channel pruning involves heuristically defined channel ranking methods, for example: ranking channels by their ℓ1 norms [13], ranking channels by combinatorially determining their effect on the validation score [2], and, as the state-of-the-art ranking method, a first-order Taylor expansion with respect to the channel that closely matches the cost function of the original network [16]. Our approach is different. Instead of a human-defined heuristic based on norms or Taylor expansions, we want the network to figure out how to rank the filters by itself. We only provide the network with an objective (the classification score) and a set of resources (convolution filters).
Key Assumption 4: Attention Should Be Regularized
If we just assign a weight of 1 to each convolution filter, the network will move very slowly or even remain stagnant. An important lesson we take from Kim et al. about visual attention is the importance of regularized attention [12]. Although that work is about applying visual attention to self-driving cars, it provides important intuitions about regularizing visual attention and forcing it to look for new objects by tuning λ as the strength of regularization.

Our key take-away is that adding attention to a visual model is not enough. Especially in our model, where the attention captures the hierarchical relationships and strengths of the convolution channels, just adding a scalar weight of 1 and freezing the convolution filter weights is not good enough. We need to add some prior over the attention weights. We further experiment with and discuss the different forms of regularization in section 5.1.1.
Experimental Setup
Data Source and Pre-Processing
We chose a subset of the iNaturalist 2018 dataset, consisting of 11,156 images of 144 species of amphibians. This dataset resembles a specific fine-grained classification dataset typically encountered in real life. It has quite significant class imbalance for some classes, and a significantly different data distribution from that of ImageNet. The amphibians share many common features and can be extremely similar across species. Since they were usually photographed in their natural habitats, it is indeed a very hard dataset, yet general and diverse enough to represent a specific fine-grained classification task (example: Figure 1).

The images originally come in different sizes and shapes, and all of them are reshaped without cropping to 224x224 for ResNet models and 299x299 for Inception-V3. They are then normalized according to standard ImageNet practice, with µ = (0.485, 0.456, 0.406) and σ = (0.229, 0.224, 0.225).
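A minimal torchvision sketch of this preprocessing (the augmentations discussed in section 5.1.2 are not shown):

from torchvision import transforms

# 224x224 for ResNet models; use 299x299 for Inception-V3.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # reshape without cropping
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])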
Data augmentation was used to solve the problem of class imbalance and over-fitting in some experiments, which will be further discussed in section 5.1.2.
Data is split 7-1.5-1.5 for training, validation, and testing respectively.
Tools
We used PyTorch [19] as our machine learning library for this project. We chose PyTorch over TensorFlow and other popular libraries because of its flexibility: PyTorch uses a dynamic computational graph rather than a static one.

In addition, PyTorch provides a simple way to customize pre-trained models to facilitate our experiments. For example, to freeze/unfreeze a specific variable inside the model, one can simply set .requires_grad to the desired boolean value; adding an attention weight to a specific channel also only requires minor changes to the existing pre-trained models.
All the pre-trained models used in our experiments (ResNet-{34,50,101}, Inception-V3) come from PyTorch Model Zoo [20].
Models were mostly trained on EC2 instances using Tesla K80 GPUs, and some were trained on Pascal Titan X.
Baseline Model
We attempted different popular models pre-trained on ImageNet. Most experiments were done using ResNet 50, but ResNet 34, ResNet 101 and Inception-V3 were also used to investigate compatibility of our method with different pre-trained model structures, and model depths. More on this in section 5. We showed that our attention module and training strategies are generally applicable to models with different architectures and depths.
Evolution of our model
In order to solve the problem of filter redundancy in transfer learning on specific datasets, we first explored the idea of pruning, which ranks the convolutional filters based on some heuristic algorithm and removes the ones with less activation. Such a heuristic algorithm was proposed in [16], which uses a Taylor expansion to approximate the loss function and compares the loss when a certain filter is masked out to zero. This gives a heuristic estimate of the impact of that filter. However, we want to propose an end-to-end method that uses attention weights as a measure of the usefulness of a convolutional filter. The soft attention amplifies useful signals and suppresses redundant ones, which achieves an effect similar to pruning.
Final Model
Model Architecture
The final model has attention modules attached at the output of each of the convolutional layers of the pre-trained models. This is done by multiplying each pre-trained convolution filter by a trainable scalar weight initialized at 1.
The FC layers were first trained for one to two epochs to adjust the classifier to the fine-grained dataset, so that errors can be better back-propagated to the attention modules. Next we froze the entire network and trained only the attention modules, to identify the feature extractors in the pre-trained network that are useful for the specific fine-grained task. Training of the batchnorm layers was interspersed within training of the attention modules, in order to fight overfitting and adjust for variance shift.
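This staged schedule can be written as a string of per-epoch phases; this is the FFAAABAAABAA scheme referenced in Table 3. A minimal sketch, assuming the attention modules expose an attn_weights parameter (as in the pseudocode below) and that train_one_epoch is a hypothetical training helper:

import torch.nn as nn

def set_phase(model, phase):
    # 'F': train the FC head only; 'A': attention weights only; 'B': batchnorm only.
    for p in model.parameters():
        p.requires_grad = False
    if phase == "F":
        for p in model.fc.parameters():
            p.requires_grad = True
    elif phase == "A":
        for m in model.modules():
            if hasattr(m, "attn_weights"):
                m.attn_weights.requires_grad = True
    elif phase == "B":
        for m in model.modules():
            if isinstance(m, nn.BatchNorm2d):
                for p in m.parameters():
                    p.requires_grad = True

for phase in "FFAAABAAABAA":  # one letter per epoch
    set_phase(model, phase)
    train_one_epoch(model)    # hypothetical training loop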
Pseudocode
The core of our model boils down to the code sketch shown below, a cleaned-up version of our pseudocode written as a drop-in subclass of PyTorch's Conv2d:

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionConv2d(nn.Conv2d):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # One trainable scalar per (out, in) channel pair, filled with 1
        # so that the pre-trained filters are initially unchanged.
        self.attn_weights = nn.Parameter(
            torch.ones(self.out_channels, self.in_channels // self.groups, 1, 1))

    def forward(self, input):
        # "Pay attention": scale each pre-trained filter before convolving.
        attn_paid = self.weight * self.attn_weights
        return F.conv2d(input, attn_paid, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
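To apply this to a pre-trained model, each existing Conv2d can be replaced by an AttentionConv2d that copies its weights. A minimal sketch (the recursive replacement helper below is our illustration, not code from the report):

import torchvision

def wrap_convs(module):
    # Recursively swap every Conv2d for an AttentionConv2d with the same weights.
    for name, child in module.named_children():
        if isinstance(child, nn.Conv2d):
            attn = AttentionConv2d(
                child.in_channels, child.out_channels, child.kernel_size,
                stride=child.stride, padding=child.padding,
                dilation=child.dilation, groups=child.groups,
                bias=child.bias is not None)
            attn.load_state_dict(child.state_dict(), strict=False)  # copy weight/bias
            attn.weight.requires_grad = False  # freeze the pre-trained filters
            setattr(module, name, attn)
        else:
            wrap_convs(child)

model = torchvision.models.resnet50(pretrained=True)
wrap_convs(model)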
Experiment Result and Discussion
In this section, we describe a set of experiments we ran concerning many facets of fine-grained transfer learning and the specifics of our method. For each experiment, we will describe our setup, our result, and our findings.
The investigation section contains experiments that ran successfully and worked as expected. The visualization section contains results from our experiments with the interpretability of attention models. The surprise section consists of experiments that either gave us surprising results or just did not work out. We wanted to separate useful filters from redundant ones, and thus applied three different losses to the attention weights. Initially, most attention weights clustered near 1 and were reluctant to differentiate the filters. The L1 or L2 norm of the attention weights was added to the loss function as regularization. The L1 norm resulted in a sparser and more polarized distribution of the attention weights, but L2 converged slightly faster. Both achieved similar performance in terms of accuracy.

$\ell_1(a_j) = \lVert a_j \rVert_1 \quad (1)$

$\ell_2(a_j) = \lVert a_j \rVert_2 \quad (2)$

In order to further help the attention weights diverge, we proposed a modified L2 loss based on the distance of each attention weight from 1, which also serves as regularization. This penalty scheme was the one most commonly used in our model.

$\ell_3(a_j) = -\lVert a_j - 1 \rVert_2^2 \quad (3)$
The loss function is thus:
$L = -\sum_{i=1}^{n} y^{(i)} \log p^{(i)} + \lambda \cdot \sum_{j=1}^{F} \ell(a_j) \quad (4)$

where $F$ is the number of filters in the convolutional layers, $a_j$ is the attention weight vector for filter $j$, and $\lambda$ is a hyperparameter.
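For concreteness, the combined objective in Eq. (4) can be computed as in the sketch below; this assumes the AttentionConv2d modules from the pseudocode section, with lam standing in for λ, and logits and labels as placeholders for a batch's outputs and targets. The "diverge" option implements Eq. (3):

def attention_penalty(model, kind="l1"):
    # Sum the chosen penalty over every attention weight tensor in the model.
    penalty = 0.0
    for m in model.modules():
        if hasattr(m, "attn_weights"):
            a = m.attn_weights
            if kind == "l1":
                penalty = penalty + a.abs().sum()
            elif kind == "l2":
                penalty = penalty + (a ** 2).sum()
            elif kind == "diverge":  # Eq. (3): reward distance from 1
                penalty = penalty - ((a - 1.0) ** 2).sum()
    return penalty

loss = F.cross_entropy(logits, labels) + lam * attention_penalty(model, "diverge")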
As Figure 2 shows, both regularization schemes/priors lead to the expected weight distributions. The ℓ1 regularization puts a sparsity-inducing prior on the attention weights; the result shows many weights reduced close to zero. Surprisingly, there are a few attention weights that are still amplified (value > 1). We expect these weights to identify the subset of convolution channels that are useful for fine-grained classification on our dataset. The histogram for the ℓ2 attention loss shows a similar but smoother result: the distribution of weights gradually diverges from 1 and shifts towards either 0 or 2 according to their significance for this particular fine-grained classification task.
Our result is as shown in Table 1.
Does data augmentation help?
Data augmentation was done with a weighted sampler that samples from each class with probability inversely proportional to the number of training images in that class. This solves the problem of class imbalance and also serves as a regularizer to combat overfitting.
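One way to set this up in PyTorch (a sketch; labels is assumed to be the list of per-image class ids of the training set):

from collections import Counter
from torch.utils.data import DataLoader, WeightedRandomSampler

counts = Counter(labels)                      # images per class
weights = [1.0 / counts[y] for y in labels]   # inverse class frequency
sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)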
Our result is as shown in Table 2. Using data augmentation does improve the performance, and it works particularly well with attention modules.

Model         Top 1 Val   Top 3 Val   Best Epoch
ResNet34      39%         60%         9
ResNet50      43%         63%         10
ResNet101     47%         66%         10
Inception V3  41%         62.7%       5

Table 3. Results from different model architectures, using the same FFAAABAAABAA training scheme.
How Does Different Model Architecture Affect the Score?
Our result is as shown in Table 3. We expected the validation accuracy to be about the same across different depths of the network as well as different model architectures.

As the depth of ResNet increases, we see an increase in validation accuracy; this result is justified by the fact that there are many more convolution channels as the network goes deeper, as well as more interesting intermediate features ready to be combined.

If we switch the base architecture to a different, comparable ImageNet model, Inception-V3, the performance is not degraded. This shows that our method is compatible with any ImageNet model, as long as it contains convolutions.
Is The Accuracy Good Enough (Compared with 100-Hour Models)?
There is a baseline model [1] trained from scratch using Inception-V3 on a larger and a more complete dataset of animal and plant species, of which our amphibian dataset is a subset. It took 100 epochs for the model to achieve a final top 3 accuracy of 77%.
In comparison, our model, though trained on a more specific dataset, was able to achieve a top 3 validation accuracy as high as 70% in just under 15 epochs, which took around 13 minutes to train on a Pascal Titan X GPU.

Thus, the attention model is more suitable for classification tasks where high accuracy is not the absolute priority but fast training iteration and decent results are required. This makes it possible to train a large number of personalized models with personal data.
Interpretability and Visualization
In this section, we present two visualizations that showcase the effectiveness of our method for picking up good signals.
We visualize two channels (Figure 3, Figure 4) in the third-to-last convolution layer of ResNet 50 (res5c branch2a). The channels are the top-2 channels ranked by the "attention" weights on those channels after training the attention weights on ResNet50 for a few iterations. Convolution features are visualized by optimizing a random noise image to maximize the activation of a convolution channel (activation measured using the mean of the image, with Gaussian blur or bilateral filtering applied, as described in [17]). The images are sampled from the top-10 images that maximally activate the channels, ranked by the ℓ1 norm of the feature map, a metric inspired by the convolution filter pruning method in [14].

What do these visualizations show? Figure 3 shows a high-level feature for the spines of animals. The attention method is able to pick up the specific feature that significantly helps classification on this fine-grained dataset. We also include a negative image sample: the leaf structure strongly resembles the spine structure and therefore also activates this filter. Figure 4 shows another high-level feature. However, it is an auxiliary feature that helps identify the surrounding environment of different types of frogs. The vertical leaf structure was picked up as a strong signal that can assist the network in identifying species.
Surprise
Attention shape
To decide on the shape of the attention weights, we ran two sets of controlled experiments. For the "Out Only" method, we assigned the same attention weight to every input channel in the same 3D convolutional filter. For the "In × Out" method, we assigned a different weight to each input channel. As a concrete example, the first convolution layer of a ResNet-family model takes as input a depth-3 RGB representation of the image and outputs a depth-64 feature map; for method 1 ("Out Only") there are 64 attention weights in this layer, while method 2 ("In × Out") has 3 × 64 attention weights.
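In code, the two schemes differ only in the shape of the attention parameter; a sketch using the 3-in/64-out dimensions of that first layer:

# "Out Only": one scalar per output channel, broadcast over input channels.
attn_out_only = nn.Parameter(torch.ones(64, 1, 1, 1))

# "In x Out": one scalar per (input, output) channel pair.
attn_in_out = nn.Parameter(torch.ones(64, 3, 1, 1))

Both shapes broadcast against the (64, 3, kH, kW) filter tensor, so the forward pass is otherwise unchanged.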
Comparing the number of parameters in the two variants (Table 4), we initially expected the second method to outperform the first by a large margin. However, our experiments reveal only a relatively small boost in performance. This is surprising, because we thought a more sophisticated attention scheme would result in drastically higher performance.
One possible explanation for this surprising result is the hierarchical connection between a single H × W convolution filter and its filter group, where all filters in the group collectively produce a single feature map:

$\text{out}(N_i, C_{\text{out}_j}) = \sum_{k=0}^{C_{\text{in}}-1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)$

where $\star$ is the convolution operator and $C_{\text{out}_j}$ is the $j$-th feature map. A single feature map is produced by summing the convolved results of every filter in the group via $\sum_{k=0}^{C_{\text{in}}-1}$.

We hypothesize there is a strong connection between the different filters in the same group, making the attention weights less effective.
Despite their similar performance, the "Out Only" method converges to the best model much faster than the "In × Out" method (5 versus 10 epochs).
Our result is as shown in Table 4.
Do Traditional Transfer Learning Methods Still Work?
We also explored different levels of 'cut', which are widely used in traditional transfer learning methods. We define the level of 'cut' as the number of layers that are not frozen during training. We trained different blocks of the pre-trained network for different numbers of epochs, including (1) the fully connected layers, (2) block 4 of ResNet, and (3) the entire network end to end. Method (1) gives us a good linear transformation from fixed high-level features (after average pooling) to class probabilities. However, it is prone to over-fitting within a few epochs. The best top 3 validation score we got is 46.6%.

Method (2) unfreezes the last large block of ResNet and attempts to re-train the high-level features. We found the accuracy to be slightly higher: 56.6%. Surprisingly, method (3), which uses the entire pre-trained network as initialization, achieves a final best validation score of 65.05%, which is better than the attention methods. However, this can be justified by the fact that all elements in every convolution filter are subject to optimization; amplifying a convolution activation by a scalar weight has less complexity than fine-tuning every filter.
Lesson Learned
We investigated the effectiveness of channel-wise attention in the context of transfer learning for fine-grained, task-specific classification. Compared to just re-training the fully connected layers, the attention modules are surprisingly effective at quickly identifying useful input and output convolution channels, raising the performance of the model to nearly 70% top 3 validation accuracy in under 12 epochs (an Inception-V3 model trained for 100 epochs achieved 77% top 3 accuracy). Our attention modules thus provide a fast and relatively high-performance route to transfer learning.

We ran different investigations to find the best training strategy, attention penalty, and level of cut, as well as the effect of data augmentation. We tried different training strategies using different combinations of FC, attention, and BN layers. We discovered that the FC layers quickly adjust the classifier to the fine-grained data distribution but overfit after a few epochs, whereas the attention layers effectively boost the accuracy within a few epochs once the FC layers are trained. After the attention layers are trained, however, the output feature maps are no longer well-behaved. When training the batchnorm layers interspersed between the attention layers, accuracy drops slightly due to their regularization effect, but they help the attention modules achieve better results.
Furthermore, the attention weights cluster around 1 when only the cross-entropy loss is used, but the addition of an attention penalty encourages them to diverge and form different distributions, which yield roughly equally good performance. We also found attention effective at picking out interpretable image features that can help explain the network's decisions.
Our project provides an intuitive approach to fine-grained transfer learning. We still have a long way to go in exploiting pre-trained networks and adapting them for fine-grained or even personalized computer vision models.
Future Work
There are multiple lines of future work possible beyond our current stage.
1. A natural continuation of the project is to prune convolution channels that have attention weights smaller than a certain threshold to reduce the size of the final model as well as to improve accuracy. Attention weights provide us with a good ranking of features available for pruning.
2. Another direction is to further exploit the benefits of interpretability that the attention scheme brings to the table.
The attention weights are highly interpretable.
3. In our current approach, we have to experiment with different architectures and training schemes to find a satisfactory configuration. A natural step beyond this project would be to use reinforcement learning, especially meta-learning strategies, to let the model learn the best way of integrating attention modules given a set of pre-trained filters, with the objective of classifying fine-grained data correctly.
Team Contributions
Work was evenly split among the group. Simon came up with the initial idea; the group then refined the idea and designed the experimental setups together. Each of us ran multiple controlled experiments so that we could efficiently allocate GPU usage. We all worked on the poster, the final report, and the code.
| 4,041 |
1906.04950
|
2951229489
|
We propose an efficient transfer learning method for adapting an ImageNet pre-trained Convolutional Neural Network (CNN) to a fine-grained image classification task. Conventional transfer learning methods typically face a trade-off between training time and accuracy. By adding an "attention module" to each convolutional filter of the pre-trained network, we are able to rank and adjust the importance of each convolutional signal in an end-to-end pipeline. In this report, we show our method can adapt a pre-trained ResNet50 to a fine-grained transfer learning task within a few epochs and achieve accuracy above conventional transfer learning methods and close to models trained from scratch. Our model also offers interpretable results, because the ranking of the convolutional signals shows which convolution channels are utilized and amplified to achieve better classification results, as well as which signals should be treated as noise for the specific transfer learning task and could be pruned to reduce model size.
|
In particular, we need to assume that the base network that we adapt for transfer learning purposes is a high-quality network in terms of its convolution channels. We assume the features learned from training well-designed networks like ResNet and Inception on the ImageNet dataset are generally transferable. This assumption is confirmed by the work "What makes ImageNet good for transfer learning?" from @cite_9 . ImageNet is confirmed to be a good dataset for producing general convolution feature extractors.
|
{
"abstract": [
"The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class?"
],
"cite_N": [
"@cite_9"
],
"mid": [
"2510153535"
]
}
|
Pay Attention to Convolution Filters: Towards Fast and Accurate Fine-Grained Transfer Learning *
|
Our work combines three different topics in deep learning (transfer learning, fine-grained image classification, and attention methods) to solve a practical problem. In this section, we will first introduce the problem we are trying to solve and its implications. We will then introduce the three separate topics that inspired our work. Lastly, we will briefly summarize our contribution and results.
Problem and Implication
Most image classification problems in real life are fine-grained classification problems. For example, self-driving cars need classification models that correctly classify different kinds of street signs; medical imaging applications require models to distinguish different types of cells. However, most pre-trained image classification models were trained on the ImageNet dataset.* The dataset is very coarse-grained; e.g., models were tasked to distinguish between trucks and cats. It is widely believed that these models serve as common feature extractors that can be adapted for specific tasks. How can we efficiently take advantage of pre-trained networks and adapt them for fine-grained image classification problems?

* CS 194 Final Project Report. Team name: "compressed r-cnn"

Our objective is two-fold: (1) We want the training process to be as fast as possible. Ideally, a training process should take fewer than 10 epochs. This can be done by taking advantage of the learned weights in the pre-trained network.
(2) We want the model accuracy to be as high as possible. It should perform reasonably well compared to a model that's trained from scratch.
We hope our work can benefit machine learning practitioners in industry. Using our methods, a small network that learned to classify fine-grained objects well can be trained and iterated quickly. For example, an "expert network" that is able to distinguish very similar classes can serve as part of a larger ensemble [21] or inference cascade [23] to improve overall accuracy.
Transfer Learning
High-accuracy image classification models like ResNet [8] and Inception-V3 [22] that are trained on ImageNet [4] have been made widely available across deep learning frameworks [20]. Transfer learning attempts to use these pre-trained models as a starting point and adapt the network to a specific task. A general approach [11] is to freeze the weights of some layers, typically those capturing low-level features, and retrain the high-level features and fully-connected layers. If the new dataset's distribution is similar to that of ImageNet, it is recommended to re-train only the fully connected layer. If the new dataset's distribution differs from ImageNet, it is recommended to keep only the low-level features and re-train the layers that correspond to high-level features.
Fine-Grained Image Classification
Fine-grained image classification tasks are tasks where the objects in the source images are very similar and require fine-grained features: for example, classifying animal species [9] requires the network to pick up features like the color patterns of frog skins or the specific shapes of bird beaks. To solve this problem, the research community has moved away from training models from scratch. Many researchers have started working on adapting learned features efficiently to identify important details. Methods like subset feature learning [7], mixtures of convolutional networks [6], or adding visual attention to the image [15] have been proven to be effective. However, many of these methods are very domain-specific, and, most importantly, their training still takes a long time.
Attention Methods
Attention was first introduced in the setting of Neural Machine Translation [3]. Bahdanau et al. applied attention to each state of an RNN to jointly produce a weighted vector that captures the latent semantic meaning of the sentence being translated. Xu et al. borrow the same idea and apply it to the task of image captioning [24]. By using an attention mechanism, they achieved state-of-the-art performance while proposing a novel way of visualizing the regions in the image that are most heavily weighted when the network generates each word of the caption.
Our Contribution
Our work ties together all three ideas to attack the problem of transfer learning for fine-grained image classification. In particular, we draw inspiration from attention methods in text and image models to rank the convolution filters of a pre-trained network with respect to a specific fine-grained transfer learning task. Our goal is to explore a method that strikes a good balance between speed and validation accuracy.

Our final model follows from the intuition that pre-trained models are over-parameterized. By adding trainable "attention" weights to each convolution channel and optimizing for the classification loss, these "attention" weights diverge from their initial states (which are just 1, leaving the filters unchanged). After a few iterations, an attention weight can either amplify a convolution channel or lower its activation. For example, if a convolution channel detecting the vertical stripe pattern of species A contributes greatly to correctly classifying species A, we expect the network to learn to "pay more attention" to this channel by amplifying the attention weight corresponding to it.

We also expect the network to learn to "pay less attention" to features that are irrelevant or less important for the fine-grained dataset. For example, high-level features (like the floppy ears of dogs or the shape of a semi-trailer truck) in an ImageNet-trained model (like Inception-V3) [18] are not really useful for a fine-grained dataset.
Structure of the Report
This report is structured as follows. Section 2 will identify several key works on which we ground our assumptions and methods. Section 3 will discuss our experimental setup and baseline models. Section 4 will discuss the detailed implementation of our models and training scheme. Section 5 will identify the set of experiments we ran in order to explore many facets of efficient fine-grained transfer learning. Finally, in sections 6 and 7, we will summarize our results and explore future work.
Key Assumption 1: Large Networks are Over-Parametrized
Although this is almost a consensus in the deep learning community, we found interesting theoretical and experimental results from Frankle et al. [5], who explore the so-called "Lottery Ticket Hypothesis" and find that the success of many large networks can be attributed to the large number of layers and parameters, which makes it possible for successful sub-networks to appear; such sub-networks are usually discovered via pruning.
Key Assumption 2: ImageNet Pre-Trained Convolution Channels are Good Feature Extractors
In particular, we need to assume that the base network that we adapt for transfer learning purposes is a high-quality network in terms of its convolution channels. We assume the features learned from training well-designed networks like ResNet and Inception on the ImageNet dataset are generally transferable. This assumption is confirmed by the work "What makes ImageNet good for transfer learning?" from Huh et al. [10]. ImageNet is confirmed to be a good dataset for producing general convolution feature extractors.
Key Assumption 3: Convolution Channels Can be Ranked and Pruned
In particular, we are assuming that not all the convolution channels are useful. Work on model compression and network pruning provides solid experimental evidence for this assumption.

Most of the work on convolution channel pruning involves heuristically defined channel ranking methods, for example: ranking channels by their ℓ1 norms [13], ranking channels by combinatorially determining their effect on the validation score [2], and, as the state-of-the-art ranking method, a first-order Taylor expansion with respect to the channel that closely matches the cost function of the original network [16]. Our approach is different. Instead of a human-defined heuristic based on norms or Taylor expansions, we want the network to figure out how to rank the filters by itself. We only provide the network with an objective (the classification score) and a set of resources (convolution filters).
Key Assumption 4: Attention Should Be Regularized
If we just assign a weight of 1 to each convolution filter, the network will move very slowly or even remain stagnant. An important lesson we take from Kim et al. about visual attention is the importance of regularized attention [12]. Although that work is about applying visual attention to self-driving cars, it provides important intuitions about regularizing visual attention and forcing it to look for new objects by tuning λ as the strength of regularization.

Our key take-away is that adding attention to a visual model is not enough. Especially in our model, where the attention captures the hierarchical relationships and strengths of the convolution channels, just adding a scalar weight of 1 and freezing the convolution filter weights is not good enough. We need to add some prior over the attention weights. We further experiment with and discuss the different forms of regularization in section 5.1.1.
Experimental Setup
Data Source and Pre-Processing
We chose a subset of the iNaturalist 2018 dataset, consisting of 11,156 images of 144 species of amphibians. This dataset resembles a specific fine-grained classification dataset typically encountered in real life. It has quite significant class imbalance for some classes, and a significantly different data distribution from that of ImageNet. The amphibians share many common features and can be extremely similar across species. Since they were usually photographed in their natural habitats, it is indeed a very hard dataset, yet general and diverse enough to represent a specific fine-grained classification task (example: Figure 1).

The images originally come in different sizes and shapes, and all of them are reshaped without cropping to 224x224 for ResNet models and 299x299 for Inception-V3. They are then normalized according to standard ImageNet practice, with µ = (0.485, 0.456, 0.406) and σ = (0.229, 0.224, 0.225).
Data augmentation was used to solve the problem of class imbalance and over-fitting in some experiments, which will be further discussed in section 5.1.2.
Data is split 7-1.5-1.5 for training, validation, and testing respectively.
Tools
We used PyTorch [19] as our machine learning library for this project. We chose PyTorch over TensorFlow and other popular libraries because of its flexibility: PyTorch uses a dynamic computational graph rather than a static one.

In addition, PyTorch provides a simple way to customize pre-trained models to facilitate our experiments. For example, to freeze/unfreeze a specific variable inside the model, one can simply set .requires_grad to the desired boolean value; adding an attention weight to a specific channel also only requires minor changes to the existing pre-trained models.
All the pre-trained models used in our experiments (ResNet-{34,50,101}, Inception-V3) come from PyTorch Model Zoo [20].
Models were mostly trained on EC2 instances using Tesla K80 GPUs, and some were trained on Pascal Titan X.
Baseline Model
We attempted different popular models pre-trained on ImageNet. Most experiments were done using ResNet 50, but ResNet 34, ResNet 101 and Inception-V3 were also used to investigate compatibility of our method with different pre-trained model structures, and model depths. More on this in section 5. We showed that our attention module and training strategies are generally applicable to models with different architectures and depths.
Evolution of our model
In order to solve the problem of filter redundancy in transfer learning on specific datasets, we first explored the idea of pruning, which ranks the convolutional filters based on some heuristic algorithm and removes the ones with less activation. Such a heuristic algorithm was proposed in [16], which uses a Taylor expansion to approximate the loss function and compares the loss when a certain filter is masked out to zero. This gives a heuristic estimate of the impact of that filter. However, we want to propose an end-to-end method that uses attention weights as a measure of the usefulness of a convolutional filter. The soft attention amplifies useful signals and suppresses redundant ones, which achieves an effect similar to pruning.
Final Model
Model Architecture
The final model has attention modules attached at the output of each of the convolutional layers of the pre-trained models. This is done by multiplying each pre-trained convolution filter by a trainable scalar weight initialized at 1.
The FC layers were first trained for one to two epochs to adjust the classifier to the fine-grained dataset, so that errors can be better back-propagated to the attention modules. Next we froze the entire network and trained only the attention modules, to identify the feature extractors in the pre-trained network that are useful for the specific fine-grained task. Training of the batchnorm layers was interspersed within training of the attention modules, in order to fight overfitting and adjust for variance shift.
Pseudocode
The core of our model boils down to the code sketch shown below, a cleaned-up version of our pseudocode written as a drop-in subclass of PyTorch's Conv2d:

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionConv2d(nn.Conv2d):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # One trainable scalar per (out, in) channel pair, filled with 1
        # so that the pre-trained filters are initially unchanged.
        self.attn_weights = nn.Parameter(
            torch.ones(self.out_channels, self.in_channels // self.groups, 1, 1))

    def forward(self, input):
        # "Pay attention": scale each pre-trained filter before convolving.
        attn_paid = self.weight * self.attn_weights
        return F.conv2d(input, attn_paid, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
Experiment Result and Discussion
In this section, we describe a set of experiments we ran concerning many facets of fine-grained transfer learning and the specifics of our method. For each experiment, we will describe our setup, our result, and our findings.
The investigation section contains experiments that ran successfully and worked as expected. The visualization section contains results from our experiments with the interpretability of attention models. The surprise section consists of experiments that either gave us surprising results or just did not work out. We wanted to separate useful filters from redundant ones, and thus applied three different losses to the attention weights. Initially, most attention weights clustered near 1 and were reluctant to differentiate the filters. The L1 or L2 norm of the attention weights was added to the loss function as regularization. The L1 norm resulted in a sparser and more polarized distribution of the attention weights, but L2 converged slightly faster. Both achieved similar performance in terms of accuracy.

$\ell_1(a_j) = \lVert a_j \rVert_1 \quad (1)$

$\ell_2(a_j) = \lVert a_j \rVert_2 \quad (2)$

In order to further help the attention weights diverge, we proposed a modified L2 loss based on the distance of each attention weight from 1, which also serves as regularization. This penalty scheme was the one most commonly used in our model.

$\ell_3(a_j) = -\lVert a_j - 1 \rVert_2^2 \quad (3)$
The loss function is thus:
$L = -\sum_{i=1}^{n} y^{(i)} \log p^{(i)} + \lambda \cdot \sum_{j=1}^{F} \ell(a_j) \quad (4)$

where $F$ is the number of filters in the convolutional layers, $a_j$ is the attention weight vector for filter $j$, and $\lambda$ is a hyperparameter.
As Figure 2 shows, both regularization schemes/priors lead to the expected weight distributions. The ℓ1 regularization puts a sparsity-inducing prior on the attention weights; the result shows many weights reduced close to zero. Surprisingly, there are a few attention weights that are still amplified (value > 1). We expect these weights to identify the subset of convolution channels that are useful for fine-grained classification on our dataset. The histogram for the ℓ2 attention loss shows a similar but smoother result: the distribution of weights gradually diverges from 1 and shifts towards either 0 or 2 according to their significance for this particular fine-grained classification task.
Our result is as shown in Table 1.
Does data augmentation help?
Data augmentation was done with a weighted sampler that samples from each class with probability inversely proportional to the number of training images in that class. This solves the problem of class imbalance and also serves as a regularizer to combat overfitting.
Our result is as shown in Table 2. Using data augmentation does improve the performance, and it works particularly well with attention modules.

Model         Top 1 Val   Top 3 Val   Best Epoch
ResNet34      39%         60%         9
ResNet50      43%         63%         10
ResNet101     47%         66%         10
Inception V3  41%         62.7%       5

Table 3. Results from different model architectures, using the same FFAAABAAABAA training scheme.
How Does Different Model Architecture Affect the Score?
Our result is as shown in Table 3. We expected the validation accuracy to be about the same across different depths of the network as well as different model architectures.

As the depth of ResNet increases, we see an increase in validation accuracy; this result is justified by the fact that there are many more convolution channels as the network goes deeper, as well as more interesting intermediate features ready to be combined.

If we switch the base architecture to a different, comparable ImageNet model, Inception-V3, the performance is not degraded. This shows that our method is compatible with any ImageNet model, as long as it contains convolutions.
Is The Accuracy Good Enough (Compared with 100-Hour Models)?
There is a baseline model [1] trained from scratch using Inception-V3 on a larger and a more complete dataset of animal and plant species, of which our amphibian dataset is a subset. It took 100 epochs for the model to achieve a final top 3 accuracy of 77%.
In comparison, our model, though trained on a more specific dataset, was able to achieve a top 3 validation accuracy as high as 70% in just under 15 epochs, which took around 13 minutes to train on a Pascal Titan X GPU.

Thus, the attention model is more suitable for classification tasks where high accuracy is not the absolute priority but fast training iteration and decent results are required. This makes it possible to train a large number of personalized models with personal data.
Interpretability and Visualization
In this section, we present two visualizations that showcase the effectiveness of our method for picking up good signals.
We visualize two channels (Figure 3, Figure 4) in the third-to-last convolution layer of ResNet 50 (res5c branch2a). The channels are the top-2 channels ranked by the "attention" weights on those channels after training the attention weights on ResNet50 for a few iterations. Convolution features are visualized by optimizing a random noise image to maximize the activation of a convolution channel (activation measured using the mean of the image, with Gaussian blur or bilateral filtering applied, as described in [17]). The images are sampled from the top-10 images that maximally activate the channels, ranked by the ℓ1 norm of the feature map, a metric inspired by the convolution filter pruning method in [14].

What do these visualizations show? Figure 3 shows a high-level feature for the spines of animals. The attention method is able to pick up the specific feature that significantly helps classification on this fine-grained dataset. We also include a negative image sample: the leaf structure strongly resembles the spine structure and therefore also activates this filter. Figure 4 shows another high-level feature. However, it is an auxiliary feature that helps identify the surrounding environment of different types of frogs. The vertical leaf structure was picked up as a strong signal that can assist the network in identifying species.
Surprise
Attention shape
To decide on the shape of the attention weights, we ran two sets of controlled experiments. For the "Out Only" method, we assigned the same attention weight to every input channel in the same 3D convolutional filter. For the "In × Out" method, we assigned a different weight to each input channel. As a concrete example, the first convolution layer of a ResNet-family model takes as input a depth-3 RGB representation of the image and outputs a depth-64 feature map; for method 1 ("Out Only") there are 64 attention weights in this layer, while method 2 ("In × Out") has 3 × 64 attention weights.
Comparing the number of parameters in the two variants (Table 4), we initially expected the second method to outperform the first by a large margin. However, our experiments reveal only a relatively small boost in performance. This is surprising, because we thought a more sophisticated attention scheme would result in drastically higher performance.
One possible explanation for this surprising result is the hierarchical connection between a single H × W convolution filter and its filter group, where all filters in the group collectively produce a single feature map:

$\text{out}(N_i, C_{\text{out}_j}) = \sum_{k=0}^{C_{\text{in}}-1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)$

where $\star$ is the convolution operator and $C_{\text{out}_j}$ is the $j$-th feature map. A single feature map is produced by summing the convolved results of every filter in the group via $\sum_{k=0}^{C_{\text{in}}-1}$.

We hypothesize there is a strong connection between the different filters in the same group, making the attention weights less effective.
Despite their similar performance, the "Out Only" method converges to the best model much faster than the "In × Out" method (5 versus 10 epochs).
Our result is as shown in Table 4.
Do Traditional Transfer Learning Methods Still Work?
We also explored different levels of 'cut', which are widely used in traditional transfer learning methods. We define the level of 'cut' as the number of layers that are not frozen during training. We trained different blocks of the pre-trained network for different numbers of epochs, including (1) the fully connected layers, (2) block 4 of ResNet, and (3) the entire network end to end. Method (1) gives us a good linear transformation from fixed high-level features (after average pooling) to class probabilities. However, it is prone to over-fitting within a few epochs. The best top 3 validation score we got is 46.6%.

Method (2) unfreezes the last large block of ResNet and attempts to re-train the high-level features. We found the accuracy to be slightly higher: 56.6%. Surprisingly, method (3), which uses the entire pre-trained network as initialization, achieves a final best validation score of 65.05%, which is better than the attention methods. However, this can be justified by the fact that all elements in every convolution filter are subject to optimization; amplifying a convolution activation by a scalar weight has less complexity than fine-tuning every filter.
Lesson Learned
We investigated the effectiveness of channel-wise attention in the context of transfer learning for fine-grained, task-specific classification. Compared to just re-training the fully connected layers, the attention modules are surprisingly effective at quickly identifying useful input and output convolution channels, raising the performance of the model to nearly 70% top 3 validation accuracy in under 12 epochs (an Inception-V3 model trained for 100 epochs achieved 77% top 3 accuracy). Our attention modules thus provide a fast and relatively high-performance route to transfer learning.

We ran different investigations to find the best training strategy, attention penalty, and level of cut, as well as the effect of data augmentation. We tried different training strategies using different combinations of FC, attention, and BN layers. We discovered that the FC layers quickly adjust the classifier to the fine-grained data distribution but overfit after a few epochs, whereas the attention layers effectively boost the accuracy within a few epochs once the FC layers are trained. After the attention layers are trained, however, the output feature maps are no longer well-behaved. When training the batchnorm layers interspersed between the attention layers, accuracy drops slightly due to their regularization effect, but they help the attention modules achieve better results.
Furthermore, the attention weights cluster around 1 when only the cross-entropy loss is used, but adding the attention penalty encourages them to diverge and form different distributions, which yields roughly equally good performance. We also found attention effective at picking out interpretable image features that help explain the network's decisions.
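A minimal sketch of a channel-wise attention layer together with one plausible form of the attention penalty; the exact penalty we used is not restated here, so the reward for deviating from 1 below is an assumption:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Scales each feature map by a learned scalar weight, initialized at 1."""
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(channels))

    def forward(self, x):  # x: (N, C, H, W)
        return x * self.weight.view(1, -1, 1, 1)

def attention_penalty(attn_layers, lam=1e-3):
    # Rewarding deviation from 1 pushes the weights to diverge rather than
    # cluster around 1 (assumed penalty form; see discussion above).
    return -lam * sum(((m.weight - 1.0) ** 2).mean() for m in attn_layers)

# total_loss = cross_entropy_loss + attention_penalty(attention_modules)
```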
Our project provides an intuitive approach to fine-grained transfer learning. We still have a long way to go in exploiting pre-trained networks and adapting them to fine-grained, or even personalized, computer vision models.
Future Work
There are multiple lines of future work possible beyond our current stage.
1. A natural continuation of the project is to prune convolution channels whose attention weights fall below a certain threshold, to reduce the size of the final model as well as to improve accuracy. The attention weights provide a good ranking of the features available for pruning.
2. Another direction is to further exploit the interpretability that the attention scheme brings to the table: the attention weights are highly interpretable.
3. In our current approach, we have to experiment with different architectures and training schemes to find a satisfactory configuration. A natural step beyond this project is to use reinforcement learning, especially meta-learning strategies, to let the model learn the best way of integrating attention modules given a set of pre-trained filters, with the objective of classifying fine-grained data correctly.
Team Contributions
Work was evenly split among the group. Simon came up with the initial idea; the group refined the idea and designed the experimental setups together. Each of us ran multiple controlled experiments so that we could efficiently allocate GPU usage. We all worked on the poster, final report, and code.
| 4,041 |
1809.05828
|
2891427406
|
Robotic grasp detection for novel objects is a challenging task, but for the last few years, deep learning based approaches have achieved remarkable performance improvements, up to 96.1% accuracy, with RGB-D data. In this paper, we propose fully convolutional neural network (FCNN) based methods for robotic grasp detection. Our methods also achieved state-of-the-art detection accuracy (up to 96.6%) with state-of-the-art real-time computation time for high-resolution images (6-20ms per 360x360 image) on the Cornell dataset. Due to FCNN, our proposed method can be applied to images of any size for detecting multigrasps on multiobjects. Proposed methods were evaluated using a 4-axis robot arm with a small parallel gripper and an RGB-D camera for grasping challenging small, novel objects. With accurate vision-robot coordinate calibration through our proposed learning-based, fully automatic approach, our proposed method yielded a 90% success rate.
|
Data-driven robotic grasp detection for novel objects has been investigated extensively @cite_14 . Saxena proposed a machine learning based method to rank the best graspable location over all candidate image patches from different locations @cite_12 . Jiang proposed a 5D robotic grasp representation and further improved on the work of Saxena by proposing a machine learning method to rank the best graspable image patch, whose representation includes orientation and gripper distance, among all candidates @cite_17 . The work of Jiang achieved a prediction accuracy of 60.5% with a computation time of 50 sec (50,000 ms) per image.
|
{
"abstract": [
"",
"We consider the problem of grasping novel objects, specifically objects that are being seen for the first time through vision. Grasping a previously unknown object, one for which a 3-d model is not available, is a challenging problem. Furthermore, even if given a model, one still has to decide where to grasp the object. We present a learning algorithm that neither requires nor tries to build a 3-d model of the object. Given two (or more) images of an object, our algorithm attempts to identify a few points in each image corresponding to good locations at which to grasp the object. This sparse set of points is then triangulated to obtain a 3-d location at which to attempt a grasp. This is in contrast to standard dense stereo, which tries to triangulate every single point in an image (and often fails to return a good 3-d model). Our algorithm for identifying grasp locations from an image is trained by means of supervised learning, using synthetic images for the training set. We demonstrate this approach on two robotic manipulation platforms. Our algorithm successfully grasps a wide variety of objects, such as plates, tape rolls, jugs, cellphones, keys, screwdrivers, staplers, a thick coil of wire, a strangely shaped power horn and others, none of which were seen in the training set. We also apply our method to the task of unloading items from dishwashers.",
"Given an image and an aligned depth map of an object, our goal is to estimate the full 7-dimensional gripper configuration—its 3D location, 3D orientation and the gripper opening width. Recently, learning algorithms have been successfully applied to grasp novel objects—ones not seen by the robot before. While these approaches use low-dimensional representations such as a ‘grasping point’ or a ‘pair of points’ that are perhaps easier to learn, they only partly represent the gripper configuration and hence are sub-optimal. We propose to learn a new ‘grasping rectangle’ representation: an oriented rectangle in the image plane. It takes into account the location, the orientation as well as the gripper opening width. However, inference with such a representation is computationally expensive. In this work, we present a two step process in which the first step prunes the search space efficiently using certain features that are fast to compute. For the remaining few cases, the second step uses advanced features to accurately select a good grasp. In our extensive experiments, we show that our robot successfully uses our algorithm to pick up a variety of novel objects."
],
"cite_N": [
"@cite_14",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2041376653",
"2123435073"
]
}
|
Real-Time, Highly Accurate Robotic Grasp Detection using Fully Convolutional Neural Networks with High-Resolution Images
|
Robot grasping of novel objects has been investigated extensively, but it is still a challenging, open problem in robotics. Humans instantly identify multiple grasping areas of novel objects (perception), almost instantly plan how to pick them up (planning), and then actually grasp them reliably (control). However, accurate robotic grasp detection, trajectory planning, and reliable execution are quite challenging for robots. As a first step, detecting robotic grasps accurately and quickly from imaging sensors (e.g., an RGB-D camera) is an important task for successful robotic grasping.
Robotic grasp detection or synthesis has been widely investigated for many years. Grasp synthesis is divided into analytical and empirical (or data-driven) methods [1] for known, familiar, and novel objects [2]. In particular, machine learning (non-deep learning) based approaches for robotic grasp detection have utilized data to learn discriminative features for a suitable grasp configuration and have yielded excellent performance at generating grasp locations [3], [4], [5]. A typical approach is to use a sliding window to select local image patches and evaluate their graspability, so that the image patch with the highest graspability score is chosen as the robotic grasp detection result. In 2011, one of the state-of-the-art graspability prediction accuracies without deep learning was 60.5%, and its computation time per image was very slow due to sliding windows (50 sec per image) [5].
Deep learning has been successful in computer vision applications such as image classification [6], [7] and object detection [8], [9]. Deep learning has also been utilized for robotic grasp detection and has achieved significant improvements over conventional methods. Lenz et al. proposed deep learning classifier based robotic grasp detection methods that achieved up to 73.9% (image-wise) and 75.6% (object-wise) prediction accuracy [10], [11]; however, the computation time per image was still slow (13.5 sec per image) due to sliding windows. Redmon et al. proposed deep learning regressor based grasp detection methods that yielded up to 88.0% (image-wise) and 87.1% (object-wise) accuracy with remarkably fast computation time (76 ms per image) [12]. Recently, Chu et al. proposed two-stage neural networks with a grasp region proposal network and robotic grasp detection networks, achieving up to 96.0% (image-wise) and 96.1% (object-wise) prediction accuracies [13]; however, the computation time increased slightly due to the region proposal network (120 ms per image). Real-time robotic grasp detection can be critical for applications with dynamic environments or dynamic objects. Thus, reducing computation time while maintaining high prediction accuracy is desirable.
In this paper, we propose novel fully convolutional neural network (FCNN) based methods for robotic grasp detection. Our proposed methods yielded state-of-the-art performance comparable to the work of Chu et al. [13], while their computation time is much faster for high-resolution images (360×360). Note that most deep learning based robotic grasp detection works, including [13], use 227×227 resized images. Our proposed methods can perform multiobject, multigrasp detection as shown in Fig. 1 (Left). Our proposed methods were evaluated with a 4-axis robot as shown in Fig. 1 (Right) and achieved a 90% success rate for real grasping tasks with novel objects. Since this small robot has a gripper with a maximum range of 27.5 mm, it was critical to accurately calibrate robotic grasp information against our vision system information. (Fig. 2: A typical multibox approach for robotic grasp detection. An input image is divided into an S×S grid and regression-based robotic grasp detection is performed on each grid box; the output with the highest grasp probability is selected as the final result. This approach can be applied to multiobject, multigrasp detection tasks.) We propose a simple learning-based vision-robot calibration method that achieved accurate calibration and robot grasping performance. Here is the summary of the contributions of this paper: 1) Newly proposed real-time, single-stage FCNN based robotic grasp detection methods that yielded state-of-the-art computation time for high-resolution images (360×360) while achieving comparable state-of-the-art prediction accuracies, especially for stricter performance metrics. For example, our method achieved 96.6% image-wise and 95.1% object-wise accuracy at 10 ms per high-resolution image, while the work of Chu et al. [13] achieved 96.0% image-wise and 96.1% object-wise accuracy at 120 ms per low-resolution image. In other words, our method yielded comparable accuracies with 12× faster computation than Chu et al. [13]. Our FCNN based methods can be applied to multigrasp, multiobject detection. 2) Our proposed methods were evaluated on real grasping tasks and yielded a 90.0% success rate with challenging small, novel objects and a small parallel gripper (max open width 27.5 mm). This was possible due to our proposed simple, fully automatic learning-based approach for vision-robot calibration, which achieved less than 1.5 mm calibration error, close to the vision resolution.
III. PROPOSED METHODS FOR ROBOTIC GRASPS
A. Problem Description
The goal of the problem is to predict 5D robotic grasp representations [5], [11] for multiple objects from a given color image (RGB) and possibly a depth image (RGB-D), where a 5D robotic grasp representation consists of location (x, y), orientation θ, gripper opening width w, and parallel gripper plate size h, as illustrated in Fig. 3 (a). Then, the 5D robotic grasp representation {x, y, θ, w, h} in the camera-based vision coordinate system must be transformed into a new 5D grasp representation {x̃, ỹ, θ̃, w̃, h̃} in the actual robot coordinate system so that it can be used for the actual robot grasping task.
B. Reparametrization of 5D Grasp Representation and Grasp Probability
MultiGrasp estimates the 5D grasp representation {x, y, θ, w, h} as well as a grasp probability (confidence) z for each grid cell, re-parameterizing θ as c = cos θ, s = sin θ [12]. In other words, 7 parameters {x, y, c, s, w, h, z} are directly estimated using deep learning based regressors in MultiGrasp. This approach has also been used in YOLO, an object detection deep network [21]. Inspired by YOLO9000, a better and faster deep network for object detection than YOLO [9], we propose the following re-parametrization of the 5D grasp representation and grasp probability for robotic grasp detection:
$$\{t_x, t_y, \theta, t_w, t_h, t_z\}, \quad x = \sigma(t_x) + c_x,\; y = \sigma(t_y) + c_y,\; w = p_w \exp(t_w),\; h = p_h \exp(t_h),\; z = \sigma(t_z)$$

Note that σ(·) is a sigmoid function, exp(·) is an exponential function, p_h, p_w are the pre-defined height and width of an anchor box, respectively, and (c_x, c_y) is the (known) location of the top-left corner of each grid cell. Thus, the deep neural network in our proposed methods estimates {t_x, t_y, θ, t_w, t_h, t_z} instead of {x, y, θ, w, h, z}. These parameters are illustrated in Fig. 3 (b). Note that x, y, w, h are normalized so that each grid cell has size 1 × 1. Lastly, the angle θ is modeled as a discrete value instead of a continuous one, which differs from MultiGrasp; this discretization of the angle in robotic grasp detection was also used in [20]. Thus, for a given (c_x, c_y), the range of (x, y) is c_x < x < c_x + 1, c_y < y < c_y + 1 due to the re-parametrization with sigmoid functions.
w, h coordinates in each cell (anchor box). The anchor box approach has also been useful for object detection [9], so we adopt it for our robotic grasp detection. Due to the re-parametrization using anchor boxes, estimating w, h is converted into estimating t_w, t_h, which are relative to the expected sizes of w, h, and then classifying the best grasp representation among all anchor box candidates. In other words, this re-parametrization turns the regression problems for w, h into regression + classification problems. We propose to use a set of 7 pre-defined anchor boxes.
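To make the re-parametrization concrete, here is a decoding sketch (NumPy; the anchor sizes p_w, p_h would come from the 7 pre-defined anchor boxes, which we do not restate here):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def decode_grasp(t_x, t_y, theta_logits, t_w, t_h, t_z, c_x, c_y, p_w, p_h):
    """Map raw network outputs for one grid cell / anchor to a 5D grasp."""
    x = sigmoid(t_x) + c_x                        # constrained to (c_x, c_x + 1)
    y = sigmoid(t_y) + c_y
    w = p_w * np.exp(t_w)                         # anchor-relative width
    h = p_h * np.exp(t_h)                         # anchor-relative height
    theta = np.argmax(theta_logits) * np.pi / 18  # discrete angles {0, pi/18, ..., pi}
    z = sigmoid(t_z)                              # grasp probability
    return x, y, theta, w, h, z
```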
C. Loss Function for Robotic Grasp Detection
We proposed a novel loss function for robotic grasp detection considering the following items.
Angle in each cell (discretization). MultiGrasp re-parameterized the angle θ with c = cos θ and s = sin θ, so that estimating c, s yields the estimate θ = arctan(s/c). Thus, MultiGrasp took a regression approach for θ. We propose to convert this regression problem into a classification problem over a finite number of angle candidates in [0, π]. Specifically, we model θ ∈ {0, π/18, . . . , π}. Along with data augmentation over different angles every epoch, we observed a substantial performance improvement. Similar angle discretization for robotic grasp detection was also used in [20].
Grasp probability (new ground truth). Predicting grasp probability is crucial for multibox approaches such as MultiGrasp. The conventional ground truth for grasp probability was 1 (graspable) or 0 (not graspable), as used in [12]. Inspired by YOLO9000, we propose to use IOU (Intersection Over Union, Jaccard index) as the ground truth for grasp probability:
$$z^g = \frac{|P \cap G|}{|P \cup G|} \qquad (1)$$
where P is the predicted grasp rectangle, G is the ground-truth grasp rectangle, and | · | denotes the area of a set. Proposed loss function. We propose to use the following cost function to train the robotic grasp detection networks described in the next subsection: for the output vector of the deep neural network (t_x, t_y, θ, t_w, t_h, t_z) and the ground truth {x^g, y^g, θ^g, w^g, h^g, z^g},
$$\begin{aligned} L(t_x, t_y, \theta, t_w, t_h, t_z) ={}& \lambda_{coord} \sum_{i=1}^{S^2} \sum_{j=1}^{A} m^{obj}_{ij} \left[ (x^g_i - x_i)^2 + (y^g_i - y_i)^2 \right] \\ &+ \lambda_{coord} \sum_{i=1}^{S^2} \sum_{j=1}^{A} m^{obj}_{ij} \left[ (w^g_{ij} - w_{ij})^2 + (h^g_{ij} - h_{ij})^2 \right] \\ &+ \lambda_{prob} \sum_{i=1}^{S^2} \sum_{j=1}^{A} m^{obj}_{ij} (z^g_i - z_i)^2 \\ &+ \lambda_{class} \sum_{i=1}^{S^2} \sum_{j=1}^{A} m^{obj}_{ij} \, \mathrm{CrossEntropy}(\theta^g_i, \theta_i) \end{aligned}$$

where x_i, y_i, w_ij, h_ij, z_i are functions of (t_x, t_y, t_w, t_h, t_z), respectively, S² is the number of grid cells, and A is the number of anchor boxes (7 in our case). We set λ_coord = 1, λ_prob = 5, and λ_class = 1, and set m^obj_ij = 1 if the ground truth (x^g, y^g) falls in the i-th cell and m^obj_ij = 0 otherwise.
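A sketch of this loss in PyTorch (tensor layout and dictionary names are ours; masking is applied per cell-anchor pair):

```python
import torch.nn.functional as F

def grasp_loss(pred, gt, mask, lam_coord=1.0, lam_prob=5.0, lam_class=1.0):
    """pred, gt: dicts of tensors flattened over the S^2 * A cell-anchor
    pairs; mask is m_ij, 1 where the ground-truth center falls in cell i."""
    coord = ((gt['x'] - pred['x']) ** 2 + (gt['y'] - pred['y']) ** 2
             + (gt['w'] - pred['w']) ** 2 + (gt['h'] - pred['h']) ** 2)
    prob = (gt['z'] - pred['z']) ** 2
    # theta is classified over the discrete angle candidates.
    cls = F.cross_entropy(pred['theta_logits'], gt['theta'], reduction='none')
    return (mask * (lam_coord * coord + lam_prob * prob + lam_class * cls)).sum()
```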
D. Proposed FCNN Architecture
We chose three well-known deep neural networks for image classification: AlexNet [6] (the base network for MultiGrasp [12]), Darknet-19 (similar to VGG-16 [24], which was used in [13], but with a much smaller memory requirement for similar performance) [9], and ResNet-50 [7] (the base network for [20], [13]). These pre-trained networks were modified to yield robotic grasp parameters, and their fully connected (FC) layers were replaced by 1×1 convolution layers to obtain an FCNN architecture, so that images of any size (e.g., high-resolution images) can be processed. Most previous robotic grasp detection methods use 227 × 227 resized images as input, but our proposed FCNN based methods can process higher-resolution images; we chose to process 360 × 360 images for grasp detection without resizing. Skip connections were also added so that fine-grained features can be used. For example, a passthrough layer was added between the final 3 × 3 × 512 layer and the second-to-last convolutional layer of Darknet-19, as illustrated in Fig. 4 [9]. Similarly, we added a skip connection for ResNet-50 between the convolutional layer right before the last max pooling layer and the detection layer. Unfortunately, we did not add a skip connection for AlexNet since the pre-trained network did not provide access to inner layers.
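The FC-to-1×1-convolution replacement is mechanical: each Linear weight matrix is reshaped into a 1×1 kernel. A sketch (the helper name is ours):

```python
import torch.nn as nn

def fc_to_conv1x1(fc: nn.Linear) -> nn.Conv2d:
    """Build a 1x1 convolution equivalent to a fully connected layer, so the
    head can slide over feature maps from inputs of any spatial size."""
    conv = nn.Conv2d(fc.in_features, fc.out_features, kernel_size=1)
    conv.weight.data = fc.weight.data.view(fc.out_features, fc.in_features, 1, 1)
    conv.bias.data = fc.bias.data.clone()
    return conv
```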
E. Learning-based Vision-Robot Calibration
For successful robot grasping, the accurately predicted 5D grasp representation {x, y, θ, w, h} in the vision coordinate system must be converted into the 5D grasp representation {x̃, ỹ, θ̃, w̃, h̃} in the actual robot coordinate system, taking the gripper configuration into account. Thus, accurate calibration between the vision and robot coordinate systems is critical for robotic grasping. Our robot is equipped with a gripper whose maximum open distance w is 27.5 mm. In order to grasp small objects whose widths are 10-20 mm, the calibration error between vision and robot coordinates should be at most 1-2 mm.
We propose a learning-based, fully automatic vision-robot calibration method, as illustrated in Fig. 5: (1) a small known object (round in our case) is placed at a known location, (2) the robot moves the object to a random location, (3) the robot places the object, (4) the robot moves out of the field of view, (5) the vision system predicts the 5D grasp representation, and (6) the procedure is repeated to collect many samples. Then, the 5D grasp representations in the vision and robot coordinate systems can be mapped using linear or nonlinear regression, or simple nonlinear neural networks. For simplicity, we calibrated only x, y with an affine transformation using LASSO [25], assuming known w (the maximum open width of the gripper), known h (fixed gripper), and a relatively good tolerance for θ. The ranges of x, y in our robot coordinate system are 150 to 326 mm and -150 to 150 mm, respectively, and the ranges of x, y in our vision coordinate system are 160 to 290 pixels and 50 to 315 pixels, respectively. One pixel corresponds to about 1.35 × 1.13 mm². Fig. 6 shows that the calibration error (in mm) generally decreases as the number of samples increases, and that the error falls below 1.5 mm, close to one pixel in vision, with more than 40 samples. Note that since there are 6 LASSO coefficients for mapping x, y, theoretically only 3 points should suffice to determine all 6 coefficients; in practice, however, many more samples are necessary to ensure good calibration accuracy. This result implies that using high-resolution images is important for successful grasping due to the potential for high calibration accuracy.
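A sketch of this calibration step (assuming scikit-learn; variable names are ours). Lasso supports multi-output regression, so both robot coordinates are fit at once, giving the 6 affine coefficients (a 2×2 matrix plus a 2-vector intercept):

```python
from sklearn.linear_model import Lasso

def fit_affine_calibration(vision_xy, robot_xy, alpha=1e-3):
    """vision_xy: (n, 2) pixel coordinates; robot_xy: (n, 2) mm coordinates
    collected by the automatic procedure above."""
    model = Lasso(alpha=alpha)
    model.fit(vision_xy, robot_xy)   # intercept gives the affine offset
    return model

# calib = fit_affine_calibration(vision_xy, robot_xy)
# x_mm, y_mm = calib.predict([[px, py]])[0]
```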
IV. EXPERIMENTS AND EVALUATION
A. Evaluation with Cornell Dataset
We performed benchmarks using the Cornell grasp detection dataset [10], [11], as shown in Fig. 7. This dataset consists of 855 images (RGB color and depth) of 240 different objects, with ground-truth labels of a few graspable rectangles and a few not-graspable rectangles per image. Note that we cropped images to 360×360 but did not resize them to 224×224. Five-fold cross validation was performed, and the average prediction accuracy was reported for image-wise and object-wise splits. A detection is considered successful when the difference between the output orientation θ and the ground-truth orientation θ^g is less than 30° and the IOU (Jaccard index) in Eq. (1) is larger than a certain threshold (e.g., 0.25, 0.3). The same accuracy metric has been used in previous works [11], [12], [18].
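A sketch of this success criterion (assuming the shapely library for rectangle intersection; grasps are given as (x, y, w, h, θ)):

```python
import numpy as np
from shapely import affinity
from shapely.geometry import Polygon

def grasp_polygon(x, y, w, h, theta):
    rect = Polygon([(-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2)])
    rect = affinity.rotate(rect, np.degrees(theta), origin=(0, 0))
    return affinity.translate(rect, x, y)

def is_success(pred, gt, iou_thresh=0.25):
    """Success if the angle difference is below 30 degrees and the Jaccard
    index of the two rectangles, Eq. (1), exceeds the threshold."""
    if abs(pred[4] - gt[4]) >= np.radians(30):
        return False
    p, g = grasp_polygon(*pred), grasp_polygon(*gt)
    return p.intersection(g).area / p.union(g).area > iou_thresh
```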
All proposed methods were implemented in PyTorch and trained for 500 epochs with data augmentation, which took about 4 hours of training. For fair comparison, we implemented the work of Lenz et al. [10], [11] and MultiGrasp [12] using MATLAB or TensorFlow; they achieved performance and computation times similar to those reported in their original papers. All algorithms were tested on a platform with a single GPU (NVIDIA GeForce GTX1080Ti), a single CPU (Intel i7-7700K 4.20GHz), and 32GB memory.
B. Evaluation with 4-axis Robot Arm and RGB-D
We also evaluated our proposed methods with a small 4-axis robot arm (Dobot Magician, Shenzhen YueJiang Tech Co., Ltd, China, Fig. 1 (Right)) and an RGB-D camera (Intel RealSense D435, Intel, USA) positioned so that its field of view includes the robot and its workspace from the top. Six novel objects (toothbrush, candy, earphone cap, cable, styrofoam bowl, L-wrench) were used for real grasp tasks, as shown in Fig. 8. After our learning-based vision-robot calibration, 5 repetitions were performed for each object. If the robot arm holds an object for more than 3 sec, it is counted as a successful grasp.
V. RESULTS
A. Evaluation Results on Cornell Dataset
Table I summarizes all evaluation results on the Cornell robotic grasp dataset for our proposed methods. Our proposed methods yielded state-of-the-art performance, up to 96.6% prediction accuracy for the image-wise split under any metric, with state-of-the-art computation times of 3-20 ms. For the object-wise split, our proposed methods yielded comparable results for the more tolerant metrics (25%, 30%) and state-of-the-art performance for the stricter metrics (35%, 40%), demonstrating that our methods yield highly accurate grasp detection with true real-time computation. The results in Table I also indicate the importance of a good deep network (Darknet, ResNet over AlexNet), of the re-parametrization (Offset), and of high-resolution input images for better performance. Fig. 9 illustrates some of these points qualitatively: using a low-resolution image and/or a simple network architecture tends to miss small graspable candidates, as indicated by the missing small graspable areas around the neck of the shoe.
B. Evaluation Results with 4-Axis Robot Arm
Fig. 10 illustrates our robot grasp experiment with the "candy" object. While previous methods, and our method at low image resolution, tend to grasp the candy part, our proposed method yielded grasp areas around the stick part of the candy, and our robot actually grasped it as shown in the figure. Table II summarizes our robot experiments, showing that our proposed method with high resolution yielded a 90% grasp success rate while the other methods yielded 53% or less.
VI. CONCLUSIONS
We proposed real-time, highly accurate robotic grasp detection methods that yielded state-of-the-art prediction accuracies with state-of-the-art computation times. We also demonstrated the high accuracy of our proposed methods in real grasping experiments. (Fig. 9: Grasp detection results with different image resolutions, data types, and deep networks. All methods were able to detect large grasp areas, but the methods with a small deep network and/or low image resolution missed some small grasp areas.)
| 3,069 |
1809.05828
|
2891427406
|
Robotic grasp detection for novel objects is a challenging task, but for the last few years, deep learning based approaches have achieved remarkable performance improvements, up to 96.1% accuracy, with RGB-D data. In this paper, we propose fully convolutional neural network (FCNN) based methods for robotic grasp detection. Our methods also achieved state-of-the-art detection accuracy (up to 96.6%) with state-of-the-art real-time computation time for high-resolution images (6-20ms per 360x360 image) on the Cornell dataset. Due to FCNN, our proposed method can be applied to images of any size for detecting multigrasps on multiobjects. Proposed methods were evaluated using a 4-axis robot arm with a small parallel gripper and an RGB-D camera for grasping challenging small, novel objects. With accurate vision-robot coordinate calibration through our proposed learning-based, fully automatic approach, our proposed method yielded a 90% success rate.
|
Redmon proposed a deep learning regressor based robotic grasp detection method built on AlexNet @cite_19 that yielded 84.4% accuracy with fast computation time (76 ms per image) @cite_11 . When performing robotic grasp regression and object classification together, an image-wise prediction accuracy of 85.5% was achieved. Kumra also proposed a real-time regression based grasp detection method using ResNet @cite_16 , especially for multimodal information (RGB-D). Their method yielded up to 89.2% accuracy with fast computation time (103 ms per image) @cite_23 .
|
{
"abstract": [
"",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Deep learning has significantly advanced computer vision and natural language processing. While there have been some successes in robotics using deep learning, it has not been widely adopted. In this paper, we present a novel robotic grasp detection system that predicts the best grasping pose of a parallel-plate robotic gripper for novel objects using the RGB-D image of the scene. The proposed model uses a deep convolutional neural network to extract features from the scene and then uses a shallow convolutional neural network to predict the grasp configuration for the object of interest. Our multi-modal model achieved an accuracy of 89.21 on the standard Cornell Grasp Dataset and runs at real-time speeds. This redefines the state-of-the-art for robotic grasp detection.",
"We present an accurate, real-time approach to robotic grasp detection based on convolutional neural networks. Our network performs single-stage regression to graspable bounding boxes without using standard sliding window or region proposal techniques. The model outperforms state-of-the-art approaches by 14 percentage points and runs at 13 frames per second on a GPU. Our network can simultaneously perform classification so that in a single step it recognizes the object and finds a good grasp rectangle. A modification to this model predicts multiple grasps per object by using a locally constrained prediction mechanism. The locally constrained model performs significantly better, especially on objects that can be grasped in a variety of ways."
],
"cite_N": [
"@cite_19",
"@cite_16",
"@cite_23",
"@cite_11"
],
"mid": [
"",
"2949650786",
"2557924869",
"2950988471"
]
}
|
Real-Time, Highly Accurate Robotic Grasp Detection using Fully Convolutional Neural Networks with High-Resolution Images
|
| 3,069 |
1809.06025
|
2892244049
|
We study the problem of visibility-based exploration, reconstruction and surveillance in the context of supervised learning. Using a level set representation of data and information, we train a convolutional neural network to determine vantage points that maximize visibility. We show that this method drastically reduces the on-line computational cost and determines a small set of vantage points that solve the problem. This enables us to efficiently produce highly-resolved and topologically accurate maps of complex 3D environments. We present realistic simulations on 2D and 3D urban environments.
|
The surveillance problem is related to the art gallery problem in computational geometry, where the task is to determine the minimum set of guards who can together observe a polygonal gallery. Vertex guards must be stationed at the vertices of the polygon, while point guards can be anywhere in the interior. For simply-connected polygonal scenes, Chvátal showed that @math vertex guards, where @math is the number of vertices, are sometimes necessary and always sufficient @cite_21 . For polygonal scenes with @math holes, @math point guards are sufficient @cite_27 @cite_15 . However, determining the optimal set of observers is NP-complete @cite_1 @cite_17 @cite_29 .
|
{
"abstract": [
"We study the computational complexity of the art gallery problem originally posed by Klee, and its variations. Specifically, the problem of determining the minimum number of vertex guards that can see an n -wall simply connected art gallery is shown to be NP-hard. The proof can be modified to show that the problems of determining the minimum number of edge guards and the minimum number of point guards in a simply connected polygonal region are also NP-hard. As a byproduct, the problem of decomposing a simple polygon into a minimum number of star-shaped polygons such that their union is the original polygon is also shown to be NP-hard.",
"",
"In 1973, Victor Klee posed the following question: How many guards are necessary, and how many are sufficient to patrol the paintings and works of art in an art gallery with n walls? This wonderfully naive question of combinatorial geometry has, since its formulation, stimulated a plethora of papers, surveys and a book, most of them written in the last fifteen years. The first result in this area, due to V. Chvatal, asserts that n 3 guards are occasionally necessary and always sufficient to guard an art gallery represented by a simple polygon with n vertices. Since ChvataFs result, numerous variations on the art gallery problem have been studied, including mobile guards, guards with limited visibility or mobility, illumination of families of convex sets on the plane, guarding of rectilinear polygons, and others. In this paper, we survey most of these results.",
"In this paper we consider the problem of placing guards to supervise an art gallery with holes. No gallery withn vertices andh holes requires more than [(n+h) 3] guards. For some galleries this number of guards is necessary. We present an algorithm which places the [(n+h) 3] guards inO(n2) time.",
"Art gallery problems which have been extensively studied over the last decade ask how to station a small (minimum) set of guards in a polygon such that every point of the polygon is watched by at least one guard. The graph-theoretic formulation and solution to the gallery problem for polygons in standard form is given. A complexity analysis is carried out, and open problems are discussed. >",
"The inherent computational complexity of polygon decomposition problems is of theoretical interest to researchers in the field of computational geometry and of practical interest to those working in syntactic pattern recognition. Three polygon decomposition problems are shown to be NP-hard and thus unlikely to admit efficient algorithms. The problems are to find minimum decompositions of a polygonal region into (perhaps overlapping) convex, star-shaped, or spiral subsets. We permit the polygonal region to contain holes. The proofs are by transformation from Boolean three-satisfiability, a known NP-complete problem. Several open problems are discussed."
],
"cite_N": [
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_27",
"@cite_15",
"@cite_17"
],
"mid": [
"2063442090",
"2087422927",
"2502636724",
"2030535045",
"2126369940",
"1981988453"
]
}
|
Greedy Algorithms for Sparse Sensor Placement via Deep Learning
|
We consider the problem of generating a minimal sequence of observing locations to achieve complete line-of-sight visibility coverage of an environment.
In particular, we are interested in the case when the environment is initially unknown. This is particularly useful for autonomous agents mapping out unknown, or otherwise unreachable, environments such as undersea caverns.
Military personnel may avoid dangerous situations by sending autonomous agents to scout new territory. We first assume the environment is known in order to gain insights.
Consider a domain Ω ⊆ R^d. Partition the domain Ω = Ω_free ∪ Ω_obs into an open set Ω_free representing the free space and a closed set Ω_obs of finite obstacles without holes. We will refer to Ω_obs as the environment, since it is characterized by the obstacles. Let x_i ∈ Ω_free be a vantage point, from which a range sensor, such as LiDAR, takes omnidirectional measurements P_{x_i} : S^{d−1} → R. That is, P_{x_i} outputs the distance to the closest obstacle in each direction on the unit sphere. One can map the range measurements to the visibility set V_{x_i}; points in V_{x_i} are visible from x_i:
$$x \in V_{x_i} \ \text{ if } \ \|x - x_i\|_2 < P_{x_i}\!\left( \frac{x - x_i}{\|x - x_i\|_2} \right) \qquad (1)$$
As more range measurements are acquired, Ω_free can be approximated by the cumulatively visible set Ω_k:

$$\Omega_k = \bigcup_{i=0}^{k} V_{x_i} \qquad (2)$$
By construction, Ω_k admits a partial ordering: Ω_{i−1} ⊂ Ω_i. For suitable choices of x_i, it is possible that Ω_n → Ω_free (say, in the Hausdorff distance).
We aim at determining a minimal set of vantage points O from which every x ∈ Ω_free can be seen. One may formulate a constrained optimization problem and look for sparse solutions. When the environment is known, we have the surveillance problem:

$$\min_{O \subseteq \Omega_{free}} |O| \quad \text{subject to} \quad \Omega_{free} = \bigcup_{x \in O} V_x \qquad (3)$$
When the environment is not known a priori, the agent must be careful to avoid collisions with obstacles. Each new vantage point must be a point that is currently visible, that is, x_{k+1} ∈ Ω_k. Define the set of admissible sequences:

$$\mathcal{A}(\Omega_{free}) := \{(x_0, \ldots, x_{n-1}) \mid n \in \mathbb{N},\; x_0 \in \Omega_{free},\; x_{k+1} \in \Omega_k\} \qquad (4)$$
For the unknown environment, we have the exploration problem:
$$\min_{O \in \mathcal{A}(\Omega_{free})} |O| \quad \text{subject to} \quad \Omega_{free} = \bigcup_{x \in O} V_x \qquad (5)$$
The problem is feasible as long as obstacles do not have holes.
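On a discretized 2D grid, the visibility set of Eq. (1) can be approximated by ray casting. A minimal NumPy sketch (our own discretization, not the level set machinery used in the paper):

```python
import numpy as np

def visibility(occ, x0, y0, n_rays=720, step=0.5):
    """occ: 2D boolean occupancy grid (True = obstacle). Returns a boolean
    mask of cells visible from (x0, y0), by marching rays until they hit
    an obstacle or leave the domain."""
    vis = np.zeros_like(occ, dtype=bool)
    H, W = occ.shape
    for ang in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dx, dy = np.cos(ang), np.sin(ang)
        x, y = float(x0), float(y0)
        while 0 <= int(x) < H and 0 <= int(y) < W and not occ[int(x), int(y)]:
            vis[int(x), int(y)] = True
            x, y = x + step * dx, y + step * dy
    return vis
```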
The approach of Bai et al. [1] terminates when there is no occlusion within view of the agent, even if the global map is still incomplete. Tai and Liu [30,31,20] train agents to learn obstacle avoidance.
Our work uses a gain function to steer a greedy approach, similar to the next-best-view algorithms. However, our measure of information gain takes the geometry of the environment into account. By taking advantage of precomputation via convolutional neural networks, our model learns shape priors for a large class of obstacles and is efficient at runtime. We use a volumetric representation which can handle arbitrary geometries in 2D and 3D. Also, we assume that the sensor range is larger than the domain, which makes the problem more global and challenging.
Greedy algorithm
We propose a greedy approach which sequentially determines a new vantage point, x_{k+1}, based on the information gathered from all previous vantage points x_0, x_1, · · · , x_k. The strategy is greedy because x_{k+1} is chosen as a location that maximizes the information gain.
For the surveillance problem, the environment is known. We define the gain function:
$$g(x; \Omega_k) := |V_x \cup \Omega_k| - |\Omega_k| \qquad (6)$$
i.e., the volume of the region that is visible from x but not from x_0, x_1, · · · , x_k. Note that g depends on Ω_obs, which we omit for clarity of notation. The next vantage point should be chosen to maximize the newly-surveyed volume. We define the greedy surveillance algorithm as:

$$x_{k+1} = \arg\max_{x \in \Omega_{free}} g(x; \Omega_k) \qquad (7)$$
The problem of exploration is even more challenging since, by definition, the environment is not known. Subsequent vantage points must lie within the current visible set Ω_k. The corresponding greedy exploration algorithm is

$$x_{k+1} = \arg\max_{x \in \Omega_k} g(x; \Omega_k) \qquad (8)$$
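Combining the visibility sketch above with Eqs. (6) and (8) gives a brute-force version of the greedy exploration loop (the paper replaces this expensive gain evaluation with the learned g_θ):

```python
import numpy as np

def greedy_explore(occ, start, n_steps):
    """Greedy exploration, Eq. (8): each new vantage point is the currently
    visible cell with the largest gain g(x; Omega_k), Eq. (6)."""
    omega = visibility(occ, *start)             # cumulative visible set Omega_0
    vantage = [start]
    for _ in range(n_steps):
        best_gain, best_pt, best_vis = 0, None, None
        for i, j in zip(*np.nonzero(omega)):    # admissible: x_{k+1} in Omega_k
            v = visibility(occ, i, j)
            gain = np.count_nonzero(v & ~omega)
            if gain > best_gain:
                best_gain, best_pt, best_vis = gain, (i, j), v
        if best_pt is None:                     # no remaining gain: done
            break
        vantage.append(best_pt)
        omega |= best_vis
    return vantage, omega
```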
However, we remark that in practice one is typically interested only in a subset S of all possible environments, S ⊆ {Ω_obs | Ω_obs ⊆ R^d}. For example, cities generally follow a grid-like pattern. Knowing these priors can help guide our estimate of g for certain types of Ω_obs, even when Ω_obs is initially unknown.
We propose to encode these priors formally into the parameters, θ, of a learned function:
$$g_\theta(x; \Omega_k, B_k) \ \text{ for } \ \Omega_{obs} \in S \qquad (9)$$
where B_k is the part of ∂Ω_k that may actually lie in the free space Ω_free:

$$B_k = \partial\Omega_k \setminus \Omega_{obs} \qquad (10)$$
See Figure 2 for an example gain function. We shall demonstrate that, while training g_θ, incorporating the shadow boundaries helps, in some sense, localize the learning of g, and is essential for producing a usable g_θ.
A bound for the known environment
We present a bound on the optimality of the greedy algorithm, based on submodularity [14], a useful property of set functions. We start with standard definitions. Let V be a finite set and f : 2^V → R be a set function which assigns a value to each subset S ⊆ V.
Definition 2.1. (Monotonicity) A set function f is monotone if for every A ⊆ B ⊆ V, f(A) ≤ f(B).

Definition 2.2. (Discrete derivative) The discrete derivative of f at S with respect to v ∈ V is Δ_f(v|S) := f(S ∪ {v}) − f(S).

Definition 2.3. (Submodularity) A set function f is submodular if for every A ⊆ B ⊆ V and v ∈ V \ B, Δ_f(v|A) ≥ Δ_f(v|B).
In other words, set functions are submodular if they have diminishing returns. More details and extensions of submodularity can be found in [14].
Lemma 2.1. The function f is monotone.
Proof. Consider A ⊆ B ⊆ Ω_free. Since f is the cardinality of unions of sets, we have
f(B) = |⋃_{x∈B} V_x| = |⋃_{x∈A∪(B\A)} V_x| ≥ |⋃_{x∈A} V_x| = f(A).
Lemma 2.2. The function f is submodular.
Proof. Let A ⊆ B ⊆ Ω_free and v ∈ Ω_free \ B. Since cardinality satisfies |X| + |Y| ≥ |X ∪ Y| + |X ∩ Y|, and (A ∪ {v}) ∩ B = A, we have
|⋃_{x∈A∪{v}} V_x| + |⋃_{x∈B} V_x| ≥ |⋃_{x∈A∪{v}∪B} V_x| + |⋃_{x∈(A∪{v})∩B} V_x| = f(B ∪ {v}) + f(A).
Rearranging, we have
f(A ∪ {v}) + f(B) ≥ f(B ∪ {v}) + f(A)
f(A ∪ {v}) − f(A) ≥ f(B ∪ {v}) − f(B)
Δ_f(v|A) ≥ Δ_f(v|B).
Submodularity and monotonicity enable a bound which compares the relative performance of the greedy algorithm to the optimal solution.
Theorem 2.3. Let O*_k be the optimal set of k sensors. Let O_n = {x_i}_{i=1}^n be the set of n sensors placed using the greedy surveillance algorithm (7). Then,
f(O_n) ≥ (1 − e^{−n/k}) f(O*_k).
Proof. For l < n we have
f(O*_k) ≤ f(O*_k ∪ O_l)   (12)
= f(O_l) + Δ_f(O*_k | O_l)   (13)
= f(O_l) + Σ_{i=1}^k Δ_f(x*_i | O_l ∪ {x*_1, . . . , x*_{i−1}})   (14)
≤ f(O_l) + Σ_{i=1}^k Δ_f(x*_i | O_l)   (15)
≤ f(O_l) + Σ_{i=1}^k [f(O_{l+1}) − f(O_l)]   (16)
= f(O_l) + k [f(O_{l+1}) − f(O_l)].   (17)
Line (12) follows from monotonicity, (15) follows from submodularity of f , and (16) from definition of the greedy algorithm. Define
δ_l := f(O*_k) − f(O_l), with δ_0 := f(O*_k). Then
f(O*_k) − f(O_l) ≤ k [f(O_{l+1}) − f(O_l)]
δ_l ≤ k (δ_l − δ_{l+1})
δ_l (1 − k) ≤ −k δ_{l+1}
δ_l (1 − 1/k) ≥ δ_{l+1}.
Expanding the recurrence relation with δ n , we have
δ_n ≤ (1 − 1/k) δ_{n−1} ≤ (1 − 1/k)^n δ_0 = (1 − 1/k)^n f(O*_k).
Finally, substituting back the definition for δ n , we have the desired result:
δ_n ≤ (1 − 1/k)^n f(O*_k)
f(O*_k) − f(O_n) ≤ (1 − 1/k)^n f(O*_k)
f(O*_k) [1 − (1 − 1/k)^n] ≤ f(O_n)
f(O*_k) (1 − e^{−n/k}) ≤ f(O_n),   (18)
where (18) follows from the inequality 1 − x ≤ e^{−x}.
In particular, if n = k, then (1 − e^{−1}) ≈ 0.63. This means that k steps of the greedy algorithm are guaranteed to cover at least 63% of the total volume, if the optimal solution can also be obtained with k steps. When n = 3k, the greedy algorithm covers at least 95% of the total volume. In [22], it was shown that no polynomial time algorithm can achieve a better bound.
A bound for the unknown environment
When the environment is not known, subsequent vantage points must lie within the current visible set to avoid collision with obstacles:
x k+1 ∈ V(O k )(19)
Thus, the performance of the exploration algorithm has a strong dependence on the environment Ω obs and the initial vantage point x 1 . We characterize this dependence using the notion of the exploration ratio.
Given an environment Ω obs and A ⊆ Ω free , consider the ratio of the marginal value of the greedy exploration algorithm, to that of the greedy surveillance algorithm:
ρ(A) := [ sup_{x∈V(A)} Δ_f(x|A) ] / [ sup_{x∈Ω_free} Δ_f(x|A) ].   (20)
That is, ρ(A) characterizes the relative gap caused by the collision-avoidance constraint x ∈ V(A). Let A_x = {A ⊆ Ω_free | x ∈ A}
be the set of vantage points which contain x. Define the exploration ratio as
ρ x := inf A∈Ax ρ(A).(21)
The exploration ratio is the worst-case gap between the two greedy algorithms, conditioned on x. It helps to provide a bound for the difference between the optimal solution set of size k and the one prescribed by n steps of the greedy exploration algorithm.
Theorem 2.4. Let O*_k = {x*_i}_{i=1}^k be the optimal sequence of k sensors which includes x*_1 = x_1. Let O_n = {x_i}_{i=1}^n be the sequence of n sensors placed using the greedy exploration algorithm (8). Then, for k, n > 1:
f(O_n) ≥ [1 − e^{−(n−1)ρ_{x_1}/(k−1)} (1 − f(x_1)/f(O*_k))] f(O*_k).
This is reminiscent of Theorem 2.3, with two subtle differences: the rate in the exponent is scaled by the exploration ratio ρ_{x_1}, and the bound depends on the volume f(x_1) visible from the initial vantage point.
Proof. We have, for l < n:
f(O*_k) ≤ f(O*_k ∪ O_l)
= f(O_l) + Δ_f(O*_k | O_l)
= f(O_l) + Σ_{i=1}^k Δ_f(x*_i | O_l ∪ {x*_1, . . . , x*_{i−1}})   (22)
≤ f(O_l) + Σ_{i=1}^k Δ_f(x*_i | O_l)   (23)
= f(O_l) + Δ_f(x*_1 | O_l) + Σ_{i=2}^k Δ_f(x*_i | O_l)
= f(O_l) + Σ_{i=2}^k Δ_f(x*_i | O_l)   (24)
≤ f(O_l) + Σ_{i=2}^k max_{x∈Ω_free} Δ_f(x | O_l)
≤ f(O_l) + (1/ρ_{x_1}) Σ_{i=2}^k max_{x∈V(O_l)} Δ_f(x | O_l)   (25)
≤ f(O_l) + (1/ρ_{x_1}) Σ_{i=2}^k [f(O_{l+1}) − f(O_l)]   (26)
= f(O_l) + ((k−1)/ρ_{x_1}) [f(O_{l+1}) − f(O_l)].
Line (22) is a telescoping sum, (23) follows from submodularity of f , (24) uses the fact that x * 1 ∈ O l , (25) follows from the definition of ρ x 1 and (26) stems from the definition of the greedy exploration algorithm (8).
As before, define δ_l := f(O*_k) − f(O_l). However, this time, note that δ_1 := f(O*_k) − f(O_1) = f(O*_k) − f(x_1). Then
f(O*_k) − f(O_l) ≤ ((k−1)/ρ_{x_1}) [f(O_{l+1}) − f(O_l)]
δ_l ≤ ((k−1)/ρ_{x_1}) (δ_l − δ_{l+1})
δ_l (1 − ρ_{x_1}/(k−1)) ≥ δ_{l+1}.
Expanding the recurrence relation with δ n , we have
δ_n ≤ (1 − ρ_{x_1}/(k−1)) δ_{n−1} ≤ (1 − ρ_{x_1}/(k−1))^{n−1} δ_1 = (1 − ρ_{x_1}/(k−1))^{n−1} [f(O*_k) − f(x_1)].
Now, substituting back the definition for δ n , we arrive at
δ_n ≤ (1 − ρ_{x_1}/(k−1))^{n−1} [f(O*_k) − f(x_1)]
f(O*_k) − f(O_n) ≤ (1 − ρ_{x_1}/(k−1))^{n−1} [f(O*_k) − f(x_1)]
[f(O*_k) − f(x_1)] − [f(O_n) − f(x_1)] ≤ (1 − ρ_{x_1}/(k−1))^{n−1} [f(O*_k) − f(x_1)]
[f(O*_k) − f(x_1)] [1 − (1 − ρ_{x_1}/(k−1))^{n−1}] ≤ f(O_n) − f(x_1)
[f(O*_k) − f(x_1)] [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] ≤ f(O_n) − f(x_1).
Finally, with some more algebra,
f(O_n) ≥ f(x_1) + [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] [f(O*_k) − f(x_1)]
= [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] f(O*_k) + f(x_1) e^{−(n−1)ρ_{x_1}/(k−1)}
= [1 − e^{−(n−1)ρ_{x_1}/(k−1)} (1 − f(x_1)/f(O*_k))] f(O*_k).
Exploration ratio example
We demonstrate an example where ρ x can be an arbitrarily small factor that is determined by the geometry of Ω free . Figure 3 depicts an illustration of the setup for the narrow alley environment.
Consider a domain Ω = [0, 1] × [0, 1] with a thin vertical wall of width ε ≪ 1, whose centerline stretches from (3ε/2, 0) to (3ε/2, 1). A narrow opening of size ε² × ε is centered at (3ε/2, 1/2). Suppose
x_1 = x*_1 = A, so that f({x_1}) = ε + O(ε²),
where the ε² factor is due to the small sliver of the narrow alley visible from A. The next vantage point must satisfy x_2 ∈ V(x_1) = [0, ε] × [0, 1]. One possible location is x_2 = B.
Then, after 2 steps of the greedy algorithm, we have
f (O 2 ) = ε + O(ε 2 ).
Meanwhile, the total visible area is
f (O * 2 ) = 1 − O(ε)
and the ratio of greedy to optimal area coverage is
f(O_2) / f(O*_2) = (ε + O(ε²)) / (1 − O(ε)) = O(ε).   (27)
The exploration ratio is ρ_{x_1} = O(ε²), since
max_{x∈V({x_1})} Δ_f(x | {x_1}) = O(ε²) and max_{x∈Ω_free} Δ_f(x | {x_1}) = 1 − O(ε).   (28)
According to the bound, with k = n = 2, we should have
f(O_2) / f(O*_2) ≥ 1 − e^{−(n−1)ρ_{x_1}/(k−1)} (1 − f(x_1)/f(O*_2)) = 1 − e^{−O(ε²)} (1 − O(ε)) = Ω(ε),   (29)
which reflects what we see in (27). Now suppose instead that the initial vantage point is placed in the large chamber to the right of the wall, so that f(x_1) = 1 − O(ε) and ρ_{x_1} = 1, since both the greedy exploration and surveillance steps coincide.
According to the bound, with k = n = 2, we should have
f(O_2) / f(O*_2) ≥ 1 − e^{−(n−1)ρ_{x_1}/(k−1)} (1 − f(x_1)/f(O*_2)) ≥ 1 − O(ε),   (30)
which is the case, since f(O_2) = f(O*_2).
By considering the first vantage point x_1 as part of the bound, we account for some of the unavoidable uncertainties associated with unknown environments during exploration.
Numerical comparison
We compare both greedy algorithms on random arrangements of up to 6 circular obstacles. Each algorithm starts from the same initial position and runs until all free area is covered. We record the number of vantage points required over 200 runs for each number of obstacles.
Surprisingly, the exploration algorithm sometimes requires fewer vantage points than the surveillance algorithm. Perhaps the latter is too aggressive, or perhaps the collision-avoidance constraint acts as a regularizer. For example, when there is a single circle, the greedy surveillance algorithm places the second vantage point x_2 on the opposite side of this obstacle. This may lead to two slivers of occlusion forming on either side of the circle, which will require 2 additional vantage points to cover. With the greedy exploration algorithm, we do not have this problem, due to the collision-avoidance constraint. Figure 4 shows selected examples with 1 and 5 obstacles. Figure 5 shows the histogram of the number of steps needed for each algorithm. On average, both algorithms require a similar number of steps, but the exploration algorithm has a slight advantage.
Approximating the gain function
In this section, we discuss the method for approximating the gain function when the map is not known. Given the set of previously-visited vantage points, we compute the cumulative visibility and shadow boundaries. We approximate the gain function by applying the trained neural network on this pair of inputs, and pick the next point according to (7). This procedure repeats until there are no shadow boundaries or occlusions.
The data needed for the training and evaluation of g θ are computed using level sets [26,28,25]. Occupancy grids may be applicable, but we choose level sets since they have proven to be accurate and robust. In particular, level sets are necessary for subpixel resolution of shadow boundaries and they allow for efficient visibility computation, which is crucial when generating the library of training examples.
The training geometry is embedded by a signed distance function, denoted by φ. For each vantage point x i , the visibility set is represented by the level set function ψ(·, x i ), which is computed efficiently using the algorithm described in [34].
In the calculus of level set functions, unions and intersections of sets are translated, respectively, into taking the maximum and minimum of the corresponding characteristic functions. The cumulatively visible sets Ω_k are represented by the level set function Ψ_k(x), which is defined recursively by
Ψ_0(x) = ψ(x, x_0),   (31)
Ψ_k(x) = max{Ψ_{k−1}(x), ψ(x, x_k)}, k = 1, 2, . . . ,   (32)
where the max is taken point-wise. Thus we have
Ω_free = {x | φ(x) > 0},   (33)
V_{x_i} = {x | ψ(x, x_i) > 0},   (34)
Ω_k = {x | Ψ_k(x) > 0}.   (35)
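As a small illustration of (31)–(35), the following hedged sketch treats discretized level set functions as numpy arrays; the array representation and function names are our assumptions. The pointwise maximum implements the union of visible sets.

```python
import numpy as np

def update_cumulative(Psi_prev, psi_new):
    """Eq. (32): the union of visible sets corresponds to the pointwise
    maximum of the associated level set functions."""
    return np.maximum(Psi_prev, psi_new)

def member(levelset, idx):
    """Eqs. (33)-(35): a grid point belongs to the set iff the level set
    function is positive there."""
    return levelset[idx] > 0.0
```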
The shadow boundaries B k are approximated by the "smeared out" function:
b_k(x) := δ_ε(Ψ_k) · [1 − H(G_k(x))],   (36)
where H(x) is the Heaviside function and
δ_ε(x) = (2/ε) cos²(πx/ε) · 1_{[−ε/2, ε/2]}(x),   (37)
γ(x, x_0) = (x_0 − x)ᵀ · ∇φ(x),   (38)
G_0(x) = γ(x, x_0),   (39)
G_k(x) = max{G_{k−1}(x), γ(x, x_k)}, k = 1, 2, . . .   (40)
Recall, the shadow boundaries are the portion of ∂Ω_k that lies in free space; the role of 1 − H(G_k) is to mask out the portion of the obstacles that is currently visible from {x_i}_{i=1}^k. See Figure ?? for an example of γ. In our implementation, we take ε = 3Δx, where Δx is the grid node spacing. We refer the readers to [35] for a short review of relevant details.
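A sketch of (36)–(40) on a uniform grid might look as follows; the finite-difference gradient and the array-based masking are our assumptions, not the authors' code.

```python
import numpy as np

def smeared_delta(Psi, eps):
    """Eq. (37): cos^2 approximation of the delta function with support
    [-eps/2, eps/2]."""
    return np.where(np.abs(Psi) <= eps / 2.0,
                    (2.0 / eps) * np.cos(np.pi * Psi / eps) ** 2, 0.0)

def gamma(phi, x0, grid_coords):
    """Eq. (38): (x0 - x)^T ∇φ(x), with ∇φ from finite differences.
    grid_coords = np.meshgrid(*[np.arange(n) for n in phi.shape], indexing='ij')."""
    grads = np.gradient(phi)
    return sum((x0[i] - grid_coords[i]) * grads[i] for i in range(len(grads)))

def shadow_boundary(Psi, G, dx):
    """Eq. (36): b_k = delta_eps(Psi_k) * [1 - H(G_k)], with eps = 3*dx.
    G_k > 0 marks obstacle boundary that is directly visible, masked out here."""
    eps = 3.0 * dx
    H_G = (G > 0.0).astype(float)   # Heaviside of G_k
    return smeared_delta(Psi, eps) * (1.0 - H_G)
```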
When the environment Ω_obs is known, we can compute the gain function exactly:
g(x; Ω_k) = ∫ H( H(ψ(ξ, x)) − H(Ψ_k(ξ)) ) dξ.   (41)
We remark that the integrand is 1 exactly where the new vantage point uncovers something not previously seen. Computing g for all x is costly; each visibility and volume computation requires O(m^d) operations, and repeating this for all points in the domain results in O(m^{2d}) total flops. We approximate it with a function g_θ parameterized by θ:
g θ (x; Ψ k , φ, b k ) ≈ g(x; Ω k ).(42)
If the environment is unknown, we directly approximate the gain function by learning the parameters θ of a function
g θ (x; Ψ k , b k ) ≈ g(x; Ω k )H(Ψ k )(43)
using only the observations as input. Note the H(Ψ_k) factor is needed for collision avoidance during exploration because it is not known a priori whether an occluded location y is part of an obstacle or free space. Thus g_θ(y) must be zero.
Training procedure
We sample the environments uniformly from a library. For each Ω_obs, a sequence of data pairs is generated and included into the training set T:
({Ψ_k, b_k}, g(x; Ω_k) H(Ψ_k)), k = 0, 1, 2, . . .   (44)
A naive approach would sample arbitrary sequences of vantage points consisting of k steps. Instead, to generate causally relevant data, we use an ε-greedy approach: we uniformly sample initial positions. With probability ε, the next vantage point is chosen randomly from the admissible set. With probability 1 − ε, the next vantage point is chosen according to (7). Figure 6 shows an illustration of the generation of causal data along the subspace of relevant shapes.
Figure 6: Causal data generation along the subspace of relevant shapes. Each dot is a data sample corresponding to a sequence of vantage points.
The function g_θ is learned by minimizing the empirical loss across all data pairs for each Ω_obs in the training set T:
argmin_θ (1/N) Σ_{Ω_obs ∈ T} Σ_k L( g_θ(x; Ψ_k, b_k), g(x; Ω_k) H(Ψ_k) ),   (45)
where N is the total number of data pairs. We use the cross entropy loss function:
L(p, q) = −∫ [ p(x) log q(x) + (1 − p(x)) log(1 − q(x)) ] dx.   (46)
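For instance, a training step implementing (45)–(46) could be written as below; this is a sketch in PyTorch, and the tensor layout as well as the normalization of the exact gain to [0, 1] are assumptions of ours.

```python
import torch
import torch.nn.functional as F

def training_step(model, Psi_k, b_k, target_gain, optimizer):
    """One step of (45)-(46): binary cross entropy between the predicted and
    exact (masked) gain maps. Inputs are [batch, 1, H, W] tensors; the exact
    gain g * H(Psi_k) is assumed to be normalized to [0, 1]."""
    inputs = torch.cat([Psi_k, b_k], dim=1)   # two input channels
    pred = model(inputs)                      # sigmoid output in [0, 1]
    loss = F.binary_cross_entropy(pred, target_gain)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```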
Network architecture
We use convolutional neural networks (CNNs) to approximate the gain function, which depends on the shape of Ω obs and the location x. CNNs have been used to approximate functions of shapes effectively in many applications. Their feedforward evaluations are efficient if the off-line training cost is ignored. The gain function g(x) does not depend directly on x, but rather,
x's visibility of Ω_free, with a domain of dependence bounded by the sensor range. We employ a fully convolutional approach for learning g, which makes the network applicable to domains of different sizes. The generalization to 3D is also straightforward.
We base the architecture of the CNN on U-Net [27], which has had great success in dense inference problems, such as image segmentation. It aggregates information from various layers in order to have wide receptive fields while maintaining pixel precision. The main design choice is to make sure that the receptive field of our model is sufficient. That is, we want to make sure that the value predicted at each voxel depends on a sufficiently large neighborhood. For efficiency, we use convolution kernels of size 3 in each dimension. By stacking multiple layers, we can achieve large receptive fields.
Thus the complexity for feedforward computations is linear in the total number of grid points.
Define a conv block as the following layers: convolution, batch norm, leaky relu, stride 2 convolution, batch norm, and leaky relu. Each conv block reduces the image size by a factor of 2. The latter half of the network increases the image size using deconv blocks: bilinear 2x upsampling, convolution, batch norm, and leaky relu.
Our 2D network uses 6 conv blocks followed by 6 deconv blocks, while our 3D network uses 5 of each block. We choose the number of blocks to ensure that the receptive field is at least the size of the training images: 128 × 128 and 64 × 64 × 64. The first conv block outputs 4 channels. The number of channels doubles with each conv block, and halves with each deconv block.
The network ends with a single channel, kernel of size 1 convolution layer followed by the sigmoid activation. This ensures that the network aggregates all information into a prediction of the correct size and range.
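A hedged PyTorch rendering of these blocks is given below; the leaky-ReLU slope and the omission of the U-Net skip-connection wiring are simplifications of ours, not details from the paper.

```python
import torch.nn as nn

def conv_block(c_in, c_out):
    """Conv block as described: conv, BN, leaky ReLU, stride-2 conv, BN,
    leaky ReLU. Halves the spatial size; callers double the channel count."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out), nn.LeakyReLU(0.1),
        nn.Conv2d(c_out, c_out, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(c_out), nn.LeakyReLU(0.1),
    )

def deconv_block(c_in, c_out):
    """Deconv block: bilinear 2x upsampling, conv, BN, leaky ReLU."""
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out), nn.LeakyReLU(0.1),
    )

# Final head: single-channel, kernel-size-1 convolution plus sigmoid.
head = nn.Sequential(nn.Conv2d(4, 1, kernel_size=1), nn.Sigmoid())
```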
Numerical results
We present some experiments to demonstrate the efficacy of our approach.
Also, we demonstrate its limitations. First, we train on 128 × 128 aerial city blocks cropped from INRIA Aerial Image Labeling Dataset [21]. It contains binary images with building labels from several urban areas, including
Austin, Chicago, Vienna, and Tyrol. We train on all the areas except Austin, which we hold out for evaluation. We call this model City-CNN. Second, we train a similar model, NoSB-CNN, on the same training data, but omit the shadow boundary from the input. Third, we train another model, Radial-CNN, on synthetically-generated radial maps, such as the one in Figure 13.
Given a map, we randomly select an initial location. In order to generate the sequence of vantage points, we apply (7), using g_θ in place of g. Ties are broken by choosing the closest point to x_k. We repeat this process until there are no shadow boundaries, the gain function is smaller than ε, or the residual is less than δ, where the residual is defined as:
r = |Ω_free \ Ω_k| / |Ω_free|.   (47)
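The stopping test can be summarized by a short sketch; representing Ω_free and Ω_k as boolean masks is an assumption of ours.

```python
import numpy as np

def should_stop(gain_pred, Omega_free, Omega_k, eps=0.1, delta=1e-3):
    """Terminate the rollout when the maximum predicted gain falls below eps
    or the residual (47) drops below delta."""
    residual = np.sum(Omega_free & ~Omega_k) / np.sum(Omega_free)
    return gain_pred.max() < eps or residual < delta
```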
We compare these against the algorithm which uses the exact gain function, which we call Exact. We also compare against Random, a random walker, which chooses subsequent vantage points uniformly from the visible region, and Random-SB, which samples points uniformly in a small neighborhood of the shadow boundaries. We analyze the number of steps required to cover the scene and the residual as a function of the number of steps. The algorithm is robust to the initial positions. Figure 9 shows the distribution of the number of steps and residual across over 800 runs from varying initial positions over a 512 × 512 Austin map. In practice, using the shadow boundaries as a stopping criterion can be unreliable. Due to numerical precision and discretization effects, the shadow boundaries may never completely disappear. Instead, the algorithm terminates when the maximum predicted gain falls below a certain threshold ε. In this example, we used ε = 0.1.
Empirically, this strategy is robust. On average, the algorithm required 33
vantage points to reduce the occluded region to within 0.1% of the explorable area. Figure 10 shows an example sequence consisting of 36 vantage points.
Each subsequent step is generated in under 1 sec using the CPU and instantaneously with a GPU.
Even when the maximizer of the predicted gain function is different from that of the exact gain function, the difference in gain is negligible. This is evident when we see the residuals for City-CNN decrease at similar rates to Exact. Figure 11 demonstrates an example of the residual as a function of the number of steps for one such sequence generated by these algorithms on a 1024 × 1024 map of Austin. We see that City-CNN performs comparably to the Exact approach in terms of residual. However, City-CNN takes 140 secs to generate 50 steps on the CPU, while Exact, an O(m^4) algorithm, takes more than 16 hours to produce 50 steps.
Effect of shadow boundaries
The inclusion of the shadow boundaries as input to the CNN is critical for the algorithm to work. Without the shadow boundaries, the algorithm may choose a vantage point that results in no change to the cumulative visibility. At the next iteration, the input is the same as in the previous iteration, and the result will be the same; the algorithm becomes stuck in a cycle. To avoid this, we prevent vantage points from repeating by zeroing out the gain function at that point and recomputing the argmax. Still, the vantage points tend to cluster near flat edges, as in Figure 12. This clustering behavior causes the NoSB-CNN model to be, at times, worse than Random. See Figure 11 for how the clustering inhibits the reduction of the residual.
Effect of shape
The shape of the obstacles, i.e. Ω_obs, used in training affects the gain function predictions. Figure 13 compares the gain functions produced by City-CNN and Radial-CNN.
Frequency map
Here we present one of our studies concerning the exclusivity of vantage point placements in Ω. We generated sequences of vantage points starting from over 800 different initial conditions using the City-CNN model on a 512 × 512 Austin map. Then, we model each vantage point as a Gaussian with fixed width, and overlay the resulting distribution on the Austin map in Figure 14.
This gives us a frequency map of the most recurring vantage points. These hot spots reveal regions that are more secluded and therefore, the visibility of those regions is more sensitive to vantage point selection. The efficiency of the CNN method allows us to address many surveillance related questions for a large collection of relevant geometries.
Art gallery
Our proposed approach outperforms the computational geometry solution [23] to the art gallery problem, even though we do not assume the environment is known. The key issue with computational geometry approaches is that they are heavily dependent on the triangulation. In an extreme example, consider an art gallery that is a simple convex n-gon. Even though it is sufficient to place a single vantage point anywhere in the interior of the room, the triangulation-based approach produces a solution with ⌊n/3⌋ vertex guards. Figure 15 shows an example gallery consisting of 58 vertices. The computational geometry approach requires ⌊n/3⌋ = 19 vantage points to completely cover the scene, even if point guards are used [5,12]. The gallery contains r = 19 reflex angles, so the method of [8] requires r + 1 = 20 vantage points.
On average, City-CNN requires only 8 vantage points.
3D environment
We present a 3D simulation of a 250m × 250m environment based on Castle Square Parks in Boston. See Figure 16 for snapshots of the algorithm in action.
The map is discretized as a level set function on a 768 × 768 × 64 voxel grid. At this resolution, small pillars are accurately reconstructed by our exploration algorithm. Each step can be generated in 3 seconds using the GPU or 300 seconds using the CPU. Parallelization of the distance function computation will further reduce the computation time significantly. A map of this size was previously infeasible. Lastly, Figure 17 shows snapshots from the exploration of a more challenging, cluttered 3D scene with many nooks.
Figure 15: Comparison of the computational geometry approach and the City-CNN approach to the art gallery problem. The red circles are the vantage points computed by the methods. Left: A result computed by the computational geometry approach, given the environment. Right: An example sequence of 7 vantage points generated by the City-CNN model.
Conclusion
From the perspective of inverse problems, we proposed a greedy algorithm for autonomous surveillance and exploration. We show that this formulation can be well-approximated using convolutional neural networks, which learn geometric priors for a large class of obstacles. The inclusion of shadow boundaries, computed using the level set method, is crucial for the success of the algorithm. One of the advantages of using the gain function (6), an integral quantity, is its stability with respect to noise in positioning and sensor measurements. In practice, we envision that it can be used in conjunction with SLAM algorithms [7,2] for a wide range of real-world applications.
One may also consider n-step greedy algorithms, where n vantage points are chosen simultaneously. However, being more greedy is not necessarily better. If the performance metric is the cardinality of the solution set, then it is not clear that multi-step greedy algorithms lead to smaller solutions.
We saw in section 2 that, even for the single circular obstacle, the greedy surveillance algorithm may sometimes require more steps than the exploration algorithm to attain complete coverage.
If the performance metric is based on the rate in which the objective function increases, then a multi-step greedy approach would be appropriate. However, on a grid with m nodes in d dimensions, there are O(m nd ) possible combinations. For each combination, computing the visibility and gain function requires O(nm d ) cost. In total, the complexity is O(nm d(n+1) ), which is very expensive, even when used for offline training of a neural network. In such cases, it is necessary to selectively sample only the relevant combinations. One such way to do that, is through a tree search algorithm.
| 5,988 |
1809.06025
|
2892244049
|
We study the problem of visibility-based exploration, reconstruction and surveillance in the context of supervised learning. Using a level set representation of data and information, we train a convolutional neural network to determine vantage points that maximize visibility. We show that this method drastically reduces the on-line computational cost and determines a small set of vantage points that solve the problem. This enables us to efficiently produce highly-resolved and topologically accurate maps of complex 3D environments. We present realistic simulations on 2D and 3D urban environments.
|
The authors of @cite_25 propose an alternating minimization scheme for optimizing the visibility of @math observers. The authors of @cite_6 use a system of differential equations to optimize the location and orientation of @math sensors to maximize surveillance. Both works assume the number of sensors is given.
|
{
"abstract": [
"The visibility level set function introduced by allows for gradient based and variational formulations of many classical visibility optimization problems. In this work we propose solutions to two such problems. The first asks where to position n-observers such that the area visible to these observers is maximized. The second problem is to determine the shortest route an observer should take through a map such that every point in the map is visible from at least one vantage point on the route. These problems are similar to the art gallery and watchman route problems, respectively. We propose a greedy iterative algorithm, formulated in the level set framework as the solution to the art gallery problem. We also propose a variational solution to the watchman route problem which achieves complete visibility coverage of the domain while attaining a local minimum of path length.",
"We propose a computational method to optimally position a sensor system. Each sensor has limited range and viewing angle, and it may fail with a certain failure rate. The goal is to find the optimal locations as well as the viewing directions of all the sensors and achieve the maximal surveillance of the known environment. We set up the problem using the level set framework. Both the environment and the viewing range of the sensors are represented by level set functions. Then we solve a system of ordinary differential equations (ODEs) to find the optimal viewing directions and locations of all sensors together. Furthermore, we use the intermittent diffusion, which converts the ODEs into stochastic differential equations, to find the global maximum of the total surveillance area. The numerical examples include various failure rates of sensors, different rates of importance of the surveillance region, and 3-dimensional setups. They show the effectiveness of the proposed method."
],
"cite_N": [
"@cite_25",
"@cite_6"
],
"mid": [
"2329411194",
"2962862065"
]
}
|
Greedy Algorithms for Sparse Sensor Placement via Deep Learning
|
We consider the problem of generating a minimal sequence of observing locations to achieve complete line-of-sight visibility coverage of an environment.
In particular, we are interested in the case when the environment is initially unknown. This is particularly useful for autonomous agents to map out unknown, or otherwise unreachable, environments, such as undersea caverns.
Military personnel may avoid dangerous situations by sending autonomous agents to scout new territory. We first assume the environment is known in order to gain insights.
Consider a domain Ω ⊆ R^d. Partition the domain Ω = Ω_free ∪ Ω_obs into an open set Ω_free representing the free space, and a closed set Ω_obs of finite obstacles without holes. We will refer to Ω_obs as the environment, since it is characterized by the obstacles. Let x_i ∈ Ω_free be a vantage point, from which a range sensor, such as LiDAR, takes omnidirectional measurements P_{x_i} : S^{d−1} → R. That is, P_{x_i} outputs the distance to the closest obstacle for each direction on the unit sphere. One can map the range measurements to the visibility set V_{x_i}; points in V_{x_i} are visible from x_i:
x ∈ V_{x_i} if ‖x − x_i‖₂ < P_{x_i}( (x − x_i) / ‖x − x_i‖₂ ).   (1)
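As a small sanity check of (1), the following sketch tests membership in V_{x_i} given a callable range measurement P; the callable interface is an assumed representation, not part of the paper.

```python
import numpy as np

def in_visibility_set(x, x_i, P):
    """Eq. (1): x is visible from x_i iff its distance is below the range
    measurement in the direction of x. `P` maps a unit vector to a range."""
    d = np.asarray(x, dtype=float) - np.asarray(x_i, dtype=float)
    dist = np.linalg.norm(d)
    if dist == 0.0:
        return True   # the vantage point sees itself
    return dist < P(d / dist)
```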
As more range measurements are acquired, Ω free can be approximated by the cumulatively visible set Ω k :
Ω_k = ⋃_{i=0}^k V_{x_i}.   (2)
By construction, Ω_k admits a partial ordering: Ω_{i−1} ⊂ Ω_i. For suitable choices of x_i, it is possible that Ω_n → Ω_free (say, in the Hausdorff distance).
We aim at determining a minimal set of vantage points O from which every x ∈ Ω_free can be seen. One may formulate a constrained optimization problem and look for sparse solutions. When the environment is known, we have the surveillance problem:
min_{O ⊆ Ω_free} |O| subject to Ω_free = ⋃_{x∈O} V_x.   (3)
When the environment is not known a priori, the agent must be careful to avoid collision with obstacles. Each new vantage point must be a point that is currently visible, that is, x_{k+1} ∈ Ω_k. Define the set of admissible sequences:
A(Ω_free) := {(x_0, . . . , x_{n−1}) | n ∈ ℕ, x_0 ∈ Ω_free, x_{k+1} ∈ Ω_k}.   (4)
For the unknown environment, we have the exploration problem:
min_{O ∈ A(Ω_free)} |O| subject to Ω_free = ⋃_{x∈O} V_x.   (5)
The problem is feasible as long as obstacles do not have holes.
The approach of Bai et al. [1] terminates when there is no occlusion within view of the agent, even if the global map is still incomplete. Tai and Liu [30,31,20] train agents to learn obstacle avoidance.
Our work uses a gain function to steer a greedy approach, similar to the next-best-view algorithms. However, our measure of information gain takes the geometry of the environment into account. By taking advantage of precomputation via convolutional neural networks, our model learns shape priors for a large class of obstacles and is efficient at runtime. We use a volumetric representation which can handle arbitrary geometries in 2D and 3D. Also, we assume that the sensor range is larger than the domain, which makes the problem more global and challenging.
Greedy algorithm
We propose a greedy approach which sequentially determines a new vantage point, x k+1 , based on the information gathered from all previous vantage points, x 0 , x 1 , · · · , x k . The strategy is greedy because x k+1 would be a location that maximizes the information gain.
For the surveillance problem, the environment is known. We define the gain function:
g(x; Ω k ) := |V x ∪ Ω k | − |Ω k |,(6)
i.e. the volume of the region that is visible from x but not from x 0 , x 1 , · · · , x k .
Note that g depends on Ω_obs, which we omit for clarity of notation. The next vantage point should be chosen to maximize the newly-surveyed volume. We define the greedy surveillance algorithm as:
x k+1 = arg max x∈Ω free g(x; Ω k ).(7)
The problem of exploration is even more challenging since, by definition, the environment is not known. Subsequent vantage points must lie within the current visible set Ω k . The corresponding greedy exploration algorithm is
x k+1 = arg max x∈Ω k g(x; Ω k ).(8)
However, we remark that in practice, one is typically interested only in a subset S of all possible environments {Ω_obs | Ω_obs ⊆ R^d}.
For example, cities generally follow a grid-like pattern. Knowing these priors can help guide our estimate of g for certain types of Ω obs , even when Ω obs is unknown initially.
We propose to encode these priors formally into the parameters, θ, of a learned function:
g θ (x; Ω k , B k ) for Ω obs ∈ S,(9)
where B k is the part of ∂Ω k that may actually lie in the free space Ω free :
B k = ∂Ω k \Ω obs .(10)
See Figure 2 for an example gain function. We shall demonstrate that while training for g_θ, incorporating the shadow boundaries helps, in some sense, localize the learning of g, and is essential in creating a usable g_θ.
A bound for the known environment
We present a bound on the optimality of the greedy algorithm, based on submodularity [14], a useful property of set functions. We start with standard definitions. Let V be a finite set and f : 2^V → R be a set function which assigns a value to each subset S ⊆ V. In our setting, f(S) := |⋃_{x∈S} V_x| is the volume jointly visible from the vantage points in S.
Definition 2.1 (Monotonicity). A set function f is monotone if for every A ⊆ B ⊆ V, f(A) ≤ f(B).
Definition 2.2 (Discrete derivative). The discrete derivative of f at S with respect to v ∈ V is Δ_f(v|S) := f(S ∪ {v}) − f(S).
Definition 2.3 (Submodularity). A set function f is submodular if for every A ⊆ B ⊆ V and v ∈ V \ B, Δ_f(v|A) ≥ Δ_f(v|B).
In other words, set functions are submodular if they have diminishing returns. More details and extensions of submodularity can be found in [14].
Lemma 2.1. The function f is monotone.
Proof. Consider A ⊆ B ⊆ Ω_free. Since f is the cardinality of unions of sets, we have
f(B) = |⋃_{x∈B} V_x| = |⋃_{x∈A∪(B\A)} V_x| ≥ |⋃_{x∈A} V_x| = f(A).
Lemma 2.2. The function f is submodular.
Proof. Let A ⊆ B ⊆ Ω_free and v ∈ Ω_free \ B. Since cardinality satisfies |X| + |Y| ≥ |X ∪ Y| + |X ∩ Y|, and (A ∪ {v}) ∩ B = A, we have
|⋃_{x∈A∪{v}} V_x| + |⋃_{x∈B} V_x| ≥ |⋃_{x∈A∪{v}∪B} V_x| + |⋃_{x∈(A∪{v})∩B} V_x| = f(B ∪ {v}) + f(A).
Rearranging, we have
f(A ∪ {v}) + f(B) ≥ f(B ∪ {v}) + f(A)
f(A ∪ {v}) − f(A) ≥ f(B ∪ {v}) − f(B)
Δ_f(v|A) ≥ Δ_f(v|B).
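The diminishing-returns property is easy to check empirically; the toy example below (our construction, with random "visibility sets" over a 1000-cell domain) verifies Δ_f(v|A) ≥ Δ_f(v|B) for a nested pair A ⊆ B.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random "visibility sets": 12 vantage points, each seeing 50 of 1000 cells.
V = {i: set(rng.choice(1000, size=50, replace=False)) for i in range(12)}
f = lambda S: len(set().union(*(V[i] for i in S))) if S else 0

A, B, v = {0, 1}, {0, 1, 2, 3}, 5   # A ⊆ B, v ∉ B
dA = f(A | {v}) - f(A)              # Δ_f(v|A)
dB = f(B | {v}) - f(B)              # Δ_f(v|B)
assert dA >= dB                     # diminishing returns (Lemma 2.2)
```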
Submodularity and monotonicity enable a bound which compares the relative performance of the greedy algorithm to the optimal solution.
Theorem 2.3. Let O*_k be the optimal set of k sensors. Let O_n = {x_i}_{i=1}^n be the set of n sensors placed using the greedy surveillance algorithm (7). Then,
f(O_n) ≥ (1 − e^{−n/k}) f(O*_k).
Proof. For l < n we have
f(O*_k) ≤ f(O*_k ∪ O_l)   (12)
= f(O_l) + Δ_f(O*_k | O_l)   (13)
= f(O_l) + Σ_{i=1}^k Δ_f(x*_i | O_l ∪ {x*_1, . . . , x*_{i−1}})   (14)
≤ f(O_l) + Σ_{i=1}^k Δ_f(x*_i | O_l)   (15)
≤ f(O_l) + Σ_{i=1}^k [f(O_{l+1}) − f(O_l)]   (16)
= f(O_l) + k [f(O_{l+1}) − f(O_l)].   (17)
Line (12) follows from monotonicity, (15) follows from submodularity of f , and (16) from definition of the greedy algorithm. Define
δ_l := f(O*_k) − f(O_l), with δ_0 := f(O*_k). Then
f(O*_k) − f(O_l) ≤ k [f(O_{l+1}) − f(O_l)]
δ_l ≤ k (δ_l − δ_{l+1})
δ_l (1 − k) ≤ −k δ_{l+1}
δ_l (1 − 1/k) ≥ δ_{l+1}.
Expanding the recurrence relation with δ n , we have
δ_n ≤ (1 − 1/k) δ_{n−1} ≤ (1 − 1/k)^n δ_0 = (1 − 1/k)^n f(O*_k).
Finally, substituting back the definition for δ n , we have the desired result:
δ_n ≤ (1 − 1/k)^n f(O*_k)
f(O*_k) − f(O_n) ≤ (1 − 1/k)^n f(O*_k)
f(O*_k) [1 − (1 − 1/k)^n] ≤ f(O_n)
f(O*_k) (1 − e^{−n/k}) ≤ f(O_n),   (18)
where (18) follows from the inequality 1 − x ≤ e^{−x}.
In particular, if n = k, then (1 − e^{−1}) ≈ 0.63. This means that k steps of the greedy algorithm are guaranteed to cover at least 63% of the total volume, if the optimal solution can also be obtained with k steps. When n = 3k, the greedy algorithm covers at least 95% of the total volume. In [22], it was shown that no polynomial time algorithm can achieve a better bound.
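The guarantees quoted above follow directly from the 1 − e^{−n/k} factor; a few lines reproduce the numbers.

```python
import math

# Coverage fraction guaranteed by Theorem 2.3 when n = r * k greedy steps
# are taken and the optimum needs k sensors.
for r in (1, 2, 3):
    print(r, round(1 - math.exp(-r), 3))   # 0.632, 0.865, 0.95
```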
A bound for the unknown environment
When the environment is not known, subsequent vantage points must lie within the current visible set to avoid collision with obstacles:
x k+1 ∈ V(O k )(19)
Thus, the performance of the exploration algorithm has a strong dependence on the environment Ω obs and the initial vantage point x 1 . We characterize this dependence using the notion of the exploration ratio.
Given an environment Ω obs and A ⊆ Ω free , consider the ratio of the marginal value of the greedy exploration algorithm, to that of the greedy surveillance algorithm:
ρ(A) := [ sup_{x∈V(A)} Δ_f(x|A) ] / [ sup_{x∈Ω_free} Δ_f(x|A) ].   (20)
That is, ρ(A) characterizes the relative gap caused by the collision-avoidance constraint x ∈ V(A). Let A_x = {A ⊆ Ω_free | x ∈ A}
be the set of vantage points which contain x. Define the exploration ratio as
ρ x := inf A∈Ax ρ(A).(21)
The exploration ratio is the worst-case gap between the two greedy algorithms, conditioned on x. It helps to provide a bound for the difference between the optimal solution set of size k and the one prescribed by n steps of the greedy exploration algorithm.
Theorem 2.4. Let O*_k = {x*_i}_{i=1}^k be the optimal sequence of k sensors which includes x*_1 = x_1. Let O_n = {x_i}_{i=1}^n be the sequence of n sensors placed using the greedy exploration algorithm (8). Then, for k, n > 1:
f(O_n) ≥ [1 − e^{−(n−1)ρ_{x_1}/(k−1)} (1 − f(x_1)/f(O*_k))] f(O*_k).
This is reminiscent of Theorem 2.3, with two subtle differences: the rate in the exponent is scaled by the exploration ratio ρ_{x_1}, and the bound depends on the volume f(x_1) visible from the initial vantage point.
Proof. We have, for l < n:
f(O*_k) ≤ f(O*_k ∪ O_l)
= f(O_l) + Δ_f(O*_k | O_l)
= f(O_l) + Σ_{i=1}^k Δ_f(x*_i | O_l ∪ {x*_1, . . . , x*_{i−1}})   (22)
≤ f(O_l) + Σ_{i=1}^k Δ_f(x*_i | O_l)   (23)
= f(O_l) + Δ_f(x*_1 | O_l) + Σ_{i=2}^k Δ_f(x*_i | O_l)
= f(O_l) + Σ_{i=2}^k Δ_f(x*_i | O_l)   (24)
≤ f(O_l) + Σ_{i=2}^k max_{x∈Ω_free} Δ_f(x | O_l)
≤ f(O_l) + (1/ρ_{x_1}) Σ_{i=2}^k max_{x∈V(O_l)} Δ_f(x | O_l)   (25)
≤ f(O_l) + (1/ρ_{x_1}) Σ_{i=2}^k [f(O_{l+1}) − f(O_l)]   (26)
= f(O_l) + ((k−1)/ρ_{x_1}) [f(O_{l+1}) − f(O_l)].
Line (22) is a telescoping sum, (23) follows from submodularity of f , (24) uses the fact that x * 1 ∈ O l , (25) follows from the definition of ρ x 1 and (26) stems from the definition of the greedy exploration algorithm (8).
As before, define δ_l := f(O*_k) − f(O_l). However, this time, note that δ_1 := f(O*_k) − f(O_1) = f(O*_k) − f(x_1). Then
f(O*_k) − f(O_l) ≤ ((k−1)/ρ_{x_1}) [f(O_{l+1}) − f(O_l)]
δ_l ≤ ((k−1)/ρ_{x_1}) (δ_l − δ_{l+1})
δ_l (1 − ρ_{x_1}/(k−1)) ≥ δ_{l+1}.
Expanding the recurrence relation with δ n , we have
δ_n ≤ (1 − ρ_{x_1}/(k−1)) δ_{n−1} ≤ (1 − ρ_{x_1}/(k−1))^{n−1} δ_1 = (1 − ρ_{x_1}/(k−1))^{n−1} [f(O*_k) − f(x_1)].
Now, substituting back the definition for δ n , we arrive at
δ_n ≤ (1 − ρ_{x_1}/(k−1))^{n−1} [f(O*_k) − f(x_1)]
f(O*_k) − f(O_n) ≤ (1 − ρ_{x_1}/(k−1))^{n−1} [f(O*_k) − f(x_1)]
[f(O*_k) − f(x_1)] − [f(O_n) − f(x_1)] ≤ (1 − ρ_{x_1}/(k−1))^{n−1} [f(O*_k) − f(x_1)]
[f(O*_k) − f(x_1)] [1 − (1 − ρ_{x_1}/(k−1))^{n−1}] ≤ f(O_n) − f(x_1)
[f(O*_k) − f(x_1)] [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] ≤ f(O_n) − f(x_1).
Finally, with some more algebra,
f(O_n) ≥ f(x_1) + [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] [f(O*_k) − f(x_1)]
= [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] f(O*_k) + f(x_1) e^{−(n−1)ρ_{x_1}/(k−1)}
= [1 − e^{−(n−1)ρ_{x_1}/(k−1)} (1 − f(x_1)/f(O*_k))] f(O*_k).
Exploration ratio example
We demonstrate an example where ρ x can be an arbitrarily small factor that is determined by the geometry of Ω free . Figure 3 depicts an illustration of the setup for the narrow alley environment.
Consider a domain Ω = [0, 1] × [0, 1] with a thin vertical wall of width ε ≪ 1, whose centerline stretches from (3ε/2, 0) to (3ε/2, 1). A narrow opening of size ε² × ε is centered at (3ε/2, 1/2). Suppose
x_1 = x*_1 = A, so that f({x_1}) = ε + O(ε²),
where the ε² factor is due to the small sliver of the narrow alley visible from A. The next vantage point must satisfy x_2 ∈ V(x_1) = [0, ε] × [0, 1]. One possible location is x_2 = B.
Then, after 2 steps of the greedy algorithm, we have
f (O 2 ) = ε + O(ε 2 ).
Meanwhile, the total visible area is
f (O * 2 ) = 1 − O(ε)
and the ratio of greedy to optimal area coverage is
f(O_2) / f(O*_2) = (ε + O(ε²)) / (1 − O(ε)) = O(ε).   (27)
The exploration ratio is ρ_{x_1} = O(ε²), since
max_{x∈V({x_1})} Δ_f(x | {x_1}) = O(ε²) and max_{x∈Ω_free} Δ_f(x | {x_1}) = 1 − O(ε).   (28)
According to the bound, with k = n = 2, we should have
f(O_2) / f(O*_2) ≥ 1 − e^{−(n−1)ρ_{x_1}/(k−1)} (1 − f(x_1)/f(O*_2)) = 1 − e^{−O(ε²)} (1 − O(ε)) = Ω(ε),   (29)
which reflects what we see in (27). Now suppose instead that the initial vantage point is placed in the large chamber to the right of the wall, so that f(x_1) = 1 − O(ε) and ρ_{x_1} = 1, since both the greedy exploration and surveillance steps coincide.
According to the bound, with k = n = 2, we should have
f(O_2) / f(O*_2) ≥ 1 − e^{−(n−1)ρ_{x_1}/(k−1)} (1 − f(x_1)/f(O*_2)) ≥ 1 − O(ε),   (30)
which is the case, since f(O_2) = f(O*_2).
By considering the first vantage point x_1 as part of the bound, we account for some of the unavoidable uncertainties associated with unknown environments during exploration.
Numerical comparison
We compare both greedy algorithms on random arrangements of up to 6 circular obstacles. Each algorithm starts from the same initial position and runs until all free area is covered. We record the number of vantage points required over 200 runs for each number of obstacles.
Surprisingly, the exploration algorithm sometimes requires fewer vantage points than the surveillance algorithm. Perhaps the latter is too aggressive, or perhaps the collision-avoidance constraint acts as a regularizer. For example, when there is a single circle, the greedy surveillance algorithm places the second vantage point x_2 on the opposite side of this obstacle. This may lead to two slivers of occlusion forming on either side of the circle, which will require 2 additional vantage points to cover. With the greedy exploration algorithm, we do not have this problem, due to the collision-avoidance constraint. Figure 4 shows selected examples with 1 and 5 obstacles. Figure 5 shows the histogram of the number of steps needed for each algorithm. On average, both algorithms require a similar number of steps, but the exploration algorithm has a slight advantage.
Approximating the gain function
In this section, we discuss the method for approximating the gain function when the map is not known. Given the set of previously-visited vantage points, we compute the cumulative visibility and shadow boundaries. We approximate the gain function by applying the trained neural network on this pair of inputs, and pick the next point according to (7). This procedure repeats until there are no shadow boundaries or occlusions.
The data needed for the training and evaluation of g θ are computed using level sets [26,28,25]. Occupancy grids may be applicable, but we choose level sets since they have proven to be accurate and robust. In particular, level sets are necessary for subpixel resolution of shadow boundaries and they allow for efficient visibility computation, which is crucial when generating the library of training examples.
The training geometry is embedded by a signed distance function, denoted by φ. For each vantage point x i , the visibility set is represented by the level set function ψ(·, x i ), which is computed efficiently using the algorithm described in [34].
In the calculus of level set functions, unions and intersections of sets are translated, respectively, into taking the maximum and minimum of the corresponding characteristic functions. The cumulatively visible sets Ω_k are represented by the level set function Ψ_k(x), which is defined recursively by
Ψ_0(x) = ψ(x, x_0),   (31)
Ψ_k(x) = max{Ψ_{k−1}(x), ψ(x, x_k)}, k = 1, 2, . . . ,   (32)
where the max is taken point-wise. Thus we have
Ω_free = {x | φ(x) > 0},   (33)
V_{x_i} = {x | ψ(x, x_i) > 0},   (34)
Ω_k = {x | Ψ_k(x) > 0}.   (35)
The shadow boundaries B k are approximated by the "smeared out" function:
b_k(x) := δ_ε(Ψ_k) · [1 − H(G_k(x))],   (36)
where H(x) is the Heaviside function and
δ_ε(x) = (2/ε) cos²(πx/ε) · 1_{[−ε/2, ε/2]}(x),   (37)
γ(x, x_0) = (x_0 − x)ᵀ · ∇φ(x),   (38)
G_0(x) = γ(x, x_0),   (39)
G_k(x) = max{G_{k−1}(x), γ(x, x_k)}, k = 1, 2, . . .   (40)
Recall, the shadow boundaries are the portion of ∂Ω_k that lies in free space; the role of 1 − H(G_k) is to mask out the portion of the obstacles that is currently visible from {x_i}_{i=1}^k. See Figure ?? for an example of γ. In our implementation, we take ε = 3Δx, where Δx is the grid node spacing. We refer the readers to [35] for a short review of relevant details.
When the environment Ω_obs is known, we can compute the gain function exactly:
g(x; Ω_k) = ∫ H( H(ψ(ξ, x)) − H(Ψ_k(ξ)) ) dξ.   (41)
We remark that the integrand is 1 exactly where the new vantage point uncovers something not previously seen. Computing g for all x is costly; each visibility and volume computation requires O(m^d) operations, and repeating this for all points in the domain results in O(m^{2d}) total flops. We approximate it with a function g_θ parameterized by θ:
g θ (x; Ψ k , φ, b k ) ≈ g(x; Ω k ).(42)
If the environment is unknown, we directly approximate the gain function by learning the parameters θ of a function
g θ (x; Ψ k , b k ) ≈ g(x; Ω k )H(Ψ k )(43)
using only the observations as input. Note the H(Ψ_k) factor is needed for collision avoidance during exploration because it is not known a priori whether an occluded location y is part of an obstacle or free space. Thus g_θ(y) must be zero.
Training procedure
We sample the environments uniformly from a library. For each Ω_obs, a sequence of data pairs is generated and included into the training set T:
({Ψ_k, b_k}, g(x; Ω_k) H(Ψ_k)), k = 0, 1, 2, . . .   (44)
A naive approach would sample arbitrary sequences of vantage points consisting of k steps. Instead, to generate causally relevant data, we use an ε-greedy approach: we uniformly sample initial positions. With probability ε, the next vantage point is chosen randomly from the admissible set. With probability 1 − ε, the next vantage point is chosen according to (7). Figure 6 shows an illustration of the generation of causal data along the subspace of relevant shapes.
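A rollout implementing this ε-greedy sampling might look as follows; this is a sketch, and the `env` interface, its method names, and the value ε = 0.2 are all hypothetical.

```python
import random

def generate_sequence(env, x0, n_steps, eps=0.2):
    """ε-greedy rollout used to build the training set T. `env.admissible`,
    `env.argmax_gain`, and `env.union_visible` stand in for the admissible
    set, the greedy step (7), and the cumulative visibility update."""
    seq, Omega_k = [x0], env.visible_from(x0)
    for _ in range(n_steps):
        if random.random() < eps:
            x_next = random.choice(env.admissible(Omega_k))   # explore
        else:
            x_next = env.argmax_gain(Omega_k)                 # greedy step (7)
        seq.append(x_next)
        Omega_k = env.union_visible(Omega_k, x_next)
    return seq
```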
Figure 6: Causal data generation along the subspace of relevant shapes. Each dot is a data sample corresponding to a sequence of vantage points.
The function g_θ is learned by minimizing the empirical loss across all data pairs for each Ω_obs in the training set T:
argmin_θ (1/N) Σ_{Ω_obs ∈ T} Σ_k L( g_θ(x; Ψ_k, b_k), g(x; Ω_k) H(Ψ_k) ),   (45)
where N is the total number of data pairs. We use the cross entropy loss function:
L(p, q) = −∫ [ p(x) log q(x) + (1 − p(x)) log(1 − q(x)) ] dx.   (46)
Network architecture
We use convolutional neural networks (CNNs) to approximate the gain function, which depends on the shape of Ω obs and the location x. CNNs have been used to approximate functions of shapes effectively in many applications. Their feedforward evaluations are efficient if the off-line training cost is ignored. The gain function g(x) does not depend directly on x, but rather,
x's visibility of Ω_free, with a domain of dependence bounded by the sensor range. We employ a fully convolutional approach for learning g, which makes the network applicable to domains of different sizes. The generalization to 3D is also straightforward.
We base the architecture of the CNN on U-Net [27], which has had great success in dense inference problems, such as image segmentation. It aggregates information from various layers in order to have wide receptive fields while maintaining pixel precision. The main design choice is to make sure that the receptive field of our model is sufficient. That is, we want to make sure that the value predicted at each voxel depends on a sufficiently large neighborhood. For efficiency, we use convolution kernels of size 3 in each dimension. By stacking multiple layers, we can achieve large receptive fields.
Thus the complexity for feedforward computations is linear in the total number of grid points.
Define a conv block as the following layers: convolution, batch norm, leaky relu, stride 2 convolution, batch norm, and leaky relu. Each conv block reduces the image size by a factor of 2. The latter half of the network increases the image size using deconv blocks: bilinear 2x upsampling, convolution, batch norm, and leaky relu.
Our 2D network uses 6 conv blocks followed by 6 deconv blocks, while our 3D network uses 5 of each block. We choose the number of blocks to ensure that the receptive field is at least the size of the training images: 128 × 128 and 64 × 64 × 64. The first conv block outputs 4 channels. The number of channels doubles with each conv block, and halves with each deconv block.
The network ends with a single channel, kernel of size 1 convolution layer followed by the sigmoid activation. This ensures that the network aggregates all information into a prediction of the correct size and range.
Numerical results
We present some experiments to demonstrate the efficacy of our approach.
Also, we demonstrate its limitations. First, we train on 128 × 128 aerial city blocks cropped from INRIA Aerial Image Labeling Dataset [21]. It contains binary images with building labels from several urban areas, including
Austin, Chicago, Vienna, and Tyrol. We train on all the areas except Austin, which we hold out for evaluation. We call this model City-CNN. Second, we train a similar model, NoSB-CNN, on the same training data, but omit the shadow boundary from the input. Third, we train another model, Radial-CNN, on synthetically-generated radial maps, such as the one in Figure 13.
Given a map, we randomly select an initial location. In order to generate the sequence of vantage points, we apply (7), using g_θ in place of g. Ties are broken by choosing the closest point to x_k. We repeat this process until there are no shadow boundaries, the gain function is smaller than ε, or the residual is less than δ, where the residual is defined as:
r = |Ω_free \ Ω_k| / |Ω_free|.   (47)
We compare these against the algorithm which uses the exact gain function, which we call Exact. We also compare against Random, a random walker, which chooses subsequent vantage points uniformly from the visible region, and Random-SB, which samples points uniformly in a small neighborhood of the shadow boundaries. We analyze the number of steps required to cover the scene and the residual as a function of the number of steps. The algorithm is robust to the initial positions. Figure 9 shows the distribution of the number of steps and residual across over 800 runs from varying initial positions over a 512 × 512 Austin map. In practice, using the shadow boundaries as a stopping criterion can be unreliable. Due to numerical precision and discretization effects, the shadow boundaries may never completely disappear. Instead, the algorithm terminates when the maximum predicted gain falls below a certain threshold ε. In this example, we used ε = 0.1.
Empirically, this strategy is robust. On average, the algorithm required 33
vantage points to reduce the occluded region to within 0.1% of the explorable area. Figure 10 shows an example sequence consisting of 36 vantage points.
Each subsequent step is generated in under 1 sec using the CPU and instantaneously with a GPU.
Even when the maximizer of the predicted gain function is different from that of the exact gain function, the difference in gain is negligible. This is evident when we see the residuals for City-CNN decrease at similar rates to Exact. Figure 11 demonstrates an example of the residual as a function of the number of steps for one such sequence generated by these algorithms on a 1024 × 1024 map of Austin. We see that City-CNN performs comparably to the Exact approach in terms of residual. However, City-CNN takes 140 secs to generate 50 steps on the CPU, while Exact, an O(m^4) algorithm, takes more than 16 hours to produce 50 steps.
Effect of shadow boundaries
The inclusion of the shadow boundaries as input to the CNN is critical for the algorithm to work. Without the shadow boundaries, the algorithm may choose a vantage point that results in no change to the cumulative visibility. At the next iteration, the input is the same as in the previous iteration, and the result will be the same; the algorithm becomes stuck in a cycle. To avoid this, we prevent vantage points from repeating by zeroing out the gain function at that point and recomputing the argmax. Still, the vantage points tend to cluster near flat edges, as in Figure 12. This clustering behavior causes the NoSB-CNN model to be, at times, worse than Random. See Figure 11 for how the clustering inhibits the reduction of the residual.
Effect of shape
The shape of the obstacles, i.e. Ω_obs, used in training affects the gain function predictions. Figure 13 compares the gain functions produced by City-CNN and Radial-CNN.
Frequency map
Here we present one of our studies concerning the exclusivity of vantage point placements in Ω. We generated sequences of vantage points starting from over 800 different initial conditions using the City-CNN model on a 512 × 512 Austin map. Then, we model each vantage point as a Gaussian with fixed width, and overlay the resulting distribution on the Austin map in Figure 14.
This gives us a frequency map of the most recurring vantage points. These hot spots reveal regions that are more secluded and therefore, the visibility of those regions is more sensitive to vantage point selection. The efficiency of the CNN method allows us to address many surveillance related questions for a large collection of relevant geometries.
Art gallery
Our proposed approach outperforms the computational geometry solution [23] to the art gallery problem, even though we do not assume the environment is known. The key issue with computational geometry approaches is that they are heavily dependent on the triangulation. In an extreme example, consider an art gallery that is a simple convex n-gon. Even though it is sufficient to place a single vantage point anywhere in the interior of the room, the triangulation-based approach produces a solution with ⌊n/3⌋ vertex guards. Figure 15 shows an example gallery consisting of 58 vertices. The computational geometry approach requires ⌊n/3⌋ = 19 vantage points to completely cover the scene, even if point guards are used [5,12]. The gallery contains r = 19 reflex angles, so the method of [8] requires r + 1 = 20 vantage points.
On average, City-CNN requires only 8 vantage points.
3D environment
We present a 3D simulation of a 250m × 250m environment based on Castle Square Parks in Boston. See Figure 16 for snapshots of the algorithm in action.
The map is discretized as a level set function on a 768 × 768 × 64 voxel grid. At this resolution, small pillars are accurately reconstructed by our exploration algorithm. Each step can be generated in 3 seconds using the GPU or 300 seconds using the CPU. Parallelization of the distance function computation will further reduce the computation time significantly. A map of this size was previously infeasible. Lastly, Figure 17 shows snapshots from the exploration of a more challenging, cluttered 3D scene with many nooks.
Figure 15: Comparison of the computational geometry approach and the City-CNN approach to the art gallery problem. The red circles are the vantage points computed by the methods. Left: A result computed by the computational geometry approach, given the environment. Right: An example sequence of 7 vantage points generated by the City-CNN model.
Conclusion
From the perspective of inverse problems, we proposed a greedy algorithm for autonomous surveillance and exploration. We show that this formulation can be well-approximated using convolutional neural networks, which learn geometric priors for a large class of obstacles. The inclusion of shadow boundaries, computed using the level set method, is crucial for the success of the algorithm. One of the advantages of using the gain function (6), an integral quantity, is its stability with respect to noise in positioning and sensor measurements. In practice, we envision that it can be used in conjunction with SLAM algorithms [7,2] for a wide range of real-world applications.
One may also consider n-step greedy algorithms, where n vantage points are chosen simultaneously. However, being more greedy is not necessarily better. If the performance metric is the cardinality of the solution set, then it is not clear that multi-step greedy algorithms lead to smaller solutions.
We saw in section 2 that, even for the single circular obstacle, the greedy surveillance algorithm may sometimes require more steps than the exploration algorithm to attain complete coverage.
If the performance metric is based on the rate in which the objective function increases, then a multi-step greedy approach would be appropriate. However, on a grid with m nodes in d dimensions, there are O(m nd ) possible combinations. For each combination, computing the visibility and gain function requires O(nm d ) cost. In total, the complexity is O(nm d(n+1) ), which is very expensive, even when used for offline training of a neural network. In such cases, it is necessary to selectively sample only the relevant combinations. One such way to do that, is through a tree search algorithm.
| 5,988 |
1809.06025
|
2892244049
|
We study the problem of visibility-based exploration, reconstruction and surveillance in the context of supervised learning. Using a level set representation of data and information, we train a convolutional neural network to determine vantage points that maximize visibility. We show that this method drastically reduces the on-line computational cost and determines a small set of vantage points that solve the problem. This enables us to efficiently produce highly-resolved and topologically accurate maps of complex 3D environments. We present realistic simulations on 2D and 3D urban environments.
|
For the exploration problem, a class of approaches picks new vantage points along shadow boundaries (aka frontiers), the boundary between free and occluded regions @cite_22 . The authors of @cite_10 propose a frontier-based approach for 2D polygonal environments which requires @math views, where @math is the number of reflex angles. For general 2D environments, @cite_3 @cite_23 @cite_32 use high order ENO interpolation to estimate curvature, which is then used to determine how far past the horizon to step. However, it is not necessarily optimal to pick only points along the shadow boundary, e.g. when the map is a star-shaped polygon @cite_10 .
|
{
"abstract": [
"We introduce a new approach for exploration based on the concept of frontiers, regions on the boundary between open space and unexplored space. By moving to new frontiers, a mobile robot can extend its map into new territory until the entire environment has been explored. We describe a method for detecting frontiers in evidence grids and navigating to these frontiers. We also introduce a technique for minimizing specular reflections in evidence grids using laser-limited sonar. We have tested this approach with a real mobile robot, exploring real-world office environments cluttered with a variety of obstacles. An advantage of our approach is its ability to explore both large open spaces and narrow cluttered spaces, with walls and obstacles in arbitrary orientation.",
"A combustion apparatus comprises a combustor liner forming a combustion chamber, and a swirler for introducing a fuel gas into the combustion chamber in the form of a swirl. The combustor liner has an air film forming device provided on the wall thereof and capable of forming a film of cooling air on the inner peripheral wall of the combustor liner so as to protect the combustor liner from the hot combustion gas in the combustion chamber. The air film forming means is formed such that the flowing direction of the air forming the film becomes the same direction as the swirling direction of the combustion gas, so that the film of the cooling air is not broken by the hot combustion gas.",
"We present an algorithm for interpolating the visible portions of a point cloud that are sampled from opaque objects in the environment. Our algorithm projects point clouds onto a sphere centered at the observing locations and performs essentially non-oscillatory (ENO) interpolation to the projected data. Curvatures of the occluding objects can be approximated and used in many ways. We show how this algorithm can be incorporated into novel algorithms for mapping an unknown environment.",
"Autonomous robotic systems (observers) equipped with range sensors must be able to discover their surroundings, in an initially unknown environment, for navigational purposes. We present an implementation of a recent environment- mapping algorithm [1] based on Essentially Non-oscillatory (ENO) interpolation [2]. An economical cooperative control tank-based platform [3] is used to validate our algorithm. Each vehicle on the test-bed is equipped with a flexible caterpillar drive, range sensor, limited onboard computing, and wireless communication.",
"The context of this work is the exploration of unknown polygonal environments with obstacles. Both the outer boundary and the boundaries of obstacles are piecewise linear. The boundaries can be nonconvex. The exploration problem can be motivated by the following application. Imagine that a robot has to explore the interior of a collapsed building, which has crumbled due to an earthquake, to search for human survivors. It is clearly impossible to have a knowledge of the building's interior geometry prior to the exploration. Thus, the robot must be able to see, with its onboard vision sensors, all points in the building's interior while following its exploration path. In this way, no potential survivors will be missed by the exploring robot. The exploratory path must clearly reflect the topology of the free space, and, therefore, such exploratory paths can be used to guide future robot excursions (such as would arise in our example from a rescue operation)."
],
"cite_N": [
"@cite_22",
"@cite_32",
"@cite_3",
"@cite_23",
"@cite_10"
],
"mid": [
"2107667896",
"1975138405",
"1525891753",
"2098012807",
"2142617093"
]
}
|
Greedy Algorithms for Sparse Sensor Placement via Deep Learning
|
We consider the problem of generating a minimal sequence of observing locations to achieve complete line-of-sight visibility coverage of an environment.
In particular, we are interested in the case when the environment is initially unknown. This is particularly useful for autonomous agents mapping out unknown, or otherwise unreachable, environments such as undersea caverns.
Military personnel may avoid dangerous situations by sending autonomous agents to scout new territory. We first assume the environment is known in order to gain insights.
Consider a domain Ω ⊆ R^d. Partition the domain Ω = Ω_free ∪ Ω_obs into an open set Ω_free representing the free space, and a closed set Ω_obs of finite obstacles without holes. We will refer to Ω_obs as the environment, since it is characterized by the obstacles. Let x_i ∈ Ω_free be a vantage point, from which a range sensor, such as LiDAR, takes omnidirectional measurements P_{x_i} : S^{d−1} → R. That is, P_{x_i} outputs the distance to the closest obstacle for each direction on the unit sphere. One can map the range measurements to the visibility set V_{x_i}; points in V_{x_i} are visible from x_i:
x ∈ V_{x_i} if ‖x − x_i‖₂ < P_{x_i}((x − x_i)/‖x − x_i‖₂).  (1)
As more range measurements are acquired, Ω free can be approximated by the cumulatively visible set Ω k :
Ω_k = ⋃_{i=0}^{k} V_{x_i}  (2)
By construction, Ω k admits partial ordering: Ω i−1 ⊂ Ω i . For suitable choices of x i , it is possible that Ω n → Ω free (say, in the Hausdorff distance).
We aim at determining a minimal set of vantage points O from which every x ∈ Ω_free can be seen. One may formulate a constrained optimization problem and look for sparse solutions. When the environment is known, we have the surveillance problem:
min_{O ⊆ Ω_free} |O| subject to Ω_free = ⋃_{x∈O} V_x.  (3)
When the environment is not known a priori, the agent must be careful to avoid collision with obstacles. Each new vantage point must be currently visible; that is, x_{k+1} ∈ Ω_k. Define the set of admissible sequences:
A(Ω_free) := {(x_0, …, x_{n−1}) | n ∈ ℕ, x_0 ∈ Ω_free, x_{k+1} ∈ Ω_k}.  (4)
For the unknown environment, we have the exploration problem:
min_{O ∈ A(Ω_free)} |O| subject to Ω_free = ⋃_{x∈O} V_x.  (5)
The problem is feasible as long as obstacles do not have holes.
The approach of Bai et al. [1] terminates when there is no occlusion within view of the agent, even if the global map is still incomplete. Tai and Liu [30,31,20] train agents to learn obstacle avoidance.
Our work uses a gain function to steer a greedy approach, similar to the next-best-view algorithms. However, our measure of information gain takes the geometry of the environment into account. By taking advantage of precomputation via convolutional neural networks, our model learns shape priors for a large class of obstacles and is efficient at runtime. We use a volumetric representation which can handle arbitrary geometries in 2D and 3D. Also, we assume that the sensor range is larger than the domain, which makes the problem more global and challenging.
Greedy algorithm
We propose a greedy approach which sequentially determines a new vantage point, x_{k+1}, based on the information gathered from all previous vantage points x_0, x_1, …, x_k. The strategy is greedy because x_{k+1} is chosen to maximize the information gain.
For the surveillance problem, the environment is known. We define the gain function:
g(x; Ω_k) := |V_x ∪ Ω_k| − |Ω_k|,  (6)
i.e. the volume of the region that is visible from x but not from x 0 , x 1 , · · · , x k .
Note that g depends on Ω_obs, which we omit for clarity of notation. The next vantage point should be chosen to maximize the newly-surveyed volume. We define the greedy surveillance algorithm as:
x_{k+1} = arg max_{x ∈ Ω_free} g(x; Ω_k).  (7)
The problem of exploration is even more challenging since, by definition, the environment is not known. Subsequent vantage points must lie within the current visible set Ω k . The corresponding greedy exploration algorithm is
x_{k+1} = arg max_{x ∈ Ω_k} g(x; Ω_k).  (8)
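To make the greedy step concrete, here is a minimal NumPy sketch of one step on a 2D occupancy grid. The brute-force ray-marching visibility test and all names in it are illustrative stand-ins, not the paper's implementation, which uses the level set machinery described below.

import numpy as np

def visible_from(free, x):
    """Boolean mask of grid cells with line-of-sight to x.
    free: 2D bool array, True on free space; x: (row, col) in free space."""
    m, n = free.shape
    vis = np.zeros_like(free)
    for i in range(m):
        for j in range(n):
            if not free[i, j]:
                continue
            t = np.linspace(0.0, 1.0, 2 * max(m, n))  # sample the segment x -> (i, j)
            rows = np.round(x[0] + t * (i - x[0])).astype(int)
            cols = np.round(x[1] + t * (j - x[1])).astype(int)
            vis[i, j] = free[rows, cols].all()        # occluded iff the segment crosses an obstacle
    return vis

def greedy_step(free, covered, explore=True):
    """One greedy step: maximize g(x) = |V_x ∪ Ω_k| − |Ω_k|.
    explore=True restricts candidates to the visible set Ω_k, as in (8);
    explore=False allows any free cell, as in the surveillance step (7)."""
    candidates = np.argwhere(covered if explore else free)
    gains = [np.count_nonzero(visible_from(free, tuple(c)) & ~covered) for c in candidates]
    best = tuple(candidates[int(np.argmax(gains))])
    return best, covered | visible_from(free, best)

Each step of this sketch costs on the order of m^4 operations on an m × m grid, which is exactly the expense that the learned surrogate g_θ introduced below is designed to avoid.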
However, we remark that in practice, one is typically interested only in a subset S of the collection of all possible environments {Ω_obs | Ω_obs ⊆ R^d}.
For example, cities generally follow a grid-like pattern. Knowing these priors can help guide our estimate of g for certain types of Ω obs , even when Ω obs is unknown initially.
We propose to encode these priors formally into the parameters, θ, of a learned function:
g_θ(x; Ω_k, B_k) for Ω_obs ∈ S,  (9)
where B k is the part of ∂Ω k that may actually lie in the free space Ω free :
B_k = ∂Ω_k \ Ω_obs.  (10)
See Figure 2 for an example gain function. We shall demonstrate that while training for g_θ, incorporating the shadow boundaries helps, in some sense, localize the learning of g, and is essential in creating a usable g_θ.
A bound for the known environment
We present a bound on the optimality of the greedy algorithm, based on submodularity [14], a useful property of set functions. We start with standard definitions. Let V be a finite set and f : 2 V → R be a set function which assigns a value to each subset S ⊆ V .
Definition 2.1 (Monotonicity). A set function f is monotone if for every A ⊆ B ⊆ V, f(A) ≤ f(B).
Definition 2.2 (Discrete derivative). The discrete derivative of f at S with respect to v ∈ V is Δ_f(v|S) := f(S ∪ {v}) − f(S).
Definition 2.3 (Submodularity). A set function f is submodular if for every A ⊆ B ⊆ V and v ∈ V \ B, Δ_f(v|A) ≥ Δ_f(v|B).
In other words, set functions are submodular if they have diminishing returns. More details and extensions of submodularity can be found in [14].
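As a quick numeric sanity check of these definitions, the coverage function f(S) = |⋃_{x∈S} V_x| used below is monotone and submodular; the toy visibility sets here are made up purely for illustration.

# Illustrative visibility sets; any family of finite sets behaves the same way.
V = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}

def f(S):
    """Coverage function: cardinality of the union of visibility sets."""
    return len(set().union(*(V[x] for x in S))) if S else 0

def marginal(v, S):
    """Discrete derivative of f at S with respect to v."""
    return f(S | {v}) - f(S)

A, B, v = {"a"}, {"a", "b"}, "c"         # A ⊆ B, v ∉ B
assert f(A) <= f(B)                      # monotonicity
assert marginal(v, A) >= marginal(v, B)  # submodularity (diminishing returns)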
Lemma 2.1. The function f is monotone.
Proof. Consider A ⊆ B ⊆ Ω_free. Since f is the cardinality of a union of visibility sets, f(S) = |⋃_{x∈S} V_x|, we have
f(B) = |⋃_{x∈B} V_x| = |⋃_{x∈A∪(B\A)} V_x| ≥ |⋃_{x∈A} V_x| = f(A).
Lemma 2.2. The function f is submodular.
Proof. Let A ⊆ B ⊆ Ω_free and v ∈ Ω_free \ B, so that (A ∪ {v}) ∩ B = A. Since |X| + |Y| = |X ∪ Y| + |X ∩ Y| and ⋃_{x∈(A∪{v})∩B} V_x ⊆ (⋃_{x∈A∪{v}} V_x) ∩ (⋃_{x∈B} V_x), we have
|⋃_{x∈A∪{v}} V_x| + |⋃_{x∈B} V_x| ≥ |⋃_{x∈A∪{v}∪B} V_x| + |⋃_{x∈(A∪{v})∩B} V_x| = f(B ∪ {v}) + f(A).
Rearranging, we have
f(A ∪ {v}) − f(A) ≥ f(B ∪ {v}) − f(B), i.e. Δ_f(v|A) ≥ Δ_f(v|B).
Submodularity and monotonicity enable a bound which compares the relative performance of the greedy algorithm to the optimal solution.
Theorem 2.3. Let O*_k be the optimal set of k sensors, and let O_n = {x_i}_{i=1}^{n} be the set of n sensors placed using the greedy surveillance algorithm (7). Then
f(O_n) ≥ (1 − e^{−n/k}) f(O*_k).
Proof. For l < n we have
f(O*_k) ≤ f(O*_k ∪ O_l)  (12)
 = f(O_l) + Δ_f(O*_k | O_l)  (13)
 = f(O_l) + Σ_{i=1}^{k} Δ_f(x*_i | O_l ∪ {x*_1, …, x*_{i−1}})  (14)
 ≤ f(O_l) + Σ_{i=1}^{k} Δ_f(x*_i | O_l)  (15)
 ≤ f(O_l) + Σ_{i=1}^{k} [f(O_{l+1}) − f(O_l)]  (16)
 = f(O_l) + k [f(O_{l+1}) − f(O_l)].  (17)
Line (12) follows from monotonicity, (15) follows from submodularity of f, and (16) from the definition of the greedy algorithm. Define
δ_l := f(O*_k) − f(O_l), with δ_0 := f(O*_k). Then
f(O*_k) − f(O_l) ≤ k [f(O_{l+1}) − f(O_l)]
δ_l ≤ k (δ_l − δ_{l+1})
δ_l (1 − k) ≤ −k δ_{l+1}
δ_{l+1} ≤ δ_l (1 − 1/k).
Expanding the recurrence relation with δ n , we have
δ_n ≤ (1 − 1/k) δ_{n−1} ≤ (1 − 1/k)^n δ_0 = (1 − 1/k)^n f(O*_k).
Finally, substituting back the definition for δ n , we have the desired result:
δ_n ≤ (1 − 1/k)^n f(O*_k)
f(O*_k) − f(O_n) ≤ (1 − 1/k)^n f(O*_k)
f(O*_k) [1 − (1 − 1/k)^n] ≤ f(O_n)
f(O*_k) (1 − e^{−n/k}) ≤ f(O_n),  (18)
where (18) follows from the inequality 1 − x ≤ e^{−x}.
In particular, if n = k, then 1 − e^{−1} ≈ 0.63. This means that k steps of the greedy algorithm are guaranteed to cover at least 63% of the total volume, if the optimal solution can also be obtained with k steps. When n = 3k, the greedy algorithm covers at least 95% of the total volume. In [22], it was shown that no polynomial time algorithm can achieve a better bound.
A bound for the unknown environment
When the environment is not known, subsequent vantage points must lie within the current visible set to avoid collision with obstacles:
x_{k+1} ∈ V(O_k).  (19)
Thus, the performance of the exploration algorithm has a strong dependence on the environment Ω obs and the initial vantage point x 1 . We characterize this dependence using the notion of the exploration ratio.
Given an environment Ω obs and A ⊆ Ω free , consider the ratio of the marginal value of the greedy exploration algorithm, to that of the greedy surveillance algorithm:
ρ(A) := [sup_{x∈V(A)} Δ_f(x|A)] / [sup_{x∈Ω_free} Δ_f(x|A)].  (20)
That is, ρ(A) characterizes the relative gap (for lack of a better word) caused
by the collision-avoidance constraint x ∈ V(A). Let A_x = {A ⊆ Ω_free | x ∈ A} be the collection of vantage-point sets that contain x. Define the exploration ratio as
ρ_x := inf_{A ∈ A_x} ρ(A).  (21)
The exploration ratio is the worst-case gap between the two greedy algorithms, conditioned on x. It provides a bound on the difference between the optimal solution set of size k and the one produced by n steps of the greedy exploration algorithm.
Theorem 2.4. Let O*_k = {x*_i}_{i=1}^{k} be the optimal sequence of k sensors which includes x*_1 = x_1, and let O_n = {x_i}_{i=1}^{n} be the sequence of n sensors placed using the greedy exploration algorithm (8). Then, for k, n > 1:
f(O_n) ≥ [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] (1 − f(x_1)/f(O*_k)) f(O*_k).
This is reminiscent of Theorem 2.3, with two subtle differences: the exponent is scaled by the exploration ratio ρ_{x_1}, and the bound is discounted by the coverage f(x_1) of the prescribed initial vantage point.
Proof. We have, for l < n:
f(O*_k) ≤ f(O*_k ∪ O_l)
 = f(O_l) + Δ_f(O*_k | O_l)
 = f(O_l) + Σ_{i=1}^{k} Δ_f(x*_i | O_l ∪ {x*_1, …, x*_{i−1}})  (22)
 ≤ f(O_l) + Σ_{i=1}^{k} Δ_f(x*_i | O_l)  (23)
 = f(O_l) + Δ_f(x*_1 | O_l) + Σ_{i=2}^{k} Δ_f(x*_i | O_l)
 = f(O_l) + Σ_{i=2}^{k} Δ_f(x*_i | O_l)  (24)
 ≤ f(O_l) + Σ_{i=2}^{k} max_{x∈Ω_free} Δ_f(x | O_l)
 ≤ f(O_l) + (1/ρ_{x_1}) Σ_{i=2}^{k} max_{x∈V(O_l)} Δ_f(x | O_l)  (25)
 ≤ f(O_l) + (1/ρ_{x_1}) Σ_{i=2}^{k} [f(O_{l+1}) − f(O_l)]  (26)
 = f(O_l) + ((k−1)/ρ_{x_1}) [f(O_{l+1}) − f(O_l)].
Line (22) is a telescoping sum, (23) follows from submodularity of f , (24) uses the fact that x * 1 ∈ O l , (25) follows from the definition of ρ x 1 and (26) stems from the definition of the greedy exploration algorithm (8).
As before, define δ_l := f(O*_k) − f(O_l). However, this time, note that δ_1 := f(O*_k) − f(O_1) = f(O*_k) − f(x_1). Then
f(O*_k) − f(O_l) ≤ ((k−1)/ρ_{x_1}) [f(O_{l+1}) − f(O_l)]
δ_l ≤ ((k−1)/ρ_{x_1}) (δ_l − δ_{l+1})
δ_{l+1} ≤ δ_l (1 − ρ_{x_1}/(k−1)).
Expanding the recurrence relation with δ n , we have
δ_n ≤ (1 − ρ_{x_1}/(k−1)) δ_{n−1} ≤ (1 − ρ_{x_1}/(k−1))^{n−1} δ_1 = (1 − ρ_{x_1}/(k−1))^{n−1} [f(O*_k) − f(x_1)].
Now, substituting back the definition for δ n , we arrive at
f(O*_k) − f(O_n) ≤ (1 − ρ_{x_1}/(k−1))^{n−1} [f(O*_k) − f(x_1)]
[f(O*_k) − f(x_1)] − [f(O_n) − f(x_1)] ≤ (1 − ρ_{x_1}/(k−1))^{n−1} [f(O*_k) − f(x_1)]
[f(O*_k) − f(x_1)] [1 − (1 − ρ_{x_1}/(k−1))^{n−1}] ≤ f(O_n) − f(x_1)
[f(O*_k) − f(x_1)] [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] ≤ f(O_n) − f(x_1).
Finally, with some more algebra
f(O_n) ≥ f(x_1) + [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] [f(O*_k) − f(x_1)]
 = [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] f(O*_k) + f(x_1) e^{−(n−1)ρ_{x_1}/(k−1)}
 ≥ [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] (1 − f(x_1)/f(O*_k)) f(O*_k).
Exploration ratio example
We demonstrate an example where ρ x can be an arbitrarily small factor that is determined by the geometry of Ω free . Figure 3 depicts an illustration of the setup for the narrow alley environment.
Consider a domain Ω = [0, 1] × [0, 1] with a thin vertical wall of width ε ≪ 1, whose centerline stretches from (3ε/2, 0) to (3ε/2, 1). A narrow opening of size ε² × ε is centered at (3ε/2, 1/2). Suppose
x_1 = x*_1 = A, so that f({x_1}) = ε + O(ε²), where the ε² factor is due to the small sliver of the narrow alley visible from A. The next vantage point must lie in the visible region: x_2 ∈ V(x_1) = [0, ε] × [0, 1]. One possible location is x_2 = B.
Then, after 2 steps of the greedy algorithm, we have f(O_2) = ε + O(ε²).
Meanwhile, the total visible area is f(O*_2) = 1 − O(ε), and the ratio of greedy to optimal area coverage is
f(O_2)/f(O*_2) = [ε + O(ε²)] / [1 − O(ε)] = O(ε).  (27)
The exploration ratio is ρ_{x_1} = O(ε²), since
max_{x∈V({x_1})} Δ_f(x|{x_1}) = O(ε²) and max_{x∈Ω_free} Δ_f(x|{x_1}) = 1 − O(ε).  (28)
According to the bound, with k = n = 2, we should have
f(O_2)/f(O*_2) ≥ [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] (1 − f(x_1)/f(O*_2)) = [1 − e^{−O(ε²)}] (1 − O(ε)) = Ω(ε²),  (29)
which is consistent with the actual ratio O(ε) in (27). Now suppose instead that x_1 = x*_1 = B, so that f(x_1) = 1 − O(ε) and ρ_{x_1} = 1, since the greedy exploration and surveillance steps coincide.
According to the bound, with k = n = 2 (using the penultimate inequality in the proof of Theorem 2.4), we should have
f(O_2)/f(O*_2) ≥ [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] + [f(x_1)/f(O*_2)] e^{−(n−1)ρ_{x_1}/(k−1)} ≥ 1 − O(ε),  (30)
which is the case, since f(O_2) = f(O*_2).
By considering the first vantage point x_1 as part of the bound, we account for some of the unavoidable uncertainties associated with unknown environments during exploration.
Numerical comparison
We compare both greedy algorithms on random arrangements of up to 6 circular obstacles. Each algorithm starts from the same initial position and runs until all free area is covered. We record the number of vantage points required over 200 runs for each number of obstacles.
Surprisingly, the exploration algorithm sometimes requires fewer vantage points than the surveillance algorithm. Perhaps the latter is too aggressive, or perhaps the collision-avoidance constraint acts as a regularizer. For example, when there is a single circle, the greedy surveillance algorithm places the second vantage point x_2 on the opposite side of this obstacle. This may lead to two slivers of occlusion forming on either side of the circle, which will require 2 additional vantage points to cover. With the greedy exploration algorithm, we do not have this problem, due to the collision-avoidance constraint. Figure 4 shows a selected example with 1 and 5 obstacles. Figure 5 shows histograms of the number of steps needed for each algorithm. On average, both algorithms require a similar number of steps, but the exploration algorithm has a slight advantage.
In this section, we discuss the method for approximating the gain function when the map is not known. Given the set of previously-visited vantage points, we compute the cumulative visibility and shadow boundaries. We approximate the gain function by applying the trained neural network to this pair of inputs, and pick the next point according to (7). This procedure repeats until there are no shadow boundaries or occlusions.
The data needed for the training and evaluation of g θ are computed using level sets [26,28,25]. Occupancy grids may be applicable, but we choose level sets since they have proven to be accurate and robust. In particular, level sets are necessary for subpixel resolution of shadow boundaries and they allow for efficient visibility computation, which is crucial when generating the library of training examples.
The training geometry is embedded by a signed distance function, denoted by φ. For each vantage point x i , the visibility set is represented by the level set function ψ(·, x i ), which is computed efficiently using the algorithm described in [34].
In the calculus of level set functions, unions and intersections of sets are translated, respectively, into taking the maximum and minimum of the corresponding characteristic functions. The cumulatively visible sets Ω_k are represented by the level set function Ψ_k(x), which is defined recursively by
Ψ_0(x) = ψ(x, x_0),  (31)
Ψ_k(x) = max{Ψ_{k−1}(x), ψ(x, x_k)}, k = 1, 2, …,  (32)
where the max is taken point-wise. Thus we have
Ω_free = {x | φ(x) > 0},  (33)
V_{x_i} = {x | ψ(x, x_i) > 0},  (34)
Ω_k = {x | Ψ_k(x) > 0}.  (35)
The shadow boundaries B k are approximated by the "smeared out" function:
b_k(x) := δ_ε(Ψ_k) · [1 − H(G_k(x))],  (36)
where H(x) is the Heaviside function and
δ_ε(x) = (2/ε) cos²(πx/ε) · 1_{[−ε/2, ε/2]}(x),  (37)
γ(x, x_0) = (x_0 − x)ᵀ · ∇φ(x),  (38)
G_0 = γ(x, x_0),  (39)
G_k(x) = max{G_{k−1}(x), γ(x, x_k)}, k = 1, 2, …  (40)
Recall that the shadow boundaries are the portion of ∂Ω_k that lies in free space; the role of 1 − H(G_k) is to mask out the portion of obstacles that are currently visible from {x_i}_{i=1}^{k}. In our implementation, we take ε = 3Δx, where Δx is the grid node spacing. We refer the readers to [35] for a short review of relevant details.
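As a rough array-level illustration of (32) and (36)-(40): the level set update reduces to a pointwise maximum, and the shadow-boundary indicator is a smeared delta with an obstacle mask. The sketch below assumes the level set arrays Ψ_k and G_k are already computed; it is not the authors' code.

import numpy as np

def update_cumulative(Psi_prev, psi_new):
    """Eq. (32): cumulative visibility is the pointwise max of level sets."""
    return np.maximum(Psi_prev, psi_new)

def smeared_delta(psi, eps):
    """Eq. (37): cos^2 bump of width eps around the zero level set."""
    bump = (2.0 / eps) * np.cos(np.pi * psi / eps) ** 2
    return np.where(np.abs(psi) <= eps / 2.0, bump, 0.0)

def shadow_boundary(Psi_k, G_k, eps):
    """Eq. (36): smear out the zero level set of Psi_k, masking away the
    currently visible obstacle walls via the 1 − H(G_k) factor."""
    return smeared_delta(Psi_k, eps) * (1.0 - (G_k > 0).astype(float))

With ε = 3Δx as in the text, the bump spans about three grid cells across the boundary, giving the CNN a subpixel-accurate but non-degenerate input channel.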
When the environment Ω_obs is known, we can compute the gain function exactly:
g(x; Ω_k) = ∫ H( H(ψ(ξ, x)) − H(Ψ_k(ξ)) ) dξ.  (41)
We remark that the integrand will be 1 where the new vantage point uncovers something not previously seen. Computing g for all x is costly; each visibility and volume computation requires O(m^d) operations, and repeating this for all points in the domain results in O(m^{2d}) total flops. We approximate it with a function g_θ parameterized by θ:
g_θ(x; Ψ_k, φ, b_k) ≈ g(x; Ω_k).  (42)
If the environment is unknown, we directly approximate the gain function by learning the parameters θ of a function
g_θ(x; Ψ_k, b_k) ≈ g(x; Ω_k) H(Ψ_k)  (43)
using only the observations as input. Note the H(Ψ_k) factor is needed for collision avoidance during exploration because it is not known a priori whether an occluded location y is part of an obstacle or free space. Thus g_θ(y) must be zero.
Training procedure
We sample the environments uniformly from a library. For each Ω_obs, a sequence of data pairs is generated and included in the training set T:
({Ψ_k, b_k}, g(x; Ω_k) H(Ψ_k)), k = 0, 1, 2, ….  (44)
It would be prohibitively expensive to enumerate all admissible sequences consisting of k steps. Instead, to generate causally relevant data, we use an ε-greedy approach: we uniformly sample initial positions; with probability ε, the next vantage point is chosen randomly from the admissible set, and with probability 1 − ε, the next vantage point is chosen according to (7). Figure 6 shows an illustration of the generation of causal data along the subspace of relevant shapes.
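A sketch of the ε-greedy trajectory generation described above; env, state, and their methods are hypothetical wrappers around the level set computations, named here only for illustration.

import random

def generate_trajectory(env, eps=0.2, max_steps=50):
    """Sample one training trajectory with the eps-greedy rule:
    explore with probability eps, otherwise take the greedy step (7)."""
    state = env.reset(random_start=True)   # uniformly sampled initial vantage point
    pairs = []
    while not state.done() and len(pairs) < max_steps:
        # Record ((Psi_k, b_k), exact masked gain) as one training pair, cf. (44).
        pairs.append((state.inputs(), state.exact_gain()))
        if random.random() < eps:
            x_next = state.sample_admissible()   # random admissible move
        else:
            x_next = state.greedy_argmax()       # exact greedy move, eq. (7)
        state = env.step(x_next)
    return pairs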
Figure 6: Causal data generation along the subspace of relevant shapes. Each dot is a data sample corresponding to a sequence of vantage points.
The function g_θ is learned by minimizing the empirical loss across all data pairs for each Ω_obs in the training set T:
argmin_θ (1/N) Σ_{Ω_obs∈T} Σ_k L( g_θ(x; Ψ_k, b_k), g(x; Ω_k) H(Ψ_k) ),  (45)
where N is the total number of data pairs. We use the cross entropy loss function:
L(p, q) = −∫ [ p(x) log q(x) + (1 − p(x)) log(1 − q(x)) ] dx.  (46)
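Discretized on a grid, (45)-(46) amount to a per-pixel binary cross entropy; a PyTorch sketch, assuming the gain targets are normalized to [0, 1] to match the sigmoid output:

import torch.nn.functional as F

def gain_loss(pred, target):
    """Pixelwise binary cross entropy between the predicted gain map
    (sigmoid output in (0, 1)) and the masked exact gain g(x)H(Psi_k),
    both of shape (batch, 1, H, W)."""
    return F.binary_cross_entropy(pred, target)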
Network architecture
We use convolutional neural networks (CNNs) to approximate the gain function, which depends on the shape of Ω_obs and the location x. CNNs have been used to approximate functions of shapes effectively in many applications. Their feedforward evaluations are efficient if the off-line training cost is ignored. The gain function g(x) does not depend directly on x, but rather on x's visibility of Ω_free, with a domain of dependence bounded by the sensor range. We employ a fully convolutional approach for learning g, which makes the network applicable to domains of different sizes. The generalization to 3D is also straightforward.
We base the architecture of the CNN on U-Net [27], which has had great success in dense inference problems, such as image segmentation. It aggregates information from various layers in order to have wide receptive fields while maintaining pixel precision. The main design choice is to make sure that the receptive field of our model is sufficient. That is, we want to make sure that the value predicted at each voxel depends on a sufficiently large neighborhood. For efficiency, we use convolution kernels of size 3 in each dimension. By stacking multiple layers, we can achieve large receptive fields.
Thus the complexity for feedforward computations is linear in the total number of grid points.
Define a conv block as the following layers: convolution, batch norm, leaky ReLU, stride-2 convolution, batch norm, and leaky ReLU. Each conv block reduces the image size by a factor of 2. The latter half of the network increases the image size using deconv blocks: bilinear 2× upsampling, convolution, batch norm, and leaky ReLU.
Our 2D network uses 6 conv blocks followed by 6 deconv blocks, while our 3D network uses 5 of each block. We choose the number of blocks to ensure that the receptive field is at least the size of the training images: 128 × 128 and 64 × 64 × 64. The first conv block outputs 4 channels. The number of channels doubles with each conv block, and halves with each deconv block.
The network ends with a single-channel, 1×1-kernel convolution layer followed by a sigmoid activation. This ensures that the network aggregates all information into a prediction of the correct size and range.
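A PyTorch sketch of the 2D blocks as described; the channel and block counts follow the text, but everything else is a plausible reading rather than the released code.

import torch.nn as nn

def conv_block(c_in, c_out):
    # conv, batch norm, leaky ReLU, then a stride-2 conv that halves the image size
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.LeakyReLU(),
        nn.Conv2d(c_out, c_out, 3, stride=2, padding=1), nn.BatchNorm2d(c_out), nn.LeakyReLU(),
    )

def deconv_block(c_in, c_out):
    # bilinear 2x upsampling, conv, batch norm, leaky ReLU
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.LeakyReLU(),
    )

# 6 conv blocks (channels 4, 8, ..., 128) mirrored by 6 deconv blocks,
# ending in a 1x1 convolution plus sigmoid:
head = nn.Sequential(nn.Conv2d(4, 1, kernel_size=1), nn.Sigmoid())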
Numerical results
We present some experiments to demonstrate the efficacy of our approach.
Also, we demonstrate its limitations. First, we train on 128 × 128 aerial city blocks cropped from the INRIA Aerial Image Labeling Dataset [21]. It contains binary images with building labels from several urban areas, including Austin, Chicago, Vienna, and Tyrol. We train on all the areas except Austin, which we hold out for evaluation. We call this model City-CNN. Second, we train a similar model, NoSB-CNN, on the same training data, but omit the shadow boundary from the input. Third, we train another model, Radial-CNN, on synthetically-generated radial maps, such as the one in Figure 13.
Given a map, we randomly select an initial location. In order to generate the sequence of vantage points, we apply (7), using g_θ in place of g. Ties are broken by choosing the point closest to x_k. We repeat this process until there are no shadow boundaries, the gain function is smaller than a small threshold, or the residual is less than δ, where the residual is defined as:
r = |Ω_free \ Ω_k| / |Ω_free|.  (47)
We compare these against the algorithm which uses the exact gain function, which we call Exact. We also compare against Random, a random walker which chooses subsequent vantage points uniformly from the visible region, and Random-SB, which samples points uniformly in a small neighborhood of the shadow boundaries. We analyze the number of steps required to cover the scene and the residual as a function of the number of steps. The algorithm is robust to the initial positions. Figure 9 shows the distributions of the number of steps and the residual across over 800 runs from varying initial positions on a 512 × 512 Austin map. In practice, using the shadow boundaries as a stopping criterion can be unreliable. Due to numerical precision and discretization effects, the shadow boundaries may never completely disappear. Instead, the algorithm terminates when the maximum predicted gain falls below a certain threshold; in this example, we used a threshold of 0.1.
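The evaluation loop sketched below mirrors this stopping rule; predict_gain wraps the trained CNN and step performs one sensing move, both hypothetical names introduced for illustration.

import numpy as np

def rollout(predict_gain, step, Psi_k, b_k, x_k, gain_tol=0.1, max_steps=200):
    """Select vantage points with the learned gain map until the maximum
    predicted gain falls below gain_tol (the threshold described above)."""
    path = [x_k]
    for _ in range(max_steps):
        gain = predict_gain(Psi_k, b_k)              # g_theta on current inputs
        if gain.max() < gain_tol:
            break
        cand = np.argwhere(gain == gain.max())       # ties broken by distance to x_k
        x_k = tuple(min(cand, key=lambda c: np.hypot(*(c - np.array(x_k)))))
        Psi_k, b_k = step(x_k)                       # update level sets, cf. (32), (36)
        path.append(x_k)
    return path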
Empirically, this strategy is robust. On average, the algorithm required 33 vantage points to reduce the occluded region to within 0.1% of the explorable area. Figure 10 shows an example sequence consisting of 36 vantage points.
Each subsequent step is generated in under 1 sec using the CPU and instantaneously with a GPU.
Even when the maximizer of the predicted gain function is different from that of the exact gain function, the difference in gain is negligible. This is evident when we see the residuals for City-CNN decrease at rates similar to Exact. Figure 11 demonstrates an example of the residual as a function of the number of steps for one such sequence generated by these algorithms on a 1024 × 1024 map of Austin. We see that City-CNN performs comparably to the Exact approach in terms of residual. However, City-CNN takes 140 secs to generate 50 steps on the CPU, while Exact, an O(m⁴) algorithm, takes more than 16 hours to produce 50 steps.
Effect of shadow boundaries
The inclusion of the shadow boundaries as input to the CNN is critical for the algorithm to work. Without the shadow boundaries, the algorithm cannot distinguish unexplored regions from fully covered ones, and it may select a vantage point that results in no change to the cumulative visibility. At the next iteration, the input is the same as in the previous iteration, and the result will be the same; the algorithm becomes stuck in a cycle. To avoid this, we prevent vantage points from repeating by zeroing out the gain function at that point and recomputing the argmax. Still, the vantage points tend to cluster near flat edges, as in Figure 12. This clustering behavior causes the NoSB-CNN model to be, at times, worse than Random. Figure 11 shows how the clustering inhibits the reduction in the residual.
Effect of shape
The shape of the obstacles, i.e. Ωᶜ, used in training affects the gain function predictions. Figure 13 compares the gain functions produced by City-CNN and Radial-CNN.
Frequency map
Here we present one of our studies concerning the exclusivity of vantage point placements in Ω. We generated sequences of vantage points starting from over 800 different initial conditions using the City-CNN model on a 512 × 512 Austin map. Then, we model each vantage point as a Gaussian with fixed width and overlay the resulting distribution on the Austin map in Figure 14.
This gives us a frequency map of the most recurring vantage points. These hot spots reveal regions that are more secluded; the visibility of those regions is therefore more sensitive to vantage point selection. The efficiency of the CNN method allows us to address many surveillance-related questions for a large collection of relevant geometries.
Art gallery
Our proposed approach outperforms the computational geometry solution [23] to the art gallery problem, even though we do not assume the environment is known. The key issue with computational geometry approaches is that they are heavily dependent on the triangulation. In an extreme example, consider an art gallery that is a simple convex n-gon. Even though it is sufficient to place a single vantage point anywhere in the interior of the room, the triangulation-based approach produces a solution with ⌊n/3⌋ vertex guards. Figure 15 shows an example gallery consisting of 58 vertices. The computational geometry approach requires ⌊n/3⌋ = 19 vantage points to completely cover the scene, even if point guards are used [5,12]. The gallery contains r = 19 reflex angles, so the approach of [8] requires r + 1 = 20 vantage points. On average, City-CNN requires only 8 vantage points.
Figure 15: Comparison of the computational geometry approach and the City-CNN approach to the art gallery problem. The red circles are the vantage points computed by the methods. Left: A result computed by the computational geometry approach, given the environment. Right: An example sequence of 7 vantage points generated by the City-CNN model.
3D environment
We present a 3D simulation of a 250m × 250m environment based on Castle Square Parks in Boston; see Figure 16 for snapshots of the algorithm in action.
The map is discretized as a level set function on a 768 × 768 × 64 voxel grid. At this resolution, small pillars are accurately reconstructed by our exploration algorithm. Each step can be generated in 3 seconds using the GPU or 300 seconds using the CPU. Parallelization of the distance function computation will further reduce the computation time significantly. A map of this size was previously infeasible. Lastly, Figure 17 shows snapshots from the exploration of a more challenging, cluttered 3D scene with many nooks.
Conclusion
From the perspective of inverse problems, we proposed a greedy algorithm for autonomous surveillance and exploration. We show that this formulation can be well-approximated using convolutional neural networks, which learn geometric priors for a large class of obstacles. The inclusion of shadow boundaries, computed using the level set method, is crucial for the success of the algorithm. One of the advantages of using the gain function (6), an integral quantity, is its stability with respect to noise in positioning and sensor measurements. In practice, we envision that it can be used in conjunction with SLAM algorithms [7,2] for a wide range of real-world applications.
One may also consider n-step greedy algorithms, where n vantage points are chosen simultaneously. However, being more greedy is not necessarily better. If the performance metric is the cardinality of the solution set, then it is not clear that multi-step greedy algorithms lead to smaller solutions.
We saw in section 2 that, even for the single circular obstacle, the greedy surveillance algorithm may sometimes require more steps than the exploration algorithm to attain complete coverage.
If the performance metric is based on the rate at which the objective function increases, then a multi-step greedy approach would be appropriate. However, on a grid with m nodes in d dimensions, there are O(m^{nd}) possible combinations. For each combination, computing the visibility and gain function requires O(nm^d) cost. In total, the complexity is O(nm^{d(n+1)}), which is very expensive, even when used for offline training of a neural network. In such cases, it is necessary to selectively sample only the relevant combinations. One way to do so is through a tree search algorithm.
| 5,988 |
1809.06025
|
2892244049
|
We study the problem of visibility-based exploration, reconstruction and surveillance in the context of supervised learning. Using a level set representation of data and information, we train a convolutional neural network to determine vantage points that maximize visibility. We show that this method drastically reduces the on-line computational cost and determines a small set of vantage points that solve the problem. This enables us to efficiently produce highly-resolved and topologically accurate maps of complex 3D environments. We present realistic simulations on 2D and 3D urban environments.
|
There have been some attempts to incorporate deep learning into the exploration problem, but they are myopic and focus on navigation rather than exploration. The approach of @cite_7 terminates when there is no occlusion within view of the agent, even if the global map is still incomplete. Tai and Liu @cite_4 @cite_24 @cite_18 train agents to learn obstacle avoidance.
|
{
"abstract": [
"We present a learning-based mapless motion planner by taking the sparse 10-dimensional range findings and the target position with respect to the mobile robot coordinate frame as input and the continuous steering commands as output. Traditional motion planners for mobile ground robots with a laser range sensor mostly depend on the obstacle map of the navigation environment where both the highly precise laser sensor and the obstacle map building work of the environment are indispensable. We show that, through an asynchronous deep reinforcement learning method, a mapless motion planner can be trained end-to-end without any manually designed features and prior demonstrations. The trained planner can be directly applied in unseen virtual and real environments. The experiments show that the proposed mapless motion planner can navigate the nonholonomic mobile robot to the desired targets without colliding with any obstacles.",
"",
"Exploration in an unknown environment is an elemental application for mobile robots. In this paper, we outlined a reinforcement learning method aiming for solving the exploration problem in a corridor environment. The learning model took the depth image from an RGB-D sensor as the only input. The feature representation of the depth image was extracted through a pre-trained convolutional-neural-networks model. Based on the recent success of deep Q-network on artificial intelligence, the robot controller achieved the exploration and obstacle avoidance abilities in several different simulated environments. It is the first time that the reinforcement learning is used to build an exploration strategy for mobile robots through raw sensor information.",
"We consider an autonomous mapping and exploration problem in which a range-sensing mobile robot is guided by an information-based controller through an a priori unknown environment, choosing to collect its next measurement at the location estimated to yield the maximum information gain within its current field of view. We propose a novel and time-efficient approach to predict the most informative sensing action using a deep neural network. After training the deep neural network on a series of thousands of randomly-generated “dungeon maps”, the predicted optimal sensing action can be computed in constant time, with prospects for appealing scalability in the testing phase to higher dimensional systems. We evaluated the performance of deep neural networks on the autonomous exploration of two-dimensional workspaces, comparing several different neural networks that were selected due to their success in recent ImageNet challenges. Our computational results demonstrate that the proposed method provides high efficiency as well as accuracy in selecting informative sensing actions that support autonomous mobile robot exploration."
],
"cite_N": [
"@cite_24",
"@cite_18",
"@cite_4",
"@cite_7"
],
"mid": [
"2963428623",
"",
"2563670399",
"2771342126"
]
}
|
Greedy Algorithms for Sparse Sensor Placement via Deep Learning
|
We consider the problem of generating a minimal sequence of observing locations to achieve complete line-of-sight visibility coverage of an environment.
In particular, we are interested in the case when the environment is initially unknown. This is particularly useful for autonomous agents mapping out unknown, or otherwise unreachable, environments such as undersea caverns.
Military personnel may avoid dangerous situations by sending autonomous agents to scout new territory. We first assume the environment is known in order to gain insights.
Consider a domain Ω ⊆ R^d. Partition the domain Ω = Ω_free ∪ Ω_obs into an open set Ω_free representing the free space, and a closed set Ω_obs of finite obstacles without holes. We will refer to Ω_obs as the environment, since it is characterized by the obstacles. Let x_i ∈ Ω_free be a vantage point, from which a range sensor, such as LiDAR, takes omnidirectional measurements P_{x_i} : S^{d−1} → R. That is, P_{x_i} outputs the distance to the closest obstacle for each direction on the unit sphere. One can map the range measurements to the visibility set V_{x_i}; points in V_{x_i} are visible from x_i:
x ∈ V_{x_i} if ‖x − x_i‖₂ < P_{x_i}((x − x_i)/‖x − x_i‖₂).  (1)
As more range measurements are acquired, Ω free can be approximated by the cumulatively visible set Ω k :
Ω_k = ⋃_{i=0}^{k} V_{x_i}  (2)
By construction, Ω k admits partial ordering: Ω i−1 ⊂ Ω i . For suitable choices of x i , it is possible that Ω n → Ω free (say, in the Hausdorff distance).
We aim at determining a minimal set of vantage points O from which every x ∈ Ω_free can be seen. One may formulate a constrained optimization problem and look for sparse solutions. When the environment is known, we have the surveillance problem:
min_{O ⊆ Ω_free} |O| subject to Ω_free = ⋃_{x∈O} V_x.  (3)
When the environment is not known a priori, the agent must be careful to avoid collision with obstacles. Each new vantage point must be currently visible; that is, x_{k+1} ∈ Ω_k. Define the set of admissible sequences:
A(Ω_free) := {(x_0, …, x_{n−1}) | n ∈ ℕ, x_0 ∈ Ω_free, x_{k+1} ∈ Ω_k}.  (4)
For the unknown environment, we have the exploration problem:
min_{O ∈ A(Ω_free)} |O| subject to Ω_free = ⋃_{x∈O} V_x.  (5)
The problem is feasible as long as obstacles do not have holes.
The approach of Bai et al. [1] terminates when there is no occlusion within view of the agent, even if the global map is still incomplete. Tai and Liu [30,31,20] train agents to learn obstacle avoidance.
Our work uses a gain function to steer a greedy approach, similar to the next-best-view algorithms. However, our measure of information gain takes the geometry of the environment into account. By taking advantage of precomputation via convolutional neural networks, our model learns shape priors for a large class of obstacles and is efficient at runtime. We use a volumetric representation which can handle arbitrary geometries in 2D and 3D. Also, we assume that the sensor range is larger than the domain, which makes the problem more global and challenging.
Greedy algorithm
We propose a greedy approach which sequentially determines a new vantage point, x_{k+1}, based on the information gathered from all previous vantage points x_0, x_1, …, x_k. The strategy is greedy because x_{k+1} is chosen to maximize the information gain.
For the surveillance problem, the environment is known. We define the gain function:
g(x; Ω_k) := |V_x ∪ Ω_k| − |Ω_k|,  (6)
i.e. the volume of the region that is visible from x but not from x 0 , x 1 , · · · , x k .
Note that g depends on Ω_obs, which we omit for clarity of notation. The next vantage point should be chosen to maximize the newly-surveyed volume. We define the greedy surveillance algorithm as:
x_{k+1} = arg max_{x ∈ Ω_free} g(x; Ω_k).  (7)
The problem of exploration is even more challenging since, by definition, the environment is not known. Subsequent vantage points must lie within the current visible set Ω k . The corresponding greedy exploration algorithm is
x_{k+1} = arg max_{x ∈ Ω_k} g(x; Ω_k).  (8)
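To make the greedy step concrete, here is a minimal NumPy sketch of one step on a 2D occupancy grid. The brute-force ray-marching visibility test and all names in it are illustrative stand-ins, not the paper's implementation, which uses the level set machinery described below.

import numpy as np

def visible_from(free, x):
    """Boolean mask of grid cells with line-of-sight to x.
    free: 2D bool array, True on free space; x: (row, col) in free space."""
    m, n = free.shape
    vis = np.zeros_like(free)
    for i in range(m):
        for j in range(n):
            if not free[i, j]:
                continue
            t = np.linspace(0.0, 1.0, 2 * max(m, n))  # sample the segment x -> (i, j)
            rows = np.round(x[0] + t * (i - x[0])).astype(int)
            cols = np.round(x[1] + t * (j - x[1])).astype(int)
            vis[i, j] = free[rows, cols].all()        # occluded iff the segment crosses an obstacle
    return vis

def greedy_step(free, covered, explore=True):
    """One greedy step: maximize g(x) = |V_x ∪ Ω_k| − |Ω_k|.
    explore=True restricts candidates to the visible set Ω_k, as in (8);
    explore=False allows any free cell, as in the surveillance step (7)."""
    candidates = np.argwhere(covered if explore else free)
    gains = [np.count_nonzero(visible_from(free, tuple(c)) & ~covered) for c in candidates]
    best = tuple(candidates[int(np.argmax(gains))])
    return best, covered | visible_from(free, best)

Each step of this sketch costs on the order of m^4 operations on an m × m grid, which is exactly the expense that the learned surrogate g_θ introduced below is designed to avoid.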
However, we remark that in practice, one is typically interested only in a subset S of the collection of all possible environments {Ω_obs | Ω_obs ⊆ R^d}.
For example, cities generally follow a grid-like pattern. Knowing these priors can help guide our estimate of g for certain types of Ω obs , even when Ω obs is unknown initially.
We propose to encode these priors formally into the parameters, θ, of a learned function:
g_θ(x; Ω_k, B_k) for Ω_obs ∈ S,  (9)
where B k is the part of ∂Ω k that may actually lie in the free space Ω free :
B_k = ∂Ω_k \ Ω_obs.  (10)
See Figure 2 for an example gain function. We shall demonstrate that while training for g_θ, incorporating the shadow boundaries helps, in some sense, localize the learning of g, and is essential in creating a usable g_θ.
A bound for the known environment
We present a bound on the optimality of the greedy algorithm, based on submodularity [14], a useful property of set functions. We start with standard definitions. Let V be a finite set and f : 2 V → R be a set function which assigns a value to each subset S ⊆ V .
Definition 2.1 (Monotonicity). A set function f is monotone if for every A ⊆ B ⊆ V, f(A) ≤ f(B).
Definition 2.2 (Discrete derivative). The discrete derivative of f at S with respect to v ∈ V is Δ_f(v|S) := f(S ∪ {v}) − f(S).
Definition 2.3 (Submodularity). A set function f is submodular if for every A ⊆ B ⊆ V and v ∈ V \ B, Δ_f(v|A) ≥ Δ_f(v|B).
In other words, set functions are submodular if they have diminishing returns. More details and extensions of submodularity can be found in [14].
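As a quick numeric sanity check of these definitions, the coverage function f(S) = |⋃_{x∈S} V_x| used below is monotone and submodular; the toy visibility sets here are made up purely for illustration.

# Illustrative visibility sets; any family of finite sets behaves the same way.
V = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}

def f(S):
    """Coverage function: cardinality of the union of visibility sets."""
    return len(set().union(*(V[x] for x in S))) if S else 0

def marginal(v, S):
    """Discrete derivative of f at S with respect to v."""
    return f(S | {v}) - f(S)

A, B, v = {"a"}, {"a", "b"}, "c"         # A ⊆ B, v ∉ B
assert f(A) <= f(B)                      # monotonicity
assert marginal(v, A) >= marginal(v, B)  # submodularity (diminishing returns)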
Lemma 2.1. The function f is monotone.
Proof. Consider A ⊆ B ⊆ Ω_free. Since f is the cardinality of a union of visibility sets, f(S) = |⋃_{x∈S} V_x|, we have
f(B) = |⋃_{x∈B} V_x| = |⋃_{x∈A∪(B\A)} V_x| ≥ |⋃_{x∈A} V_x| = f(A).
Lemma 2.2. The function f is submodular.
Proof. Let A ⊆ B ⊆ Ω_free and v ∈ Ω_free \ B, so that (A ∪ {v}) ∩ B = A. Since |X| + |Y| = |X ∪ Y| + |X ∩ Y| and ⋃_{x∈(A∪{v})∩B} V_x ⊆ (⋃_{x∈A∪{v}} V_x) ∩ (⋃_{x∈B} V_x), we have
|⋃_{x∈A∪{v}} V_x| + |⋃_{x∈B} V_x| ≥ |⋃_{x∈A∪{v}∪B} V_x| + |⋃_{x∈(A∪{v})∩B} V_x| = f(B ∪ {v}) + f(A).
Rearranging, we have
f(A ∪ {v}) − f(A) ≥ f(B ∪ {v}) − f(B), i.e. Δ_f(v|A) ≥ Δ_f(v|B).
Submodularity and monotonicity enable a bound which compares the relative performance of the greedy algorithm to the optimal solution.
Theorem 2.3. Let O*_k be the optimal set of k sensors, and let O_n = {x_i}_{i=1}^{n} be the set of n sensors placed using the greedy surveillance algorithm (7). Then
f(O_n) ≥ (1 − e^{−n/k}) f(O*_k).
Proof. For l < n we have
f(O*_k) ≤ f(O*_k ∪ O_l)  (12)
 = f(O_l) + Δ_f(O*_k | O_l)  (13)
 = f(O_l) + Σ_{i=1}^{k} Δ_f(x*_i | O_l ∪ {x*_1, …, x*_{i−1}})  (14)
 ≤ f(O_l) + Σ_{i=1}^{k} Δ_f(x*_i | O_l)  (15)
 ≤ f(O_l) + Σ_{i=1}^{k} [f(O_{l+1}) − f(O_l)]  (16)
 = f(O_l) + k [f(O_{l+1}) − f(O_l)].  (17)
Line (12) follows from monotonicity, (15) follows from submodularity of f, and (16) from the definition of the greedy algorithm. Define
δ_l := f(O*_k) − f(O_l), with δ_0 := f(O*_k). Then
f(O*_k) − f(O_l) ≤ k [f(O_{l+1}) − f(O_l)]
δ_l ≤ k (δ_l − δ_{l+1})
δ_l (1 − k) ≤ −k δ_{l+1}
δ_{l+1} ≤ δ_l (1 − 1/k).
Expanding the recurrence relation with δ n , we have
δ_n ≤ (1 − 1/k) δ_{n−1} ≤ (1 − 1/k)^n δ_0 = (1 − 1/k)^n f(O*_k).
Finally, substituting back the definition for δ n , we have the desired result:
δ_n ≤ (1 − 1/k)^n f(O*_k)
f(O*_k) − f(O_n) ≤ (1 − 1/k)^n f(O*_k)
f(O*_k) [1 − (1 − 1/k)^n] ≤ f(O_n)
f(O*_k) (1 − e^{−n/k}) ≤ f(O_n),  (18)
where (18) follows from the inequality 1 − x ≤ e^{−x}.
In particular, if n = k, then 1 − e^{−1} ≈ 0.63. This means that k steps of the greedy algorithm are guaranteed to cover at least 63% of the total volume, if the optimal solution can also be obtained with k steps. When n = 3k, the greedy algorithm covers at least 95% of the total volume. In [22], it was shown that no polynomial time algorithm can achieve a better bound.
A bound for the unknown environment
When the environment is not known, subsequent vantage points must lie within the current visible set to avoid collision with obstacles:
x_{k+1} ∈ V(O_k).  (19)
Thus, the performance of the exploration algorithm has a strong dependence on the environment Ω obs and the initial vantage point x 1 . We characterize this dependence using the notion of the exploration ratio.
Given an environment Ω obs and A ⊆ Ω free , consider the ratio of the marginal value of the greedy exploration algorithm, to that of the greedy surveillance algorithm:
ρ(A) := [sup_{x∈V(A)} Δ_f(x|A)] / [sup_{x∈Ω_free} Δ_f(x|A)].  (20)
That is, ρ(A) characterizes the relative gap (for lack of a better word) caused
by the collision-avoidance constraint x ∈ V(A). Let A_x = {A ⊆ Ω_free | x ∈ A} be the collection of vantage-point sets that contain x. Define the exploration ratio as
ρ_x := inf_{A ∈ A_x} ρ(A).  (21)
The exploration ratio is the worst-case gap between the two greedy algorithms, conditioned on x. It provides a bound on the difference between the optimal solution set of size k and the one produced by n steps of the greedy exploration algorithm.
Theorem 2.4. Let O*_k = {x*_i}_{i=1}^{k} be the optimal sequence of k sensors which includes x*_1 = x_1, and let O_n = {x_i}_{i=1}^{n} be the sequence of n sensors placed using the greedy exploration algorithm (8). Then, for k, n > 1:
f(O_n) ≥ [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] (1 − f(x_1)/f(O*_k)) f(O*_k).
This is reminiscent of Theorem 2.3, with two subtle differences: the exponent is scaled by the exploration ratio ρ_{x_1}, and the bound is discounted by the coverage f(x_1) of the prescribed initial vantage point.
Proof. We have, for l < n:
f(O*_k) ≤ f(O*_k ∪ O_l)
 = f(O_l) + Δ_f(O*_k | O_l)
 = f(O_l) + Σ_{i=1}^{k} Δ_f(x*_i | O_l ∪ {x*_1, …, x*_{i−1}})  (22)
 ≤ f(O_l) + Σ_{i=1}^{k} Δ_f(x*_i | O_l)  (23)
 = f(O_l) + Δ_f(x*_1 | O_l) + Σ_{i=2}^{k} Δ_f(x*_i | O_l)
 = f(O_l) + Σ_{i=2}^{k} Δ_f(x*_i | O_l)  (24)
 ≤ f(O_l) + Σ_{i=2}^{k} max_{x∈Ω_free} Δ_f(x | O_l)
 ≤ f(O_l) + (1/ρ_{x_1}) Σ_{i=2}^{k} max_{x∈V(O_l)} Δ_f(x | O_l)  (25)
 ≤ f(O_l) + (1/ρ_{x_1}) Σ_{i=2}^{k} [f(O_{l+1}) − f(O_l)]  (26)
 = f(O_l) + ((k−1)/ρ_{x_1}) [f(O_{l+1}) − f(O_l)].
Line (22) is a telescoping sum, (23) follows from submodularity of f , (24) uses the fact that x * 1 ∈ O l , (25) follows from the definition of ρ x 1 and (26) stems from the definition of the greedy exploration algorithm (8).
As before, define δ_l := f(O*_k) − f(O_l). However, this time, note that δ_1 := f(O*_k) − f(O_1) = f(O*_k) − f(x_1). Then
f(O*_k) − f(O_l) ≤ ((k−1)/ρ_{x_1}) [f(O_{l+1}) − f(O_l)]
δ_l ≤ ((k−1)/ρ_{x_1}) (δ_l − δ_{l+1})
δ_{l+1} ≤ δ_l (1 − ρ_{x_1}/(k−1)).
Expanding the recurrence relation with δ n , we have
δ_n ≤ (1 − ρ_{x_1}/(k−1)) δ_{n−1} ≤ (1 − ρ_{x_1}/(k−1))^{n−1} δ_1 = (1 − ρ_{x_1}/(k−1))^{n−1} [f(O*_k) − f(x_1)].
Now, substituting back the definition for δ n , we arrive at
f(O*_k) − f(O_n) ≤ (1 − ρ_{x_1}/(k−1))^{n−1} [f(O*_k) − f(x_1)]
[f(O*_k) − f(x_1)] − [f(O_n) − f(x_1)] ≤ (1 − ρ_{x_1}/(k−1))^{n−1} [f(O*_k) − f(x_1)]
[f(O*_k) − f(x_1)] [1 − (1 − ρ_{x_1}/(k−1))^{n−1}] ≤ f(O_n) − f(x_1)
[f(O*_k) − f(x_1)] [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] ≤ f(O_n) − f(x_1).
Finally, with some more algebra
f(O_n) ≥ f(x_1) + [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] [f(O*_k) − f(x_1)]
 = [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] f(O*_k) + f(x_1) e^{−(n−1)ρ_{x_1}/(k−1)}
 ≥ [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] (1 − f(x_1)/f(O*_k)) f(O*_k).
Exploration ratio example
We demonstrate an example where ρ x can be an arbitrarily small factor that is determined by the geometry of Ω free . Figure 3 depicts an illustration of the setup for the narrow alley environment.
Consider a domain Ω = [0, 1] × [0, 1] with a thin vertical wall of width ε ≪ 1, whose centerline stretches from (3ε/2, 0) to (3ε/2, 1). A narrow opening of size ε² × ε is centered at (3ε/2, 1/2). Suppose
x_1 = x*_1 = A, so that f({x_1}) = ε + O(ε²), where the ε² factor is due to the small sliver of the narrow alley visible from A. The next vantage point must lie in the visible region: x_2 ∈ V(x_1) = [0, ε] × [0, 1]. One possible location is x_2 = B.
Then, after 2 steps of the greedy algorithm, we have f(O_2) = ε + O(ε²).
Meanwhile, the total visible area is f(O*_2) = 1 − O(ε), and the ratio of greedy to optimal area coverage is
f(O_2)/f(O*_2) = [ε + O(ε²)] / [1 − O(ε)] = O(ε).  (27)
The exploration ratio is ρ_{x_1} = O(ε²), since
max_{x∈V({x_1})} Δ_f(x|{x_1}) = O(ε²) and max_{x∈Ω_free} Δ_f(x|{x_1}) = 1 − O(ε).  (28)
According to the bound, with k = n = 2, we should have
f(O_2)/f(O*_2) ≥ [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] (1 − f(x_1)/f(O*_2)) = [1 − e^{−O(ε²)}] (1 − O(ε)) = Ω(ε²),  (29)
which is consistent with the actual ratio O(ε) in (27). Now suppose instead that x_1 = x*_1 = B, so that f(x_1) = 1 − O(ε) and ρ_{x_1} = 1, since the greedy exploration and surveillance steps coincide.
According to the bound, with k = n = 2 (using the penultimate inequality in the proof of Theorem 2.4), we should have
f(O_2)/f(O*_2) ≥ [1 − e^{−(n−1)ρ_{x_1}/(k−1)}] + [f(x_1)/f(O*_2)] e^{−(n−1)ρ_{x_1}/(k−1)} ≥ 1 − O(ε),  (30)
which is the case, since f(O_2) = f(O*_2).
By considering the first vantage point x_1 as part of the bound, we account for some of the unavoidable uncertainties associated with unknown environments during exploration.
Numerical comparison
We compare both greedy algorithms on random arrangements of up to 6 circular obstacles. Each algorithm starts from the same initial position and runs until all free area is covered. We record the number of vantage points required over 200 runs for each number of obstacles.
Surprisingly, the exploration algorithm sometimes requires fewer vantage points than the surveillance algorithm. Perhaps the latter is too aggressive, or perhaps the collision-avoidance constraint acts as a regularizer. For example, when there is a single circle, the greedy surveillance algorithm places the second vantage point x_2 on the opposite side of this obstacle. This may lead to two slivers of occlusion forming on either side of the circle, which will require 2 additional vantage points to cover. With the greedy exploration algorithm, we do not have this problem, due to the collision-avoidance constraint. Figure 4 shows a selected example with 1 and 5 obstacles. Figure 5 shows histograms of the number of steps needed for each algorithm. On average, both algorithms require a similar number of steps, but the exploration algorithm has a slight advantage.
In this section, we discuss the method for approximating the gain function when the map is not known. Given the set of previously-visited vantage points, we compute the cumulative visibility and shadow boundaries. We approximate the gain function by applying the trained neural network to this pair of inputs, and pick the next point according to (7). This procedure repeats until there are no shadow boundaries or occlusions.
The data needed for the training and evaluation of g θ are computed using level sets [26,28,25]. Occupancy grids may be applicable, but we choose level sets since they have proven to be accurate and robust. In particular, level sets are necessary for subpixel resolution of shadow boundaries and they allow for efficient visibility computation, which is crucial when generating the library of training examples.
The training geometry is embedded by a signed distance function, denoted by φ. For each vantage point x i , the visibility set is represented by the level set function ψ(·, x i ), which is computed efficiently using the algorithm described in [34].
In the calculus of level set functions, unions and intersections of sets are translated, respectively, into taking the maximum and minimum of the corresponding characteristic functions. The cumulatively visible sets Ω_k are represented by the level set function Ψ_k(x), which is defined recursively by
Ψ_0(x) = ψ(x, x_0),  (31)
Ψ_k(x) = max{Ψ_{k−1}(x), ψ(x, x_k)}, k = 1, 2, …,  (32)
where the max is taken point-wise. Thus we have
Ω_free = {x | φ(x) > 0},  (33)
V_{x_i} = {x | ψ(x, x_i) > 0},  (34)
Ω_k = {x | Ψ_k(x) > 0}.  (35)
The shadow boundaries B k are approximated by the "smeared out" function:
b_k(x) := δ_ε(Ψ_k) · [1 − H(G_k(x))],  (36)
where H(x) is the Heaviside function and
δ_ε(x) = (2/ε) cos²(πx/ε) · 1_{[−ε/2, ε/2]}(x),  (37)
γ(x, x_0) = (x_0 − x)ᵀ · ∇φ(x),  (38)
G_0 = γ(x, x_0),  (39)
G_k(x) = max{G_{k−1}(x), γ(x, x_k)}, k = 1, 2, …  (40)
Recall that the shadow boundaries are the portion of ∂Ω_k that lies in free space; the role of 1 − H(G_k) is to mask out the portion of obstacles that are currently visible from {x_i}_{i=1}^{k}. In our implementation, we take ε = 3Δx, where Δx is the grid node spacing. We refer the readers to [35] for a short review of relevant details.
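As a rough array-level illustration of (32) and (36)-(40): the level set update reduces to a pointwise maximum, and the shadow-boundary indicator is a smeared delta with an obstacle mask. The sketch below assumes the level set arrays Ψ_k and G_k are already computed; it is not the authors' code.

import numpy as np

def update_cumulative(Psi_prev, psi_new):
    """Eq. (32): cumulative visibility is the pointwise max of level sets."""
    return np.maximum(Psi_prev, psi_new)

def smeared_delta(psi, eps):
    """Eq. (37): cos^2 bump of width eps around the zero level set."""
    bump = (2.0 / eps) * np.cos(np.pi * psi / eps) ** 2
    return np.where(np.abs(psi) <= eps / 2.0, bump, 0.0)

def shadow_boundary(Psi_k, G_k, eps):
    """Eq. (36): smear out the zero level set of Psi_k, masking away the
    currently visible obstacle walls via the 1 − H(G_k) factor."""
    return smeared_delta(Psi_k, eps) * (1.0 - (G_k > 0).astype(float))

With ε = 3Δx as in the text, the bump spans about three grid cells across the boundary, giving the CNN a subpixel-accurate but non-degenerate input channel.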
When the environment Ω_obs is known, we can compute the gain function exactly:
g(x; Ω_k) = ∫ H( H(ψ(ξ, x)) − H(Ψ_k(ξ)) ) dξ.  (41)
We remark that the integrand will be 1 where the new vantage point uncovers something not previously seen. Computing g for all x is costly; each visibility and volume computation requires O(m^d) operations, and repeating this for all points in the domain results in O(m^{2d}) total flops. We approximate it with a function g_θ parameterized by θ:
g_θ(x; Ψ_k, φ, b_k) ≈ g(x; Ω_k).  (42)
If the environment is unknown, we directly approximate the gain function by learning the parameters θ of a function
g_θ(x; Ψ_k, b_k) ≈ g(x; Ω_k) H(Ψ_k)  (43)
using only the observations as input. Note the H(Ψ_k) factor is needed for collision avoidance during exploration because it is not known a priori whether an occluded location y is part of an obstacle or free space. Thus g_θ(y) must be zero.
Training procedure
We sample the environments uniformly from a library. For each Ω_obs, a sequence of data pairs is generated and included in the training set T:
({Ψ_k, b_k}, g(x; Ω_k) H(Ψ_k)), k = 0, 1, 2, ….  (44)
It would be prohibitively expensive to enumerate all admissible sequences consisting of k steps. Instead, to generate causally relevant data, we use an ε-greedy approach: we uniformly sample initial positions; with probability ε, the next vantage point is chosen randomly from the admissible set, and with probability 1 − ε, the next vantage point is chosen according to (7). Figure 6 shows an illustration of the generation of causal data along the subspace of relevant shapes.
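A sketch of the ε-greedy trajectory generation described above; env, state, and their methods are hypothetical wrappers around the level set computations, named here only for illustration.

import random

def generate_trajectory(env, eps=0.2, max_steps=50):
    """Sample one training trajectory with the eps-greedy rule:
    explore with probability eps, otherwise take the greedy step (7)."""
    state = env.reset(random_start=True)   # uniformly sampled initial vantage point
    pairs = []
    while not state.done() and len(pairs) < max_steps:
        # Record ((Psi_k, b_k), exact masked gain) as one training pair, cf. (44).
        pairs.append((state.inputs(), state.exact_gain()))
        if random.random() < eps:
            x_next = state.sample_admissible()   # random admissible move
        else:
            x_next = state.greedy_argmax()       # exact greedy move, eq. (7)
        state = env.step(x_next)
    return pairs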
Figure 6: Causal data generation along the subspace of relevant shapes. Each dot is a data sample corresponding to a sequence of vantage points.
The function g_θ is learned by minimizing the empirical loss across all data pairs for each Ω_obs in the training set T:
argmin_θ (1/N) Σ_{Ω_obs∈T} Σ_k L( g_θ(x; Ψ_k, b_k), g(x; Ω_k) H(Ψ_k) ),  (45)
where N is the total number of data pairs. We use the cross entropy loss function:
L(p, q) = −∫ [ p(x) log q(x) + (1 − p(x)) log(1 − q(x)) ] dx.  (46)
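Discretized on a grid, (45)-(46) amount to a per-pixel binary cross entropy; a PyTorch sketch, assuming the gain targets are normalized to [0, 1] to match the sigmoid output:

import torch.nn.functional as F

def gain_loss(pred, target):
    """Pixelwise binary cross entropy between the predicted gain map
    (sigmoid output in (0, 1)) and the masked exact gain g(x)H(Psi_k),
    both of shape (batch, 1, H, W)."""
    return F.binary_cross_entropy(pred, target)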
Network architecture
We use convolutional neural networks (CNNs) to approximate the gain function, which depends on the shape of Ω_obs and the location x. CNNs have been used to approximate functions of shapes effectively in many applications. Their feedforward evaluations are efficient if the off-line training cost is ignored. The gain function g(x) does not depend directly on x, but rather on x's visibility of Ω_free, with a domain of dependence bounded by the sensor range. We employ a fully convolutional approach for learning g, which makes the network applicable to domains of different sizes. The generalization to 3D is also straightforward.
We base the architecture of the CNN on U-Net [27], which has had great success in dense inference problems, such as image segmentation. It aggregates information from various layers in order to have wide receptive fields while maintaining pixel precision. The main design choice is to make sure that the receptive field of our model is sufficient. That is, we want to make sure that the value predicted at each voxel depends on a sufficiently large neighborhood. For efficiency, we use convolution kernels of size 3 in each dimension. By stacking multiple layers, we can achieve large receptive fields.
Thus the complexity for feedforward computations is linear in the total number of grid points.
Define a conv block as the following layers: convolution, batch norm, leaky ReLU, stride-2 convolution, batch norm, and leaky ReLU. Each conv block reduces the image size by a factor of 2. The latter half of the network increases the image size using deconv blocks: bilinear 2× upsampling, convolution, batch norm, and leaky ReLU.
Our 2D network uses 6 conv blocks followed by 6 deconv blocks, while our 3D network uses 5 of each block. We choose the number of blocks to ensure that the receptive field is at least the size of the training images: 128 × 128 and 64 × 64 × 64. The first conv block outputs 4 channels. The number of channels doubles with each conv block, and halves with each deconv block.
The network ends with a single-channel, 1×1-kernel convolution layer followed by a sigmoid activation. This ensures that the network aggregates all information into a prediction of the correct size and range.
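A PyTorch sketch of the 2D blocks as described; the channel and block counts follow the text, but everything else is a plausible reading rather than the released code.

import torch.nn as nn

def conv_block(c_in, c_out):
    # conv, batch norm, leaky ReLU, then a stride-2 conv that halves the image size
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.LeakyReLU(),
        nn.Conv2d(c_out, c_out, 3, stride=2, padding=1), nn.BatchNorm2d(c_out), nn.LeakyReLU(),
    )

def deconv_block(c_in, c_out):
    # bilinear 2x upsampling, conv, batch norm, leaky ReLU
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.LeakyReLU(),
    )

# 6 conv blocks (channels 4, 8, ..., 128) mirrored by 6 deconv blocks,
# ending in a 1x1 convolution plus sigmoid:
head = nn.Sequential(nn.Conv2d(4, 1, kernel_size=1), nn.Sigmoid())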
Numerical results
We present some experiments to demonstrate the efficacy of our approach.
Also, we demonstrate its limitations. First, we train on 128 × 128 aerial city blocks cropped from the INRIA Aerial Image Labeling Dataset [21]. It contains binary images with building labels from several urban areas, including Austin, Chicago, Vienna, and Tyrol. We train on all the areas except Austin, which we hold out for evaluation. We call this model City-CNN. Second, we train a similar model, NoSB-CNN, on the same training data, but omit the shadow boundary from the input. Third, we train another model, Radial-CNN, on synthetically-generated radial maps, such as the one in Figure 13.
Given a map, we randomly select an initial location. In order to generate the sequence of vantage points, we apply (7), using g_θ in place of g. Ties are broken by choosing the closest point to x_k. We repeat this process until there are no shadow boundaries, the gain function is smaller than ε, or the residual is less than δ, where the residual is defined as:
$r = \frac{|\Omega_{free} \setminus \Omega_k|}{|\Omega_{free}|}. \qquad (47)$
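In pseudocode, the loop just described might look as follows. This is a minimal sketch: `predict_gain` stands in for the learned g_θ, `update_visibility` for the level-set visibility update, the closest-point tie-breaking is omitted, and all names are assumptions for illustration.

```python
import numpy as np

def explore(omega_free, x0, predict_gain, update_visibility,
            eps=0.1, delta=1e-3, max_steps=200):
    """Greedy next-best-view loop (sketch). `omega_free` is a boolean mask of
    the explorable region; `update_visibility` fuses the view from a vantage
    point into the cumulative visibility mask."""
    visible = update_visibility(np.zeros_like(omega_free, dtype=bool), x0)
    vantage_points = [x0]
    for _ in range(max_steps):
        residual = (omega_free & ~visible).sum() / omega_free.sum()  # Eq. (47)
        gain = predict_gain(visible)          # predicted gain over the grid
        gain = np.where(visible, gain, 0.0)   # admissible points: the seen region
        if gain.max() < eps or residual < delta:
            break
        x_next = np.unravel_index(np.argmax(gain), gain.shape)
        vantage_points.append(x_next)
        visible = update_visibility(visible, x_next)
    return vantage_points
```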
We compare these against the algorithm that uses the exact gain function, which we call Exact. We also compare against Random, a random walker which chooses subsequent vantage points uniformly from the visible region, and Random-SB, which samples points uniformly in a small neighborhood of the shadow boundaries. We analyze the number of steps required to cover the scene and the residual as a function of the number of steps. The algorithm is robust to the initial positions. Figure 9 shows the distribution of the number of steps and the residual across over 800 runs from varying initial positions over a 512 × 512 Austin map. In practice, using the shadow boundaries as a stopping criterion can be unreliable. Due to numerical precision and discretization effects, the shadow boundaries may never completely disappear. Instead, the algorithm terminates when the maximum predicted gain falls below a certain threshold ε. In this example, we used ε = 0.1.
Empirically, this strategy is robust. On average, the algorithm required 33 vantage points to reduce the occluded region to within 0.1% of the explorable area. Figure 10 shows an example sequence consisting of 36 vantage points. Each subsequent step is generated in under 1 second using the CPU and almost instantaneously with a GPU.
Even when the maximizer of the predicted gain function differs from that of the exact gain function, the difference in gain is negligible. This is evident as the residuals for City-CNN decrease at rates similar to Exact. Figure 11 shows an example of the residual as a function of the number of steps for one such sequence generated by these algorithms on a 1024 × 1024 map of Austin. We see that City-CNN performs comparably to the Exact approach in terms of residual. However, City-CNN takes 140 seconds to generate 50 steps on the CPU, while Exact, an O(m^4) algorithm, takes more than 16 hours to produce 50 steps.
Effect of shadow boundaries
The inclusion of the shadow boundaries as input to the CNN is critical for the algorithm to work. Without the shadow boundaries, the algorithm may select a vantage point that results in no change to the cumulative visibility. At the next iteration, the input is the same as in the previous iteration, and the result will be the same; the algorithm becomes stuck in a cycle. To avoid this, we prevent vantage points from repeating by zeroing out the gain function at that point and recomputing the argmax. Still, the vantage points tend to cluster near flat edges, as in Figure 12. This clustering behavior causes the NoSB-CNN model to be, at times, worse than Random. See Figure 11 for how the clustering inhibits the reduction of the residual.
Effect of shape
The shape of the obstacles, i.e., Ω^c, used in training affects the gain function predictions. Figure 13 compares the gain functions produced by City-CNN and Radial-CNN.
Frequency map
Here we present one of our studies concerning the exclusivity of vantage point placements in Ω. We generated sequences of vantage points starting from over 800 different initial conditions using the City-CNN model on a 512 × 512 Austin map. Then, we model each vantage point as a Gaussian with fixed width and overlay the resulting distribution on the Austin map in Figure 14. This gives us a frequency map of the most recurring vantage points. These hot spots reveal regions that are more secluded; the visibility of those regions is therefore more sensitive to vantage point selection. The efficiency of the CNN method allows us to address many surveillance-related questions for a large collection of relevant geometries.
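A frequency map of this kind can be assembled directly. The sketch below assumes vantage points are given as (row, col) grid coordinates and uses an isotropic Gaussian of fixed width; the width value and function name are illustrative.

```python
import numpy as np

def frequency_map(vantage_points, shape, sigma=5.0):
    """Overlay each vantage point as an isotropic Gaussian of fixed width and
    accumulate, yielding a heat map of recurring vantage points."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    heat = np.zeros(shape)
    for (py, px) in vantage_points:
        heat += np.exp(-((yy - py) ** 2 + (xx - px) ** 2) / (2.0 * sigma ** 2))
    return heat / heat.max()   # normalize so hot spots peak at 1

# Toy usage: several runs that repeatedly visit a secluded corner.
heat = frequency_map([(10, 12), (11, 12), (10, 13), (40, 40)], shape=(64, 64))
print(heat.shape, float(heat.max()))
```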
Art gallery
Our proposed approach outperforms the computational geometry solution [23] to the art gallery problem, even though we do not assume the environment is known. The key issue with computational geometry approaches is that they are heavily dependent on the triangulation. As an extreme example, consider an art gallery that is a simple convex n-gon. Even though it is sufficient to place a single vantage point anywhere in the interior of the room, the triangulation-based approach produces a solution with ⌊n/3⌋ vertex guards. Figure 15 shows an example gallery consisting of 58 vertices. The computational geometry approach requires ⌊n/3⌋ = 19 vantage points to completely cover the scene, even if point guards are used [5,12]. The gallery contains r = 19 reflex angles, so the work of [8] requires r + 1 = 20 vantage points. On average, City-CNN requires only 8 vantage points.
3D environment
We present a 3D simulation of a 250m × 250m environment based on Castle Square Parks in Boston. See Figure 16 for snapshots of the algorithm in action. The map is discretized as a level set function on a 768 × 768 × 64 voxel grid. At this resolution, small pillars are accurately reconstructed by our exploration algorithm. Each step can be generated in 3 seconds using the GPU or 300 seconds using the CPU. Parallelization of the distance function computation will further reduce the computation time significantly. A map of this size was previously infeasible. Lastly, Figure 17 shows snapshots from the exploration of a more challenging, cluttered 3D scene with many nooks.

Figure 15: Comparison of the computational geometry approach and the City-CNN approach to the art gallery problem. The red circles are the vantage points computed by the methods. Left: a result computed by the computational geometry approach, given the environment. Right: an example sequence of 7 vantage points generated by the City-CNN model.
Conclusion
From the perspective of inverse problems, we proposed a greedy algorithm for autonomous surveillance and exploration. We show that this formulation can be well-approximated using convolutional neural networks, which learn geometric priors for a large class of obstacles. The inclusion of shadow boundaries, computed using the level set method, is crucial for the success of the algorithm. One of the advantages of using the gain function (6), an integral quantity, is its stability with respect to noise in positioning and sensor measurements. In practice, we envision that it can be used in conjunction with SLAM algorithms [7,2] for a wide range of real-world applications.
One may also consider n-step greedy algorithms, where n vantage points are chosen simultaneously. However, being more greedy is not necessarily better. If the performance metric is the cardinality of the solution set, then it is not clear that multi-step greedy algorithms lead to smaller solutions.
We saw in section 2 that, even for the single circular obstacle, the greedy surveillance algorithm may sometimes require more steps than the exploration algorithm to attain complete coverage.
If the performance metric is based on the rate at which the objective function increases, then a multi-step greedy approach would be appropriate. However, on a grid with m nodes in d dimensions, there are O(m^{nd}) possible combinations. For each combination, computing the visibility and gain function requires O(n m^d) cost. In total, the complexity is O(n m^{d(n+1)}), which is very expensive even when used for offline training of a neural network. In such cases, it is necessary to selectively sample only the relevant combinations. One way to do so is through a tree search algorithm.
| 5,988 |
1809.05343
|
2952029609
|
Graph Convolutional Networks (GCNs) have become a crucial tool for learning representations of graph vertices. The main challenge of adapting GCNs to large-scale graphs is the scalability issue: they incur heavy cost both in computation and memory due to the uncontrollable neighborhood expansion across layers. In this paper, we accelerate the training of GCNs through developing an adaptive layer-wise sampling method. By constructing the network layer by layer in a top-down pathway, we sample the lower layer conditioned on the top one, where the sampled neighborhoods are shared by different parent nodes and the over-expansion is avoided owing to the fixed-size sampling. More importantly, the proposed sampler is adaptive and applicable for explicit variance reduction, which in turn enhances the training of our method. Furthermore, we propose a novel and economical approach to promote the message passing over distant nodes by applying skip connections. Intensive experiments on several benchmarks verify the effectiveness of our method regarding the classification accuracy while enjoying faster convergence speed.
|
While graph structures are central tools for various learning tasks (e.g., semi-supervised learning in @cite_15 @cite_13 ), how to design efficient graph convolution networks has become a popular research topic. Graph convolutional approaches are often categorized into spectral and non-spectral classes @cite_24 . The spectral approach, first proposed by @cite_21 , defines the convolution operation in the Fourier domain. Later, @cite_12 enables localized filtering by applying efficient spectral filters, and @cite_19 employs a Chebyshev expansion of the graph Laplacian to avoid the eigendecomposition. Recently, GCN is proposed in @cite_13 to simplify previous methods with a first-order expansion and a re-parameterization trick. Non-spectral approaches define convolution on graphs by using the spatial connections directly. For instance, @cite_7 learns a weight matrix for each node degree, the work by @cite_20 defines multiple-hop neighborhoods by using the power series of a transition matrix, and the authors of @cite_23 extract normalized neighborhoods that contain a fixed number of nodes.
|
{
"abstract": [
"Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.",
"Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.",
"We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).",
"",
"",
"Graph-based semisupervised learning (GSSL) provides a promising paradigm for modeling the manifold structures that may exist in massive data sources in high-dimensional spaces. It has been shown effective in propagating a limited amount of initial labels to a large amount of unlabeled data, matching the needs of many emerging applications such as image annotation and information retrieval. In this paper, we provide reviews of several classical GSSL methods and a few promising methods in handling challenging issues often encountered in web-scale applications. First, to successfully incorporate the contaminated noisy labels associated with web data, label diagnosis and tuning techniques applied to GSSL are surveyed. Second, to support scalability to the gigantic scale (millions or billions of samples), recent solutions based on anchor graphs are reviewed. To help researchers pursue new ideas in this area, we also summarize a few popular data sets and software tools publicly available. Important open issues are discussed at the end to stimulate future research.",
"We present diffusion-convolutional neural networks (DCNNs), a new model for graph-structured data. Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have several attractive qualities, including a latent representation for graphical data that is invariant under isomorphism, as well as polynomial-time prediction and learning that can be represented as tensor operations and efficiently implemented on a GPU. Through several experiments with real structured datasets, we demonstrate that DCNNs are able to outperform probabilistic relational models and kernel-on-graph methods at relational node classification tasks.",
"We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.",
"Deep Learning's recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate."
],
"cite_N": [
"@cite_7",
"@cite_21",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_15",
"@cite_20",
"@cite_13",
"@cite_12"
],
"mid": [
"2406128552",
"1662382123",
"2766453196",
"",
"",
"2026731079",
"2963984147",
"2519887557",
"637153065"
]
}
|
Adaptive Sampling Towards Fast Graph Representation Learning
|
Deep Learning, especially Convolutional Neural Networks (CNNs), has revolutionized various machine learning tasks with grid-like input data, such as image classification [1] and machine translation [2]. By making use of local connections and weight sharing, CNNs are able to pursue translational invariance of the data. In many other contexts, however, the input data lie on irregular or non-Euclidean domains, such as graphs, which encode pairwise relationships. Examples include social networks [3], protein interfaces [4], and 3D meshes [5]. How to define convolutional operations on graphs is still an ongoing research topic.
There have been several attempts in the literature to develop neural networks that handle arbitrarily structured graphs. Whereas learning the graph embedding is already an important topic [6,7,8], this paper mainly focuses on learning representations for graph vertices by aggregating their features/attributes. The closest work in this vein is the Graph Convolutional Network (GCN) [9], which applies connections between vertices as convolution filters to perform neighborhood aggregation. As demonstrated in [9], GCNs have achieved state-of-the-art performance on node classification.
An obvious challenge in applying current graph networks is scalability. Calculating convolutions requires the recursive expansion of neighborhoods across layers, which is computationally prohibitive and demands a hefty memory footprint. Even for a single node, the computation quickly covers a large portion of the graph due to the layer-by-layer neighborhood expansion, particularly if the graph is dense or power-law. Conventional mini-batch training is unable to speed up the convolution computations, since every batch will involve a large number of vertices, even when the batch size is small.

Figure caption: To illustrate the effectiveness of the layer-wise sampling, we assume that the nodes denoted by the red circle in (a) and (b) have at least two parents in the upper layer. In the node-wise sampling, the neighborhoods of each parent are not seen by other parents, hence the connections between the neighborhoods and other parents are unused. In contrast, for the layer-wise strategy, all neighborhoods are shared by nodes in the parent layer, thus all between-layer connections are utilized.
To avoid the over-expansion issue, we accelerate the training of GCNs by controlling the size of the sampled neighborhoods in each layer (see Figure 5). Our method builds the network layer by layer in a top-down way, where the nodes in the lower layer are sampled conditionally on the upper layer's. Such layer-wise sampling is efficient in two technical aspects. First, we can reuse the information of the sampled neighborhoods, since the nodes in the lower layer are visible to and shared by their different parents in the upper layer. Second, it is easy to fix the size of each layer to avoid over-expansion of the neighborhoods, as the nodes of the lower layer are sampled as a whole.
The core of our method is to define an appropriate sampler for the layer-wise sampling. A common objective to design the sampler is to minimize the resulting variance. Unfortunately, the optimal sampler to minimize the variance is uncomputable due to the inconsistency between the top-down sampling and the bottom-up propagation in our network (see § 4.2 for details). To tackle this issue, we approximate the optimal sampler by replacing the uncomputable part with a self-dependent function, and then adding the variance to the loss function. As a result, the variance is explicitly reduced by training the network parameters and the sampler.
Moreover, we explore how to enable efficient message passing across distant nodes. Current methods [6,10] resort to random walks to generate neighborhoods of various steps, and then integrate over the multi-hop neighborhoods. Instead, this paper proposes a novel mechanism that adds a skip connection between the (l+1)-th and (l−1)-th layers. This short-cut connection reuses the nodes in the (l−1)-th layer as the 2-hop neighborhoods of the (l+1)-th layer, thus naturally maintaining the second-order proximity without incurring extra computation.
To sum up, we make the following contributions in this paper: I. We develop a novel layer-wise sampling method to speed up the GCN model, where the between-layer information is shared and the size of the sampled nodes is controllable. II. The sampler for the layer-wise sampling is adaptive and determined by explicit variance reduction in the training phase. III. We propose a simple yet efficient approach to preserve the second-order proximity by formulating a skip connection across two layers. We evaluate the performance of our method on four popular benchmarks for node classification: Cora, Citeseer, Pubmed [11] and Reddit [3]. Intensive experiments verify the effectiveness of our method regarding classification accuracy and convergence speed.
Notations and Preliminaries
Notations. This paper mainly focuses on undirected graphs. Let G = (V, E) denote the undirected graph with nodes $v_i \in V$ and edges $(v_i, v_j) \in E$, and let N denote the number of nodes. Each element $A_{ij}$ of the adjacency matrix $A \in \mathbb{R}^{N \times N}$ represents the weight associated with edge $(v_i, v_j)$. We also have a feature matrix $X \in \mathbb{R}^{N \times D}$, with $x_i$ denoting the D-dimensional feature of node $v_i$.
GCN. The GCN model developed by Kipf and Welling [9] is one of the most successful convolutional networks for graph representation learning. If we define $h^{(l)}(v_i)$ as the hidden feature of the l-th layer for node $v_i$, the feed-forward propagation becomes

$h^{(l+1)}(v_i) = \sigma\Big( \sum_{j=1}^{N} \hat{a}(v_i, u_j)\, h^{(l)}(u_j)\, W^{(l)} \Big), \quad i = 1, \cdots, N, \qquad (1)$

where $\hat{A} = (\hat{a}(v_i, u_j)) \in \mathbb{R}^{N \times N}$ is the re-normalization of the adjacency matrix; $\sigma(\cdot)$ is a nonlinear function; $W^{(l)} \in \mathbb{R}^{D^{(l)} \times D^{(l-1)}}$ is the filter matrix in the l-th layer; and we denote the nodes in the l-th layer as $u_j$ to distinguish them from those in the (l+1)-th layer.
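For reference, a dense (un-sampled) version of the propagation in Eq. (1) takes only a few lines of NumPy. This is a sketch, not the paper's code; tanh is a stand-in nonlinearity, and the symmetric re-normalization follows [9].

```python
import numpy as np

def gcn_layer(A_hat, H, W, sigma=np.tanh):
    """Full GCN propagation of Eq. (1):
    H_next[i] = sigma( sum_j A_hat[i, j] * H[j] @ W ).
    A_hat: (N, N) re-normalized adjacency; H: (N, D_in); W: (D_in, D_out)."""
    return sigma(A_hat @ H @ W)

# Toy usage on a 4-node cycle graph.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
A_tilde = A + np.eye(4)                    # add self-loops
d = A_tilde.sum(1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))  # symmetric re-normalization
H1 = gcn_layer(A_hat, rng.normal(size=(4, 8)), rng.normal(size=(8, 16)))
print(H1.shape)  # (4, 16)
```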
Adaptive Sampling
Eq. (1) indicates that GCNs require the full expansion of neighborhoods for the feed-forward computation of each node. This makes learning computationally intensive and memory-consuming on large-scale graphs containing hundreds of thousands of nodes or more. To circumvent this issue, this paper speeds up the feed-forward propagation by adaptive sampling. The proposed sampler is adaptive and applicable for variance reduction.
We first re-formulate the GCN update in expectation form and introduce the node-wise sampling accordingly. Then, we generalize the node-wise sampling to a more efficient framework termed layer-wise sampling. To minimize the resulting variance, we further propose to learn the layer-wise sampler by performing variance reduction explicitly. Lastly, we introduce the concept of the skip connection and apply it to enable the second-order proximity in the feed-forward propagation.
From Node-Wise Sampling to Layer-Wise Sampling
Node-Wise Sampling. We first observe that Eq. (1) can be rewritten in expectation form, namely,

$h^{(l+1)}(v_i) = \sigma_{W^{(l)}}\big( N(v_i)\, \mathbb{E}_{p(u_j|v_i)}[h^{(l)}(u_j)] \big), \qquad (2)$

where we have included the weight matrix $W^{(l)}$ into the function $\sigma(\cdot)$ for concision; $p(u_j|v_i) = \hat{a}(v_i, u_j)/N(v_i)$ defines the probability of sampling $u_j$ given $v_i$, with $N(v_i) = \sum_{j=1}^{N} \hat{a}(v_i, u_j)$.
A natural idea to speed up Eq. (2) is to approximate the expectation by Monte-Carlo sampling. To be specific, we estimate the expectation $\mu_p(v_i) = \mathbb{E}_{p(u_j|v_i)}[h^{(l)}(u_j)]$ with $\hat{\mu}_p(v_i)$ given by

$\hat{\mu}_p(v_i) = \frac{1}{n} \sum_{j=1}^{n} h^{(l)}(\hat{u}_j), \qquad \hat{u}_j \sim p(u_j|v_i). \qquad (3)$

By setting $n \ll N$, the Monte-Carlo estimation reduces the complexity of (1) from $O(|E| D^{(l)} D^{(l-1)})$ (where $|E|$ denotes the number of edges) to $O(n^2 D^{(l)} D^{(l-1)})$ if the numbers of sampling points for the (l+1)-th and l-th layers are both n.
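A sketch of the node-wise estimator of Eq. (3), assuming `A_hat` is a dense re-normalized adjacency and `H` holds the lower layer's hidden features; all names are illustrative.

```python
import numpy as np

def node_wise_estimate(A_hat, H, i, n, rng):
    """Monte-Carlo estimate for node i: draw n neighbours from
    p(u_j | v_i) = a_hat(v_i, u_j) / N(v_i) and average their features."""
    N_vi = A_hat[i].sum()
    p = A_hat[i] / N_vi                     # p(u_j | v_i)
    idx = rng.choice(len(p), size=n, p=p)   # n << N samples
    mu_hat = H[idx].mean(axis=0)            # Eq. (3)
    return N_vi * mu_hat                    # pre-activation N(v_i) * mu_hat

rng = np.random.default_rng(0)
A_hat = np.abs(rng.normal(size=(100, 100)))   # toy non-negative adjacency
H = rng.normal(size=(100, 16))
print(node_wise_estimate(A_hat, H, i=0, n=8, rng=rng).shape)  # (16,)
```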
By applying Eq. (3) in a multi-layer network, we construct the network structure in a top-down manner: sampling the neighbours of each node in the current layer recursively (see Figure 5 (a)). However, such node-wise sampling is still computationally expensive for deep networks, because the number of nodes to be sampled grows exponentially with the number of layers. Taking a network of depth d for example, the number of sampling nodes in the input layer increases to $O(n^d)$, leading to a significant computational burden for large d.
Layer-Wise Sampling. We equivalently transform Eq. (2) into the following form by applying importance sampling, i.e.,

$h^{(l+1)}(v_i) = \sigma_{W^{(l)}}\Big( N(v_i)\, \mathbb{E}_{q(u_j|v_1,\cdots,v_n)}\Big[ \frac{p(u_j|v_i)}{q(u_j|v_1,\cdots,v_n)}\, h^{(l)}(u_j) \Big] \Big), \qquad (4)$

where $q(u_j|v_1,\cdots,v_n)$ is defined as the probability of sampling $u_j$ given all the nodes of the current layer (i.e., $v_1,\cdots,v_n$). Similarly, we can speed up Eq. (4) by approximating the expectation with the Monte-Carlo mean, namely, computing $h^{(l+1)}(v_i) = \sigma_{W^{(l)}}\big( N(v_i)\, \hat{\mu}_q(v_i) \big)$ with

$\hat{\mu}_q(v_i) = \frac{1}{n} \sum_{j=1}^{n} \frac{p(\hat{u}_j|v_i)}{q(\hat{u}_j|v_1,\cdots,v_n)}\, h^{(l)}(\hat{u}_j), \qquad \hat{u}_j \sim q(\hat{u}_j|v_1,\cdots,v_n). \qquad (5)$
We term the sampling in Eq. (5) the layer-wise sampling strategy. As opposed to the node-wise method in Eq. (3), where the nodes $\{\hat{u}_j\}_{j=1}^{n}$ are generated for each parent $v_i$ independently, the sampling in Eq. (5) needs to be performed only once. Besides, in the node-wise sampling the neighborhoods of each node are not visible to other parents, while for the layer-wise sampling all sampled nodes $\{\hat{u}_j\}_{j=1}^{n}$ are shared by all nodes of the current layer. This sharing property enhances the message passing to the utmost. More importantly, the size of each layer is fixed to n, and the total number of sampled nodes grows only linearly with the network depth.
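The corresponding layer-wise estimator of Eq. (5) draws one shared set of nodes per layer. A minimal NumPy sketch, assuming a dense `A_hat` and an arbitrary proposal `q` over all N nodes (names are illustrative):

```python
import numpy as np

def layer_wise_estimate(A_hat, H, parents, q, n, rng):
    """One set of n nodes is sampled from q and shared by every parent v_i;
    the weights p(u_j|v_i)/q(u_j) correct for the mismatch (Eq. (5))."""
    idx = rng.choice(len(q), size=n, p=q)       # sampled once per layer
    mu_hat = np.empty((len(parents), H.shape[1]))
    for r, i in enumerate(parents):
        N_vi = A_hat[i].sum()
        p = A_hat[i] / N_vi                     # p(u_j | v_i)
        w = p[idx] / q[idx]                     # importance weights
        mu_hat[r] = N_vi * (w[:, None] * H[idx]).mean(axis=0)
    return mu_hat                               # apply sigma(W ...) afterwards

rng = np.random.default_rng(0)
A_hat = np.abs(rng.normal(size=(100, 100)))
H = rng.normal(size=(100, 16))
q = np.full(100, 1.0 / 100)                     # e.g. a uniform proposal
print(layer_wise_estimate(A_hat, H, parents=[0, 1, 2], q=q, n=8, rng=rng).shape)
```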
Explicit Variance Reduction
The remaining question for the layer-wise sampling is how to define the exact form of the sampler $q(u_j|v_1,\cdots,v_n)$. Indeed, a good estimator should reduce the variance caused by the sampling process, since high variance probably impedes efficient training. For simplicity, we concisely denote the distribution $q(u_j|v_1,\cdots,v_n)$ as $q(u_j)$ below.
According to the derivations of importance sampling in [23], we immediately conclude that

Proposition 1. The variance of the estimator $\hat{\mu}_q(v_i)$ in Eq. (5) is given by

$\mathrm{Var}_q(\hat{\mu}_q(v_i)) = \frac{1}{n}\, \mathbb{E}_{q(u_j)}\Big[ \frac{\big( p(u_j|v_i)\, |h^{(l)}(u_j)| - \mu_q(v_i)\, q(u_j) \big)^2}{q^2(u_j)} \Big]. \qquad (6)$

The optimal sampler minimizing the variance $\mathrm{Var}_q(\hat{\mu}_q(v_i))$ in Eq. (6) is given by

$q^*(u_j) = \frac{p(u_j|v_i)\, |h^{(l)}(u_j)|}{\sum_{j=1}^{N} p(u_j|v_i)\, |h^{(l)}(u_j)|}. \qquad (7)$
Unfortunately, it is infeasible to compute the optimal sampler in our case. By definition, the sampler $q^*(u_j)$ is computed based on the hidden feature $h^{(l)}(u_j)$, which is aggregated from its neighborhoods in previous layers. However, under our top-down sampling framework, the neural units of lower layers are unknown unless the network is completely constructed by the sampling.
To alleviate this chicken-and-egg dilemma, we learn a self-dependent function of each node to determine its importance for the sampling. Let $g(x(u_j))$ be the self-dependent function computed based on the node feature $x(u_j)$. Replacing the hidden feature in Eq. (7) with $g(x(u_j))$ arrives at

$q^*(u_j) = \frac{p(u_j|v_i)\, |g(x(u_j))|}{\sum_{j=1}^{N} p(u_j|v_i)\, |g(x(u_j))|}. \qquad (8)$

The sampler in Eq. (8) is node-wise and varies for different $v_i$. To make it applicable for the layer-wise sampling, we sum the computations over all nodes $\{v_i\}_{i=1}^{n}$, thus attaining

$q^*(u_j) = \frac{\sum_{i=1}^{n} p(u_j|v_i)\, |g(x(u_j))|}{\sum_{j=1}^{N} \sum_{i=1}^{n} p(u_j|v_i)\, |g(x(u_j))|}. \qquad (9)$

In this paper, we define $g(x(u_j))$ as a linear function, i.e., $g(x(u_j)) = W_g\, x(u_j)$, parameterized by the matrix $W_g \in \mathbb{R}^{1 \times D}$. Computing the sampler in Eq. (9) is efficient, since computing $p(u_j|v_i)$ (i.e., the adjacency value) and the self-dependent function $g(x(u_j))$ is fast.
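Eq. (9) then reduces to a handful of array operations. A sketch assuming dense inputs; in practice only the rows of the current mini-batch are touched, and the function name is illustrative.

```python
import numpy as np

def layer_wise_sampler(A_hat, X, parents, W_g):
    """Proposal q*(u_j) of Eq. (9): node-wise probabilities summed over the
    current layer's nodes, weighted by |g(x(u_j))| with g(x) = W_g x."""
    g = np.abs(X @ W_g.T).ravel()                                   # (N,)
    P = A_hat[parents] / A_hat[parents].sum(axis=1, keepdims=True)  # p(u_j|v_i)
    scores = P.sum(axis=0) * g                  # sum_i p(u_j|v_i) |g(x(u_j))|
    return scores / scores.sum()                # normalized over all N candidates

rng = np.random.default_rng(0)
A_hat = np.abs(rng.normal(size=(100, 100)))
X = rng.normal(size=(100, 32))
W_g = rng.normal(size=(1, 32))
q = layer_wise_sampler(A_hat, X, parents=[0, 1, 2], W_g=W_g)
print(q.shape, float(q.sum()))  # (100,) 1.0
```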
Note that applying the sampler given by Eq. (9) does not necessarily result in minimal variance. To fulfill variance reduction, we add the variance to the loss function and explicitly minimize it by model training. Suppose we have a mini-batch of data pairs $\{(v_i, y_i)\}_{i=1}^{n}$, where $v_i$ is a target node and $y_i$ is the corresponding ground-truth label. By the layer-wise sampling (Eq. (9)), the nodes of the previous layer are sampled given $\{v_i\}_{i=1}^{n}$, and this process is called recursively layer by layer until reaching the input domain. Then we perform a bottom-up propagation to compute the hidden features and obtain the estimated activation for node $v_i$, i.e., $\hat{\mu}_q(v_i)$. Certain nonlinear and softmax functions are further applied to $\hat{\mu}_q(v_i)$ to produce the prediction $\bar{y}(\hat{\mu}_q(v_i))$. Taking the classification loss and the variance (Eq. (6)) into account, we formulate a hybrid loss as

$\mathcal{L} = \frac{1}{n} \sum_{i=1}^{n} L_c\big(y_i, \bar{y}(\hat{\mu}_q(v_i))\big) + \lambda\, \mathrm{Var}_q(\hat{\mu}_q(v_i)), \qquad (10)$

where $L_c$ is the classification loss (e.g., the cross entropy) and $\lambda$ is the trade-off parameter, fixed to 0.5 in our experiments. Note that the activations of other hidden layers are also stochastic, and the resulting variances should be reduced as well. In Eq. (10) we only penalize the variance of the top layer for efficient computation and find it sufficient to deliver promising performance in our experiments.
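A hedged PyTorch sketch of the hybrid objective follows. The paper evaluates Var_q analytically via Eq. (6); here, purely for illustration, the variance is approximated empirically from a few independent Monte-Carlo estimates per node, which is a simplification rather than the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, labels, mu_hat_samples, lam=0.5):
    """Eq. (10): classification loss plus lambda times the variance of the
    top-layer estimator. `mu_hat_samples` has shape (S, batch, D): S
    independent sampled estimates of mu_hat_q(v_i) per node."""
    ce = F.cross_entropy(logits, labels)                    # L_c, cross entropy
    var = mu_hat_samples.var(dim=0, unbiased=False).mean()  # empirical Var_q proxy
    return ce + lam * var

logits = torch.randn(4, 3, requires_grad=True)
labels = torch.tensor([0, 2, 1, 0])
mu_hat_samples = torch.randn(5, 4, 16, requires_grad=True)
loss = hybrid_loss(logits, labels, mu_hat_samples)
loss.backward()     # gradients flow to both the classifier and the estimator
print(float(loss))
```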
To minimize the hybrid loss in Eq. (10), we need to perform gradient calculations. For the network parameters, e.g., $W^{(l)}$ in Eq. (2), the gradient calculation is straightforward and can be easily derived by an automatic-differentiation platform, e.g., TensorFlow [24]. For the parameters of the sampler, e.g., $W_g$ in Eq. (9), calculating the gradient is nontrivial, as the sampling process (Eq. (5)) is non-differentiable. Fortunately, we prove that the gradient of the classification loss with respect to the sampler is zero. We also derive the gradient of the variance term with respect to the sampler, and detail the gradient calculation in the supplementary material.
Preserving Second-Order Proximities by Skip Connections
The GCN update in Eq. (1) only aggregates messages passed from 1-hop neighborhoods. To allow the network to better utilize information across distant nodes, we could sample the multi-hop neighborhoods in a similar way as the random walk [6,10]. However, the random walk requires extra sampling to obtain distant nodes, which is computationally expensive for dense graphs. In this paper, we propose instead to propagate the information over distant nodes via skip connections.
The key idea of the skip connection is to reuse the nodes of the (l−1)-th layer to preserve the second-order proximity (see the definition in [7]). For the (l+1)-th layer, the nodes of the (l−1)-th layer are actually the 2-hop neighborhoods. If we further add a skip connection from the (l−1)-th to the (l+1)-th layer, as illustrated in Figure 5 (c), the aggregation will involve both the 1-hop and 2-hop neighborhoods. The calculation along the skip connection is formulated as

$h^{(l+1)}_{skip}(v_i) = \sum_{j=1}^{n} \hat{a}_{skip}(v_i, s_j)\, h^{(l-1)}(s_j)\, W^{(l-1)}_{skip}, \quad i = 1, \cdots, n, \qquad (11)$
where $s = \{s_j\}_{j=1}^{n}$ denotes the nodes in the (l−1)-th layer. Due to the 2-hop distance between $v_i$ and $s_j$, the weight $\hat{a}_{skip}(v_i, s_j)$ is supposed to be an element of $\hat{A}^2$. Here, to avoid the full computation of $\hat{A}^2$, we estimate the weight with the sampled nodes of the l-th layer, i.e.,

$\hat{a}_{skip}(v_i, s_j) \approx \sum_{k=1}^{n} \hat{a}(v_i, u_k)\, \hat{a}(u_k, s_j). \qquad (12)$
Instead of learning a free $W^{(l-1)}_{skip}$ in Eq. (11), we decompose it as

$W^{(l-1)}_{skip} = W^{(l-1)}\, W^{(l)}, \qquad (13)$

where $W^{(l)}$ and $W^{(l-1)}$ are the filters of the l-th and (l−1)-th layers of the original network, respectively. The output of the skip connection is added to the GCN layer (Eq. (1)) before the nonlinearity.
By the skip connection, the second-order proximity is maintained without extra 2-hop sampling. Besides, the skip connection allows information to pass between two distant layers, thus enabling more efficient back-propagation and model training.
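A compact sketch of the skip-connection computation of Eqs. (11)-(13), assuming dense sub-blocks of Â between the sampled layers; variable names are illustrative.

```python
import numpy as np

def skip_connection(A_vu, A_us, H_prev, W_lm1, W_l):
    """A_vu: a_hat between the (l+1)-layer and l-layer samples, shape (n, n);
    A_us: a_hat between the l-layer and (l-1)-layer samples, shape (n, n);
    H_prev: h^{(l-1)}, shape (n, d_in)."""
    A_skip = A_vu @ A_us             # Eq. (12): estimated 2-hop weights
    W_skip = W_lm1 @ W_l             # Eq. (13): decomposed filter
    return A_skip @ H_prev @ W_skip  # Eq. (11): added to the GCN output pre-nonlinearity

rng = np.random.default_rng(0)
n, d_in, d_mid, d_out = 8, 16, 12, 10
out = skip_connection(rng.normal(size=(n, n)), rng.normal(size=(n, n)),
                      rng.normal(size=(n, d_in)),
                      rng.normal(size=(d_in, d_mid)), rng.normal(size=(d_mid, d_out)))
print(out.shape)  # (8, 10)
```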
While the designs are similar, our motivation for applying the skip connection differs from that of the residual function in ResNets [1]. The purpose of the skip connection in [1] is to gain accuracy by increasing the network depth; here, we apply it to preserve the second-order proximity. In contrast to the identity mappings used in ResNets, the calculation along the skip connection in our model is derived specifically (see Eq. (12) and Eq. (13)).
Discussions and Extensions
Relation to other sampling methods. We contrast our approach with GraphSAGE [3] and FastGCN [21] regarding the following aspects:
1. The proposed layer-wise sampling method is novel. GraphSAGE randomly samples a fixed-size neighborhood for each node, while FastGCN constructs each layer independently according to an identical distribution. In our layer-wise approach, the nodes in lower layers are sampled conditioned on the upper ones, which is capable of capturing the between-layer correlations.

2. Our framework is general. Both GraphSAGE and FastGCN can be categorized as specific variants of our framework. Specifically, GraphSAGE is a node-wise sampler per Eq. (3) if $p(u_j|v_i)$ is defined as the uniform distribution, and FastGCN can be considered a special layer-wise method applying a sampler $q(u_j)$ that is independent of the nodes $\{v_i\}_{i=1}^{n}$ in Eq. (5).

3. Our sampler is parameterized and trainable for explicit variance reduction. The sampler of GraphSAGE or FastGCN involves no parameters and is not adapted to minimize variance. In contrast, our sampler modifies the optimal importance-sampling distribution with a self-dependent function, and the resulting variance is explicitly reduced by fine-tuning the network and the sampler.
Taking the attention into account. The GAT model [13] applies the idea of self-attention to graph representation learning. Concisely, it replaces the re-normalization of the adjacency matrix in Eq. (1) with specific attention values, i.e.,

$h^{(l+1)}(v_i) = \sigma\Big( \sum_{j=1}^{N} a\big(h^{(l)}(v_i), h^{(l)}(u_j)\big)\, h^{(l)}(u_j)\, W^{(l)} \Big),$

where $a(h^{(l)}(v_i), h^{(l)}(u_j))$ measures the attention value between the hidden features of $v_i$ and $u_j$, derived as $a(h^{(l)}(v_i), h^{(l)}(u_j)) = \mathrm{SoftMax}\big(\mathrm{LeakyReLU}\big(W_1 h^{(l)}(v_i), W_2 h^{(l)}(u_j)\big)\big)$ by using the LeakyReLU nonlinearity and SoftMax normalization with parameters $W_1$ and $W_2$.
It is impracticable to apply the GAT-like attention mechanism directly in our framework, as the probability $p(u_j|v_i)$ in Eq. (9) would become related to the attention value $a(h^{(l)}(v_i), h^{(l)}(u_j))$, which is determined by the hidden features of the l-th layer. As discussed in § 4.2, computing the hidden features of lower layers is impossible unless the network is already built after sampling. To solve this issue, we develop a novel attention mechanism by applying a self-dependent function similar to Eq. (9). The attention is computed as

$a(x(v_i), x(u_j)) = \frac{1}{n}\, \mathrm{ReLU}\big( W_1\, g(x(v_i)) + W_2\, g(x(u_j)) \big), \qquad (14)$

where $W_1$ and $W_2$ are the learnable parameters.
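Since g is scalar-valued ($W_g \in \mathbb{R}^{1 \times D}$), $W_1$ and $W_2$ reduce to scalars in the simplest reading. A sketch of Eq. (14) under that assumption; names are illustrative.

```python
import numpy as np

def self_dependent_attention(X, parents, children, W_g, w1, w2):
    """Attention of Eq. (14), computed from raw features via g(x) = W_g x,
    so it is available before the sampled network is built."""
    g = (X @ W_g.T).ravel()                                  # (N,)
    pre = w1 * g[parents][:, None] + w2 * g[children][None, :]
    n = len(children)
    return np.maximum(0.0, pre) / n                          # ReLU, scaled by 1/n

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))
W_g = rng.normal(size=(1, 32))
att = self_dependent_attention(X, parents=[0, 1], children=[5, 6, 7],
                               W_g=W_g, w1=0.3, w2=0.7)
print(att.shape)  # (2, 3)
```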
Experiments
We evaluate the performance of our methods on the following benchmarks: (1) categorizing academic papers in the citation network datasets Cora, Citeseer and Pubmed [11]; (2) predicting which community posts belong to in the Reddit dataset [3].

Our sampling framework is inductive in the sense that it clearly separates test data from training. In contrast to transductive learning, where all vertices must be provided, our approach aggregates the information from each node's neighborhoods to learn structural properties that generalize to unseen nodes. For testing, the embedding of a new node may be either computed by using the full GCN architecture or approximated through sampling as is done in model training. Here we use the full architecture, as it is more straightforward and easier to implement. For all datasets, we employ the network with two hidden layers as usual. The hidden dimensions for the citation network datasets (i.e., Cora, Citeseer and Pubmed) are set to 16. For the Reddit dataset, the hidden dimensions are set to 256, as suggested by [3]. The numbers of sampled nodes for all layers excluding the top one are set to 128 for Cora and Citeseer, 256 for Pubmed and 512 for Reddit. The sizes of the top layer (i.e., the stochastic mini-batch size) are chosen to be 256 for all datasets. We train all models using early stopping with a window size of 30, as suggested by [9]. Further details on the network architectures and training settings are contained in the supplementary material.
Ablation Studies on the Adaptive Sampling
Baselines. The codes of GraphSAGE [3] and FastGCN [21] provided by the authors are implemented inconsistently; here we re-implement them based on our framework to make the comparisons fairer. In detail, we implement the GraphSAGE method by applying the node-wise strategy with a uniform sampler in Eq. (3), where the number of sampled neighborhoods for each node is set to 5. For FastGCN, we adopt the Independent-and-Identically-Distributed (IID) sampler proposed by [21] in Eq. (5), where the number of sampled nodes for each layer is the same as in our method. For consistency, the re-implementations of GraphSAGE and FastGCN are named Node-Wise and IID in our experiments. We also implement the full GCN architecture as a strong baseline. All compared methods share the same network structure and training settings for fair comparison. We have also applied the attention mechanism introduced in § 6 to all methods.
Comparisons with other sampling methods. The random seeds are fixed and no early stopping is used for the experiments here. Figure 5 reports the convergence behaviors of all compared methods during training on Cora, Citeseer and Reddit. It demonstrates that our method, denoted as Adapt, converges faster than the other sampling counterparts on all three datasets. Interestingly, our method even outperforms the Full model on Cora and Reddit. Similar to our method, the IID sampling is also layer-wise, but it constructs each layer independently. Thanks to the conditional sampling, our method achieves a more stable convergence curve than the IID method, as shown in Figure 5. It turns out that considering the between-layer information helps stability and accuracy.
Moreover, we plot the training times in Figure 3 (a). Clearly, all sampling methods run faster than the Full model. Compared to the Node-Wise method, our approach exhibits a higher training speed due to its more compact architecture. To see this, suppose the number of nodes in the top layer is n; then for the Node-Wise method the input, hidden and top layers are of sizes 25n, 5n and n, respectively, while for our model the numbers of nodes in all layers are n. Even with fewer sampled nodes, our model still surpasses the Node-Wise method by the results in Figure 5.

Table 1: Accuracy comparisons with state-of-the-art methods.

| Methods | Cora | Citeseer | Pubmed | Reddit |
|---|---|---|---|---|
| KLED [25] | 0.8229 | - | 0.8228 | - |
| 2-hop DCNN [18] | 0.8677 | - | 0.8976 | - |
| FastGCN [21] | 0.8500 | 0.7760 | 0.8800 | 0.9370 |
| GraphSAGE [3] | 0… (row truncated in the source) | | | |
How important is the variance reduction? To justify the importance of the variance reduction, we implement a variant of our model by setting the trade-off parameter to λ = 0 in Eq. (10). In this variant, the parameters of the self-dependent function are randomly initialized and not trained. Figure 5 shows that removing the variance loss does decrease the accuracy of our method on Cora and Reddit. For Citeseer, the effect of removing the variance reduction is not as significant. We conjecture that this is because the average degree of Citeseer (i.e., 1.4) is smaller than those of Cora (i.e., 2.0) and Reddit (i.e., 492), so penalizing the variance matters less given the limited diversity of neighborhoods.
Comparisons with other state-of-the-art methods. We contrast the performance of our method with the graph kernel method KLED [25] and the Diffusion Convolutional Network (DCNN) [18]. We use the reported results of KLED and DCNN on Cora and Pubmed from [18]. We also summarize the results of GraphSAGE and FastGCN from their original implementations. For GraphSAGE, we report the results of the mean aggregator with the default parameters. For FastGCN, we directly use the results provided by [21]. For the baselines and our approach, we run the experiments with random seeds over 20 trials and record the mean accuracies and standard deviations. All results are organized in Table 1. As expected, our method achieves the best performance on all datasets, consistent with the results in Figure 5. It is also observed that removing the variance reduction decreases the performance of our method, especially on Cora and Reddit.
Evaluations of the Skip Connection
We evaluate the effectiveness of the skip connection on Cora. For the experiments on other datasets, we present the details in the supplementary material. The original network has two hidden layers. We further add a skip connection between the input and top layers, using the computations in Eq. (12) and Eq. (13). Figure 5 displays the convergence curves of the original Adapt method and its variant with the skip connection, where the random seeds are shared and no early stopping is adopted. Although the improvement from the skip connection is small in terms of final accuracy, it speeds up convergence significantly. This can be observed in Figure 3 (b), where adding the skip connection reduces the number of epochs required to converge from around 150 to 100.
We run experiments with different random seeds over 20 trials and report the mean results obtained by early stopping in Table 2. It is observed that the skip connection slightly improves the performance. Besides, we explicitly involve 2-hop neighborhood sampling in our method by replacing the re-normalization matrix with its 2nd-order power expansion, i.e., $\hat{A} + \hat{A}^2$. As displayed in Table 2, the explicit 2-hop sampling further boosts the classification accuracy. Although the skip-connection method is slightly inferior to the explicit 2-hop sampling, it avoids the computation of $\hat{A}^2$ and is more computationally beneficial for large and dense graphs.
Conclusion
We presented a framework to accelerate the training of GCNs by developing a sampling method that constructs the network layer by layer. The developed layer-wise sampler is adaptive for variance reduction. Our method outperforms the other sampling-based counterparts, GraphSAGE and FastGCN, in efficiency and accuracy in extensive experiments. We also explored how to preserve the second-order proximity by using the skip connection. The experimental evaluations demonstrate that the skip connection further enhances our method in terms of convergence speed and eventual classification accuracy. The network configurations are summarized in Table 3.
Further implementation details. The initial learning rates for the Adam optimizer are set to 0.001 for Cora, Citeseer and Pubmed, and 0.01 for Reddit. The weight decays for all datasets are set to 0.0004. We apply the ReLU function as the activation function and use no dropout in our experiments. As presented in the paper, all models are implemented as 2-hidden-layer networks. For the Reddit dataset, we follow the suggestion of [21] to fix the weights of the bottom layer and pre-compute the product $\hat{A} H^{(0)}$ given the input features for efficiency. All experiments are conducted on a single Tesla P40 GPU. We apply early stopping during training with a window size of 30 and use the model that achieves the best validation accuracy for testing.
More results on the variance reduction. As shown in Table 1, it is sufficient to boost the performance by reducing only the variance of the top layer. Indeed, it is convenient to reduce the variances of all layers in our method, e.g., by adding them all to the loss. To show this, we conduct an experiment on Cora that minimizes the variances of both the first and the top hidden layers, with the same experimental settings as in Table 1. The result is 0.8780 ± 0.0014, which slightly outperforms the original accuracy in Table 1 (i.e., 0.8744 ± 0.0034).
Comparisons with FastGCN by using the official codes. We use the public code to re-run the experiments of FastGCN in Figure 2 and Table 1. The average accuracies of FastGCN on the four datasets are 0.840 ± 0.005, 0.774 ± 0.004, 0.881 ± 0.002 and 0.920 ± 0.005. The running curves of Figure 2 in the paper are updated in Figure 5 here. Clearly, our method still outperforms FastGCN remarkably. We have observed inconsistencies between the official implementations of GraphSAGE and FastGCN, including the adjacency matrix construction, hidden dimensions, mini-batch sizes, maximal training epochs and other engineering tricks not mentioned in their papers. For fair comparisons, we re-implement them and use the same experimental settings as our method in the main text.

More results on Pubmed. In the paper, Figure 2 displays the accuracy curves on test data for Cora, Citeseer and Reddit, with fixed random seeds. For Pubmed, we provide results in Figure 5. Obviously, our method outperforms the IID and Node-Wise counterparts consistently. The Full model achieves the best accuracy around the 30-th epoch but drops after the 60-th epoch, probably due to overfitting. In contrast, our performance is more stable and even better in the end. Performing the variance reduction on this dataset is only helpful during the early stage, but contributes little once the model converges.

Figure 3 (b) reports the accuracy curve of the model with the skip connection on Cora. Here, we evaluate the effectiveness of the skip connection on Citeseer and Pubmed in Figure 6. It demonstrates that the skip connection helps speed up convergence on Citeseer, while on Pubmed adding the skip connection boosts the performance only during the early training epochs. For the Reddit dataset, we cannot apply the skip connection in the network, since the bottom layer is fixed and the output features are pre-computed.

Figure 6: Accuracy curves of testing data on Citeseer and Pubmed for our Adapt method and its variant with skip connections.
| 5,296 |
1809.05343
|
2952029609
|
Graph Convolutional Networks (GCNs) have become a crucial tool for learning representations of graph vertices. The main challenge of adapting GCNs to large-scale graphs is the scalability issue: they incur heavy cost both in computation and memory due to the uncontrollable neighborhood expansion across layers. In this paper, we accelerate the training of GCNs through developing an adaptive layer-wise sampling method. By constructing the network layer by layer in a top-down pathway, we sample the lower layer conditioned on the top one, where the sampled neighborhoods are shared by different parent nodes and the over-expansion is avoided owing to the fixed-size sampling. More importantly, the proposed sampler is adaptive and applicable for explicit variance reduction, which in turn enhances the training of our method. Furthermore, we propose a novel and economical approach to promote the message passing over distant nodes by applying skip connections. Intensive experiments on several benchmarks verify the effectiveness of our method regarding the classification accuracy while enjoying faster convergence speed.
|
A recent line of research generalizes convolutions by making use of the patch operation @cite_14 and self-attention @cite_24 . As opposed to GCNs, these methods implicitly assign different importance weights to nodes of the same neighborhood, thus enabling a leap in model capacity. In particular, Monti et al. @cite_14 present mixture model CNNs to build CNN architectures on graphs using the patch operation, while the graph attention networks @cite_24 compute the hidden representation of each node on the graph by attending over its neighbors following a self-attention strategy.
|
{
"abstract": [
"We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).",
"Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most of deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, or computer graphics. In this paper, we propose a unified framework allowing to generalize CNN architectures to non-Euclidean domains (graphs and manifolds) and learn local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image-, graph-and 3D shape analysis and show that it consistently outperforms previous approaches."
],
"cite_N": [
"@cite_24",
"@cite_14"
],
"mid": [
"2766453196",
"2558460151"
]
}
|
Adaptive Sampling Towards Fast Graph Representation Learning
|
Deep Learning, especially Convolutional Neural Networks (CNNs), has revolutionized various machine learning tasks with grid-like input data, such as image classification [1] and machine translation [2]. By making use of local connections and weight sharing, CNNs are able to pursue translational invariance of the data. In many other contexts, however, the input data lie on irregular or non-Euclidean domains, such as graphs, which encode pairwise relationships. Examples include social networks [3], protein interfaces [4], and 3D meshes [5]. How to define convolutional operations on graphs is still an ongoing research topic.
There have been several attempts in the literature to develop neural networks that handle arbitrarily structured graphs. Whereas learning the graph embedding is already an important topic [6,7,8], this paper mainly focuses on learning representations for graph vertices by aggregating their features/attributes. The closest work in this vein is the Graph Convolutional Network (GCN) [9], which applies connections between vertices as convolution filters to perform neighborhood aggregation. As demonstrated in [9], GCNs have achieved state-of-the-art performance on node classification.
An obvious challenge in applying current graph networks is scalability. Calculating convolutions requires the recursive expansion of neighborhoods across layers, which is computationally prohibitive and demands a hefty memory footprint. Even for a single node, the computation quickly covers a large portion of the graph due to the layer-by-layer neighborhood expansion, particularly if the graph is dense or power-law. Conventional mini-batch training is unable to speed up the convolution computations, since every batch will involve a large number of vertices, even when the batch size is small.

Figure caption: To illustrate the effectiveness of the layer-wise sampling, we assume that the nodes denoted by the red circle in (a) and (b) have at least two parents in the upper layer. In the node-wise sampling, the neighborhoods of each parent are not seen by other parents, hence the connections between the neighborhoods and other parents are unused. In contrast, for the layer-wise strategy, all neighborhoods are shared by nodes in the parent layer, thus all between-layer connections are utilized.
To avoid the over-expansion issue, we accelerate the training of GCNs by controlling the size of the sampled neighborhoods in each layer (see Figure 5). Our method builds the network layer by layer in a top-down way, where the nodes in the lower layer are sampled conditionally on the upper layer's. Such layer-wise sampling is efficient in two technical aspects. First, we can reuse the information of the sampled neighborhoods, since the nodes in the lower layer are visible to and shared by their different parents in the upper layer. Second, it is easy to fix the size of each layer to avoid over-expansion of the neighborhoods, as the nodes of the lower layer are sampled as a whole.
The core of our method is to define an appropriate sampler for the layer-wise sampling. A common objective to design the sampler is to minimize the resulting variance. Unfortunately, the optimal sampler to minimize the variance is uncomputable due to the inconsistency between the top-down sampling and the bottom-up propagation in our network (see § 4.2 for details). To tackle this issue, we approximate the optimal sampler by replacing the uncomputable part with a self-dependent function, and then adding the variance to the loss function. As a result, the variance is explicitly reduced by training the network parameters and the sampler.
Moreover, we explore how to enable efficient message passing across distant nodes. Current methods [6,10] resort to random walks to generate neighborhoods of various steps, and then integrate over the multi-hop neighborhoods. Instead, this paper proposes a novel mechanism that adds a skip connection between the (l+1)-th and (l−1)-th layers. This short-cut connection reuses the nodes in the (l−1)-th layer as the 2-hop neighborhoods of the (l+1)-th layer, thus naturally maintaining the second-order proximity without incurring extra computation.
To sum up, we make the following contributions in this paper: I. We develop a novel layer-wise sampling method to speed up the GCN model, where the between-layer information is shared and the size of the sampled nodes is controllable. II. The sampler for the layer-wise sampling is adaptive and determined by explicit variance reduction in the training phase. III. We propose a simple yet efficient approach to preserve the second-order proximity by formulating a skip connection across two layers. We evaluate the performance of our method on four popular benchmarks for node classification: Cora, Citeseer, Pubmed [11] and Reddit [3]. Intensive experiments verify the effectiveness of our method regarding classification accuracy and convergence speed.
Notations and Preliminaries
Notations. This paper mainly focuses on undirected graphs. Let G = (V, E) denote the undirected graph with nodes $v_i \in V$ and edges $(v_i, v_j) \in E$, and let N denote the number of nodes. Each element $A_{ij}$ of the adjacency matrix $A \in \mathbb{R}^{N \times N}$ represents the weight associated with edge $(v_i, v_j)$. We also have a feature matrix $X \in \mathbb{R}^{N \times D}$, with $x_i$ denoting the D-dimensional feature of node $v_i$.
GCN. The GCN model developed by Kipf and Welling [9] is one of the most successful convolutional networks for graph representation learning. If we define $h^{(l)}(v_i)$ as the hidden feature of the l-th layer for node $v_i$, the feed-forward propagation becomes

$h^{(l+1)}(v_i) = \sigma\Big( \sum_{j=1}^{N} \hat{a}(v_i, u_j)\, h^{(l)}(u_j)\, W^{(l)} \Big), \quad i = 1, \cdots, N, \qquad (1)$

where $\hat{A} = (\hat{a}(v_i, u_j)) \in \mathbb{R}^{N \times N}$ is the re-normalization of the adjacency matrix; $\sigma(\cdot)$ is a nonlinear function; $W^{(l)} \in \mathbb{R}^{D^{(l)} \times D^{(l-1)}}$ is the filter matrix in the l-th layer; and we denote the nodes in the l-th layer as $u_j$ to distinguish them from those in the (l+1)-th layer.
Adaptive Sampling
Eq. (1) indicates that GCNs require the full expansion of neighborhoods for the feed-forward computation of each node. This is computationally intensive and memory-consuming for learning on large-scale graphs with hundreds of thousands of nodes or more. To circumvent this issue, this paper speeds up the feed-forward propagation by adaptive sampling. The proposed sampler is adaptive and applicable for variance reduction.
We first re-formulate the GCN update in expectation form and introduce the node-wise sampling accordingly. Then, we generalize the node-wise sampling to a more efficient framework termed layer-wise sampling. To minimize the resulting variance, we further propose to learn the layer-wise sampler by performing variance reduction explicitly. Lastly, we introduce the concept of skip connections, and apply them to preserve the second-order proximity in the feed-forward propagation.
From Node-Wise Sampling to Layer-Wise Sampling
Node-Wise Sampling. We first observe that Eq. (1) can be rewritten in expectation form, namely,

$$h^{(l+1)}(v_i) = \sigma_{W^{(l)}}\big(N(v_i)\, \mathbb{E}_{p(u_j|v_i)}[h^{(l)}(u_j)]\big), \qquad (2)$$

where we have included the weight matrix $W^{(l)}$ into the function $\sigma_{W^{(l)}}(\cdot)$ for concision; $p(u_j|v_i) = \hat{a}(v_i, u_j)/N(v_i)$ defines the probability of sampling $u_j$ given $v_i$, with $N(v_i) = \sum_{j=1}^{N} \hat{a}(v_i, u_j)$.
A natural idea to speed up Eq. (2) is to approximate the expectation by Monte-Carlo sampling. To be specific, we estimate the expectation $\mu_p(v_i) = \mathbb{E}_{p(u_j|v_i)}[h^{(l)}(u_j)]$ with $\hat{\mu}_p(v_i)$ given by

$$\hat{\mu}_p(v_i) = \frac{1}{n} \sum_{j=1}^{n} h^{(l)}(\hat{u}_j), \quad \hat{u}_j \sim p(u_j|v_i). \qquad (3)$$

By setting $n \ll N$, the Monte-Carlo estimation reduces the complexity of Eq. (1) from $O(|\mathcal{E}|\, D^{(l)} D^{(l-1)})$ (where $|\mathcal{E}|$ denotes the number of edges) to $O(n^2 D^{(l)} D^{(l-1)})$ if the numbers of sampling points for the $(l+1)$-th and $l$-th layers are both $n$.
By applying Eq. (3) in a multi-layer network, we construct the network structure in a top-down manner: sampling the neighbours of each node in the current layer recursively (see Figure 5 (a)). However, such node-wise sampling is still computationally expensive for deep networks, because the number of nodes to be sampled grows exponentially with the number of layers. Taking a network of depth $d$ as an example, the number of sampled nodes in the input layer will increase to $O(n^d)$, leading to a significant computational burden for large $d$.
Layer-Wise Sampling. We equivalently transform Eq. (2) into the following form by applying importance sampling, i.e.,

$$h^{(l+1)}(v_i) = \sigma_{W^{(l)}}\Big(N(v_i)\, \mathbb{E}_{q(u_j|v_1,\cdots,v_n)}\Big[\frac{p(u_j|v_i)}{q(u_j|v_1,\cdots,v_n)}\, h^{(l)}(u_j)\Big]\Big), \qquad (4)$$

where $q(u_j|v_1,\cdots,v_n)$ is defined as the probability of sampling $u_j$ given all the nodes of the current layer (i.e., $v_1,\cdots,v_n$). Similarly, we can speed up Eq. (4) by approximating the expectation with the Monte-Carlo mean, namely, computing $h^{(l+1)}(v_i) = \sigma_{W^{(l)}}\big(N(v_i)\, \hat{\mu}_q(v_i)\big)$ with

$$\hat{\mu}_q(v_i) = \frac{1}{n} \sum_{j=1}^{n} \frac{p(\hat{u}_j|v_i)}{q(\hat{u}_j|v_1,\cdots,v_n)}\, h^{(l)}(\hat{u}_j), \quad \hat{u}_j \sim q(u_j|v_1,\cdots,v_n). \qquad (5)$$
We term the sampling in Eq. (5) the layer-wise sampling strategy. As opposed to the node-wise method in Eq. (3), where the nodes $\{\hat{u}_j\}_{j=1}^{n}$ are generated for each parent $v_i$ independently, the sampling in Eq. (5) needs to be performed only once per layer. Besides, in the node-wise sampling the neighborhoods of each node are not visible to other parents, while in the layer-wise sampling all sampled nodes $\{\hat{u}_j\}_{j=1}^{n}$ are shared by all nodes of the current layer. This sharing property enhances message passing to the utmost. More importantly, the size of each layer is fixed to $n$, so the total number of sampled nodes grows only linearly with the network depth.
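A sketch of the layer-wise estimator in Eq. (5) is given below (NumPy; names are illustrative, and the nonlinearity and weight multiplication of σ_W are omitted). Note that the n samples are drawn once and shared by every node of the current layer.

```python
import numpy as np

def layerwise_update(A_hat, H_lower, v_idx, q, n):
    """Monte-Carlo layer update of Eq. (5) for current-layer nodes v_idx.

    A_hat:   (N, N) re-normalized adjacency matrix.
    H_lower: (N, D) hidden features h^{(l)} of all candidate nodes.
    q:       (N,) layer-wise sampling distribution over candidates.
    n:       number of lower-layer nodes to sample (shared by all v_i).
    """
    u_idx = np.random.choice(len(q), size=n, p=q)  # sample once per layer
    # Since p(u_j|v_i) = a_hat(v_i, u_j) / N(v_i), the quantity
    # N(v_i) * mu_hat_q(v_i) in Eq. (5) equals
    # (1/n) * sum_j a_hat(v_i, u_j) / q(u_j) * h^{(l)}(u_j).
    P = A_hat[np.ix_(v_idx, u_idx)]                # a_hat(v_i, u_j)
    return (P / (n * q[u_idx])) @ H_lower[u_idx]   # before sigma_W
```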
Explicit Variance Reduction
The remaining question for the layer-wise sampling is how to define the exact form of the sampler $q(u_j|v_1,\cdots,v_n)$. A good estimator should reduce the variance caused by the sampling process, since high variance is likely to impede efficient training. For brevity, we denote the distribution $q(u_j|v_1,\cdots,v_n)$ as $q(u_j)$ below.
According to the derivations of importance sampling in [23], we immediately conclude the following.

Proposition 1. The variance of the estimator $\hat{\mu}_q(v_i)$ in Eq. (5) is given by

$$\mathrm{Var}_q(\hat{\mu}_q(v_i)) = \frac{1}{n}\, \mathbb{E}_{q(u_j)}\Big[\frac{\big(p(u_j|v_i)\, |h^{(l)}(u_j)| - \mu_q(v_i)\, q(u_j)\big)^2}{q^2(u_j)}\Big]. \qquad (6)$$

The optimal sampler to minimize the variance $\mathrm{Var}_{q(u_j)}(\hat{\mu}_q(v_i))$ in Eq. (6) is given by

$$q^*(u_j) = \frac{p(u_j|v_i)\, |h^{(l)}(u_j)|}{\sum_{j=1}^{N} p(u_j|v_i)\, |h^{(l)}(u_j)|}. \qquad (7)$$
Unfortunately, it is infeasible to compute the optimal sampler in our case. By its definition, the sampler q * (u j ) is computed based on the hidden feature h (l) (u j ) that is aggregated by its neighborhoods in previous layers. However, under our top-down sampling framework, the neural units of lower layers are unknown unless the network is completely constructed by the sampling.
To alleviate this chicken-and-egg dilemma, we learn a self-dependent function of each node to determine its importance for the sampling. Let $g(x(u_j))$ be the self-dependent function computed from the node feature $x(u_j)$. Replacing the hidden feature in Eq. (7) with $g(x(u_j))$ gives

$$q^*(u_j) = \frac{p(u_j|v_i)\, |g(x(u_j))|}{\sum_{j=1}^{N} p(u_j|v_i)\, |g(x(u_j))|}. \qquad (8)$$

The sampler in Eq. (8) is node-wise and varies for different $v_i$. To make it applicable to the layer-wise sampling, we sum the computations over all nodes $\{v_i\}_{i=1}^{n}$, which yields

$$q^*(u_j) = \frac{\sum_{i=1}^{n} p(u_j|v_i)\, |g(x(u_j))|}{\sum_{j=1}^{N} \sum_{i=1}^{n} p(u_j|v_i)\, |g(x(u_j))|}. \qquad (9)$$
In this paper, we define $g(x(u_j))$ as a linear function, i.e., $g(x(u_j)) = W_g\, x(u_j)$, parameterized by the matrix $W_g \in \mathbb{R}^{1 \times D}$. Computing the sampler in Eq. (9) is efficient, since evaluating $p(u_j|v_i)$ (i.e., the adjacency value) and the self-dependent function $g(x(u_j))$ is fast.
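As a sketch, the sampler of Eq. (9) might be evaluated as follows; W_g is the learnable 1×D matrix, and all names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def layerwise_sampler(A_hat, X, v_idx, W_g):
    """Self-dependent layer-wise sampler q*(u_j) of Eq. (9).

    A_hat: (N, N) re-normalized adjacency; X: (N, D) node features;
    v_idx: indices of the current-layer nodes; W_g: (1, D) parameters.
    """
    g_abs = np.abs(X @ W_g.T).ravel()          # |g(x(u_j))| for all nodes
    P = A_hat[v_idx]                           # rows a_hat(v_i, .)
    P = P / P.sum(axis=1, keepdims=True)       # p(u_j|v_i) per row
    score = P.sum(axis=0) * g_abs              # sum_i p(u_j|v_i)|g(x(u_j))|
    return score / score.sum()                 # normalize over all u_j
```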
Note that applying the sampler given by Eq. (9) does not necessarily result in minimal variance. To fulfill variance reduction, we add the variance to the loss function and explicitly minimize it during model training. Suppose we have a mini-batch of data pairs $\{(v_i, y_i)\}_{i=1}^{n}$, where $v_i$ is a target node and $y_i$ its corresponding ground-truth label. By the layer-wise sampling (Eq. (9)), the nodes of the previous layer are sampled given $\{v_i\}_{i=1}^{n}$, and this process is called recursively layer by layer until we reach the input domain. Then we perform a bottom-up propagation to compute the hidden features and obtain the estimated activation for node $v_i$, i.e., $\hat{\mu}_q(v_i)$. Nonlinear and softmax functions are further applied to $\hat{\mu}_q(v_i)$ to produce the prediction $\bar{y}(\hat{\mu}_q(v_i))$. Taking both the classification loss and the variance (Eq. (6)) into account, we formulate a hybrid loss as

$$\mathcal{L} = \frac{1}{n} \sum_{i=1}^{n} \Big( \mathcal{L}_c\big(y_i, \bar{y}(\hat{\mu}_q(v_i))\big) + \lambda\, \mathrm{Var}_q(\hat{\mu}_q(v_i)) \Big), \qquad (10)$$

where $\mathcal{L}_c$ is the classification loss (e.g., the cross entropy) and $\lambda$ is the trade-off parameter, fixed to 0.5 in our experiments. Note that the activations of the other hidden layers are also stochastic, and the resulting variances could be reduced as well. In Eq. (10) we only penalize the variance of the top layer for efficient computation, and find this sufficient to deliver promising performance in our experiments.
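The hybrid objective of Eq. (10) can be sketched as below, with cross entropy standing in for L_c and the empirical variance of the importance-weighted summands approximating Eq. (6); names and shapes are illustrative.

```python
import numpy as np

def hybrid_loss(logits, labels, weighted_terms, lam=0.5):
    """Hybrid objective of Eq. (10): cross entropy + lambda * variance.

    logits:         (m, C) predictions for the m target nodes.
    labels:         (m,) integer ground-truth labels.
    weighted_terms: (m, n, D) summands p(u_j|v_i)/q(u_j) * h(u_j),
                    whose mean over the n samples is mu_hat_q(v_i).
    """
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    ce = -np.log(p[np.arange(len(labels)), labels]).mean()
    # Empirical counterpart of Var_q(mu_hat_q) in Eq. (6): the sample
    # variance of the summands, divided by the number of samples n.
    n = weighted_terms.shape[1]
    var = weighted_terms.var(axis=1).mean() / n
    return ce + lam * var
```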
Minimizing the hybrid loss in Eq. (10) requires gradient calculations. For the network parameters, e.g., $W^{(l)}$ in Eq. (2), the gradient calculation is straightforward and can be easily derived by an automatic-differentiation platform, e.g., TensorFlow [24]. For the parameters of the sampler, e.g., $W_g$ in Eq. (9), calculating the gradient is nontrivial, as the sampling process (Eq. (5)) is non-differentiable. Fortunately, we prove that the gradient of the classification loss with respect to the sampler is zero. We also derive the gradient of the variance term with respect to the sampler, and detail the gradient calculation in the supplementary material.
Preserving Second-Order Proximities by Skip Connections
The GCN update in Eq. (1) only aggregates messages passed from 1-hop neighborhoods. To allow the network to better utilize information across distant nodes, we could sample multi-hop neighborhoods for the GCN update in a way similar to random walks [6,10]. However, random walks require extra sampling to obtain distant nodes, which is computationally expensive for dense graphs. In this paper, we instead propose to propagate information over distant nodes via skip connections.
The key idea of the skip connection is to reuse the nodes of the $(l-1)$-th layer to preserve the second-order proximity (see the definition in [7]). For the $(l+1)$-th layer, the nodes of the $(l-1)$-th layer are exactly its 2-hop neighborhoods. If we further add a skip connection from the $(l-1)$-th to the $(l+1)$-th layer, as illustrated in Figure 5 (c), the aggregation will involve both the 1-hop and 2-hop neighborhoods. The calculation along the skip connection is formulated as

$$h^{(l+1)}_{\mathrm{skip}}(v_i) = \sum_{j=1}^{n} \hat{a}_{\mathrm{skip}}(v_i, s_j)\, h^{(l-1)}(s_j)\, W^{(l-1)}_{\mathrm{skip}}, \quad i = 1, \cdots, n, \qquad (11)$$

where $\{s_j\}_{j=1}^{n}$ denote the nodes in the $(l-1)$-th layer. Due to the 2-hop distance between $v_i$ and $s_j$, the weight $\hat{a}_{\mathrm{skip}}(v_i, s_j)$ is supposed to be an element of $\hat{A}^2$. Here, to avoid the full computation of $\hat{A}^2$, we estimate the weight with the sampled nodes of the $l$-th layer, i.e.,

$$\hat{a}_{\mathrm{skip}}(v_i, s_j) \approx \sum_{k=1}^{n} \hat{a}(v_i, u_k)\, \hat{a}(u_k, s_j). \qquad (12)$$
Instead of learning a free $W^{(l-1)}_{\mathrm{skip}}$ in Eq. (11), we decompose it as

$$W^{(l-1)}_{\mathrm{skip}} = W^{(l-1)}\, W^{(l)}, \qquad (13)$$

where $W^{(l)}$ and $W^{(l-1)}$ are the filters of the $l$-th and $(l-1)$-th layers of the original network, respectively. The output of the skip connection is added to the output of the GCN layer (Eq. (1)) before the nonlinearity.
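Under the same illustrative conventions as the sketches above, the skip connection of Eqs. (11)–(13) could be computed as follows.

```python
import numpy as np

def skip_connection(A_hat, H_prev, v_idx, u_idx, s_idx, W_lm1, W_l):
    """Skip connection of Eqs. (11)-(13) via the sampled l-th layer.

    H_prev: (N, D) features h^{(l-1)}; v_idx, u_idx, s_idx: sampled node
    indices of the (l+1)-th, l-th and (l-1)-th layers; W_lm1, W_l: the
    filters W^{(l-1)} and W^{(l)} of the original network.
    """
    # Eq. (12): a_skip(v_i, s_j) ~ sum_k a_hat(v_i, u_k) a_hat(u_k, s_j).
    A_skip = A_hat[np.ix_(v_idx, u_idx)] @ A_hat[np.ix_(u_idx, s_idx)]
    # Eqs. (11) and (13): aggregate 2-hop terms with W_skip = W_lm1 @ W_l.
    return A_skip @ H_prev[s_idx] @ (W_lm1 @ W_l)
```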
By the skip connection, the second-order proximity is maintained without extra 2-hop sampling. Besides, the skip connection allows the information to pass between two distant layers thus enabling more efficient back-propagation and model training.
While the designs are similar, our motivation for applying the skip connection differs from that of the residual function in ResNets [1]. The purpose of the skip connection in [1] is to gain accuracy by increasing network depth; here, we apply it to preserve the second-order proximity. Moreover, in contrast to the identity mappings used in ResNets, the calculation along the skip connection in our model must be derived specifically (see Eq. (12) and Eq. (13)).
Discussions and Extensions
Relation to other sampling methods. We contrast our approach with GraphSAGE [3] and FastGCN [21] regarding the following aspects:
1. The proposed layer-wise sampling method is novel. GraphSAGE randomly samples a fixed-size neighborhood of each node, while FastGCN constructs each layer independently according to an identical distribution. In our layer-wise approach, the nodes in lower layers are sampled conditioned on the upper ones, which captures the between-layer correlations.
2. Our framework is general. Both GraphSAGE and FastGCN can be categorized as specific variants of our framework. Specifically, GraphSAGE is recovered as the node-wise sampler of Eq. (3) when $p(u_j|v_i)$ is the uniform distribution, while FastGCN is a special layer-wise method applying a sampler $q(u_j)$ that is independent of the nodes $\{v_i\}_{i=1}^{n}$ in Eq. (5).
3. Our sampler is parameterized and trainable for explicit variance reduction. The samplers of GraphSAGE and FastGCN involve no parameters and are not adapted to minimize variance. In contrast, our sampler modifies the optimal importance-sampling distribution with a self-dependent function, and the resulting variance is explicitly reduced by jointly fine-tuning the network and the sampler.
Taking the attention into account. The GAT model [13] applies the idea of self-attention to graph representation learning. Concisely, it replaces the re-normalized adjacency matrix in Eq. (1) with specific attention values, i.e.,

$$h^{(l+1)}(v_i) = \sigma\Big(\sum_{j=1}^{N} a\big(h^{(l)}(v_i), h^{(l)}(u_j)\big)\, h^{(l)}(u_j)\, W^{(l)}\Big),$$

where $a(h^{(l)}(v_i), h^{(l)}(u_j))$ measures the attention value between the hidden features of $v_i$ and $u_j$, derived as $a(h^{(l)}(v_i), h^{(l)}(u_j)) = \mathrm{SoftMax}\big(\mathrm{LeakyReLU}\big(W_1 h^{(l)}(v_i), W_2 h^{(l)}(u_j)\big)\big)$ using the LeakyReLU nonlinearity and SoftMax normalization with parameters $W_1$ and $W_2$.
It is impracticable to apply the GAT-like attention mechanism directly in our framework, as the probability $p(u_j|v_i)$ in Eq. (9) would then depend on the attention value $a(h^{(l)}(v_i), h^{(l)}(u_j))$, which is determined by the hidden features of the $l$-th layer. As discussed in § 4.2, computing the hidden features of lower layers is impossible unless the network has already been built by the sampling. To solve this issue, we develop a novel attention mechanism by applying the self-dependent function, similar to Eq. (9). The attention is computed as

$$a(x(v_i), x(u_j)) = \frac{1}{n}\, \mathrm{ReLU}\big(W_1\, g(x(v_i)) + W_2\, g(x(u_j))\big), \qquad (14)$$

where $W_1$ and $W_2$ are learnable parameters.
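A sketch of this self-dependent attention (Eq. (14)): since g(x(·)) is scalar-valued here, w1 and w2 are scalars, and all names are illustrative.

```python
import numpy as np

def self_dependent_attention(X, W_g, w1, w2, v_idx, u_idx):
    """Attention values a(x(v_i), x(u_j)) of Eq. (14).

    X: (N, D) node features; W_g: (1, D) as in Eq. (9); w1, w2: scalar
    parameters. Returns an (m, n) matrix of attention values.
    """
    g = (X @ W_g.T).ravel()                        # g(x(.)) per node
    pre = w1 * g[v_idx][:, None] + w2 * g[u_idx][None, :]
    return np.maximum(pre, 0.0) / len(u_idx)       # (1/n) ReLU(...)
```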
Experiments
We evaluate the performance of our methods on the following benchmarks: (1) categorizing academic papers in the citation network datasets Cora, Citeseer and Pubmed [11]; (2) predicting the community to which different posts belong in Reddit [3].

Our sampling framework is inductive in the sense that it clearly separates test data from training. In contrast to transductive learning, where all vertices must be provided, our approach aggregates information from each node's neighborhoods to learn structural properties that generalize to unseen nodes. For testing, the embedding of a new node may either be computed with the full GCN architecture or approximated through sampling as is done in model training. Here we use the full architecture, as it is more straightforward and easier to implement.

For all datasets, we employ networks with two hidden layers, as usual. The hidden dimensions for the citation network datasets (i.e., Cora, Citeseer and Pubmed) are set to 16. For the Reddit dataset, the hidden dimensions are set to 256, as suggested by [3]. The numbers of sampled nodes for all layers excluding the top one are set to 128 for Cora and Citeseer, 256 for Pubmed and 512 for Reddit. The sizes of the top layer (i.e., the stochastic mini-batch size) are set to 256 for all datasets. We train all models using early stopping with a window size of 30, as suggested by [9]. Further details on the network architectures and training settings are given in the supplementary material.
Ablation Studies on the Adaptive Sampling
Baselines. The codes of GraphSAGE [3] and FastGCN [21] provided by the authors are implemented inconsistently; we therefore re-implement them within our framework to make the comparisons fairer. In detail, we implement the GraphSAGE method by applying the node-wise strategy with a uniform sampler in Eq. (3), where the number of sampled neighbors for each node is set to 5. For FastGCN, we adopt the Independent-Identically-Distributed (IID) sampler proposed by [21] in Eq. (5), where the number of sampled nodes per layer is the same as in our method. For consistency, the re-implementations of GraphSAGE and FastGCN are named Node-Wise and IID in our experiments. We also implement the full GCN architecture as a strong baseline. All compared methods share the same network structure and training settings for fair comparison, and all use the attention mechanism introduced in § 6.
Comparisons with other sampling methods. The random seeds are fixed and no early stopping is used for the experiments here. Figure 5 reports the convergence behaviors of all compared methods during training on Cora, Citeseer and Reddit. It demonstrates that our method, denoted Adapt, converges faster than the other sampling counterparts on all three datasets. Interestingly, our method even outperforms the Full model on Cora and Reddit. Similar to our method, the IID sampling is also layer-wise, but it constructs each layer independently. Thanks to the conditional sampling, our method achieves a more stable convergence curve than the IID method, as shown in Figure 5. It turns out that considering the between-layer information helps both stability and accuracy.
Moreover, we plot the training time in Figure 3 (a). Clearly, all sampling methods run faster than the Full model. Compared to the Node-Wise method, our approach exhibits a higher training speed due to its more compact architecture. To see this, suppose the number of nodes in the top layer is n; then, for the Node-Wise method, the input, hidden and top layers are of sizes 25n, 5n and n, respectively, while the number of nodes in every layer is n for our model. Even with fewer sampled nodes, our model still surpasses the Node-Wise method, as the results in Figure 5 show.

Table 1: Accuracy comparisons with state-of-the-art methods.

Methods         | Cora   | Citeseer | Pubmed | Reddit
KLED [25]       | 0.8229 | -        | 0.8228 | -
2-hop DCNN [18] | 0.8677 | -        | 0.8976 | -
FastGCN [21]    | 0.8500 | 0.7760   | 0.8800 | 0.9370
GraphSAGE [3]   | 0
How important is the variance reduction? To justify the importance of the variance reduction, we implement a variant of our model by setting the trade-off parameter to λ = 0 in Eq. (10). In this variant, the parameters of the self-dependent function are randomly initialized and not trained. Figure 5 shows that removing the variance loss does decrease the accuracies of our method on Cora and Reddit. For Citeseer, the effect of removing the variance reduction is less significant. We conjecture that this is because the average degree of Citeseer (i.e., 1.4) is smaller than those of Cora (i.e., 2.0) and Reddit (i.e., 492), so penalizing the variance matters less given the limited diversity of neighborhoods.
Comparisons with other state-of-the-art methods. We contrast the performance of our method with the graph kernel method KLED [25] and the Diffusion Convolutional Neural Network (DCNN) [18]. We use the results of KLED and DCNN on Cora and Pubmed reported in [18]. We also summarize the results of GraphSAGE and FastGCN obtained with their original implementations. For GraphSAGE, we report the results of the mean aggregator with default parameters. For FastGCN, we directly use the results provided by [21]. For the baselines and our approach, we run the experiments with random seeds over 20 trials and record the mean accuracies and standard deviations. All results are organized in Table 1. As expected, our method achieves the best performance on all datasets, consistent with the results in Figure 5. It is also observed that removing the variance reduction decreases the performance of our method, especially on Cora and Reddit.
Evaluations of the Skip Connection
We evaluate the effectiveness of the skip connection on Cora; experiments on the other datasets are detailed in the supplementary material. The original network has two hidden layers. We further add a skip connection between the input and top layers, using the computations in Eq. (12) and Eq. (13). Figure 5 displays the convergence curves of the original Adapt method and its variant with the skip connection, where the random seeds are shared and no early stopping is adopted. Although the improvement from the skip connection is small in terms of final accuracy, it speeds up the convergence significantly. This can be observed in Figure 3 (b), where adding the skip connection reduces the number of epochs required to converge from around 150 to 100.
We run experiments with different random seeds over 20 trials and report the mean results obtained with early stopping in Table 2. It is observed that the skip connection slightly improves the performance. Besides, we also explicitly involve 2-hop neighborhood sampling in our method by replacing the re-normalized matrix with its 2-order power expansion, i.e., $\hat{A} + \hat{A}^2$. As displayed in Table 2, the explicit 2-hop sampling further boosts the classification accuracy. Although the skip-connection method is slightly inferior to the explicit 2-hop sampling, it avoids the computation of $\hat{A}^2$ and is thus more computationally beneficial for large and dense graphs.
Conclusion
We present a framework to accelerate the training of GCNs by developing a sampling method that constructs the network layer by layer. The developed layer-wise sampler is adaptive for variance reduction. Our method outperforms the other sampling-based counterparts, GraphSAGE and FastGCN, in effectiveness and accuracy in extensive experiments. We also explore how to preserve the second-order proximity by using the skip connection. The experimental evaluations demonstrate that the skip connection further enhances our method in terms of convergence speed and eventual classification accuracy.
Further implementation details. The initial learning rates for the Adam optimizer are set to 0.001 for Cora, Citeseer and Pubmed, and 0.01 for Reddit. The weight decay for all datasets is set to 0.0004. We apply the ReLU activation function and no dropout in our experiments. As presented in the paper, all models are implemented as 2-hidden-layer networks. For the Reddit dataset, we follow the suggestion of [21] to fix the weights of the bottom layer and pre-compute the product $\hat{A}H^{(0)}$ from the input features for efficiency. All experiments are conducted on a single Tesla P40 GPU. We apply early stopping with a window size of 30 and use the model that achieves the best validation accuracy for testing.
More results on the variance reduction. As shown in Table 1, it is sufficient to boost the performance by reducing only the variance of the top layer. It is nevertheless straightforward to reduce the variances of all layers in our method, e.g., by adding them all to the loss. To show this, we conduct an experiment on Cora minimizing the variances of both the first and top hidden layers, with the same experimental settings as in Table 1. The result is 0.8780 ± 0.0014, which slightly outperforms the original accuracy in Table 1 (i.e., 0.8744 ± 0.0034).
Comparisons with FastGCN using the official code. We use the public code to re-run the experiments of FastGCN in Figure 2 and Table 1. The average accuracies of FastGCN on the four datasets are 0.840 ± 0.005, 0.774 ± 0.004, 0.881 ± 0.002 and 0.920 ± 0.005. The running curves of Figure 2 in the paper are updated by Figure 5 here. Clearly, our method still outperforms FastGCN remarkably. We have observed inconsistencies between the official implementations of GraphSAGE and FastGCN, including the adjacency matrix construction, hidden dimensions, mini-batch sizes, maximum training epochs and other engineering tricks not mentioned in their papers. For fair comparisons, we re-implement them and use the same experimental settings as our method in the main text.

More results on Pubmed. In the paper, Figure 2 displays the accuracy curves of test data on Cora, Citeseer and Reddit, where the random seeds are fixed. For Pubmed, we provide results in Figure 5. Our method outperforms the IID and Node-Wise counterparts consistently. The Full model achieves the best accuracy around the 30-th epoch, but drops after the 60-th epoch, probably due to overfitting. In contrast, our performance is more stable and even gives better results in the end. Performing the variance reduction on this dataset is only helpful during the early stage, but contributes little once the model converges.

Figure 3 (b) reports the accuracy curve of the model with the skip connection on Cora. Here, we evaluate the effectiveness of the skip connection on Citeseer and Pubmed in Figure 6. It shows that the skip connection helps to speed up the convergence on Citeseer, while on Pubmed it boosts the performance only during the early training epochs. For the Reddit dataset, we cannot apply the skip connection in the network, since the bottom layer is fixed and its output features are pre-computed. Figure 6: Accuracy curves of testing data on Citeseer and Pubmed for our Adapt method and its variant with skip connections.
| 5,296 |
1809.05343
|
2952029609
|
Graph Convolutional Networks (GCNs) have become a crucial tool for learning representations of graph vertices. The main challenge in adapting GCNs to large-scale graphs is the scalability issue: they incur heavy cost in both computation and memory due to the uncontrollable neighborhood expansion across layers. In this paper, we accelerate the training of GCNs through an adaptive layer-wise sampling method. By constructing the network layer by layer in a top-down pathway, we sample the lower layer conditioned on the top one, where the sampled neighborhoods are shared by different parent nodes and over-expansion is avoided owing to the fixed-size sampling. More importantly, the proposed sampler is adaptive and applicable for explicit variance reduction, which in turn enhances the training of our method. Furthermore, we propose a novel and economical approach to promote message passing over distant nodes by applying skip connections. Intensive experiments on several benchmarks verify the effectiveness of our method regarding classification accuracy while enjoying faster convergence speed.
|
More recently, two kinds of sampling-based methods, GraphSAGE @cite_8 and FastGCN @cite_10 , were developed for fast representation learning on graphs. To be specific, GraphSAGE computes node representations by sampling neighborhoods of each node and then applying a specific aggregator for information fusion. The FastGCN model interprets graph convolutions as integral transforms of embedding functions and samples the nodes in each layer independently. While our method is closely related to these methods, we develop a different sampling strategy in this paper. Compared to GraphSAGE, which is node-wise, our method is based on layer-wise sampling, as all neighborhoods are sampled altogether and can thus be shared, as illustrated in Figure . In contrast to FastGCN, which constructs each layer independently, our model is capable of capturing the between-layer connections, as the lower layer is sampled conditionally on the top one. We detail the comparisons in . Another related work is the control-variate-based method by @cite_5 . However, the sampling process of this method is node-wise, and the historical activations of nodes are required.
|
{
"abstract": [
"",
"The graph convolutional networks (GCN) recently proposed by Kipf and Welling are an effective graph model for semi-supervised learning. This model, however, was originally designed to be learned with the presence of both training and test data. Moreover, the recursive neighborhood expansion across layers poses time and memory challenges for training with large, dense graphs. To relax the requirement of simultaneous availability of test data, we interpret graph convolutions as integral transforms of embedding functions under probability measures. Such an interpretation allows for the use of Monte Carlo approaches to consistently estimate the integrals, which in turn leads to a batched training scheme as we propose in this work---FastGCN. Enhanced with importance sampling, FastGCN not only is efficient for training but also generalizes well for inference. We show a comprehensive set of experiments to demonstrate its effectiveness compared with GCN and related models. In particular, training is orders of magnitude more efficient while predictions remain comparably accurate.",
"Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions."
],
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_8"
],
"mid": [
"2963581908",
"2786915849",
"2962767366"
]
}
|
Adaptive Sampling Towards Fast Graph Representation Learning
|
Deep Learning, especially Convolutional Neural Networks (CNNs), has revolutionized various machine learning tasks with grid-like input data, such as image classification [1] and machine translation [2]. By making use of local connection and weight sharing, CNNs are able to pursue translational invariance of the data. In many other contexts, however, the input data are lying on irregular or non-euclidean domains, such as graphs which encode the pairwise relationships. This includes examples of social networks [3], protein interfaces [4], and 3D meshes [5]. How to define convolutional operations on graphs is still an ongoing research topic.
There have been several attempts in the literature to develop neural networks that handle arbitrarily structured graphs. Whereas learning graph embeddings is already an important topic [6,7,8], this paper mainly focuses on learning representations for graph vertices by aggregating their features/attributes. The closest work in this vein is the Graph Convolutional Network (GCN) [9], which applies connections between vertices as convolution filters to perform neighborhood aggregation. As demonstrated in [9], GCNs have achieved state-of-the-art performance on node classification.
An obvious challenge in applying current graph networks is scalability. Calculating convolutions requires the recursive expansion of neighborhoods across layers, which is computationally prohibitive and demands hefty memory footprints. Even for a single node, the layer-by-layer neighborhood expansion will quickly cover a large portion of the graph, particularly if the graph is dense or power-law. Conventional mini-batch training is unable to speed up the convolution computations, since every batch will involve a large number of vertices, even if the batch size is small.

To illustrate the effectiveness of the layer-wise sampling, assume that the nodes denoted by the red circle in (a) and (b) have at least two parents in the upper layer. In the node-wise sampling, the neighborhoods of each parent are not seen by the other parents, hence the connections between the neighborhoods and the other parents are unused. In contrast, for the layer-wise strategy, all neighborhoods are shared by the nodes in the parent layer, so all between-layer connections are utilized.
To avoid the over-expansion issue, we accelerate the training of GCNs by controlling the size of the sampled neighborhoods in each layer (see Figure 5). Our method builds up the network layer by layer in a top-down way, where the nodes in the lower layer are sampled conditioned on those of the upper layer. Such layer-wise sampling is efficient in two technical aspects. First, we can reuse the information of the sampled neighborhoods, since the nodes in the lower layer are visible to and shared by their different parents in the upper layer. Second, it is easy to fix the size of each layer to avoid over-expansion of the neighborhoods, as the nodes of the lower layer are sampled as a whole.
The core of our method is to define an appropriate sampler for the layer-wise sampling. A common objective to design the sampler is to minimize the resulting variance. Unfortunately, the optimal sampler to minimize the variance is uncomputable due to the inconsistency between the top-down sampling and the bottom-up propagation in our network (see § 4.2 for details). To tackle this issue, we approximate the optimal sampler by replacing the uncomputable part with a self-dependent function, and then adding the variance to the loss function. As a result, the variance is explicitly reduced by training the network parameters and the sampler.
Moreover, we explore how to enable efficient message passing across distant nodes. Current methods [6,10] resort to random walks to generate neighborhoods of various steps, and then integrate the multi-hop neighborhoods. Instead, this paper proposes a novel mechanism that adds a skip connection between the (l+1)-th and (l−1)-th layers. This short-cut connection reuses the nodes in the (l−1)-th layer as the 2-hop neighborhoods of the (l+1)-th layer, so it naturally maintains the second-order proximity without incurring extra computations.
To sum up, we make the following contributions in this paper: I. We develop a novel layer-wise sampling method to speed up the GCN model, where the between-layer information is shared and the size of the sampled nodes is controllable. II. The sampler for the layer-wise sampling is adaptive and determined by explicit variance reduction in the training phase. III. We propose a simple yet efficient approach to preserve the second-order proximity by formulating a skip connection across two layers. We evaluate the performance of our method on four popular benchmarks for node classification: Cora, Citeseer, Pubmed [11] and Reddit [3]. Intensive experiments verify the effectiveness of our method regarding both classification accuracy and convergence speed.
Notations and Preliminaries
Notations. This paper mainly focuses on undirected graphs. Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ denote the undirected graph with nodes $v_i \in \mathcal{V}$ and edges $(v_i, v_j) \in \mathcal{E}$, and let $N$ denote the number of nodes. The adjacency matrix $A \in \mathbb{R}^{N \times N}$ represents the weight associated with edge $(v_i, v_j)$ by each element $A_{ij}$. We also have a feature matrix $X \in \mathbb{R}^{N \times D}$, with $x_i$ denoting the $D$-dimensional feature of node $v_i$.
GCN. The GCN model developed by Kipf and Welling [9] is one of the most successful convolutional networks for graph representation learning. If we define $h^{(l)}(v_i)$ as the hidden feature of the $l$-th layer for node $v_i$, the feed-forward propagation becomes

$$h^{(l+1)}(v_i) = \sigma\Big(\sum_{j=1}^{N} \hat{a}(v_i, u_j)\, h^{(l)}(u_j)\, W^{(l)}\Big), \quad i = 1, \cdots, N, \qquad (1)$$

where $\hat{A} = (\hat{a}(v_i, u_j)) \in \mathbb{R}^{N \times N}$ is the re-normalization of the adjacency matrix; $\sigma(\cdot)$ is a nonlinear function; $W^{(l)} \in \mathbb{R}^{D^{(l)} \times D^{(l-1)}}$ is the filter matrix of the $l$-th layer; and we denote the nodes in the $l$-th layer as $u_j$ to distinguish them from those in the $(l+1)$-th layer.
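For concreteness, a minimal NumPy sketch of one GCN layer implementing Eq. (1) follows; ReLU plays the role of σ, and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def gcn_layer(A_hat, H, W):
    """One GCN layer as in Eq. (1): H' = ReLU(A_hat @ H @ W).

    A_hat: (N, N) re-normalized adjacency matrix.
    H:     (N, D_in) hidden features h^{(l)} of all nodes.
    W:     (D_in, D_out) filter matrix W^{(l)}.
    """
    return np.maximum(A_hat @ H @ W, 0.0)

# Toy usage: a 4-node graph with 3-dimensional features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
A_tilde = A + np.eye(4)                    # add self-loops
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))  # D^{-1/2} (A + I) D^{-1/2}
H1 = gcn_layer(A_hat, np.random.randn(4, 3), np.random.randn(3, 2))
```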
Adaptive Sampling
Eq. (1) indicates that GCNs require the full expansion of neighborhoods for the feed-forward computation of each node. This is computationally intensive and memory-consuming for learning on large-scale graphs with hundreds of thousands of nodes or more. To circumvent this issue, this paper speeds up the feed-forward propagation by adaptive sampling. The proposed sampler is adaptive and applicable for variance reduction.
We first re-formulate the GCN update in expectation form and introduce the node-wise sampling accordingly. Then, we generalize the node-wise sampling to a more efficient framework termed layer-wise sampling. To minimize the resulting variance, we further propose to learn the layer-wise sampler by performing variance reduction explicitly. Lastly, we introduce the concept of skip connections, and apply them to preserve the second-order proximity in the feed-forward propagation.
From Node-Wise Sampling to Layer-Wise Sampling
Node-Wise Sampling. We first observe that Eq. (1) can be rewritten in expectation form, namely,

$$h^{(l+1)}(v_i) = \sigma_{W^{(l)}}\big(N(v_i)\, \mathbb{E}_{p(u_j|v_i)}[h^{(l)}(u_j)]\big), \qquad (2)$$

where we have included the weight matrix $W^{(l)}$ into the function $\sigma_{W^{(l)}}(\cdot)$ for concision; $p(u_j|v_i) = \hat{a}(v_i, u_j)/N(v_i)$ defines the probability of sampling $u_j$ given $v_i$, with $N(v_i) = \sum_{j=1}^{N} \hat{a}(v_i, u_j)$.
A natural idea to speed up Eq. (2) is to approximate the expectation by Monte-Carlo sampling. To be specific, we estimate the expectation $\mu_p(v_i) = \mathbb{E}_{p(u_j|v_i)}[h^{(l)}(u_j)]$ with $\hat{\mu}_p(v_i)$ given by

$$\hat{\mu}_p(v_i) = \frac{1}{n} \sum_{j=1}^{n} h^{(l)}(\hat{u}_j), \quad \hat{u}_j \sim p(u_j|v_i). \qquad (3)$$

By setting $n \ll N$, the Monte-Carlo estimation reduces the complexity of Eq. (1) from $O(|\mathcal{E}|\, D^{(l)} D^{(l-1)})$ (where $|\mathcal{E}|$ denotes the number of edges) to $O(n^2 D^{(l)} D^{(l-1)})$ if the numbers of sampling points for the $(l+1)$-th and $l$-th layers are both $n$.
By applying Eq. (3) in a multi-layer network, we construct the network structure in a top-down manner: sampling the neighbours of each node in the current layer recursively (see Figure 5 (a)). However, such node-wise sampling is still computationally expensive for deep networks, because the number of nodes to be sampled grows exponentially with the number of layers. Taking a network of depth $d$ as an example, the number of sampled nodes in the input layer will increase to $O(n^d)$, leading to a significant computational burden for large $d$.
Layer-Wise Sampling. We equivalently transform Eq. (2) into the following form by applying importance sampling, i.e.,

$$h^{(l+1)}(v_i) = \sigma_{W^{(l)}}\Big(N(v_i)\, \mathbb{E}_{q(u_j|v_1,\cdots,v_n)}\Big[\frac{p(u_j|v_i)}{q(u_j|v_1,\cdots,v_n)}\, h^{(l)}(u_j)\Big]\Big), \qquad (4)$$

where $q(u_j|v_1,\cdots,v_n)$ is defined as the probability of sampling $u_j$ given all the nodes of the current layer (i.e., $v_1,\cdots,v_n$). Similarly, we can speed up Eq. (4) by approximating the expectation with the Monte-Carlo mean, namely, computing $h^{(l+1)}(v_i) = \sigma_{W^{(l)}}\big(N(v_i)\, \hat{\mu}_q(v_i)\big)$ with

$$\hat{\mu}_q(v_i) = \frac{1}{n} \sum_{j=1}^{n} \frac{p(\hat{u}_j|v_i)}{q(\hat{u}_j|v_1,\cdots,v_n)}\, h^{(l)}(\hat{u}_j), \quad \hat{u}_j \sim q(u_j|v_1,\cdots,v_n). \qquad (5)$$
We term the sampling in Eq. (5) the layer-wise sampling strategy. As opposed to the node-wise method in Eq. (3), where the nodes $\{\hat{u}_j\}_{j=1}^{n}$ are generated for each parent $v_i$ independently, the sampling in Eq. (5) needs to be performed only once per layer. Besides, in the node-wise sampling the neighborhoods of each node are not visible to other parents, while in the layer-wise sampling all sampled nodes $\{\hat{u}_j\}_{j=1}^{n}$ are shared by all nodes of the current layer. This sharing property enhances message passing to the utmost. More importantly, the size of each layer is fixed to $n$, so the total number of sampled nodes grows only linearly with the network depth.
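Below is an illustrative NumPy sketch of the importance-sampled layer update of Eq. (5), with the n samples drawn once and shared across the layer; σ_W is omitted and all names are assumptions.

```python
import numpy as np

def layerwise_update(A_hat, H_lower, v_idx, q, n):
    """Monte-Carlo layer update of Eq. (5) for current-layer nodes v_idx.

    A_hat:   (N, N) re-normalized adjacency matrix.
    H_lower: (N, D) hidden features h^{(l)} of all candidate nodes.
    q:       (N,) layer-wise sampling distribution over candidates.
    n:       number of lower-layer nodes to sample (shared by all v_i).
    """
    u_idx = np.random.choice(len(q), size=n, p=q)  # sample once per layer
    # Since p(u_j|v_i) = a_hat(v_i, u_j) / N(v_i), the quantity
    # N(v_i) * mu_hat_q(v_i) in Eq. (5) equals
    # (1/n) * sum_j a_hat(v_i, u_j) / q(u_j) * h^{(l)}(u_j).
    P = A_hat[np.ix_(v_idx, u_idx)]                # a_hat(v_i, u_j)
    return (P / (n * q[u_idx])) @ H_lower[u_idx]   # before sigma_W
```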
Explicit Variance Reduction
The remaining question for the layer-wise sampling is how to define the exact form of the sampler $q(u_j|v_1,\cdots,v_n)$. A good estimator should reduce the variance caused by the sampling process, since high variance is likely to impede efficient training. For brevity, we denote the distribution $q(u_j|v_1,\cdots,v_n)$ as $q(u_j)$ below.
According to the derivations of importance sampling in [23], we immediately conclude the following.

Proposition 1. The variance of the estimator $\hat{\mu}_q(v_i)$ in Eq. (5) is given by

$$\mathrm{Var}_q(\hat{\mu}_q(v_i)) = \frac{1}{n}\, \mathbb{E}_{q(u_j)}\Big[\frac{\big(p(u_j|v_i)\, |h^{(l)}(u_j)| - \mu_q(v_i)\, q(u_j)\big)^2}{q^2(u_j)}\Big]. \qquad (6)$$

The optimal sampler to minimize the variance $\mathrm{Var}_{q(u_j)}(\hat{\mu}_q(v_i))$ in Eq. (6) is given by

$$q^*(u_j) = \frac{p(u_j|v_i)\, |h^{(l)}(u_j)|}{\sum_{j=1}^{N} p(u_j|v_i)\, |h^{(l)}(u_j)|}. \qquad (7)$$
Unfortunately, it is infeasible to compute the optimal sampler in our case. By its definition, the sampler q * (u j ) is computed based on the hidden feature h (l) (u j ) that is aggregated by its neighborhoods in previous layers. However, under our top-down sampling framework, the neural units of lower layers are unknown unless the network is completely constructed by the sampling.
To alleviate this chicken-and-egg dilemma, we learn a self-dependent function of each node to determine its importance for the sampling. Let $g(x(u_j))$ be the self-dependent function computed from the node feature $x(u_j)$. Replacing the hidden feature in Eq. (7) with $g(x(u_j))$ gives

$$q^*(u_j) = \frac{p(u_j|v_i)\, |g(x(u_j))|}{\sum_{j=1}^{N} p(u_j|v_i)\, |g(x(u_j))|}. \qquad (8)$$

The sampler in Eq. (8) is node-wise and varies for different $v_i$. To make it applicable to the layer-wise sampling, we sum the computations over all nodes $\{v_i\}_{i=1}^{n}$, which yields

$$q^*(u_j) = \frac{\sum_{i=1}^{n} p(u_j|v_i)\, |g(x(u_j))|}{\sum_{j=1}^{N} \sum_{i=1}^{n} p(u_j|v_i)\, |g(x(u_j))|}. \qquad (9)$$
In this paper, we define $g(x(u_j))$ as a linear function, i.e., $g(x(u_j)) = W_g\, x(u_j)$, parameterized by the matrix $W_g \in \mathbb{R}^{1 \times D}$. Computing the sampler in Eq. (9) is efficient, since evaluating $p(u_j|v_i)$ (i.e., the adjacency value) and the self-dependent function $g(x(u_j))$ is fast.
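An illustrative sketch of computing the layer-wise sampler of Eq. (9) follows; W_g is the learnable 1×D parameter matrix, and the function and variable names are assumptions.

```python
import numpy as np

def layerwise_sampler(A_hat, X, v_idx, W_g):
    """Self-dependent layer-wise sampler q*(u_j) of Eq. (9).

    A_hat: (N, N) re-normalized adjacency; X: (N, D) node features;
    v_idx: indices of the current-layer nodes; W_g: (1, D) parameters.
    """
    g_abs = np.abs(X @ W_g.T).ravel()          # |g(x(u_j))| for all nodes
    P = A_hat[v_idx]                           # rows a_hat(v_i, .)
    P = P / P.sum(axis=1, keepdims=True)       # p(u_j|v_i) per row
    score = P.sum(axis=0) * g_abs              # sum_i p(u_j|v_i)|g(x(u_j))|
    return score / score.sum()                 # normalize over all u_j
```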
Note that applying the sampler given by Eq. (9) does not necessarily result in minimal variance. To fulfill variance reduction, we add the variance to the loss function and explicitly minimize it during model training. Suppose we have a mini-batch of data pairs $\{(v_i, y_i)\}_{i=1}^{n}$, where $v_i$ is a target node and $y_i$ its corresponding ground-truth label. By the layer-wise sampling (Eq. (9)), the nodes of the previous layer are sampled given $\{v_i\}_{i=1}^{n}$, and this process is called recursively layer by layer until we reach the input domain. Then we perform a bottom-up propagation to compute the hidden features and obtain the estimated activation for node $v_i$, i.e., $\hat{\mu}_q(v_i)$. Nonlinear and softmax functions are further applied to $\hat{\mu}_q(v_i)$ to produce the prediction $\bar{y}(\hat{\mu}_q(v_i))$. Taking both the classification loss and the variance (Eq. (6)) into account, we formulate a hybrid loss as

$$\mathcal{L} = \frac{1}{n} \sum_{i=1}^{n} \Big( \mathcal{L}_c\big(y_i, \bar{y}(\hat{\mu}_q(v_i))\big) + \lambda\, \mathrm{Var}_q(\hat{\mu}_q(v_i)) \Big), \qquad (10)$$

where $\mathcal{L}_c$ is the classification loss (e.g., the cross entropy) and $\lambda$ is the trade-off parameter, fixed to 0.5 in our experiments. Note that the activations of the other hidden layers are also stochastic, and the resulting variances could be reduced as well. In Eq. (10) we only penalize the variance of the top layer for efficient computation, and find this sufficient to deliver promising performance in our experiments.
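The following sketch illustrates the hybrid loss of Eq. (10), using cross entropy for L_c and the empirical variance of the importance-weighted summands as a stand-in for Eq. (6); all names and shapes are assumptions.

```python
import numpy as np

def hybrid_loss(logits, labels, weighted_terms, lam=0.5):
    """Hybrid objective of Eq. (10): cross entropy + lambda * variance.

    logits:         (m, C) predictions for the m target nodes.
    labels:         (m,) integer ground-truth labels.
    weighted_terms: (m, n, D) summands p(u_j|v_i)/q(u_j) * h(u_j),
                    whose mean over the n samples is mu_hat_q(v_i).
    """
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    ce = -np.log(p[np.arange(len(labels)), labels]).mean()
    # Empirical counterpart of Var_q(mu_hat_q) in Eq. (6): the sample
    # variance of the summands, divided by the number of samples n.
    n = weighted_terms.shape[1]
    var = weighted_terms.var(axis=1).mean() / n
    return ce + lam * var
```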
Minimizing the hybrid loss in Eq. (10) requires gradient calculations. For the network parameters, e.g., $W^{(l)}$ in Eq. (2), the gradient calculation is straightforward and can be easily derived by an automatic-differentiation platform, e.g., TensorFlow [24]. For the parameters of the sampler, e.g., $W_g$ in Eq. (9), calculating the gradient is nontrivial, as the sampling process (Eq. (5)) is non-differentiable. Fortunately, we prove that the gradient of the classification loss with respect to the sampler is zero. We also derive the gradient of the variance term with respect to the sampler, and detail the gradient calculation in the supplementary material.
Preserving Second-Order Proximities by Skip Connections
The GCN update in Eq. (1) only aggregates messages passed from 1-hop neighborhoods. To allow the network to better utilize information across distant nodes, we could sample multi-hop neighborhoods for the GCN update in a way similar to random walks [6,10]. However, random walks require extra sampling to obtain distant nodes, which is computationally expensive for dense graphs. In this paper, we instead propose to propagate information over distant nodes via skip connections.
The key idea of the skip connection is to reuse the nodes of the $(l-1)$-th layer to preserve the second-order proximity (see the definition in [7]). For the $(l+1)$-th layer, the nodes of the $(l-1)$-th layer are exactly its 2-hop neighborhoods. If we further add a skip connection from the $(l-1)$-th to the $(l+1)$-th layer, as illustrated in Figure 5 (c), the aggregation will involve both the 1-hop and 2-hop neighborhoods. The calculation along the skip connection is formulated as

$$h^{(l+1)}_{\mathrm{skip}}(v_i) = \sum_{j=1}^{n} \hat{a}_{\mathrm{skip}}(v_i, s_j)\, h^{(l-1)}(s_j)\, W^{(l-1)}_{\mathrm{skip}}, \quad i = 1, \cdots, n, \qquad (11)$$

where $\{s_j\}_{j=1}^{n}$ denote the nodes in the $(l-1)$-th layer. Due to the 2-hop distance between $v_i$ and $s_j$, the weight $\hat{a}_{\mathrm{skip}}(v_i, s_j)$ is supposed to be an element of $\hat{A}^2$. Here, to avoid the full computation of $\hat{A}^2$, we estimate the weight with the sampled nodes of the $l$-th layer, i.e.,

$$\hat{a}_{\mathrm{skip}}(v_i, s_j) \approx \sum_{k=1}^{n} \hat{a}(v_i, u_k)\, \hat{a}(u_k, s_j). \qquad (12)$$
Instead of learning a free $W^{(l-1)}_{\mathrm{skip}}$ in Eq. (11), we decompose it as

$$W^{(l-1)}_{\mathrm{skip}} = W^{(l-1)}\, W^{(l)}, \qquad (13)$$

where $W^{(l)}$ and $W^{(l-1)}$ are the filters of the $l$-th and $(l-1)$-th layers of the original network, respectively. The output of the skip connection is added to the output of the GCN layer (Eq. (1)) before the nonlinearity.
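With the same illustrative conventions as above, the skip connection of Eqs. (11)–(13) might be implemented as follows.

```python
import numpy as np

def skip_connection(A_hat, H_prev, v_idx, u_idx, s_idx, W_lm1, W_l):
    """Skip connection of Eqs. (11)-(13) via the sampled l-th layer.

    H_prev: (N, D) features h^{(l-1)}; v_idx, u_idx, s_idx: sampled node
    indices of the (l+1)-th, l-th and (l-1)-th layers; W_lm1, W_l: the
    filters W^{(l-1)} and W^{(l)} of the original network.
    """
    # Eq. (12): a_skip(v_i, s_j) ~ sum_k a_hat(v_i, u_k) a_hat(u_k, s_j).
    A_skip = A_hat[np.ix_(v_idx, u_idx)] @ A_hat[np.ix_(u_idx, s_idx)]
    # Eqs. (11) and (13): aggregate 2-hop terms with W_skip = W_lm1 @ W_l.
    return A_skip @ H_prev[s_idx] @ (W_lm1 @ W_l)
```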
By the skip connection, the second-order proximity is maintained without extra 2-hop sampling. Besides, the skip connection allows the information to pass between two distant layers thus enabling more efficient back-propagation and model training.
While the designs are similar, our motivation for applying the skip connection differs from that of the residual function in ResNets [1]. The purpose of the skip connection in [1] is to gain accuracy by increasing network depth; here, we apply it to preserve the second-order proximity. Moreover, in contrast to the identity mappings used in ResNets, the calculation along the skip connection in our model must be derived specifically (see Eq. (12) and Eq. (13)).
Discussions and Extensions
Relation to other sampling methods. We contrast our approach with GraphSAGE [3] and FastGCN [21] regarding the following aspects:
1. The proposed layer-wise sampling method is novel. GraphSAGE randomly samples a fixed-size neighborhood of each node, while FastGCN constructs each layer independently according to an identical distribution. In our layer-wise approach, the nodes in lower layers are sampled conditioned on the upper ones, which captures the between-layer correlations.
2. Our framework is general. Both GraphSAGE and FastGCN can be categorized as specific variants of our framework. Specifically, GraphSAGE is recovered as the node-wise sampler of Eq. (3) when $p(u_j|v_i)$ is the uniform distribution, while FastGCN is a special layer-wise method applying a sampler $q(u_j)$ that is independent of the nodes $\{v_i\}_{i=1}^{n}$ in Eq. (5).
3. Our sampler is parameterized and trainable for explicit variance reduction. The samplers of GraphSAGE and FastGCN involve no parameters and are not adapted to minimize variance. In contrast, our sampler modifies the optimal importance-sampling distribution with a self-dependent function, and the resulting variance is explicitly reduced by jointly fine-tuning the network and the sampler.
Taking the attention into account. The GAT model [13] applies the idea of self-attention to graph representation learning. Concisely, it replaces the re-normalized adjacency matrix in Eq. (1) with specific attention values, i.e.,

$$h^{(l+1)}(v_i) = \sigma\Big(\sum_{j=1}^{N} a\big(h^{(l)}(v_i), h^{(l)}(u_j)\big)\, h^{(l)}(u_j)\, W^{(l)}\Big),$$

where $a(h^{(l)}(v_i), h^{(l)}(u_j))$ measures the attention value between the hidden features of $v_i$ and $u_j$, derived as $a(h^{(l)}(v_i), h^{(l)}(u_j)) = \mathrm{SoftMax}\big(\mathrm{LeakyReLU}\big(W_1 h^{(l)}(v_i), W_2 h^{(l)}(u_j)\big)\big)$ using the LeakyReLU nonlinearity and SoftMax normalization with parameters $W_1$ and $W_2$.
It is impracticable to apply the GAT-like attention mechanism directly in our framework, as the probability $p(u_j|v_i)$ in Eq. (9) would then depend on the attention value $a(h^{(l)}(v_i), h^{(l)}(u_j))$, which is determined by the hidden features of the $l$-th layer. As discussed in § 4.2, computing the hidden features of lower layers is impossible unless the network has already been built by the sampling. To solve this issue, we develop a novel attention mechanism by applying the self-dependent function, similar to Eq. (9). The attention is computed as

$$a(x(v_i), x(u_j)) = \frac{1}{n}\, \mathrm{ReLU}\big(W_1\, g(x(v_i)) + W_2\, g(x(u_j))\big), \qquad (14)$$

where $W_1$ and $W_2$ are learnable parameters.
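The self-dependent attention of Eq. (14) can be sketched as below; g(x(·)) is scalar-valued, so w1 and w2 are scalars, and all names are assumptions.

```python
import numpy as np

def self_dependent_attention(X, W_g, w1, w2, v_idx, u_idx):
    """Attention values a(x(v_i), x(u_j)) of Eq. (14).

    X: (N, D) node features; W_g: (1, D) as in Eq. (9); w1, w2: scalar
    parameters. Returns an (m, n) matrix of attention values.
    """
    g = (X @ W_g.T).ravel()                        # g(x(.)) per node
    pre = w1 * g[v_idx][:, None] + w2 * g[u_idx][None, :]
    return np.maximum(pre, 0.0) / len(u_idx)       # (1/n) ReLU(...)
```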
Experiments
We evaluate the performance of our methods on the following benchmarks: (1) categorizing academic papers in the citation network datasets Cora, Citeseer and Pubmed [11]; (2) predicting the community to which different posts belong in Reddit [3].

Our sampling framework is inductive in the sense that it clearly separates test data from training. In contrast to transductive learning, where all vertices must be provided, our approach aggregates information from each node's neighborhoods to learn structural properties that generalize to unseen nodes. For testing, the embedding of a new node may either be computed with the full GCN architecture or approximated through sampling as is done in model training. Here we use the full architecture, as it is more straightforward and easier to implement.

For all datasets, we employ networks with two hidden layers, as usual. The hidden dimensions for the citation network datasets (i.e., Cora, Citeseer and Pubmed) are set to 16. For the Reddit dataset, the hidden dimensions are set to 256, as suggested by [3]. The numbers of sampled nodes for all layers excluding the top one are set to 128 for Cora and Citeseer, 256 for Pubmed and 512 for Reddit. The sizes of the top layer (i.e., the stochastic mini-batch size) are set to 256 for all datasets. We train all models using early stopping with a window size of 30, as suggested by [9]. Further details on the network architectures and training settings are given in the supplementary material.
Ablation Studies on the Adaptive Sampling
Baselines. The codes of GraphSAGE [3] and FastGCN [21] provided by the authors are implemented inconsistently; we therefore re-implement them within our framework to make the comparisons fairer. In detail, we implement the GraphSAGE method by applying the node-wise strategy with a uniform sampler in Eq. (3), where the number of sampled neighbors for each node is set to 5. For FastGCN, we adopt the Independent-Identically-Distributed (IID) sampler proposed by [21] in Eq. (5), where the number of sampled nodes per layer is the same as in our method. For consistency, the re-implementations of GraphSAGE and FastGCN are named Node-Wise and IID in our experiments. We also implement the full GCN architecture as a strong baseline. All compared methods share the same network structure and training settings for fair comparison, and all use the attention mechanism introduced in § 6.
Comparisons with other sampling methods. The random seeds are fixed and no early stopping is used for the experiments here. Figure 5 reports the convergence behaviors of all compared methods during training on Cora, Citeseer and Reddit. It demonstrates that our method, denoted Adapt, converges faster than the other sampling counterparts on all three datasets. Interestingly, our method even outperforms the Full model on Cora and Reddit. Similar to our method, the IID sampling is also layer-wise, but it constructs each layer independently. Thanks to the conditional sampling, our method achieves a more stable convergence curve than the IID method, as shown in Figure 5. It turns out that considering the between-layer information helps both stability and accuracy.
Moreover, we plot the training time in Figure 3 (a). Clearly, all sampling methods run faster than the Full model. Compared to the Node-Wise method, our approach exhibits a higher training speed due to its more compact architecture. To see this, suppose the number of nodes in the top layer is n; then, for the Node-Wise method, the input, hidden and top layers are of sizes 25n, 5n and n, respectively, while the number of nodes in every layer is n for our model. Even with fewer sampled nodes, our model still surpasses the Node-Wise method, as the results in Figure 5 show.

Table 1: Accuracy comparisons with state-of-the-art methods.

Methods         | Cora   | Citeseer | Pubmed | Reddit
KLED [25]       | 0.8229 | -        | 0.8228 | -
2-hop DCNN [18] | 0.8677 | -        | 0.8976 | -
FastGCN [21]    | 0.8500 | 0.7760   | 0.8800 | 0.9370
GraphSAGE [3]   | 0
How important is the variance reduction? To justify the importance of the variance reduction, we implement a variant of our model by setting the trade-off parameter to λ = 0 in Eq. (10). In this variant, the parameters of the self-dependent function are randomly initialized and not trained. Figure 5 shows that removing the variance loss does decrease the accuracies of our method on Cora and Reddit. For Citeseer, the effect of removing the variance reduction is less significant. We conjecture that this is because the average degree of Citeseer (i.e., 1.4) is smaller than those of Cora (i.e., 2.0) and Reddit (i.e., 492), so penalizing the variance matters less given the limited diversity of neighborhoods.
Comparisons with other state-of-the-art methods. We contrast the performance of our method with the graph kernel method KLED [25] and the Diffusion Convolutional Neural Network (DCNN) [18]. We use the results of KLED and DCNN on Cora and Pubmed reported in [18]. We also summarize the results of GraphSAGE and FastGCN obtained with their original implementations. For GraphSAGE, we report the results of the mean aggregator with default parameters. For FastGCN, we directly use the results provided by [21]. For the baselines and our approach, we run the experiments with random seeds over 20 trials and record the mean accuracies and standard deviations. All results are organized in Table 1. As expected, our method achieves the best performance on all datasets, consistent with the results in Figure 5. It is also observed that removing the variance reduction decreases the performance of our method, especially on Cora and Reddit.
Evaluations of the Skip Connection
We evaluate the effectiveness of the skip connection on Cora; experiments on the other datasets are detailed in the supplementary material. The original network has two hidden layers. We further add a skip connection between the input and top layers, using the computations in Eq. (12) and Eq. (13). Figure 5 displays the convergence curves of the original Adapt method and its variant with the skip connection, where the random seeds are shared and no early stopping is adopted. Although the improvement from the skip connection is small in terms of final accuracy, it speeds up the convergence significantly. This can be observed in Figure 3 (b), where adding the skip connection reduces the number of epochs required to converge from around 150 to 100.
We run experiments with different random seeds over 20 trials and report the mean results obtained with early stopping in Table 2. It is observed that the skip connection slightly improves the performance. Besides, we also explicitly involve 2-hop neighborhood sampling in our method by replacing the re-normalized matrix with its 2-order power expansion, i.e., $\hat{A} + \hat{A}^2$. As displayed in Table 2, the explicit 2-hop sampling further boosts the classification accuracy. Although the skip-connection method is slightly inferior to the explicit 2-hop sampling, it avoids the computation of $\hat{A}^2$ and is thus more computationally beneficial for large and dense graphs.
Conclusion
We present a framework to accelerate the training of GCNs by developing a sampling method that constructs the network layer by layer. The developed layer-wise sampler is adaptive for variance reduction. Our method outperforms the other sampling-based counterparts, GraphSAGE and FastGCN, in effectiveness and accuracy in extensive experiments. We also explore how to preserve the second-order proximity by using the skip connection. The experimental evaluations demonstrate that the skip connection further enhances our method in terms of convergence speed and eventual classification accuracy.
Further implementation details. The initial learning rates for the Adam optimizer are set to 0.001 for Cora, Citeseer and Pubmed, and 0.01 for Reddit. The weight decay for all datasets is set to 0.0004. We apply the ReLU activation function and no dropout in our experiments. As presented in the paper, all models are implemented as 2-hidden-layer networks. For the Reddit dataset, we follow the suggestion of [21] to fix the weights of the bottom layer and pre-compute the product $\hat{A}H^{(0)}$ from the input features for efficiency. All experiments are conducted on a single Tesla P40 GPU. We apply early stopping with a window size of 30 and use the model that achieves the best validation accuracy for testing.
More results on the variance reduction. As shown in Table 1, it is sufficient to boost the performance by reducing only the variance of the top layer. It is nevertheless straightforward to reduce the variances of all layers in our method, e.g., by adding them all to the loss. To show this, we conduct an experiment on Cora minimizing the variances of both the first and top hidden layers, with the same experimental settings as in Table 1. The result is 0.8780 ± 0.0014, which slightly outperforms the original accuracy in Table 1 (i.e., 0.8744 ± 0.0034).
Comparisons with FastGCN by using the official codes. We use the public code to re-run the experiments of FastGCN in Figure 2 and Table 1. The average accuracies of FastGCN for the four datasets are 0.840 ± 0.005, 0.774 ± 0.004, 0.881 ± 0.002 and 0.920 ± 0.005. The running curves of Figure 2 in the paper are updated in Figure 5 here. Clearly, our method still outperforms FastGCN remarkably. We have observed inconsistencies between the official implementations of GraphSAGE and FastGCN, including the adjacency matrix construction, hidden dimensions, mini-batch sizes, maximum training epochs and other engineering tricks not mentioned in their papers. For fair comparisons, we re-implement them and use the same experimental settings as our method in the main text.

More results on Pubmed. In the paper, Figure 2 displays the accuracy curves of the test data on Cora, Citeseer and Reddit, where the random seeds are fixed. For Pubmed, we provide the results in Figure 5. Our method clearly and consistently outperforms the IID and Node-Wise counterparts. The Full model achieves its best accuracy around the 30-th epoch, but drops after the 60-th epoch, probably due to overfitting. In contrast, our performance is more stable and even gives better results in the end. Performing the variance reduction on this dataset is only helpful during the early stage, but contributes little once the model converges.

Figure 3 (b) reports the accuracy curve of the model with the skip connection on Cora. Here, we evaluate the effectiveness of the skip connection on Citeseer and Pubmed in Figure 6. It demonstrates that the skip connection helps speed up convergence on Citeseer, while on Pubmed, adding the skip connection boosts the performance only during the early training epochs. For the Reddit dataset, we cannot apply the skip connection in the network since the bottom layer is fixed and the output features are pre-computed.

Figure 6: Accuracy curves of testing data on Citeseer and Pubmed for our Adapt method and its variant with skip connections.
| 5,296 |
1809.04747
|
2890571755
|
Deep generative models are tremendously successful in learning low-dimensional latent representations that describe the data well. These representations, however, tend to distort relationships between points considerably, i.e. pairwise distances tend not to reflect semantic similarities well. This renders unsupervised tasks, such as clustering, difficult when working with the latent representations. We demonstrate that taking the geometry of the generative model into account is sufficient to make simple clustering algorithms work well over latent representations. Leaning on the recent finding that deep generative models constitute stochastically immersed Riemannian manifolds, we propose an efficient algorithm for computing geodesics (shortest paths) and distances in the latent space, while taking its distortion into account. We further propose a new architecture for modeling uncertainty in variational autoencoders, which is essential for understanding the geometry of deep generative models. Experiments show that the geodesic distance is very likely to reflect the internal structure of the data.
|
Our work is based on a recent observation that deep generative models immerse random Riemannian manifolds @cite_9 . This implies a change in the way distances are measured in the latent space, which reveals a clustering structure. Unfortunately, practical algorithms for actually computing such distances have been missing, and they are the main focus of the present paper. With such an algorithm in hand, clustering can be performed with high accuracy in the latent space of an off-the-shelf VAE.
|
{
"abstract": [
"Deep generative models provide a systematic way to learn nonlinear data distributions, through a set of latent variables and a nonlinear \"generator\" function that maps latent points into the input space. The nonlinearity of the generator imply that the latent space gives a distorted view of the input space. Under mild conditions, we show that this distortion can be characterized by a stochastic Riemannian metric, and demonstrate that distances and interpolants are significantly improved under this metric. This in turn improves probability distributions, sampling algorithms and clustering in the latent space. Our geometric analysis further reveals that current generators provide poor variance estimates and we propose a new generator architecture with vastly improved variance estimates. Results are demonstrated on convolutional and fully connected variational autoencoders, but the formalism easily generalize to other deep generative models."
],
"cite_N": [
"@cite_9"
],
"mid": [
"2765934517"
]
}
|
Geodesic Clustering in Deep Generative Models
|
Unsupervised learning is generally considered one of the greatest challenges of machine learning research. In recent years, there has been great progress in modeling data distributions using deep generative models [1], [2], and while this progress has influenced the clustering literature, its full potential has yet to be reached.
Consider a latent variable model
p(x) = ∫ p(x|z) p(z) dz,    (1)
where latent variables z ∈ R^d provide a low-dimensional representation of data x ∈ R^D with D ≫ d. In general, the prior p(z) determines whether clustering of the latent variables is successful; e.g. the common Gaussian prior, p(z) = N(z | 0, I_d), tends to move clusters closer together, making post hoc clustering difficult (see Fig. 1). This problem is particularly evident in deep generative models such as variational autoencoders (VAE) [3], [4] that pick p(x|z; θ) = N(x | µ(z; θ), I_D · σ²(z; θ)),
where the mean µ and variance σ 2 are parametrized by deep neural networks with parameters θ. The flexibility of such networks ensures that the latent variables can be made to follow almost any prior p(z), implying that the latent variables can be forced to show almost any structure, including ones not present in the data. This does not influence the distribution p(x), but it can be detrimental for clustering.
Fig. 1. The latent space of a generative model is highly distorted and will often lose clustering structure.
These concerns indicate that one should be very careful when computing distances in the latent space of deep generative models. As these models (informally) span a manifold embedded in the data space, one can consider measuring distances along this manifold; an idea that shares intuitions with classic approaches such as spectral clustering [5]. Arvanitidis et al. [6] have recently shown that measuring along the data manifold associated with a deep generative model can be achieved by endowing the latent space with a Riemannian metric and measuring distances accordingly. Unfortunately, the approach of Arvanitidis et al. requires numerical solutions to a system of ordinary differential equations, which cannot be readily evaluated using standard frameworks for deep learning (see Sec. II). In this paper, we propose an efficient algorithm for evaluating these distances and demonstrate its usefulness for clustering tasks.
II. RELATED WORK
Clustering, as a fundamental problem in machine learning, depends highly on the quality of the data representation. Recently, deep neural networks have become useful for learning clustering-friendly representations. We see four categories of work based on network structure: autoencoders (AE), deep neural networks (DNN), generative adversarial networks (GAN) and variational autoencoders (VAE).
In AE-based methods, Deep Clustering Networks [7] directly combine the loss functions of autoencoders and k-means, while Deep Embedding Network [8] revises the loss function by adding locality-preserving and group-sparsity constraints to guide the network toward clustering. Deep Multi-Manifold Clustering [9] introduces a manifold locality-preserving loss and proximity to cluster centroids, while Deep Embedded Regularized Clustering [10] establishes a non-trivial structure of convolutional and softmax autoencoders and proposes an entropy loss function with clustering regularization. Deep Continuous Clustering [11] inherits the continuity property of Robust Continuous Clustering [12], a formulation with a clear continuous objective and no prior knowledge of the number of clusters, to integrate the learning of network parameters and clustering altogether. The AE-based methods are easy to implement but introduce hyper-parameters to the loss and are very limited in network depth.
In DNN-based methods, the networks can be very flexible, such as Convolutional Neural Networks [13] or Deep Belief Networks [14], and often involve pre-training and fine-tuning stages. Deep Nonparametric Clustering [15] and Deep Embedded Clustering [16] are representative works. Since the result is sensitive to the network initialization, Clustering Convolutional Neural Networks [17] is proposed with initial cluster centroids. To get rid of pre-training, Joint Unsupervised Learning [18] and Deep Adaptive Image Clustering [19] are proposed, specifically for hierarchical clustering and for binary relationships between images.
In VAE-based methods, because the VAE is a generative model, Variational Deep Embedding [20] and Gaussian Mixture VAE [21] design special prior distributions over the latent representation and infer the data classes corresponding to the modes of the different priors.
In GAN-based methods, which build on another generative model, Deep Adversarial Clustering [22] was inspired by the ideas behind Variational Deep Embedding [20], but with a GAN structure. Information Maximizing Generative Adversarial Network [23] can disentangle both discrete and continuous latent representations, and models a clustering function with categorical values for those latent codes. AE-based and DNN-based methods are designed specifically for clustering but do not consider the underlying structure of the data, and consequently cannot generate data. VAE-based and GAN-based methods can generate samples and infer the structure of the data, but because they change the latent space for clustering, they may adversely affect the true intrinsic structure of the data.
Our work is based on a recent observation that deep generative models immerse random Riemannian manifolds [6]. This implies a change in the way distances are measured in the latent space, which reveals a clustering structure. Unfortunately, practical algorithms for actually computing such distances have been missing, and they are the main focus of the present paper. With such an algorithm in hand, clustering can be performed with high accuracy in the latent space of an off-the-shelf VAE.
The paper is organized as follows: Section III introduces the usual VAE network, along with its interpretation as a stochastic Riemannian manifold. In Sec. IV we derive an efficient algorithm for computing geodesics (shortest paths) over this manifold, and in Sec. V we demonstrate its usefulness for clustering tasks. The paper is concluded in Sec. VI.
A. Inference and Generator
The VAE consists of two parts, an inference network and a generator network, which serve almost the same roles as the encoders and decoders in classic autoencoders.
1) The inference network: is trained to map the training data samples x to the latent space Z while forcing the latent variables z to comply with the distribution p(z). However, both the posterior distribution p(z|x) and p(x) are unknown. The VAE therefore approximates the posterior with a variational distribution q(z|x; λ), computed by a network with parameters λ. In order to make q(z|x; λ) accord with the distribution p(z), the Kullback-Leibler (KL) divergence [24] is used, that is:
min_λ KL(q(z|x; λ) ‖ p(z))    (5)
2) The generator network: is trained to map the latent variables z to generated data samples x̂ that resemble the true samples x from the data space X. Accordingly, this network needs to maximize the likelihood p(x|z; θ) over the latent distribution; in practice the logarithm is used, and the likelihood is computed by a multi-layer network with parameters θ:
max_θ E_{z∼q(z|x;λ)} [log p(x|z; θ)]    (6)
From these parts a VAE is jointly trained as
θ*, λ* = argmax_{λ,θ} E[log p(x|z; θ)] − KL(q(z|x; λ) ‖ p(z)).    (7)
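As a minimal illustration of this objective, the following numpy sketch evaluates a Monte-Carlo ELBO for one datum, assuming a diagonal Gaussian q(z|x) (so the KL to the unit Gaussian prior is closed-form) and a hypothetical decode function returning the generator's mean and per-pixel variance:

```python
import numpy as np

def gaussian_kl(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def gaussian_log_lik(x, mu_x, var_x):
    # log N(x | mu_x, diag(var_x)), summed over data dims
    return -0.5 * np.sum(np.log(2.0 * np.pi * var_x) + (x - mu_x) ** 2 / var_x)

def elbo(x, mu_z, log_var_z, decode, n_samples=1):
    """E_q[log p(x|z)] - KL(q(z|x) || p(z)) for one datum (Eq. 7 sketch)."""
    rec = 0.0
    for _ in range(n_samples):
        eps = np.random.standard_normal(mu_z.shape)
        z = mu_z + np.exp(0.5 * log_var_z) * eps   # re-parametrization trick
        mu_x, var_x = decode(z)                    # hypothetical generator call
        rec += gaussian_log_lik(x, mu_x, var_x)
    return rec / n_samples - gaussian_kl(mu_z, log_var_z)
```

Maximizing this quantity over the encoder and decoder parameters is exactly the joint training of Eq. (7).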
B. The Random Riemannian Interpretation
The inference network should force the latent variables to approximately follow the pre-specified unit Gaussian prior p(z), which implies that the latent space gives a highly distorted view of the original data. Fortunately, this distortion is fairly easy to characterize [6]. First, observe that the generative model of the VAE can be written as (using the so-called re-parametrization trick; see also Fig. 2)
x = f(z) = µ(z) + σ(z) ⊙ ε,    ε ∼ N(0, I).    (8)
Now let z be a latent variable and let δ be infinitesimal. Then we can measure the distance between z and z + δ in the input space using Taylor's theorem
‖f(z) − f(z + δ)‖² = ‖J_z z − J_z(z + δ)‖²    (9)
= δ^T J_z^T J_z δ,    (10)
where J_z denotes the Jacobian of f at z. This implies that J_z^T J_z defines a local inner product under which we can define curve lengths through integration
Length(c) = ∫_a^b ‖∂_t f(c_t)‖ dt = ∫_a^b √(ċ_t^T J_z^T J_z ċ_t) dt.    (11)
Here c : [a, b] → R^d is a curve in the latent space and ċ = ∂_t c is its velocity. Distances can then be defined as the length of the shortest curve (geodesic) connecting two points,
dist(z_0, z_1) = Length(c_(0,1))    (12)
c_(0,1) = argmin_{c : c_0=z_0, c_1=z_1} Length(c)    (13)
This is the traditional Riemannian analysis associated with embedded surfaces [25]. From this, it is well-known that length-minimizing curves are minimizers of the energy
E(c) = ∫_a^b ċ_t^T J_z^T J_z ċ_t dt,    (14)
which is easier to optimize than Eq. 11. For generative models, the analysis is complicated by the fact that f is a stochastic mapping, implying that the Jacobian J_z is stochastic, geodesics are stochastic, distances are stochastic, etc. Arvanitidis et al. [6] propose to replace the stochastic metric J_z^T J_z with its expectation E[J_z^T J_z], which is equivalent to minimizing the expected energy [26]. While this is shown to work well, the practical algorithm proposed by Arvanitidis et al. amounts to solving a nonlinear differential equation numerically, which requires us to evaluate both the Jacobian J_z and its derivatives. Unfortunately, modern deep learning frameworks such as Tensorflow rely on reverse-mode automatic differentiation [27], which does not support Jacobians. This renders the algorithm of Arvanitidis et al. impractical. A key contribution of this paper is a practical algorithm for computing geodesics that fits within modern deep learning frameworks.
IV. PROPOSED ALGORITHM TO COMPUTE GEODESICS
To develop an efficient algorithm for computing geodesics, we first note that the expected curve energy can be written as

Ē = E[E(c)] = ∫_a^b E[‖∂_t f(c_t)‖²] dt.    (15)

If we discretize the curve c at n points (Fig. 3), then this integral can be approximated as

Ē ≈ Σ_{i=0}^{n−1} E[ ‖(f(c_i) − f(c_{i+1})) / (t_i − t_{i+1})‖² ].    (16)
Since f(c) ∼ N(µ(c), I_D · σ²(c)), the expectation can be evaluated in closed form as
E[‖f(c_i) − f(c_{i+1})‖²] = ‖µ(c_i) − µ(c_{i+1})‖² + ‖σ(c_i)‖² + ‖σ(c_{i+1})‖²,    (17)
and the approximated expected energy can be written
Ē ≈ Σ_{i=0}^{n−1} ( ‖µ(c_i) − µ(c_{i+1})‖² + ‖σ(c_i)‖² + ‖σ(c_{i+1})‖² ).    (18)
This energy is easily interpretable: the first term of the sum corresponds to the curve energy along the expected data manifold, while the second term penalizes curves for traversing highly uncertain regions of the manifold. This implies that geodesics will be attracted to regions of high data density in the latent space. Unlike the ordinary differential equations of Arvanitidis et al. [6], Eq. 18 can readily be optimized using automatic differentiation as implemented in Tensorflow. We can thus compute geodesics by picking a parametrization of the latent curve c and optimizing Eq. 18 with respect to the curve parameters.
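A minimal numpy sketch of this discretized energy, assuming uniformly spaced curve points (so the constant 1/Δt² factor from Eq. 16 is dropped) and hypothetical mu and sigma2 functions for the generator's mean and variance:

```python
import numpy as np

def expected_curve_energy(c, mu, sigma2):
    """Discretized expected curve energy (Eq. 18 sketch).
    c:      (n, d) array of curve points in latent space
    mu:     function z -> generator mean, shape (D,)
    sigma2: function z -> generator variance, shape (D,)"""
    energy = 0.0
    for i in range(len(c) - 1):
        d_mu = mu(c[i]) - mu(c[i + 1])
        # squared distance between the means plus the summed variances
        energy += d_mu @ d_mu + sigma2(c[i]).sum() + sigma2(c[i + 1]).sum()
    return energy
```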
A. Curve Parametrization
There are many common choices for parametrizing curves, e.g. splines [6], Gaussian processes [28] or point collections [29]. In the interest of speed, we propose to use the restricted class of quadratic functions, i.e.
c_t = (a_1 t² + b_1 t + c_1, a_2 t² + b_2 t + c_2, …, a_d t² + b_d t + c_d)^T,    c : [0, 1] → R^d.    (19)
A curve thus has 3d free parameters a:, b:, and c:. In practice, we are concerned with geodesic curves that connect two pre-specified points z_0 and z_1, so the quadratic function should be constrained to satisfy c_0 = z_0 and c_1 = z_1, which is easily achieved for quadratics. Under this constraint, there are only d free parameters to estimate when optimizing Ē (18). Here we perform the optimization using standard gradient descent.
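A minimal sketch of such a constrained quadratic: writing c(t) = a t² + b t + z_0 and setting b = z_1 − z_0 − a, the endpoint constraints hold exactly and only the d entries of a remain free (all names here are illustrative):

```python
import numpy as np

def quadratic_curve(t, z0, z1, a):
    """Quadratic latent curve with c(0) = z0 and c(1) = z1 enforced exactly.
    t: (n,) parameter values in [0, 1]; z0, z1, a: (d,) vectors."""
    t = np.asarray(t)[:, None]    # (n, 1), broadcasts against (d,)
    b = z1 - z0 - a               # fixes c(1) = z1 given c(0) = z0
    return a * t**2 + b * t + z0  # (n, d)
```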
B. Specifying Uncertainty
When training the VAE model, the reconstruction term of Eq. 7 ensures that we can expect high-quality reconstructions of the training data. Interpolations between latent training data usually give high-quality reconstructions in densely sampled regions of the latent space, but low-quality reconstructions in regions with low sample density. Ideally, the generator variance σ²(z) should reflect this.
From the point of view of computing geodesics, the generator variance σ 2 (z) is important as it appears directly in the expected curve energy (18). If σ 2 (z) is small near the latent data and large away from the data, then geodesics will follow the trend of the data [26], which is a useful property in a clustering context.
In practice, the neural network used to model σ²(z) is only trained where there is data, and its behavior in-between is governed by the activation functions of the network; e.g. common activations such as softplus or tanh imply that variances are smoothly interpolated in-between the training data. This is a most unfortunate property for a variance function: if the optimal variance is low at all latent training points, then the predicted variance will be low at all points in the latent space. To ensure that the variance increases away from the latent data, Arvanitidis et al. [6] proposed to model the inverse variance (precision) with an RBF network [30] with isotropic kernels, which is reported to provide meaningful variance estimates.
We found the isotropic assumption too limited, and instead apply anisotropic kernels. Specifically, we propose to use a rescaled Gaussian Mixture Model (GMM) to represent the inverse variance function
1/σ²(z) = g(z) = Σ_{i=1}^{K} w_i N(z | c_i, Σ_i) W_g,    (20)
where c_i and Σ_i are the component-wise means and covariances, and w_i and W_g ∈ R^D are positive weights. For simplicity, each component has its own single variance. For all the latent variables z, we use the usual EM algorithm [24] to obtain the weights w and the mean and covariance of each component. W_g is trained by solving Eq. 7. Figure 4 gives an example of the output of g(z): in regions without data, g(z) takes low values, so that the variance 1/g(z) is large, as one would expect.
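A minimal sketch of this rescaled-GMM precision using scipy's multivariate normal density; the positive per-dimension weights W_g are assumed given (in the paper they are trained via Eq. 7):

```python
import numpy as np
from scipy.stats import multivariate_normal

def precision(z, weights, means, covs, W_g):
    """g(z) = sum_i w_i N(z | c_i, Sigma_i) * W_g  (Eq. 20 sketch).
    Returns one precision per output dimension; the variance is 1 / g(z)."""
    density = sum(w * multivariate_normal.pdf(z, mean=m, cov=S)
                  for w, m, S in zip(weights, means, covs))
    return density * np.asarray(W_g)   # W_g: (D,) positive weights
```

Away from the latent data the mixture density, and hence g(z), decays, so the variance 1/g(z) grows as intended.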
C. Curve Initialization
Once the VAE is fully trained, we can compute geodesics in the latent space. As previously mentioned, we use gradient descent to minimize Ē (18). To improve convergence speed, we here propose a heuristic for initialization that we have found to work well.
Since geodesics generally follow the trend of the data [6], we seek an initial curve with this property. As it can be expensive to evaluate the generator network f, we propose to first seek a curve that minimizes the integrated inverse GMM model ḡ(c) = ∫ 1/g(c_t) dt. We do this with a simple stochastic optimization akin to a particle filter [31]. This is written explicitly in Algorithm 1.
V. EXPERIMENTS AND DETAILS RELATED
A. Experimental Pipeline
Throughout the experiments, we use the same three-stage pipeline, which is illustrated in Fig. 6. In the first stage we …

Algorithm 1 The pseudo-code for initializing Θ
1: Set µ_0 = 0, σ_0² = ‖z_0 − z_1‖
2: for each step in [1, 2, · · · , max_step] do
3:   µ_i = µ_{i−1}, σ_i = σ_{i−1}
4:   Sample M sets of parameters Θ_{1:M} ∼ N(µ_i, I_d · σ_i²)
5:   for each index in M do
6:     c_index ← c(t; Θ_index), t ∈ [0, 1]
7:     C_index ← 1/g(c_index)
8: …
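The steps of Algorithm 1 after line 7 are elided above, so the selection and re-centering in the following numpy sketch are assumptions: candidates are scored by the mean of 1/g along the curve (step 7), and the sampling distribution is re-centered on the best candidate with a shrinking standard deviation. It reuses the quadratic_curve helper sketched earlier; g is the precision of Eq. (20).

```python
import numpy as np

def init_curve_params(z0, z1, g, M=64, max_step=20, n_pts=32, shrink=0.9):
    """Stochastic initialization of the free curve parameters a (Algorithm 1 sketch)."""
    d = z0.shape[0]
    mu, sigma = np.zeros(d), np.linalg.norm(z0 - z1)
    t = np.linspace(0.0, 1.0, n_pts)
    best_a, best_cost = mu, np.inf
    for _ in range(max_step):
        for a in np.random.normal(mu, sigma, size=(M, d)):
            curve = quadratic_curve(t, z0, z1, a)
            # integrated 1/g along the curve (g may be per-dimension)
            cost = np.mean([np.mean(1.0 / np.asarray(g(c))) for c in curve])
            if cost < best_cost:
                best_a, best_cost = a, cost
        mu, sigma = best_a, sigma * shrink   # assumed re-centering step
    return best_a
```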
B. Visualizing Curvature
A useful visualization tool for the curvature of generative models is the magnification factor [33], which corresponds to the Riemannian volume measure associated with the metric [34]. For a given Jacobian J_z, this is defined as
vol(z) = √(det(J_z^T J_z)).    (21)
In practice, the Jacobian is a stochastic object, so previous work [6], [34] has proposed to visualize √(det E[J_z^T J_z]). Here we argue that the expectation should be taken as late in the process as possible, and instead visualize the expected volume measure,
vol(z) = E[√(det(J_z^T J_z))].    (22)
To compute this measure, we split the latent space into small quadratic pieces, as in Fig. 7. As we can see from the figure, there are two vectors v^z_(0,1) = z_1 − z_0 and v^z_(0,2) = z_2 − z_0, and the corresponding vectors in X̂ are v^{f(z)}_(0,1) = f(z_1) − f(z_0) and v^{f(z)}_(0,2) = f(z_2) − f(z_0). Letting V = [v^{f(z)}_(0,1), v^{f(z)}_(0,2)], the volume measure is

vol(v^{f(z)}_(0,1), v^{f(z)}_(0,2)) ≈ E[√(det(V^T V))].    (23)
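A minimal numpy sketch of this sampled estimate for one grid cell, assuming a stochastic generator f that draws fresh noise on each call:

```python
import numpy as np

def expected_cell_volume(z0, f, h=1e-2, n_samples=32):
    """Monte-Carlo estimate of E[sqrt(det(V^T V))] on one 2-d latent cell (Eq. 23 sketch)."""
    z1 = z0 + np.array([h, 0.0])
    z2 = z0 + np.array([0.0, h])
    vols = []
    for _ in range(n_samples):
        # each call to f samples x = mu(z) + sigma(z) * eps with fresh eps
        V = np.stack([f(z1) - f(z0), f(z2) - f(z0)], axis=1)   # (D, 2)
        vols.append(np.sqrt(np.linalg.det(V.T @ V)))
    return float(np.mean(vols))
```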
Here we compute the right-hand side expectation using sampling. As an example visualization, Fig. 8 shows the logarithm of the volume measure associated with the model from Fig. 4. In areas of small volume measure (blue), distances will generally be small, while they will be large in regions of large volume measure (red).

1) The Two-Moon Dataset: As a first illustration, we consider the classic "Two Moon" data set shown in Fig. 4. For the H-enc and H-dec layers, we use two hidden fully-connected layers with softplus activations; for the S-enc layer, we use one fully-connected layer, again with softplus; and for the M-enc and M-dec layers we use fully-connected layers. Figure 9 shows the latent space of the resulting VAE along with several quadratic geodesics. We see that the geodesics nicely follow the structure of the data. This also influences the observed clustering structure. Figure 10 shows all pairwise distances using both geodesic and Euclidean distances. It should be noted that the first 50 points belong to the first "moon" while the remaining belong to the other. From the figure, we see that the geodesic distance reveals the cluster structure much more clearly than the Euclidean counterpart. We validate this by performing k-medoids clustering using the two distances. As a baseline, we also apply standard spectral clustering (SC) [5] to the original data. We report clustering accuracy (the ratio of correctly clustered samples to the number of observations) in Fig. 11 and in Table I. It is evident that the geodesic distance reveals the intrinsic structure of the data.
2) Synthetic Anisotropically Distributed Data: Using the same setup as for the two-moon dataset, we generate 100 samples from clusters with anisotropic distributions. Figure 12 shows both the volume measure and the pair-wise distances. Again, k-medoids clustering shows that the geodesic distance does a much better job of capturing the data structure than the baselines. Clustering accuracies are given in Table II, and the found clusters are shown in Fig. 13.
3) The MNIST Dataset: From the well-known MNIST dataset, we take the hand-written digits '0', '1' and '2' to test 2-class and 3-class clustering. For the H-enc and H-dec layers, we use two hidden fully-connected layers with ReLU activations¹; for the S-enc layer, we use one fully-connected layer with a sigmoid activation function; and for the M-enc and M-dec layers we use fully-connected layers with identity activation functions. Images generated by both networks are shown in Fig. 14.
For the 2-class situation, we use digits '0' and '1'. We select 50 samples from each class and compute their pair-wise distances, which are shown in Fig. 15. For the 3-class situation, we select 30 samples from each class and show pair-wise distances in Fig. 16. In both cases, the geodesic distance reveals a clear clustering structure. We also see this in k-medoids clustering, which outperforms the baselines (Table III).

¹ The number of H-enc neural nodes: from 784 to 500 and from 500 to 2. The number of H-dec neural nodes: from 2 to 500 and from 500 to 784.

4) The Fashion-MNIST Dataset: Fashion-MNIST [35] is a dataset of Zalando's article images. Each image is a 28 × 28 gray-scale image. We consider the classes 'T-shirt', 'Sandal' and 'Bag' to test 2-class and 3-class clustering. For the H-enc and H-dec layers, we use three hidden fully-connected layers with ReLU activations²; for the S-enc layer, we use one fully-connected layer with a sigmoid activation function; and for the M-enc and M-dec layers we use fully-connected layers with identity and sigmoid activation functions respectively. Images generated by the networks are shown in Fig. 17.
For the 2-class situation, we use the 'T-shirt' and 'Sandal' samples to train the VAE. We select 50 samples from the 'T-shirt' and 'Sandal' datasets respectively, and compute pair-wise distances (see Fig. 18). For the 3-class situation, we select 30 samples from each class and compute distances (Fig. 19). As before, we see that k-medoids clustering with geodesic distances significantly outperforms the baselines; see Table IV for numbers.
5) The EMNIST-Letter Dataset:
The EMNIST-letter dataset [36] is a set of handwritten alphabet characters derived from the NIST Special Database and converted to 28 × 28 gray-scale images. We select the characters 'D' and 'd' as 2 classes, and fit a VAE with the same network architectures as the ones used for Fashion-MNIST. Generated images are shown in Fig. 20.
We select 50 samples from 'D' and 'd' respectively and show pair-wise distances in Fig. 21. Again, k-medoids clustering shows that the geodesic distance reflects the intrinsic structure, which improves clustering over the baselines, c.f. Table V.

VI. CONCLUSION

In this paper, we have proposed an efficient algorithm for computing shortest paths (geodesics) along data manifolds spanned by deep generative models. Unlike previous work, the proposed algorithm is easy to implement and fits well with modern deep learning frameworks. We have also proposed a new network architecture for representing variances in variational autoencoders. With these two tools in hand, we have shown that simple distance-based clustering works remarkably well in the latent space of a deep generative model, even if the model is not trained for clustering tasks. Still, the dimension of the latent space, the form of the curve parametrization, and the modeling of variance in the generator are worth developing further to obtain a more robust geodesic computation algorithm.
| 3,770 |
1809.04467
|
2949394659
|
Using a neural network architecture for depth map inference from monocular stabilized videos, with application to UAV videos in rigid scenes, we propose a multi-range architecture for unconstrained UAV flight, leveraging flight data from sensors to make accurate depth maps of uncluttered outdoor environments. We try our algorithm on both synthetic scenes and real UAV flight data. Quantitative results are given for synthetic scenes with a slightly noisy orientation, and show that our multi-range architecture improves depth inference. Along with this article is a video that presents our results more thoroughly.
|
Depth from vision is one of the problems studied with neural networks, and it has been addressed with a wide range of training solutions. Some datasets @cite_16 @cite_4 allow a neural network to learn depth or disparity end-to-end @cite_22 @cite_7 @cite_9 . Reprojection error has also been used for unsupervised training, for depth from a single image @cite_14 @cite_17 or for disparity between two frames of a stereo rig @cite_19 @cite_0 .
|
{
"abstract": [
"",
"We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.",
"In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches.",
"We present a method for extracting depth information from a rectified image pair. We train a convolutional neural network to predict how well two image patches match and use it to compute the stereo matching cost. The cost is refined by cross-based cost aggregation and semiglobal matching, followed by a left-right consistency check to eliminate errors in the occluded regions. Our stereo method achieves an error rate of 2.61 on the KITTI stereo dataset and is currently (August 2014) the top performing method on this dataset.",
"Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.",
"A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manu- ally labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth predic- tion, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photomet- ric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives com- parable performance to that of the state of art supervised methods for single view depth estimation.",
"",
"Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net datasets kitti",
"We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings."
],
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_9",
"@cite_0",
"@cite_19",
"@cite_16",
"@cite_17"
],
"mid": [
"",
"125693051",
"2440384215",
"2144041313",
"2951234442",
"2949634581",
"",
"2150066425",
"2609883120"
]
}
|
Multi range Real-time depth inference from a monocular stabilized footage using a Fully Convolutional Neural Network
|
Scene understanding from vision is a core problem for autonomous vehicles and for UAVs in particular. In this paper we are specifically interested in computing the depth of each pixel from image sequences captured by a camera. We assume our camera's velocity (and thus its displacement between two frames) is known, as most UAV flight systems include a speed estimator, allowing us to settle the scale-invariance ambiguity of the depth map.
Solving this problem could be beneficial for several applications, such as environment scanning or depth-based sense-and-avoid algorithms for lightweight embedded systems that only have a monocular camera. Not relying on depth sensors such as stereo rigs, ToF cameras, LiDAR or infrared emitter/receivers frees the UAV from their weight, cost and limitations. Specifically, along with some RGB-D sensors being unable to operate under sunlight (e.g. IR and ToF), most of them suffer from range limitations and can be insufficient when long-range information is needed, e.g. for trajectory planning [7]. Unlike RGB-D sensors, depth from motion is flexible w.r.t. displacement and thus robust to high speeds or large distances, as choosing among previous frames gives us a wide range of different displacements. For estimating such depth maps, we designed an end-to-end learning architecture, based on a synthetic dataset and a fully convolutional neural network that takes as input an image pair taken at different times. No preprocessing such as optical flow computation or visual odometry is applied to the input, while the depth is directly provided as an output [18].

Fig. 1. Camera stabilization can be done via a) a mechanical gimbal or b) dynamic cropping from a fish-eye camera, for drones, or c) hand-held cameras
We created a dataset of image pairs with random translation movements, with no rotation, and a constant displacement magnitude applied during the whole training.
The assumption about videos without rotation appears realistic for two reasons:
• Hardware rotation compensation is mainly a solved problem, even for consumer products, with IMU-stabilized cameras on consumer drones or hand-held steady-cams (Fig 1).
• This movement is somewhat related to human vision and the vestibulo-ocular reflex (VOR) [2]. Our eyes' orientation is not induced by head rotation; our inner ear, among other biological sensors, allows us to compensate for parasitic rotation when looking in a particular direction.
Using the trained network, we propose an algorithm for real-condition depth inference from a stabilized UAV. Displacement from sensors is used to compute the real depth map, as it only differs from the synthetic constant-displacement images by a scale factor. Our network output also allows us to optimize the depth inference a posteriori. By adjusting the frame shift to get a displacement that would make the network see the same disparity distribution as during its training, we lower the depth error for the next inference. For example, at large distances, the ideal displacement between two frames is higher, and thus the shift is also higher for a given speed. Moreover, we use batched inference to compute multiple depth maps centered around particular ranges, and fuse them to get a high precision for both close and far objects, no matter the distance, given a sufficient displacement of the UAV.
III. END-TO-END LEARNING OF DEPTH INFERENCE
Inspired by flow estimation and disparity (which is essentially the magnitude of optical flow vectors), a problem for which many very convincing methods exist [8], [10], we set up an end-to-end learning workflow, training a neural network to explicitly predict the depth of every pixel in a scene from an image pair with a constant displacement value.
A. Still Box Dataset
We design our own synthetic dataset, using the rendering software Blender, to generate an arbitrary number of random rigid scenes composed of basic 3D primitives (cubes, spheres, cones and tori), randomly textured with an image set scraped from Flickr (see Fig 2).
These objects are randomly placed and sized in the scene, and walls are added at large distances as if the camera were inside a box (hence the name). The camera moves at a fixed speed, but in a uniformly distributed random direction, which is constant within each scene. It can be anything from forward/backward movement to lateral movement (which is then equivalent to stereo vision).
B. Dataset augmentation
In our dataset, we store data as 10-image-long videos, with each frame paired with its ground-truth depth. This allows us to set the distance distribution a posteriori, with a variable temporal shift between two frames. If we use a baseline shift of 3 frames, we can e.g. assume a depth three times as great as for two consecutive frames (shift of 1). In addition, we can also consider negative shifts, which only change the displacement direction without changing the speed value. This allows us, given a fixed dataset size, to get more evenly distributed depth values to learn, and also to de-correlate images from depth, preventing over-fitting during training that would otherwise result in a scene recognition algorithm performing poorly on a validation set.
C. Depth Inference training
Our network is broadly inspired by FlowNetS [3] (initially used for flow inference) and called DepthNet. It is described in detail in [18]; we provide here a summary of its structure (Fig 3) and performance. Each convolution (apart from the depth modules) is followed by a spatial batch normalization and a ReLU activation layer. Batch normalization helps convergence and stability during training by normalizing a convolution's output (0 mean and standard deviation of 1) over a batch of multiple inputs [9], and the Rectified Linear Unit (ReLU) is the typical activation layer [21]. Depth modules are convolution modules reducing the input to 1 feature map, which is expected to be the depth map at a given scale. One should note that FlowNetS initially used LeakyReLU, which has a non-null slope for negative values, but tests showed that ReLU performed better for our problem.
Fig. Ground-truth depth; lower-right: our network output (128×128); upper-right: error (green: no error, red: overestimated depth, blue: underestimated depth).

The main idea behind this network is that upsampled feature maps are concatenated with the corresponding earlier convolution outputs (e.g. Conv2 output with Deconv5 output). Higher semantic information is then associated with information more closely linked to pixels (since it went through fewer downsampling convolutions), which is then used for reconstruction. This multi-scale architecture has been proven very efficient for flow and disparity computation while keeping a very simple supervised learning process.
The main point of this experiment is to show that direct depth estimation can be efficient with an unknown translation direction. Like FlowNetS, we use a multi-scale criterion, with an L1 reconstruction error for each scale:
Loss = Σ_{s∈scales} γ_s (1/(H_s W_s)) Σ_{i,j} |β_s(i, j) − ζ_s(i, j)|    (1)
where
• γ_s is the weight of the scale, arbitrarily chosen,
• (H_s, W_s) = (H/2^s, W/2^s) are the height and width of the output at scale s,
• ζ_s is the depth ground truth scaled using average pooling,
• β_s is the output of the network at scale s.

As said earlier, we apply data augmentation to the dataset using different shifts, along with classic methods such as flips and rotations. We also clip depth to a maximum of 100 m, and provide sample pairs without shift, assuming their depth is 100 m everywhere. As a consequence, the trained network will only be able to infer depths lower than 100 m.
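A minimal PyTorch sketch of the multi-scale criterion in Eq. (1), assuming the network returns one predicted depth map β_s per scale and that the per-scale weights γ_s are given:

```python
import torch
import torch.nn.functional as F

def multiscale_l1_loss(outputs, depth_gt, gammas):
    """Multi-scale L1 reconstruction loss (Eq. 1 sketch).
    outputs:  list of (B, 1, H/2^s, W/2^s) predicted depth maps beta_s
    depth_gt: (B, 1, H, W) ground-truth depth
    gammas:   per-scale weights gamma_s (assumed, e.g. decreasing with s)"""
    loss = depth_gt.new_zeros(())
    for beta_s, gamma_s in zip(outputs, gammas):
        # zeta_s: ground truth scaled with average pooling, as in the paper
        zeta_s = F.adaptive_avg_pool2d(depth_gt, beta_s.shape[-2:])
        loss = loss + gamma_s * torch.mean(torch.abs(beta_s - zeta_s))
    return loss
```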
We trained on several input image sizes, from 64x64 to 512x512. Fig 4 shows training results for the mean L1 reconstruction error. Like FlowNetS, the network outputs are downsampled by a factor of 4 with respect to the input size. As Table I
IV. UAV NAVIGATION USE-CASE
A. Optimal frame shift determination
We learned depth inference from a moving camera, assuming its velocity is always the same. Results on real-condition drone footage, in which we were careful to avoid camera rotation, can be seen in Fig 5. These results did not benefit from any fine-tuning on real footage, indicating that our Still Box dataset, although not realistic in its scene structures and rendering, appears to be sufficiently heterogeneous for learning to produce decent depth maps in real conditions. When running during flight, such a system can deduce the real depth map ζ from the network output and the drone displacement, knowing that the training displacement was D_0 (here 0.3 m)
ζ(t) = DepthNet(I_t, I_{t−∆t}) · D(t, ∆t)/D_0,    D(t, ∆t) = ∫_{t−∆t}^{t} V(τ) dτ    (2)
The correct interpretation of the output of DepthNet is actually a percentage rather than a distance, 100% meaning the maximum distance for a given displacement D. We can introduce a function β = DepthNet(I_t, I_{t−∆t}) / maxDistance and a dimension-less parameter α = maxDistance / D_0 for computing the actual depth, using the displacement D as the only distance-related factor.
ζ(t) = α β(I_t, I_{t−∆t}) D(t, ∆t)    (3)
Depending on the depth distribution of the ground-truth depth map, it may be useful to adjust the frame shift ∆t. For example, when flying high above the ground at low speed, detecting and avoiding large structures requires knowing precise distance values that are outside the typical range of any RGB-D sensor. The logical strategy is then to increase the temporal shift between the frame pairs provided to DepthNet as inputs. More generally, one must provide inputs to DepthNet that ensure a well-distributed depth output within its typical range. The depth-wise normalized error, which is the essential quality measurement for values that we want to rescale, diverges when the ground-truth depth approaches 0. Indeed, in addition to being equivalent to an infinite optical flow, the depth-wise error cannot tend to 0, which makes the expression error/depth tend to +∞ at 0. We thus need to choose the optimal spatial displacement and the corresponding temporal shift to minimize the error on the next inference, assuming the same depth distribution, to avoid too low or too high equivalent ground truths. We choose the spatial displacement as:
D_optimal(t + 1) = E(ζ(t)) / (α β_mean)    (4)
with E(ζ(t)) the mean of the depth values and β_mean the optimal mean output of β, e.g. 0.5. The shift ∆(t) is then computed numerically to get the frame shift with the closest possible corresponding displacement.
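A minimal sketch of this shift selection, assuming a table mapping each candidate temporal shift to its integrated camera displacement (all names here are illustrative):

```python
import numpy as np

def pick_frame_shift(depth_map, alpha, displacement_for_shift, beta_mean=0.5):
    """Choose the temporal shift whose displacement is closest to
    D_optimal = E[depth] / (alpha * beta_mean)  (Eq. 4 sketch).
    displacement_for_shift: dict {shift k: displacement D(t, k)} from sensors."""
    d_opt = float(np.mean(depth_map)) / (alpha * beta_mean)
    return min(displacement_for_shift,
               key=lambda k: abs(displacement_for_shift[k] - d_opt))
```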
B. Multiple shifts inference
As neural networks are traditionally computed on massively parallel architectures such as GPUs, multiple depth maps can be computed efficiently at the same time in a batch, especially at low resolution. Batched inference can then be used to compute depth with multiple shifts ∆(t, i). These multiple depth maps can then be combined to construct a higher-quality depth map, with high precision for both long and short ranges. We propose a dynamic-range algorithm, described in Fig 6, to compute and combine the different depth maps.
Instead of only one optimal displacement D(t) from E(ζ), we use the k-means clustering algorithm [16] on the depth map to find a list of clusters on which each shift will focus. The clustering outputs a list of n centroids C_i(ζ) and the corresponding D_i(t) and ∆(t, i). n is an arbitrarily chosen value, usually ranging from 1 to 4.
The final depth map is then computed by fusing these outputs using a weighted mean for each pixel. Each weight is a linear interpolation from 0 to 1 according to the distance of the depth from a target value β_mean; that way, the fusion favors values that are closer to this optimal value. An ε value is added to resolve the fusion when every depth map is outside its intended range.

Fig. 6. Multiple-shifts architecture. We used n different planes. Numeric integration, given a desired displacement D_i, gives the closest possible displacement between frames D*_i, along with the corresponding shift ∆_i. As discussed in part IV, the fusion block computes pixel-wise weights from β_1, · · · , β_n to make a weighted mean of β_1 D_1, · · · , β_n D_n.
w_ijk = ε + f(β(I_t, I_{t−∆(t,i)}))

f : x ↦
  0                                  if x < β_min
  (x − β_min)/(β_mean − β_min)       if β_min ≤ x < β_mean
  (β_max − x)/(β_max − β_mean)       if β_mean ≤ x < β_max
  0                                  if x ≥ β_max

ζ_i(t) = α D_i(t) β(I_t, I_{t−∆(t,i)})    (5)

∀(j, k) ∈ ⟦0, W⟦ × ⟦0, H⟦,    ζ_f(t)_jk = (Σ_i w_ijk ζ_ijk(t)) / (Σ_i w_ijk)    (6)
For our use-case, we set β_min = 0.1, β_mean = 0.4, β_max = 0.9 and ε = 10⁻³. i is the index of the frame shift, and j, k are the spatial indices. Fig 7 shows a result of the proposed algorithm for a batch size of 2. Notice how the high shift detects the buildings while the low shift detects the trees.
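A minimal numpy sketch of the fusion in Eqs. (5)-(6): the piecewise-linear weight f is implemented as two ramps clipped at zero, and the ε floor keeps the weighted mean defined when every output is out of range:

```python
import numpy as np

def fuse_depth_maps(betas, displacements, alpha,
                    beta_min=0.1, beta_mean=0.4, beta_max=0.9, eps=1e-3):
    """Pixel-wise fusion of multi-shift depth maps (Eqs. 5-6 sketch).
    betas:         (n, H, W) raw network outputs beta_i, one per frame shift
    displacements: (n,) camera displacement D_i for each shift"""
    up = (betas - beta_min) / (beta_mean - beta_min)    # rising ramp
    down = (beta_max - betas) / (beta_max - beta_mean)  # falling ramp
    w = eps + np.clip(np.where(betas < beta_mean, up, down), 0.0, None)
    zetas = alpha * displacements[:, None, None] * betas  # zeta_i = alpha * D_i * beta_i
    return (w * zetas).sum(axis=0) / w.sum(axis=0)
```

Clipping the two ramps at zero reproduces the four cases of f exactly: weights vanish below β_min and above β_max, and peak at β_mean.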
C. Clamped DepthNet
Our proposed algorithm actually suffers from a problem on real-condition videos, because we assume perfect stabilization. Therefore, on very far objects (e.g. the sky), any minor optical flow caused by a stabilization fault results in a massive depth error. Moreover, since our network is very good at recognizing shapes and giving each the same depth everywhere, this can result in the whole sky being computed as relatively close. We thus propose a network designed for a simpler problem: during training on Still Box, we clamp depth from 10 m to 60 m, with a shift of 5 images (instead of 3 for DepthNet). These new parameters allow the network to focus only on mid-range objects, dismissing close and far objects with, respectively, too large and too small an optical flow. This training workflow is very well suited to multiple-shift depth inference: every image pair has a dedicated depth range to analyze, allowing the fusion not to be bothered with redundant data, in contrast to the high initial range of DepthNet. Figure 8 shows results for multiple synthetic 256x256 scenes with ground truth, along with inference speed and a small noise added to the camera's initial orientation at each frame: R(t) = R_0 + Euler(N_0 µ(t)), with µ(t) a 3-dimensional random unit vector and N_0 a constant fixed to 0.001. We also report the performance of a thin version of our clamped network, which shows better results than DepthNet with 1 plane only in this noisy setup. The thin network has the same depth, but every convolution outputs half the number of feature maps of the original DepthNet.

V. CONCLUSION AND FUTURE WORK

We proposed a novel way of computing dense depth maps from motion, along with a very comprehensive dataset for stabilized-footage analysis and a technique for dynamic-range computation in real flight. This algorithm can then be used by depth-based sense-and-avoid algorithms in a very flexible way, in order to cover all kinds of path planning, from collision avoidance to long-range obstacle bypassing.
A more thorough presentation of the results can be viewed in this video: http://perso.ensta-paristech.fr/~manzaner/Download/ECMR2017/DepthNetResults.mp4
Future work includes the implementation of such a path planning algorithm, and the construction of a real-condition fine-tuning dataset, using UAV footage and a preliminary thorough offline 3D scan. This would allow us to measure the quantitative quality of our network on real footage, and not only the subjective quality as for now. We could also use unsupervised techniques, using re-projection errors as in [23].
We also believe that our network can be extended to reinforcement learning applications that will potentially result in a complete end-to-end sense and avoid neural network for monocular cameras.
The major drawback of our algorithm is, however, the necessity for the scene to be rigid. This is obviously never exactly the case, and even though UAV footage is less prone to moving objects than autonomous driving scenarios, we will face this issue whenever a moving target is to be followed. To solve this problem, explicit movement equations for both the camera and the moving targets may have to be computed, as in [20]. In any case, this problem will be a challenge and may not be solvable with fully convolutional networks alone, as used in this article.
| 2,786 |
1809.04467
|
2949394659
|
Using a neural network architecture for depth map inference from monocular stabilized videos, with application to UAV videos in rigid scenes, we propose a multi-range architecture for unconstrained UAV flight, leveraging flight data from sensors to make accurate depth maps of uncluttered outdoor environments. We try our algorithm on both synthetic scenes and real UAV flight data. Quantitative results are given for synthetic scenes with a slightly noisy orientation, and show that our multi-range architecture improves depth inference. Along with this article is a video that presents our results more thoroughly.
|
For depth from more complex movements of a monocular camera, current state-of-the-art methods tend to use motion, and especially structure from motion, and most algorithms do not rely on deep learning @cite_20 @cite_3 @cite_12 . Prior knowledge w.r.t. the scene is used to infer a sparse depth map whose density usually grows over time. These techniques, also called SLAM, are typically used with unstructured movement (translation and rotation with varying magnitudes), produce very sparse point-cloud-based 3D maps, and require heavy computation to keep track of the scene structure and align newly detected 3D points with the existing ones.
|
{
"abstract": [
"We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.",
"This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems.",
"Simultaneous localization and mapping (SLAM) consists in the concurrent construction of a model of the environment (the map ), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM and consider future directions. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial to those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues, that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?"
],
"cite_N": [
"@cite_3",
"@cite_12",
"@cite_20"
],
"mid": [
"2535547924",
"2151290401",
"2461937780"
]
}
|
Multi range Real-time depth inference from a monocular stabilized footage using a Fully Convolutional Neural Network
|
Scene understanding from vision is a core problem for autonomous vehicles and for UAVs in particular. In this paper we are specifically interested in computing the depth of each pixel from image sequences captured by a camera. We assume our camera's velocity (and thus its displacement between two frames) is known, as most UAV flight systems include a speed estimator, allowing us to settle the scale-invariance ambiguity of the depth map.
Solving this problem could be beneficial for several applications, such as environment scanning or depth-based sense-and-avoid algorithms for lightweight embedded systems that only have a monocular camera. Not relying on depth sensors such as stereo rigs, ToF cameras, LiDAR or infrared emitter/receivers frees the UAV from their weight, cost and limitations. Specifically, along with some RGB-D sensors being unable to operate under sunlight (e.g. IR and ToF), most of them suffer from range limitations and can be insufficient when long-range information is needed, e.g. for trajectory planning [7]. Unlike RGB-D sensors, depth from motion is flexible w.r.t. displacement and thus robust to high speeds or large distances, as choosing among previous frames gives us a wide range of different displacements. For estimating such depth maps, we designed an end-to-end learning architecture, based on a synthetic dataset and a fully convolutional neural network that takes as input an image pair taken at different times. No preprocessing such as optical flow computation or visual odometry is applied to the input, while the depth is directly provided as an output [18].

Fig. 1. Camera stabilization can be done via a) a mechanical gimbal or b) dynamic cropping from a fish-eye camera, for drones, or c) hand-held cameras
We created a dataset of image pairs with random translation movements, with no rotation, and a constant displacement magnitude applied during the whole training.
The assumption about videos without rotation appears realistic for two reasons:
• Hardware rotation compensation is mainly a solved problem, even for consumer products, with IMU-stabilized cameras on consumer drones or hand-held steady-cams (Fig 1).
• This movement is somewhat related to human vision and the vestibulo-ocular reflex (VOR) [2]. Our eyes' orientation is not induced by head rotation; our inner ear, among other biological sensors, allows us to compensate for parasitic rotation when looking in a particular direction.
Using the trained network, we propose an algorithm for real-condition depth inference from a stabilized UAV. Displacement from sensors is used to compute the real depth map, as it only differs from the synthetic constant-displacement images by a scale factor. Our network output also allows us to optimize the depth inference a posteriori: by adjusting the frame shift so that the displacement makes the network produce the same disparity distribution as during training, we lower the depth error for the next inference. For example, at large distances, the ideal displacement between two frames is higher, and thus the shift is also higher for a given speed. Moreover, we use batched inference to compute multiple depth maps, each centered around a particular range, and fuse them to get high precision for both close and far objects, no matter the distance, given a sufficient displacement from the UAV.
III. END-TO-END LEARNING OF DEPTH INFERENCE
Inspired by flow and disparity estimation (disparity being essentially the magnitude of optical flow vectors), problems for which many very convincing methods exist [8], [10], we set up an end-to-end learning workflow by training a neural network to explicitly predict the depth of every pixel in a scene, from an image pair with a constant displacement value.
A. Still Box Dataset
We design our own synthetic dataset, using the rendering software Blender, to generate an arbitrary number of random rigid scenes, composed of basic 3D primitives (cubes, spheres, cones and tori) randomly textured with an image set scraped from Flickr (see Fig 2).
These objects are randomly placed and sized in the scene, and walls are added at large distances as if the camera were inside a box (hence the name). The camera moves at a fixed speed, but in a uniformly distributed random direction, which is constant within each scene. It can be anything from forward/backward movement to lateral movement (the latter being equivalent to stereo vision).
B. Dataset augmentation
In our dataset, we store data as 10-frame videos, with each frame paired with its ground-truth depth. This allows us to set the distance distribution a posteriori, via a variable temporal shift between the two frames of a pair. If we use a baseline shift of 3 frames, we can e.g. assume a depth three times as great as for two consecutive frames (shift of 1). In addition, we can also consider negative shifts, which only reverse the displacement direction without changing the speed value. This allows us, given a fixed dataset size, to learn more evenly distributed depth values, and also to de-correlate images from depth, preventing over-fitting during training that would otherwise result in a scene-recognition algorithm performing poorly on a validation set.
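As an illustration, here is a minimal Python sketch of this shift-based pair sampling; the array layout, function name, and the share of unshifted pairs are our assumptions, not the authors' released code:

import random
import numpy as np

def sample_pair(frames, depths, max_shift=5, max_depth=100.0, p_still=0.1):
    # frames: list of RGB arrays from one 10-frame scene
    # depths: per-frame ground-truth depth maps (meters)
    if random.random() < p_still:
        # Unshifted pair, labeled with the maximum depth everywhere.
        t = random.randrange(len(frames))
        return frames[t], frames[t], np.full_like(depths[t], max_depth)
    # Non-zero shift; a negative sign only reverses the displacement direction.
    s = random.choice([k for k in range(-max_shift, max_shift + 1) if k != 0])
    t = random.randrange(s, len(frames)) if s > 0 else random.randrange(0, len(frames) + s)
    # A shift of |s| frames means |s| times the reference displacement, so
    # the target is rescaled to stay expressed at that reference displacement
    # (following the convention zeta = output * D / D0 of Section IV),
    # then clipped to the network's 100 m range.
    target = np.minimum(depths[t] / abs(s), max_depth)
    return frames[t], frames[t - s], target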
C. Depth Inference training
Our network, called DepthNet, is broadly inspired by FlowNetS [3] (initially used for flow inference). It is described in detail in [18]; we provide here a summary of its structure (Fig 3) and performance. Each convolution (apart from the depth modules) is followed by a spatial batch normalization and a ReLU activation layer. Batch normalization helps convergence and stability during training by normalizing a convolution's output (0 mean and standard deviation of 1) over a batch of multiple inputs [9], and the Rectified Linear Unit (ReLU) is the typical activation layer [21]. Depth modules are convolution modules reducing the input to 1 feature map, which is expected to be the depth map at a given scale. One should note that FlowNetS initially used LeakyReLU, which has a non-null slope for negative values, but tests showed that ReLU performed better for our problem.
The main idea behind this network is that upsampled feature maps are concatenated with the corresponding earlier convolution outputs (e.g. Conv2 output with Deconv5 output). Higher semantic information is thus associated with information more closely linked to pixels (since it went through fewer downsampling convolutions), which is then used for reconstruction. This multi-scale architecture has been proven very efficient for flow and disparity computation while keeping a very simple supervised learning process.
Fig.: Ground truth depth; lower-right: our network output (128x128); upper-right: error, green is no error, red is overestimated depth, blue is underestimated.
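To make this structure concrete, here is a heavily simplified PyTorch sketch of such a multi-scale encoder-decoder with skip concatenation; it uses two levels only and made-up channel counts, and is not the real DepthNet, which has more levels and is described in [18]:

import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    # Minimal 2-level encoder-decoder with skip concatenation,
    # illustrating the FlowNetS-style structure only.
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(6, 32, 3, stride=2, padding=1),
                                   nn.BatchNorm2d(32), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1),
                                   nn.BatchNorm2d(64), nn.ReLU())
        self.deconv1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
                                     nn.BatchNorm2d(32), nn.ReLU())
        self.depth2 = nn.Conv2d(64, 1, 3, padding=1)  # depth module, coarse scale
        self.depth1 = nn.Conv2d(64, 1, 3, padding=1)  # depth module, fine scale

    def forward(self, img_pair):          # (B, 6, H, W): two stacked RGB frames
        c1 = self.conv1(img_pair)         # (B, 32, H/2, W/2)
        c2 = self.conv2(c1)               # (B, 64, H/4, W/4)
        d2 = self.depth2(c2)              # coarse depth map
        up = self.deconv1(c2)             # (B, 32, H/2, W/2)
        cat = torch.cat([up, c1], dim=1)  # skip concatenation -> (B, 64, H/2, W/2)
        d1 = self.depth1(cat)             # finer depth map
        return [d1, d2]                   # multi-scale outputs for the loss below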
The main point of this experimentation is to show that direct depth estimation can be effective even with an unknown translation direction. Like FlowNetS, we use a multi-scale criterion, with an L1 reconstruction error for each scale:
Loss = Σ_{s ∈ scales} γ_s · (1 / (H_s · W_s)) · Σ_{i,j} |β_s(i, j) − ζ_s(i, j)|    (1)

where
• γ_s is the weight of the scale, arbitrarily chosen.
• (H_s, W_s) = (H/2^s, W/2^s) are the height and width of the output at scale s.
• ζ_s is the depth ground truth scaled by average pooling.
• β_s is the output of the network at scale s.

As said earlier, we apply data augmentation to the dataset using different shifts, along with classic methods such as flips and rotations. We also clip depth to a maximum of 100 m, and provide sample pairs without shift, assuming their depth is 100 m everywhere. As a consequence, the trained network will only be able to infer depths lower than 100 m.
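A PyTorch-style sketch of this multi-scale criterion; variable names and the γ_s values are placeholders of ours:

import torch
import torch.nn.functional as F

def multiscale_l1_loss(outputs, depth_gt, gammas=(1.0, 0.5)):
    # outputs: list of predicted maps beta_s, one per scale, each (B, 1, H_s, W_s)
    # depth_gt: full-resolution ground truth zeta, shape (B, 1, H, W)
    loss = 0.0
    for beta_s, gamma_s in zip(outputs, gammas):
        # zeta_s: ground truth brought to scale s by average pooling
        zeta_s = F.adaptive_avg_pool2d(depth_gt, beta_s.shape[-2:])
        # the mean over pixels implements the 1/(H_s W_s) normalization of Eq. (1)
        loss = loss + gamma_s * torch.mean(torch.abs(beta_s - zeta_s))
    return loss

With the sketch above, multiscale_l1_loss(TinyDepthNet()(pair), gt) would train both scales jointly.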
We trained on several input image sizes, from 64x64 to 512x512. Fig 4 shows training results for the mean L1 reconstruction error. Like FlowNetS, network outputs are downsampled by a factor of 4 with respect to the input size. As Table I
IV. UAV NAVIGATION USE-CASE
A. Optimal frame shift determination
We learned depth inference from a moving camera, assuming its velocity is always the same. Results on real-condition drone footage, in which we were careful to avoid camera rotation, can be seen in Fig 5. These results did not benefit from any fine-tuning on real footage, indicating that our Still Box dataset, although not realistic in its scene structures and rendering, appears sufficiently heterogeneous for learning to produce decent depth maps in real conditions. When running during flight, such a system can deduce the real depth map ζ from the network output and the drone displacement, knowing that the training displacement was D_0 (here 0.3 m):
ζ(t) = DepthNet(I_t, I_{t−Δt}) · D(t, Δt) / D_0,    D(t, Δt) = ∫_{t−Δt}^{t} V(τ) dτ    (2)
The correct interpretation of the output of DepthNet is actually a percentage rather than a distance, 100% meaning the maximum distance for a given displacement D. We can introduce a function β = DepthNet(I_t, I_{t−Δt}) / maxDistance and a dimensionless parameter α = maxDistance / D_0 for computing the actual depth using the displacement D as the only distance-related factor:
ζ(t) = α · β(I_t, I_{t−Δt}) · D(t, Δt)    (3)
Depending on the depth distribution of the ground-truth depth map, it may be useful to adjust the frame shift Δt. For example, when flying high above the ground at low speed, detecting and avoiding big structures requires precise distance values that are outside the typical range of any RGB-D sensor. The logical strategy is then to increase the temporal shift between the frame pairs provided to DepthNet as inputs. More generally, one must provide inputs to DepthNet that ensure a well-distributed depth output within its typical range. The depth-wise normalized error, which is the essential quality measurement for values that we want to rescale, diverges when the ground-truth depth approaches 0: in addition to being equivalent to an infinite optical flow, the depth-wise error cannot tend to 0, which makes the expression error/depth tend to +∞ at 0. We thus need to choose the optimal spatial displacement and corresponding temporal shift to minimize the error on the next inference, assuming the same depth distribution, to avoid too low or too high an equivalent ground truth. We choose the displacement as:
D_optimal(t + 1) = E(ζ(t)) / (α · β_mean)    (4)
with E(ζ(t)) the mean of the depth values and β_mean the optimal mean output of β, e.g. 0.5. Δt is then computed numerically to get the frame shift with the closest corresponding displacement possible.
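A Python sketch of Eqs. (2)–(4); the speed buffer and the exact value of α are assumptions (α = maxDistance / D_0 as defined above):

import numpy as np

D0 = 0.3         # displacement used at training time (m)
BETA_MEAN = 0.5  # desired mean network output (fraction of the max range)

def real_depth(net_output, displacement):
    # Eq. (2): the network was trained at displacement D0, so the
    # true depth is its raw output rescaled by D / D0.
    return net_output * displacement / D0

def optimal_shift(prev_depth, speeds, dt, alpha):
    # Eq. (4): displacement that would bring the mean network output
    # back to BETA_MEAN, assuming the depth distribution is unchanged.
    d_opt = prev_depth.mean() / (alpha * BETA_MEAN)
    # Numeric integration of recent speed magnitudes (most recent last):
    # disp[k-1] is the displacement accumulated over the last k frames.
    disp = np.cumsum(speeds[::-1]) * dt
    shift = int(np.argmin(np.abs(disp - d_opt))) + 1
    return shift, disp[shift - 1]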
B. Multiple shifts inference
As neural networks are traditionally computed on massively parallel architectures such as GPUs, multiple depth maps can be computed efficiently at the same time in a batch, especially at low resolution. Batch inference can then be used to compute depth with multiple shifts Δ(t, i). These multiple depth maps can then be combined to construct a higher-quality depth map, with high precision for both long and short range. We propose a dynamic-range algorithm, described in Fig 6, to compute and combine the different depth maps.
Instead of only one optimal displacement D(t) from E(ζ), we use the K-means clustering algorithm [16] on the depth map to find a list of clusters on which each shift will focus. The clustering outputs a list of n centroids C_i(ζ) and corresponding D_i(t) and Δ(t, i). n is an arbitrarily chosen value, usually ranging from 1 to 4.
The final depth map is then computed by fusing these outputs using a weighted mean for each pixel. Each weight is a linear interpolation from 0 to 1 according to the distance of the depth from a target value β_mean; that way, the fusion favors values that are closer to this optimal value. An ε value is added to resolve the fusion when every depth map is outside its wanted range.
Fig. 6. Multiple shifts architecture. We used n different planes. Numeric integration, given a desired displacement D, gives the closest possible displacement between frames D*, along with the corresponding shift Δ. As discussed in part IV, the fusion block computes pixel-wise weights from β_1, ..., β_n to make a weighted mean of β_1·D_1, ..., β_n·D_n.
w_{ijk} = ε + f(β(I_t, I_{t−Δ(t,i)})),  with

f : x ↦ 0                                   if x < β_min
        (x − β_min) / (β_mean − β_min)      if β_min ≤ x < β_mean
        (β_max − x) / (β_max − β_mean)      if β_mean ≤ x < β_max
        0                                   if x ≥ β_max

ζ_i(t) = α · D_i(t) · β(I_t, I_{t−Δ(t,i)})    (5)

∀(j, k) ∈ ⟦0, W⟧ × ⟦0, H⟧,    ζ_f(t)_{jk} = ( Σ_i w_{ijk} · ζ_{ijk}(t) ) / ( Σ_i w_{ijk} )    (6)
For our use case, we set β_min = 0.1, β_mean = 0.4, β_max = 0.9 and ε = 10^−3. i is the index of the frame shift; j, k are the spatial indices. Fig 7 shows a result of the proposed algorithm for a batch size of 2. Notice how the high shift detects buildings while the low shift detects trees.
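A NumPy sketch of the fusion of Eqs. (5)–(6); the K-means step that yields the per-plane displacements D_i is elided, and the β values are assumed to be the raw network outputs normalized to [0, 1]:

import numpy as np

B_MIN, B_MEAN, B_MAX, EPS = 0.1, 0.4, 0.9, 1e-3

def f(x):
    # Piecewise-linear weight: peaks at B_MEAN, zero outside [B_MIN, B_MAX].
    y = np.zeros_like(x)
    up = (x >= B_MIN) & (x < B_MEAN)
    down = (x >= B_MEAN) & (x < B_MAX)
    y[up] = (x[up] - B_MIN) / (B_MEAN - B_MIN)
    y[down] = (B_MAX - x[down]) / (B_MAX - B_MEAN)
    return y

def fuse(betas, displacements, alpha):
    # betas: list of per-shift network outputs beta_i in [0, 1] (2D arrays)
    # displacements: matching real displacements D_i
    num, den = 0.0, 0.0
    for beta, d in zip(betas, displacements):
        w = EPS + f(beta)              # Eq. (5): favor mid-range outputs
        num += w * (alpha * d * beta)  # zeta_i = alpha * D_i * beta_i
        den += w
    return num / den                   # Eq. (6): per-pixel weighted mean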
C. Clamped DepthNet
Our proposed algorithm actually suffers from a problem on real-condition videos, because we assume perfect stabilization. Therefore, on very far objects (e.g. the sky), any minor optical flow caused by a stabilization default will result in a massive depth error. Moreover, our network being very good at recognizing shapes and assigning them a uniform depth, this can result in the whole sky being computed as relatively close. We thus propose a network designed for a simpler problem: during training on Still Box, we clamp depth from 10 m to 60 m, with a shift of 5 images (instead of 3 for DepthNet). These new parameters allow the network to focus only on mid-range objects, dismissing close and far objects with respectively too large and too small an optical flow. This training workflow is very well suited for multiple-shift depth inference: every image pair has a dedicated depth range to analyze, so the fusion is not bothered with redundant data caused by the high initial range of DepthNet.

Figure 8 shows results for multiple synthetic 256x256 scenes with ground truth, along with inference speed, with a small noise added to the camera's initial orientation at each frame: R(t) = R_0 + Euler(N_0 · μ(t)), with μ(t) a 3-dimensional random unit vector and N_0 a constant fixed to 0.001. We also report the performance of a thin version of our clamped network, which shows better results than DepthNet with 1 plane only in this noisy setup. The thin network has the same depth, but every convolution has an output with half the number of feature maps of the original DepthNet.

V. CONCLUSION AND FUTURE WORK

We proposed a novel way of computing dense depth maps from motion, along with a very comprehensive dataset for stabilized footage analysis and a technique for dynamic-range real-flight computing. This algorithm can then be used for depth-based sense-and-avoid algorithms in a very flexible way, in order to cover all kinds of path planning, from collision avoidance to long-range obstacle bypassing.
A more thorough presentation of the results can be viewed in this video: http://perso.ensta-paristech.fr/~manzaner/Download/ECMR2017/DepthNetResults.mp4
Future work includes the implementation of such a path-planning algorithm, and the construction of a real-condition fine-tuning dataset using UAV footage and a preliminary thorough 3D offline scan. This would allow us to measure the quality of our network on real footage quantitatively, and not only subjectively as for now. We could also use unsupervised techniques, using re-projection errors as in [23].
We also believe that our network can be extended to reinforcement learning applications, potentially resulting in a complete end-to-end sense-and-avoid neural network for monocular cameras.
The major drawback of our algorithm is, however, the requirement that the scene be rigid. This is obviously never exactly the case, and even though UAV footage is less prone to moving objects than autonomous driving scenarios, the issue arises whenever a moving target is to be followed. To solve this problem, an explicit motion equation for both the camera and the moving targets may have to be computed, as in [20]. In any case, this problem will be a challenge and may not be solvable with fully convolutional networks alone, as used in this article.
| 2,786 |
1809.04427
|
2952395293
|
Online multi-object tracking is a fundamental problem in time-critical video analysis applications. A major challenge in the popular tracking-by-detection framework is how to associate unreliable detection results with existing tracks. In this paper, we propose to handle unreliable detection by collecting candidates from outputs of both detection and tracking. The intuition behind generating redundant candidates is that detection and tracks can complement each other in different scenarios. Detection results of high confidence prevent tracking drifts in the long term, and predictions of tracks can handle noisy detection caused by occlusion. In order to apply optimal selection from a considerable amount of candidates in real-time, we present a novel scoring function based on a fully convolutional neural network, that shares most computations on the entire image. Moreover, we adopt a deeply learned appearance representation, which is trained on large-scale person re-identification datasets, to improve the identification ability of our tracker. Extensive experiments show that our tracker achieves real-time and state-of-the-art performance on a widely used people tracking benchmark.
|
Tracking-by-detection is becoming the most popular strategy for multi-object tracking. @cite_20 associated tracklets with detection in different ways according to their confidence values. Sanchez-Matilla et al. @cite_12 exploited multiple detectors to improve tracking performance. They collected outputs from multiple detectors during a so-called over-detection process. Combining results from multiple detectors can improve tracking performance but is not efficient for real-time applications. In contrast, our tracking framework needs only one detector and generates candidates from existing tracks. @cite_18 used a binary classifier and a single-object tracker for online multi-object tracking. They shared the feature maps for classification but still had a high computational complexity.
|
{
"abstract": [
"",
"We propose an online multi-target tracker that exploits both high- and low-confidence target detections in a Probability Hypothesis Density Particle Filter framework. High-confidence (strong) detections are used for label propagation and target initialization. Low-confidence (weak) detections only support the propagation of labels, i.e. tracking existing targets. Moreover, we perform data association just after the prediction stage thus avoiding the need for computationally expensive labeling procedures such as clustering. Finally, we perform sampling by considering the perspective distortion in the target observations. The tracker runs on average at 12 frames per second. Results show that our method outperforms alternative online trackers on the Multiple Object Tracking 2016 and 2015 benchmark datasets in terms tracking accuracy, false negatives and speed.",
"Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to the moment. It still remains a difficult problem in complex scenes, because of the large ambiguity in associating multiple objects in consecutive frames and the low discriminability between objects appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose a multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since the conventional appearance learning methods do not provide rich representation that can distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning for improving appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-arts batch and online tracking methods, and prove the effect and usefulness of the proposed methods for online multi-object tracking."
],
"cite_N": [
"@cite_18",
"@cite_12",
"@cite_20"
],
"mid": [
"",
"2547098537",
"2604679602"
]
}
|
REAL-TIME MULTIPLE PEOPLE TRACKING WITH DEEPLY LEARNED CANDIDATE SELECTION AND PERSON RE-IDENTIFICATION
|
Tracking multiple objects in a complex scene is a challenging problem in many video analysis and multimedia applications, such as visual surveillance, sport analysis, and autonomous driving. The objective of multi-object tracking is to estimate trajectories of objects in a specific category. Here we tackle the problem of people tracking by taking advantage of person re-identification.
Multi-object tracking benefits a lot from advances in object detection in the past decade. The popular tracking-by-detection methods apply the detector on each frame, and associate detection across frames to generate object trajectories. Both intra-category occlusion and unreliable detection are tremendous challenges in such a tracking framework [1,2]. Intra-category occlusion and similar appearances of objects can result in ambiguities in data association. Multiple cues, including motion, shape and object appearances, are fused to mitigate this problem [3,4]. On the other hand, detection results are not always reliable. Pose variation and occlusion in crowded scenes often cause detection failures such as false positives, missing detection, and non-accurate bounding. Some studies proposed to handle unreliable detection in a batch mode [2,5,6]. These methods address detection noise by introducing information from future frames: detection results in whole video frames or a temporal window are employed and linked to trajectories by solving a global optimization problem. Tracking in a batch mode is non-causal and not suitable for time-critical applications. In contrast to these works, we focus on the online multiple people tracking problem, using only the current and past frames. In order to handle unreliable detection in an online mode, our tracking framework optimally selects candidates from the outputs of both detection and tracks in each frame (as shown in Figure 1). In most of the existing tracking-by-detection methods, candidates to be associated with existing tracks are made up only of detection results. Yan et al. [4] proposed to treat the tracker and object detector as two independent identities, and to keep the results of both as candidates. They selected candidates based on hand-crafted features, e.g., color histogram, optical flow, and motion features. The intuition behind generating redundant candidates is that detection and tracks can complement each other in different scenarios. On the one hand, reliable predictions from the tracker can be used for short-term association in case of missing detection or non-accurate bounding. On the other hand, confident detection results are essential to prevent tracks from drifting to backgrounds in the long term. How to score the outputs of both detection and tracks in a unified way is still an open question.
Recently, deep neural networks, especially convolutional neural networks (CNN), have made great progress in the fields of computer vision and multimedia. In this paper, we take full advantage of deep neural networks to tackle unreliable detection and intra-category occlusion. Our contribution is threefold. First, we handle unreliable detection in online tracking by combining both detection and tracking results as candidates, and selecting optimal candidates based on deep neural networks. Second, we present a hierarchical data association strategy, which utilizes spatial information and deeply learned person re-identification (ReID) features. Third, we demonstrate real-time and state-of-the-art performance of our tracker on a widely used people tracking benchmark.
PROPOSED METHOD
Framework Overview
In this work, we extend traditional tracking-by-detection by collecting candidates from outputs of both detection and tracks. Our framework consists of two sequential tasks, that is, candidate selection and data association.
We first measure all the candidates using a unified scoring function. A discriminatively trained object classifier and a well-designed tracklet confidence are fused to formulate the scoring function, as described in Section 3.2 and Section 3.3. Non-maximal suppression (NMS) is subsequently performed with the estimated scores. After obtaining candidates without redundancy, we use both appearance representations and spatial information to hierarchically associate existing tracks with the selected candidates. Our appearance representations are deeply learned from person re-identification, as described in Section 3.4. Hierarchical data association is detailed in Section 3.5.
Real-Time Object Classification
Combining outputs of both detection and tracks results in an excessive amount of candidates. Our classifier shares most computations on the entire image by using a region-based fully convolutional neural network (R-FCN) [12]. It is thus much more efficient compared to classification on image patches cropped from heavily overlapped candidate regions. The comparison of the time consumption of these two methods can be found in Figure 3.
Our efficient classifier is illustrated in Figure 2. Given an image frame, score maps of the entire image are predicted using a fully convolutional neural network with an encoder-decoder architecture. The encoder part is a light-weight convolutional backbone for real-time performance, and we introduce the decoder part with up-sampling to increase the spatial resolution of the output score maps for later classification. Each candidate to be classified is defined as a region of interest (RoI) by x = (x_0, y_0, w, h), where (x_0, y_0) denotes the top-left point and w, h represent the width and height of the region. For computational efficiency, we expect the classification probability of each RoI to be directly voted by the shared score maps. A straightforward approach for voting is to construct foreground probabilities for all points on the image, and then calculate the average probability of points inside the RoI. However, this simple strategy loses the spatial information of objects. For instance, even if the RoI covers only a part of the object, a high confidence score can still be obtained.
In order to explicitly encode spatial information into the score maps, we employ the position-sensitive RoI pooling layer [12] and estimate the classification probability from k² position-sensitive score maps z. In particular, we split a RoI into k × k bins with a regular grid. Each bin has the same size (w/k) × (h/k) and represents a specific spatial location of the object. We extract the responses of the k × k bins from the k² score maps, each score map corresponding to exactly one bin. The final classification probability of a RoI x is formulated as:
p(y | z, x) = σ( (1 / (w·h)) · Σ_{i=1}^{k²} Σ_{(x,y) ∈ bin_i} z_i(x, y) )    (1)
where σ(x) = 1 / (1 + e^{−x}) is the sigmoid function, and z_i denotes the i-th score map.
During the training procedure, we randomly sample RoIs around the ground-truth bounding boxes as positive examples, and take the same number of RoIs from the background as negative examples. By training the network end-to-end, the output on top of the decoder part, that is, the k² score maps, learns to respond to specific spatial locations of the object. For example, if k = 3, we have 9 score maps responding to the top-left, top-center, top-right, ..., bottom-right of the object, respectively. In this way, the RoI pooling layer is sensitive to spatial positions and has a strong discriminative ability for object classification without using learnable parameters. Please note that the proposed neural network is trained only for candidate classification, not for bounding box regression.
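A toy NumPy sketch of the voting in Eq. (1), where each of the k² maps contributes only inside its own bin; the integer binning is a simplification of the real position-sensitive RoI pooling of R-FCN [12]:

import numpy as np

def classify_roi(score_maps, roi, k=7):
    # score_maps: array of shape (k*k, H, W), position-sensitive maps z_i
    # roi: (x0, y0, w, h) region of interest in score-map coordinates
    x0, y0, w, h = roi
    total = 0.0
    for i in range(k * k):
        bi, bj = divmod(i, k)  # bin row/column assigned to map i
        ys, ye = y0 + int(bi * h / k), y0 + int((bi + 1) * h / k)
        xs, xe = x0 + int(bj * w / k), x0 + int((bj + 1) * w / k)
        total += score_maps[i, ys:ye, xs:xe].sum()
    # Eq. (1): sigmoid of the mean vote over the RoI area
    return 1.0 / (1.0 + np.exp(-total / (w * h)))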
Tracklet Confidence and Scoring Function
Given a new frame, we estimate the new location of each existing track using the Kalman filter. These predictions are adopted to handle detection failures caused by varying visual properties of objects and occlusion in crowded scenes. But they are not suitable for long-term tracking. The accuracy of the Kalman filter could decrease if it is not updated by detection over a long time. Tracklet confidence is designed to measure the accuracy of the filter using temporal information.
A tracklet is generated through temporal association of candidates from consecutive frames. We can split a track into a set of tracklets, since a track can be interrupted and retrieved several times during its lifetime. Every time a track is retrieved from the lost state, the Kalman filter is reinitialized. Therefore, only the information of the last tracklet is utilized to formulate the confidence of the track. Here we define L_det as the number of detection results associated to the tracklet, and L_trk as the number of track predictions after the last detection is associated. The tracklet confidence is defined as:
s_trk = max(1 − log(1 + α · L_trk), 0) · 1(L_det ≥ 2)    (2)
where 1(·) is the indicator function that equals 1 if the input is true, and 0 otherwise. We require L_det ≥ 2 to construct a reasonable motion model from observed detection before the track is used as a candidate.
The unified scoring function for a candidate x is formulated by fusing the classification probability and the tracklet confidence:
s = p(y | z, x) · ( 1(x ∈ C_det) + s_trk · 1(x ∈ C_trk) )    (3)
Here we use C_det to denote the candidates from detection, C_trk for the candidates from tracks, and s_trk ∈ [0, 1] to punish candidates from uncertain tracks. Candidates for data association are finally selected based on the unified scores using non-maximal suppression, with a threshold τ_nms on the maximum intersection over union (IoU) and a threshold τ_s on the minimum score.
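Putting Eqs. (2) and (3) together, a sketch of candidate scoring and NMS-based selection; the decay rate α and the greedy NMS form are our assumptions, and the thresholds follow the τ symbols above:

import math
from dataclasses import dataclass

@dataclass
class Candidate:
    box: tuple            # (x0, y0, x1, y1)
    cls_prob: float       # p(y|z, x) from the R-FCN classifier
    from_detection: bool
    l_det: int = 0
    l_trk: int = 0

def tracklet_confidence(l_det, l_trk, alpha=0.05):
    # Eq. (2); alpha is an assumed decay rate, not restated in the paper.
    if l_det < 2:
        return 0.0
    return max(1.0 - math.log(1.0 + alpha * l_trk), 0.0)

def unified_score(c):
    # Eq. (3): detections keep p; track predictions are scaled by s_trk.
    if c.from_detection:
        return c.cls_prob
    return c.cls_prob * tracklet_confidence(c.l_det, c.l_trk)

def box_iou(a, b):
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def select_candidates(cands, tau_nms=0.3, tau_s=0.4):
    # Greedy NMS over the unified scores.
    kept = []
    for c in sorted(cands, key=unified_score, reverse=True):
        if unified_score(c) < tau_s:
            break
        if all(box_iou(c.box, k.box) < tau_nms for k in kept):
            kept.append(c)
    return kept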
Appearance Representation with ReID Features
The similarity function between candidates is the key component of data association. We argue that object appearance representations, deeply learned by a data-driven approach, outperform traditional hand-crafted features on the task of similarity estimation. For the purpose of learning the object appearance and similarity function, we employ a deep neural network to extract feature vectors from RGB images, and formulate the similarity using the distance between the obtained features.
We utilize the network architecture proposed in [13] and train the network on a combination of several large-scale person re-identification datasets. The network H_reid consists of the convolutional backbone from GoogLeNet [14] followed by K branches of part-aligned fully connected (FC) layers. We refer to [13] for more details on the network architecture. Given an RGB image I of a person, the appearance representation is formulated as f = H_reid(I). We directly use the Euclidean distance between the feature vectors to measure the distance d_ij of two images I_i and I_j. During the training procedure, images of identities in the training datasets are formed into a set of triplets T = {⟨I_i, I_j, I_k⟩}, where ⟨I_i, I_j⟩ is a positive pair from the same person, and ⟨I_i, I_k⟩ is a negative pair from two different people. Given N triplets, the loss function to be minimized is formulated as:
l_triplet = (1/N) · Σ_{⟨I_i, I_j, I_k⟩ ∈ T} max(d_ij − d_ik + m, 0)    (4)
where m > 0 is a predefined margin. We ignore triplets that are easy to handle, i.e. d_ik − d_ij > m, to enhance the discriminative ability of the learned feature representations.
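A PyTorch sketch of Eq. (4) over a batch of triplets; the margin value here is a placeholder, not the paper's setting:

import torch

def triplet_loss(f_anchor, f_pos, f_neg, m=0.2):
    # f_*: (N, d) ReID embeddings for anchor, positive, and negative images.
    # Euclidean distances d_ij (anchor-positive) and d_ik (anchor-negative);
    # triplets already satisfying d_ik - d_ij > m contribute zero loss.
    d_ij = torch.norm(f_anchor - f_pos, dim=1)
    d_ik = torch.norm(f_anchor - f_neg, dim=1)
    return torch.clamp(d_ij - d_ik + m, min=0).mean()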
Hierarchical Data Association
Predictions of tracks are utilized to handle missing detection that occurs in crowded scenes. Influenced by intra-category occlusion, these predictions may be involved with other objects.
To avoid taking other unwanted objects and backgrounds into appearance representations, we hierarchically associate tracks with different candidates using different features.
In particular, we first apply data association on candidates from detection, using appearance representations with a threshold τ_d on the maximum distance. Then, we associate the remaining candidates with unassociated tracks based on the IoU between candidates and tracks, with a threshold τ_iou. We only update the appearance representations of tracks when they are associated with detection; the update is conducted by saving the ReID features from the associated detection. Finally, new tracks are initialized from the remaining detection results. The proposed online tracking algorithm is detailed in Algorithm 1. With the hierarchical data association, we only need to extract ReID features for candidates from detection once per frame. Combining this with the previous efficient scoring function and tracklet confidence, our framework can run at real-time speed.
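To make the two-stage matching concrete, here is a sketch of the hierarchical association (complementing Algorithm 1 below); greedy matching stands in for the assignment step, whose exact form the paper does not restate, and box_iou is as in the candidate-selection sketch above:

import numpy as np

def greedy_match(cost, max_cost):
    # Greedily match the lowest-cost pairs; each row/column is used once.
    if cost.size == 0:
        return []
    matches, used_r, used_c = [], set(), set()
    for r, c in sorted(np.ndindex(*cost.shape), key=lambda rc: cost[rc]):
        if cost[r, c] > max_cost:
            break
        if r not in used_r and c not in used_c:
            matches.append((r, c)); used_r.add(r); used_c.add(c)
    return matches

def hierarchical_association(tracks, det_cands, trk_cands,
                             tau_d=0.4, tau_iou=0.3):
    # Assumed attributes: .feat is a saved ReID feature, .box a bounding box.
    # Stage 1: appearance (ReID feature distance) against detections.
    cost1 = np.array([[np.linalg.norm(t.feat - d.feat) for d in det_cands]
                      for t in tracks])
    m1 = greedy_match(cost1, tau_d)
    # Stage 2: IoU against track predictions, for still-unmatched tracks.
    rest = [i for i in range(len(tracks)) if i not in {r for r, _ in m1}]
    cost2 = np.array([[1.0 - box_iou(tracks[i].box, c.box) for c in trk_cands]
                      for i in rest])
    m2 = [(rest[r], c) for r, c in greedy_match(cost2, 1.0 - tau_iou)]
    return m1, m2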
Algorithm 1: The proposed online tracking algorithm.
Output: Tracks T of the video
Initialization: T ← ∅; appearance features of tracks F_trk ← ∅
foreach frame f_k in v do
    Estimate score maps z from f_k using R-FCN
    /* collect candidates */
    C_det ← D_k; C_trk ← ∅
    foreach t in T do
        Predict the new location x* of t using the Kalman filter
        C_trk ← C_trk ∪ {x*}
    end
    /* select candidates */
    C ← C_det ∪ C_trk
    Score C with Eq. (3) and filter it by non-maximal suppression
    F_det ← ∅
    foreach x in C_det do
        I_x ← Crop(f_k, x)
        F_det ← F_det ∪ H_reid(I_x)
    end
    /* hierarchical data association */
    Associate T and C_det using distances between F_trk and F_det
    Associate remaining tracks and candidates using IoU
    F_trk ← F_trk ∪ F_det
    /* initialize new tracks */
    C_remain ← remaining candidates from C_det
    F_remain ← features of C_remain
    T, F_trk ← T ∪ C_remain, F_trk ∪ F_remain
end

EXPERIMENTS

Experiment Setup

To evaluate the performance of the proposed online tracking method, we conduct extensive experiments on the MOT16 dataset [15], which is a widely used benchmark for multiple people tracking. This dataset contains a training set and a test set, each with 7 challenging video sequences filmed in unconstrained environments. We form a validation set with 5 video sequences from the training set to analyze the contribution of each component in our framework. Afterwards, we submit the tracking result on the test set to the benchmark, and compare it with state-of-the-art methods on the benchmark.

Implementation details. We employ SqueezeNet [16] as the backbone of R-FCN for real-time performance. Our fully convolutional network, consisting of SqueezeNet and the decoder part, costs only 8 ms to estimate score maps for an input image of size 1152x640 on a GTX1080Ti GPU. We set k = 7 for the position-sensitive score maps, and train the network using the RMSprop optimizer with a learning rate of 1e-4 and a batch size of 32 for 20k iterations. The training data for person classification is collected from MS COCO [17] and the remaining two training video sequences. We set τ_nms = 0.3 and τ_s = 0.4 for candidate selection. As for the ReID network, we train it on a combination of three large-scale person re-identification datasets,
i.e. Market1501 [18], CUHK01 and CUHK03 [19], to enhance the generalization ability for tracking. We set τ_d = 0.4 and τ_iou = 0.3 for hierarchical data association. The following experiments are based on the same hyper-parameters.
Evaluation metrics. In order to measure accuracies of bounding boxes and identities at the same time, we adopt multiple metrics used in the benchmark to evaluate the proposed method, including multiple object tracking accuracy (MOTA) [20], false alarm per frame (FAF), the number of mostly tracked targets (MT, > 80% recovered), the number of mostly lost targets (ML, < 20% recovered) [21], false positives (FP), false negatives (FN), identity switches (IDS), identification recall (IDR), identification F1 score (IDF1) [22], and processing speed (frames per second, FPS).
Analysis on Validation Set
Contribution of each component. In order to demonstrate the effectiveness of the proposed method, we investigate the contribution of each component of our framework in Table 1. The baseline method predicts the new location of each track using the Kalman filter, and then associates tracks with detection based on the IoU. Using the classification probability to select candidates from both detection and tracks improves the MOTA by 4.6% compared to the baseline method. By punishing candidates from uncertain tracks, the combination of the tracklet confidence with the classification probability further improves the MOTA and reduces false positives, as expected in Section 3.3. On the other hand, introducing appearance representations based on ReID features yields a significant improvement in identification performance (evaluated by IDF1 and IDS). Our proposed method, combining the unified scoring function and ReID features, has the best results on all metrics.
Comparison with different appearance features. As shown in Table 2, we compare the representations learned by the data-driven approach detailed in Section 3.4 with two typical hand-crafted features, i.e. the color histogram and the histogram of oriented gradients (HOG). Following the fixed part model widely used for appearance descriptors [23], we divide each image of a person into six horizontal stripes of equal size for the color histogram. The color histogram of each stripe is built from the HSV color space with 125 bins. We normalize both the color histogram and the HOG features by the L2 norm, and formulate the similarity using the cosine similarity function. As shown in the table, our appearance representation outperforms traditional hand-crafted features by a large margin in terms of IDF1 and IDS, in spite of a shorter feature vector compared to the other methods. The evaluation result on the validation set verifies the effectiveness of our data-driven approach for multiple people tracking. The proposed tracking framework can easily be transferred to other categories by learning the appearance representation from corresponding datasets, such as vehicle re-identification [24].
Evaluation on Test Set
We first analyze the time consumption of the proposed tracking framework on the MOT16-03 sequence. As shown in Figure 3, the proposed method is much more time-efficient thanks to sharing computations on the entire image.
We report evaluation results on the test set of MOT16, and compare our tracker with other offline and online trackers in Table 3. Note that tracking performance depends heavily on the quality of detection. For a fair comparison, all the trackers in the table use the same detection provided by the benchmark. As shown in the table, our tracker runs at real-time speed and outperforms existing online trackers on most of the metrics, especially IDF1, IDR, MT, and ML. The identification ability is enhanced by the deeply learned appearance representation. The improvement on MT and ML demonstrates the advantage of our unified scoring function for candidate selection: selecting candidates from both detection and tracks indeed reduces tracking failures caused by missing detection. Moreover, our online tracker has much lower computational complexity and is about 5–20 times faster than most of the existing methods.
CONCLUSION
In this paper, we propose an online multiple people tracking framework which takes full advantage of recent deep neural networks. We tackle unreliable detection by selecting candidates from the outputs of both detection and tracks. The scoring function for candidate selection is formulated by an efficient R-FCN, which shares computations on the entire image. Moreover, we improve the identification ability when coping with intra-category occlusion by introducing ReID features for data association. ReID features trained by a data-driven approach outperform traditional hand-crafted features by a large margin. The proposed tracker achieves real-time and state-of-the-art performance on the MOT16 benchmark. A future study is planned to further improve efficiency by sharing convolutional layers between classification and appearance extraction.
| 3,173 |
1809.04649
|
2950839960
|
We present Semantic WordRank (SWR), an unsupervised method for generating an extractive summary of a single document. Built on a weighted word graph with semantic and co-occurrence edges, SWR scores sentences using an article-structure-biased PageRank algorithm with a Softplus function adjustment, and promotes topic diversity using spectral subtopic clustering under the Word-Movers-Distance metric. We evaluate SWR on the DUC-02 and SummBank datasets and show that SWR produces better summaries than the state-of-the-art algorithms over DUC-02 under common ROUGE measures. We then show that, under the same measures over SummBank, SWR outperforms each of the three human annotators (aka. judges) and compares favorably with the combined performance of all judges.
|
UniformLink @cite_6 builds a sentence graph on a set of similar documents, where a sentence's score is computed from both its within-document score and its cross-document score. URank @cite_24 uses a unified graph-based framework to study both single-document and multi-document summarization.
|
{
"abstract": [
"Single-document summarization and multi-document summarization are very closely related tasks and they have been widely investigated independently. This paper examines the mutual influences between the two tasks and proposes a novel unified approach to simultaneous single-document and multi-document summarizations. The mutual influences between the two tasks are incorporated into a graph model and the ranking scores of a sentence for the two tasks can be obtained in a unified ranking process. Experimental results on the benchmark DUC datasets demonstrate the effectiveness of the proposed approach for both single-document and multi-document summarizations.",
"Document summarization and keyphrase extraction are two related tasks in the IR and NLP fields, and both of them aim at extracting condensed representations from a single text document. Existing methods for single document summarization and keyphrase extraction usually make use of only the information contained in the specified document. This article proposes using a small number of nearest neighbor documents to improve document summarization and keyphrase extraction for the specified document, under the assumption that the neighbor documents could provide additional knowledge and more clues. The specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results on the Document Understanding Conference (DUC) benchmark datasets demonstrate the effectiveness and robustness of our proposed approaches. The cross-document sentence relationships in the expanded document set are validated to be beneficial to single document summarization, and the word cooccurrence relationships in the neighbor documents are validated to be very helpful to single document keyphrase extraction."
],
"cite_N": [
"@cite_24",
"@cite_6"
],
"mid": [
"4742464",
"2149795409"
]
}
| 0 |
||
1809.04649
|
2950839960
|
We present Semantic WordRank (SWR), an unsupervised method for generating an extractive summary of a single document. Built on a weighted word graph with semantic and co-occurrence edges, SWR scores sentences using an article-structure-biased PageRank algorithm with a Softplus function adjustment, and promotes topic diversity using spectral subtopic clustering under the Word-Movers-Distance metric. We evaluate SWR on the DUC-02 and SummBank datasets and show that SWR produces better summaries than the state-of-the-art algorithms over DUC-02 under common ROUGE measures. We then show that, under the same measures over SummBank, SWR outperforms each of the three human annotators (aka. judges) and compares favorably with the combined performance of all judges.
|
@math @cite_1, as well as @math @cite_2, represent a document with a bipartite graph, and a different algorithm, Hyperlink-Induced Topic Search (HITS) @cite_23, is used to score sentences. Both treat summarization as an ILP problem that simultaneously maximizes sentence importance, non-redundancy, and coherence. However, since ILP is NP-hard, obtaining an exact solution to an ILP problem is intractable.
|
{
"abstract": [
"We propose a graph-based method for extractive single-document summarization which considers importance, non-redundancy and local coherence simultaneously. We represent input documents by means of a bipartite graph consisting of sentence and entity nodes. We rank sentences on the basis of importance by applying a graph-based ranking algorithm to this graph and ensure nonredundancy and local coherence of the summary by means of an optimization step. Our graph based method is applied to scientific articles from the journal PLOS Medicine. We use human judgements to evaluate the coherence of our summaries. We compare ROUGE scores and human judgements for coherence of different systems on scientific articles. Our method performs considerably better than other systems on this data. Also, our graph-based summarization technique achieves state-of-the-art results on DUC 2002 data. Incorporating our local coherence measure always achieves the best results.",
"The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of context on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of “authorative” information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of “hub pages” that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristrics for link-based analysis.",
"We present an approach for extractive single-document summarization. Our approach is based on a weighted graphical representation of documents obtained by topic modeling. We optimize importance, coherence and non-redundancy simultaneously using ILP. We compare ROUGE scores of our system with state-of-the-art results on scientific articles from PLOS Medicine and on DUC 2002 data. Human judges evaluate the coherence of summaries generated by our system in comparision to two baselines. Our approach obtains competitive performance."
],
"cite_N": [
"@cite_1",
"@cite_23",
"@cite_2"
],
"mid": [
"2394938058",
"2138621811",
"2250968833"
]
}
| 0 |
||
1809.04649
|
2950839960
|
We present Semantic WordRank (SWR), an unsupervised method for generating an extractive summary of a single document. Built on a weighted word graph with semantic and co-occurrence edges, SWR scores sentences using an article-structure-biased PageRank algorithm with a Softplus function adjustment, and promotes topic diversity using spectral subtopic clustering under the Word-Movers-Distance metric. We evaluate SWR on the DUC-02 and SummBank datasets and show that SWR produces better summaries than the state-of-the-art algorithms over DUC-02 under common ROUGE measures. We then show that, under the same measures over SummBank, SWR outperforms each of the three human annotators (aka. judges) and compares favorably with the combined performance of all judges.
|
Submodularity optimization @cite_19 and Latent Semantic Analysis @cite_18 are two other widely used unsupervised techniques for extractive summarization.
|
{
"abstract": [
"We design a class of submodular functions meant for document summarization tasks. These functions each combine two terms, one which encourages the summary to be representative of the corpus, and the other which positively rewards diversity. Critically, our functions are monotone nondecreasing and submodular, which means that an efficient scalable greedy optimization scheme has a constant factor guarantee of optimality. When evaluated on DUC 2004-2007 corpora, we obtain better than existing state-of-art results in both generic and query-focused document summarization. Lastly, we show that several well-established methods for document summarization correspond, in fact, to submodular function optimization, adding further evidence that submodular functions are a natural fit for document summarization.",
"In this paper, we propose two generic text summarization methods that create text summaries by ranking and extracting sentences from the original documents. The first method uses standard IR methods to rank sentence relevances, while the second method uses the latent semantic analysis technique to identify semantically important sentences, for summary creations. Both methods strive to select sentences that are highly ranked and different from each other. This is an attempt to create a summary with a wider coverage of the document's main content and less redundancy. Performance evaluations on the two summarization methods are conducted by comparing their summarization outputs with the manual summaries generated by three independent human evaluators. The evaluations also study the influence of different VSM weighting schemes on the text summarization performances. Finally, the causes of the large disparities in the evaluators' manual summarization results are investigated, and discussions on human text summarization patterns are presented."
],
"cite_N": [
"@cite_19",
"@cite_18"
],
"mid": [
"2144933361",
"1967082914"
]
}
| 0 |
||
1809.04649
|
2950839960
|
We present Semantic WordRank (SWR), an unsupervised method for generating an extractive summary of a single document. Built on a weighted word graph with semantic and co-occurrence edges, SWR scores sentences using an article-structure-biased PageRank algorithm with a Softplus function adjustment, and promotes topic diversity using spectral subtopic clustering under the Word-Movers-Distance metric. We evaluate SWR on the DUC-02 and SummBank datasets and show that SWR produces better summaries than the state-of-the-art algorithms over DUC-02 under common ROUGE measures. We then show that, under the same measures over SummBank, SWR outperforms each of the three human annotators (aka. judges) and compares favorably with the combined performance of all judges.
|
Deep learning methods, able to learn sentence or document representations automatically, have recently been used to score sentences. For example, R2N2 @cite_12 uses a recursive neural network for both word-level and sentence-level scoring, followed by an ILP optimization strategy for selecting sentences. CNN-W2V @cite_21 is another example, which modifies a convolutional neural network (CNN) model for sentence classification @cite_7 to rank sentences. SummaRuNNer @cite_22, on the other hand, treats summarization as sequence classification and uses a two-layer bi-directional recurrent neural network (RNN) model to extract sentences, where the first-layer RNN operates over words and the second layer over sentences. Unlike unsupervised methods, these state-of-the-art deep learning approaches require a larger dataset and a significantly longer time to train a model, yet achieve a much lower ROUGE-1 score when evaluated on the DUC dataset.
|
{
"abstract": [
"We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents and show that it achieves performance better than or comparable to state-of-the-art. Our model has the additional advantage of being very interpretable, since it allows visualization of its predictions broken up by abstract features such as information content, salience and novelty. Another novel contribution of our work is abstractive training of our extractive model that can train on human generated reference summaries alone, eliminating the need for sentence-level extractive labels.",
"Extractive summarization aims to generate a summary by ranking sentences, whose performance relies heavily on the quality of sentence features. In this paper, a document summarization framework based on convolutional neural networks is successfully developed to learn sentence features and perform sentence ranking jointly. We adapt the original CNN model to address a regression process for sentence ranking. Pre-trained word vectors are used to enhance the performance of our model. We evaluate our proposed method on the DUC 2002 and 2004 datasets covering single and multi-document summarization tasks respectively. The proposed system achieves competitive or even better performance compared with state-of-the-art document summarization systems.",
"We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.",
""
],
"cite_N": [
"@cite_22",
"@cite_21",
"@cite_7",
"@cite_12"
],
"mid": [
"2952138241",
"2568676725",
"2949541494",
""
]
}
| 0 |
||
1809.03568
|
2889866549
|
Although neural network approaches achieve remarkable success on a variety of NLP tasks, many of them struggle to answer questions that require commonsense knowledge. We believe the main reason is the lack of commonsense connections between concepts. To remedy this, we provide a simple and effective method that leverages external commonsense knowledge base such as ConceptNet. We pre-train direct and indirect relational functions between concepts, and show that these pre-trained functions could be easily added to existing neural network models. Results show that incorporating commonsense-based function improves the baseline on three question answering tasks that require commonsense reasoning. Further analysis shows that our system discovers and leverages useful evidence from an external commonsense knowledge base, which is missing in existing neural network models and help derive the correct answer.
|
Our work relates to model pretraining in the NLP and computer vision fields @cite_15. In the NLP community, work on model pretraining can be divided into unstructured-text-based and structured-knowledge-based approaches. Both word embedding learning algorithms @cite_23 and contextual embedding learning algorithms @cite_19 @cite_16 belong to the text-based direction. Compared with these methods, which aim to learn a representation for a continuous sequence of words, our goal is to model concept relatedness using the graph structure of the knowledge base. Previous works on knowledge-based pretraining are typically validated on knowledge base completion or link prediction tasks @cite_24 @cite_11. Our work belongs to this second line. We pre-train models from a commonsense knowledge base and apply the approach to the question answering task. We believe that combining both structured knowledge graphs and unstructured texts for model pretraining is very attractive, and we leave this for future work.
|
{
"abstract": [
"We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.",
"We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.",
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.",
"State-of-the-art visual perception models for a wide range of tasks rely on supervised pretraining. ImageNet classification is the de facto pretraining task for these models. Yet, ImageNet is now nearly ten years old and is by modern standards \"small\". Even so, relatively little is known about the behavior of pretraining with datasets that are multiple orders of magnitude larger. The reasons are obvious: such datasets are difficult to collect and annotate. In this paper, we present a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images. Our experiments demonstrate that training for large-scale hashtag prediction leads to excellent results. We show improvements on several image classification and object detection tasks, and report the highest ImageNet-1k single-crop, top-1 accuracy to date: 85.4 (97.6 top-5). We also perform extensive experiments that provide novel empirical data on the relationship between large-scale pretraining and transfer learning performance.",
"We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7 (4.6 absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).",
"Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2 and 90.0 , respectively."
],
"cite_N": [
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_15",
"@cite_16",
"@cite_11"
],
"mid": [
"2127795553",
"2787560479",
"2250539671",
"2799269579",
"2896457183",
"2127426251"
]
}
|
Improving Question Answering by Commonsense-Based Pre-Training
|
Commonsense reasoning is a major challenge for question answering (Levesque, Davis, and Morgenstern 2011; Clark et al. 2018; Ostermann et al. 2018; Boratko et al. 2018). Take Figure 1 as an example: answering both questions requires a natural language understanding system that can reason based on commonsense knowledge about the world.
[Figure 1: Examples of questions that require commonsense knowledge and reasoning.]
Although neural network approaches have achieved promising performance when supplied with a large amount of supervised training instances, even surpassing human-level exact match accuracy on the Stanford Question Answering Dataset (SQuAD) benchmark (Rajpurkar et al. 2016), it has been shown that existing systems lack true language understanding and reasoning capabilities (Jia and Liang 2017), which are crucial for commonsense reasoning. Moreover, although it is easy for humans to answer the aforementioned questions based on their knowledge about the world, it remains a great challenge for machines when there is limited training data.
In this paper, we leverage external commonsense knowledge, such as ConceptNet (Speer and Havasi 2012), to improve the commonsense reasoning capability of a question answering (QA) system. We believe that a desirable way is to pre-train a generic model from external commonsense knowledge about the world, which has the following advantages. First, such a model has a larger coverage of concepts/entities and can access rich contexts from the relational knowledge graph. Second, its commonsense reasoning ability is not limited by the number of training instances or the coverage of reasoning types in the end tasks. Third, it is convenient to build a hybrid system that preserves the semantic matching ability of an existing QA system, which might be a neural network-based model, and further integrates the generic model to improve the system's capability of commonsense reasoning.
We believe that the main reason why the majority of existing methods lack commonsense reasoning ability is the absence of connections between concepts. These connections can be divided into direct and indirect ones. Below is an example sampled from ConceptNet. In this case, {"driving", "a license"} forms a direct connection whose relation is "HasPrerequisite". Similarly, {"driving", "road"} also forms a direct connection. Moreover, there are also indirect connections, such as {"a car", "getting to a destination"}, which are connected by the pivot concept "driving". Based on this, one can learn two functions to measure the direct and indirect connections between every pair of concepts. These functions can easily be combined with an existing QA system to make decisions.
We take two question answering tasks (Clark et al. 2018; Ostermann et al. 2018) that require commonsense reasoning as testbeds. These tasks take a question and optionally a context as input, and select an answer from a set of candidate answers. We believe that understanding and answering the question requires knowledge of both words and the world (Hirsch 2003). Thus, we implement document-based neural network baselines, and use the exact same procedure to enhance the baseline systems with our commonsense-based pretrained models. Results show that incorporating the pretrained models brings improvements on these two tasks and improves the model's ability to discover useful evidence from an external commonsense knowledge base.
Tasks and Datasets
In this work, we focus on integrating commonsense knowledge as a source of supportive information into the question answering task. To verify the effectiveness of our approach, we use two multiple-choice question answering tasks that require commonsense reasoning as our testbeds. In this section, we describe task definitions and the datasets coupled with two tasks.
Given a question of length M and optionally a supporting passage of length N, both tasks are to predict the correct answer from a set of candidate answers. The difference between the tasks is the definition of the supporting passage, which will be described later in this section. Systems are expected to select the correct answer from multiple candidates by reasoning over the question and the supporting passage. Following previous studies, we regard the problem as a ranking task. At test time, the model returns the answer with the highest score as its prediction.
The first task comes from SemEval 2018 Task 11 (Ostermann et al. 2018), which aims to evaluate a system's ability to perform commonsense reasoning in question answering. The dataset describes events about daily activities. For each question, the supporting passage is a specific document given as part of the input, and the number of candidate answers is two. Answering a substantial number of the questions in this dataset requires inference from commonsense knowledge of diverse scenarios, beyond the facts explicitly mentioned in the document.
The second task we focus on is ARC, short for AI2 Reasoning Challenge, proposed by Clark et al. (2018). The ARC dataset consists of a collection of scientific questions and a large scientific text corpus containing a large number of science facts. Each question has multiple candidate answers (mostly 4-way multiple choice). The dataset is separated into an Easy Set and a Challenge Set. The Challenge Set contains only difficult, grade-school questions, i.e., questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm, which require strong commonsense knowledge or other reasoning procedures (Boratko et al. 2018). Figure 1 shows two examples that need to be solved with common sense. We only use the Challenge Set in our experiments.
Commonsense Knowledge
This section describes the commonsense knowledge base we investigate in our experiments. We use ConceptNet 5 (Speer and Havasi 2012), one of the most widely used commonsense knowledge bases. Our approach is generic and could also be applied to other commonsense knowledge bases such as WebChild (Tandon, de Melo, and Weikum 2017), which we leave as future work. ConceptNet is a semantic network that represents a large set of words and phrases and the commonsense relationships between them. It contains 657,637 instances and 39 types of relationships. Each instance in ConceptNet can be generally described as a triple r_i = (subject, relation, object). For example, the "IsA" relation (e.g. "car", "IsA", "vehicle") means that "XX is a kind of YY"; the "Causes" relation (e.g. "car", "Causes", "pollution") means that "the effect of XX is YY"; the "CapableOf" relation (e.g. "car", "CapableOf", "go fast") means that "XX can YY", etc. More relations and explanations can be found in Speer and Havasi (2012).
Approach Overview
In this section, we give an overview of our framework to show our basic idea of solving commonsense reasoning problem. Details of each component will be described in the following sections.
Our framework selects the candidate answer with the highest score as the final prediction. We therefore tackle the problem by designing a scoring function that captures both the evidence mentioned in the passage and the evidence retrieved from the commonsense knowledge base.
An overview of the QA system is given in Figure 3. We define the scoring function $f(a_i)$ to calculate the score of a candidate answer $a_i$ as the sum of a document-based scoring function $f_{doc}(a_i)$ and a commonsense-based scoring function $f_{cs}(a_i)$:

$$f(a_i) = \alpha f_{doc}(a_i) + \beta f_{cs}(a_i) \qquad (1)$$
The calculation of the final score would consider the given passage, the given question, and a set of commonsense knowledge related to this instance.
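To make Eq. (1) concrete, below is a minimal Python sketch of the candidate ranking; the interpolation weights alpha and beta, the function names and the toy scores are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of Eq. (1); alpha, beta and all inputs are hypothetical.
def combined_score(f_doc_score, f_cs_score, alpha=1.0, beta=1.0):
    """f(a_i) = alpha * f_doc(a_i) + beta * f_cs(a_i)."""
    return alpha * f_doc_score + beta * f_cs_score

def predict(candidates, doc_scores, cs_scores, alpha=1.0, beta=1.0):
    # Rank all candidate answers and return the highest-scoring one.
    best = max(zip(candidates, doc_scores, cs_scores),
               key=lambda t: combined_score(t[1], t[2], alpha, beta))
    return best[0]

# Example: three candidate answers with toy scores.
print(predict(["A", "B", "C"], [0.2, 0.5, 0.1], [0.3, 0.1, 0.9]))  # -> "C"
```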
In the next section we detail the design and mathematical formulation of our commonsense-based scoring function; the document-based model is introduced in the section after that.
[Figure 2: A ConceptNet subgraph around the pivot concept "driving", linking concepts such as "a license", "a car", "being awake", "changing your location", "getting to a destination", "road" and "a lane" via relations like RelatedTo and UsedFor; shown together with the example passage "I was finally able to get my driving permit and it was time for my first driving lesson ..." and the question "Why did they take the driving lesson" with its candidate answers.]
Commonsense-based Model
In this section, we first describe how to pre-train commonsense-based functions that capture the semantic relationships between two concepts. A graph neural network (Scarselli et al. 2009) is used to integrate context from the graph structure of an external commonsense knowledge base. Afterwards, we present how to use the pretrained functions to calculate the relevance score between two pieces of text, such as a question sentence and a candidate answer sentence.
We model both direct and indirect relations between two concepts from the commonsense KB, both of which are helpful when the connection between two sources (e.g. a question and a candidate answer) is missing merely based on the word utterances. Take the direct relation involved in Figure 4 as an example ("Why does a plastic rod have a negative charge after being rubbed with a piece of fur"). If a model is given the evidence from ConceptNet that the concept "electrons" and the concept "negative charge" have a direct relation, it would be more confident in distinguishing between (B, D) and (A, C), and thus has a larger probability of obtaining the correct answer (D). Therefore, it is desirable to model the relevance between two concepts. Moreover, ConceptNet cannot cover all the concepts which potentially have direct relations, so we need to model the direct relation for every pair of concepts. Similarly, an indirect relation also provides strong evidence for making a prediction. As shown in the example of Fig. 2, the concept "a car" has an indirect relation to the concept "getting to a destination", both of which have a direct connection to the pivot concept "driving". With access to this information, a model would give a higher score to the answer containing "car" when questioned "how did someone get to the destination". Therefore, we model the commonsense-based relation between two concepts $c_1$ and $c_2$ as follows, where $\odot$ denotes element-wise multiplication and $\mathrm{Enc}(c)$ stands for an encoder that represents a concept $c$ with a continuous vector.
$$f_{cs}(c_1, c_2) = \mathrm{Enc}(c_1) \odot \mathrm{Enc}(c_2) \qquad (2)$$
Specifically, we represent a concept with two types of information, namely the words it contains and the neighbors connected to it in the structural knowledge graph. From the first aspect, since each concept might consist of a sequence of words, we encode it with a bidirectional LSTM (Hochreiter and Schmidhuber 1997) over GloVe word vectors (Pennington, Socher, and Manning 2014), where the concatenation of the hidden states at both ends is used as the representation. We denote it as $h_w(c)$.
$$h_w(c) = \mathrm{BiLSTM}(\mathrm{Emb}(c)) \qquad (3)$$
From the second aspect, we represent each concept based on the representations of its neighbors and the relations that connect them. We take inspiration from graph neural networks (Scarselli et al. 2009). We regard a relation that connects two concepts as a compositional modifier that modifies the meaning of the neighboring concept, and use matrix-vector multiplication as the composition function (Mitchell and Lapata 2010). We denote the neighbor-based representation of a concept $c$ as $h_n(c)$, which is calculated as follows, where $r(c, c')$ is the specific relation between two concepts, $NBR(c)$ stands for the set of neighbors of the concept $c$, and $W$ and $b$ are model parameters.
$$h_n(c) = \sum_{c' \in NBR(c)} \big( W_{r(c,c')}\, h_w(c') + b_{r(c,c')} \big) \qquad (4)$$
The final representation of a concept c is the concatenation of both representations, namely
$\mathrm{Enc}(c) = [h_w(c); h_n(c)]$.
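The following sketch illustrates how Eqs. (2)-(4) could be realized, assuming PyTorch; the dimensions, the initialization, and the reduction of the element-wise product to a scalar score are our assumptions rather than the paper's exact choices.

```python
import torch
import torch.nn as nn

# Sketch of the concept encoder (Eqs. 2-4); all hyperparameters illustrative.
class ConceptEncoder(nn.Module):
    def __init__(self, vocab_size, num_relations, emb_dim=100, hid_dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)  # e.g. GloVe-initialized
        self.bilstm = nn.LSTM(emb_dim, hid_dim,
                              bidirectional=True, batch_first=True)
        # One (W_r, b_r) pair per relation type, as in Eq. (4).
        self.W = nn.Parameter(torch.randn(num_relations,
                                          2 * hid_dim, 2 * hid_dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(num_relations, 2 * hid_dim))

    def word_repr(self, token_ids):
        # h_w(c): concatenation of the final forward/backward states (Eq. 3).
        _, (h, _) = self.bilstm(self.emb(token_ids.unsqueeze(0)))
        return torch.cat([h[0, 0], h[1, 0]])

    def encode(self, token_ids, neighbors):
        # neighbors: list of (relation_id, neighbor_token_ids) pairs.
        h_w = self.word_repr(token_ids)
        h_n = torch.zeros_like(h_w)
        for r, n_ids in neighbors:           # Eq. (4): sum over neighbors
            h_n = h_n + self.W[r] @ self.word_repr(n_ids) + self.b[r]
        return torch.cat([h_w, h_n])         # Enc(c) = [h_w(c); h_n(c)]

def f_cs(enc1, enc2):
    # Eq. (2), with the element-wise product summed into a scalar score
    # (the scalar reduction is an assumption of this sketch).
    return (enc1 * enc2).sum()
```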
We use a standard ranking-based loss function to train the parameters, which is given in Equation 5.
$$l(c_1, c_2, c') = \max\big(0,\ f_{cs}(c_1, c') - f_{cs}(c_1, c_2) + mgn\big) \qquad (5)$$
In this equation, $c_1$ and $c_2$ form a positive instance, meaning that they have a relationship with each other, while $c_1$ and $c'$ form a negative instance. $mgn$ is the margin, set to 0.1 in the experiments. We can easily learn two functions modeling direct and indirect relations between two concepts by using different definitions of what a positive instance is, and accordingly different strategies to sample the training instances. For the direct relation, we take directly adjacent entity pairs in the knowledge graph as positive examples, and randomly select entity pairs that have no direct relationship as negative examples. For the indirect relation, we select entity pairs that have a common neighbor as positive instances and randomly select an equal number of entity pairs that have no one-hop or two-hop connection as negative instances. We denote the direct-relation based function as $f_{cs}^{dir}(c_1, c_2)$ and the indirect-relation based function as $f_{cs}^{ind}(c_1, c_2)$. The final commonsense-based score in Equation 1 is calculated using one of these two functions, or both of them through a weighted sum. We show results under different settings in the experiment section.
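A small sketch of the sampling and the margin loss of Eq. (5); the graph data structure, the rejection-sampling strategy and all names are hypothetical illustrations of the procedure described above.

```python
import random

def margin_loss(score_pos, score_neg, mgn=0.1):
    # Eq. (5): hinge loss pushing positives above negatives by the margin.
    return max(0.0, score_neg - score_pos + mgn)

def sample_negative(graph, concepts, c1, hops=1):
    # Reject any candidate within `hops` of c1 (1: direct, 2: indirect).
    forbidden = set(graph.get(c1, set()))
    if hops == 2:
        for n in list(forbidden):
            forbidden |= graph.get(n, set())
    while True:
        c = random.choice(concepts)
        if c != c1 and c not in forbidden:
            return c

graph = {"driving": {"a license", "road"}, "a license": {"driving"},
         "road": {"driving"}, "pollution": set()}
concepts = list(graph)
neg = sample_negative(graph, concepts, "driving", hops=1)  # e.g. "pollution"
```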
We have detailed the commonsense-based functions measuring the direct and indirect connection of each pair of concepts. Here, we present how to calculate the commonsense-based score of a question sentence and a candidate answer sentence. In our experiments, we retrieve commonsense facts from ConceptNet (Speer and Havasi 2012). As described above, each fact from ConceptNet can be represented as a triple, namely c = (subject, relation, object). For each sentence (or paragraph), we retrieve a set of facts from ConceptNet. Specifically, we first extract the set of n-grams from each sentence; we use {1, 2, 3}-grams in the search process. We then save the commonsense facts from ConceptNet that contain one of the extracted n-grams. We denote the retrieved facts for a sentence s as $E_s$.
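The n-gram based fact retrieval can be sketched as follows; the lowercase matching and the toy triples are assumptions for illustration, not the authors' exact matching rule.

```python
# Extract {1,2,3}-grams from a sentence and keep ConceptNet-style triples
# whose subject or object matches one of them.
def ngrams(tokens, n_max=3):
    return {" ".join(tokens[i:i + n])
            for n in range(1, n_max + 1)
            for i in range(len(tokens) - n + 1)}

def retrieve_facts(tokens, triples):
    grams = ngrams([t.lower() for t in tokens])
    return [(s, r, o) for (s, r, o) in triples
            if s.lower() in grams or o.lower() in grams]

triples = [("driving", "HasPrerequisite", "a license"),
           ("car", "UsedFor", "driving")]
print(retrieve_facts("Why did they take the driving lesson".split(), triples))
```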
Suppose we have obtained commonsense facts for a question sentence and a candidate answer, respectively; let us denote the outputs as $E_1$ and $E_2$. We calculate the final score with the following formula. The intuition is to select the most relevant concept in $E_2$ for each concept in $E_1$, and then aggregate all these scores by averaging.
$$f_{cs}(a_i) = \frac{1}{|E_1|} \sum_{x \in E_1} \max_{y \in E_2} f_{cs}(x, y) \qquad (6)$$
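Eq. (6) translates almost directly into code; `f_cs` here is any scalar relevance function, and the toy call uses a hypothetical character-overlap stand-in.

```python
# Direct transcription of Eq. (6).
def sentence_pair_score(E1, E2, f_cs):
    if not E1 or not E2:
        return 0.0
    return sum(max(f_cs(x, y) for y in E2) for x in E1) / len(E1)

toy = sentence_pair_score(["driving"], ["a license", "road"],
                          lambda x, y: float(len(set(x) & set(y))))
```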
In the experiments on the ARC and SemEval datasets, we also apply this scoring function to a paragraph and candidate answer pair, where $E_1$ and $E_2$ come from the supporting paragraph and the answer sentence, respectively. Furthermore, we calculate an additional $f_{cs}(a_i)$ score for the answer-paragraph pair in the same way. For a paragraph-question pair, in order to guarantee relevance to the candidate answer sentence, we filter out concepts from $E_1$ or $E_2$ if they are not contained in the concepts extracted from the candidate answer.
Document-based Model
In this section, we describe the document-based models, which are used as baseline methods in the two tasks and are further combined with the commonsense-based models described in the previous section to make the final prediction. We use state-of-the-art document-based models on these tasks to verify whether a strong baseline model can benefit from our pre-trained commonsense-based models. We use TriAN (Wang et al. 2018), the top-performing system in the SemEval evaluation (Ostermann et al. 2018), as the document-based model for the SemEval dataset. Since the inputs and outputs of the ARC and SemEval datasets are consistent, we also apply TriAN to the ARC dataset. We find that TriAN performs comparably to a recent state-of-the-art system (Zhang et al. 2018), therefore we use it as our document-based model for ARC as well. To make this paper self-contained, we briefly describe the TriAN model here; please refer to the original articles for more details.
In the ARC and SemEval datasets, the task involves a passage as evidence, a question, and several candidate answers as inputs. To select the correct answer, the model needs to comprehend each element and the interactions between them. The TriAN model (Wang et al. 2018), short for Three-way Attentive Networks, is developed to achieve this goal. The model can be roughly divided into three components: the encoding layer, the interaction/composition layer, and the output layer.
Specifically, the representation of each word not only includes its internal embedding $e_w$, comprising word, POS and NER embeddings, but also considers its relevance to the words from the other input sources. The question-aware representation of a passage word $p_i \in p$ is calculated with an attention function as follows, where $W_1$ is a model parameter.
$$\mathrm{Att}(e_{p_i}, \{e_{q_j}\}_{q_j \in q}) = \sum_{q_j \in q} \alpha_j\, e_{q_j} \qquad (8)$$
$$\alpha_j = \mathrm{softmax}\big(\mathrm{ReLU}(W_1 e_{p_i})^T\, \mathrm{ReLU}(W_1 e_{q_j})\big) \qquad (9)$$
The final representation of each passage word is the concatenation of $e_w$ and its question-aware representation. Similarly, the final representation of each word in a candidate answer is the concatenation of $e_w$, its question-aware representation, and its passage-aware representation. Afterwards, a bidirectional LSTM is used to get a contextual vector for each word in the question, followed by a self-attention layer that produces the final representation $q$ of the question:
$$q = \mathrm{Att}_{self}(\{h_i^q\}_{i=1}^{|Q|}) = \sum_{i=1}^{|Q|} \beta_i\, h_i^q \qquad (10)$$
$$\beta_i = \mathrm{softmax}_i(W_2^T h_i^q) \qquad (11)$$
The final representations of the candidate answer and the passage (a and p) are obtained in the same way. The ranking score of each candidate answer is calculated as follows, where σ is the sigmoid function.
$$y = \sigma(p^T W_3\, a + q^T W_4\, a) \qquad (12)$$
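For illustration, a NumPy sketch of the attention of Eqs. (8)-(9) and the output layer of Eq. (12); the random matrices stand in for learned parameters and the dimensions are arbitrary assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def word_attention(e_p, E_q, W1):
    # Question-aware representation of one passage word e_p over words E_q.
    proj_p = np.maximum(W1 @ e_p, 0.0)                          # ReLU(W1 e_p)
    scores = np.array([proj_p @ np.maximum(W1 @ e_q, 0.0) for e_q in E_q])
    return softmax(scores) @ E_q                                # weighted sum

def answer_score(p, q, a, W3, W4):
    # Eq. (12): bilinear interactions, squashed by a sigmoid.
    return 1.0 / (1.0 + np.exp(-(p @ W3 @ a + q @ W4 @ a)))

d = 8
rng = np.random.default_rng(0)
e_p, E_q = rng.normal(size=d), rng.normal(size=(5, d))
print(word_attention(e_p, E_q, rng.normal(size=(d, d))).shape)  # (8,)
```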
Experiment
We conduct experiments on two question answering datasets, namely SemEval 2018 Task 11 (Ostermann et al. 2018) and the ARC Challenge dataset (Clark et al. 2018), to evaluate the effectiveness of our system. We report model comparisons and model analysis in this section.
[Figure 5: Two examples, together with retrieved commonsense facts, that require direct relations between concepts. ARC: "Why does a plastic rod have a negative charge after being rubbed with a piece of fur", supported by facts such as "a polythene rod gains a negative charge when it is rubbed with a cloth". SemEval: "Where did they find the mower", with a passage about checking the garage and mowing the lawn.]
Model Comparisons and Analysis
On the ARC and SemEval datasets, we follow existing studies and use accuracy as the evaluation metric. Table 1 gives the statistics of the two datasets. Table 2 and Table 3 show the results on the two datasets, respectively. On the ARC dataset, we compare our model with a list of existing systems. On the SemEval dataset, we only report the results of TriAN, which is the top-performing system in the SemEval evaluation. $f_{cs}^{dir}$ is our commonsense-based model for direct relations, and $f_{cs}^{ind}$ is the commonsense-based model for indirect relations. According to our experiments, the neighbor-based representation has no significant effect on the performance of the direct-relation model, so its dimension in that model is set to zero. From the results, we observe that both commonsense-based scores improve the accuracy of the document-based model TriAN, and that combining both scores achieves further improvements on both datasets. The results show that our commonsense-based models are complementary to standard document-based models.
To better analyze the impact of incorporating our commonsense-based model, we give examples from the ARC and SemEval datasets that are incorrectly predicted by the document-based model but correctly solved by incorporating the commonsense-based models. Figure 5 shows two examples that require commonsense-based direct relations between concepts. The first example comes from ARC. We can see that the retrieved facts from ConceptNet provide useful evidence connecting the question to candidate answers (B) and (D). Combined with the document-based model, which might favor candidates with the co-occurring word "fur", the final system gives a higher score to (D). The second example is from SemEval. Similarly, we can see that the retrieved facts from ConceptNet are helpful in making the correct prediction. Figure 6 shows an example from SemEval that benefits from both direct and indirect relations from commonsense knowledge.

Table 2 (excerpt): accuracy of baseline systems on the ARC Challenge set.

| Model | Accuracy |
| --- | --- |
| IR | 20.26% |
| TupleInference | 23.83% |
| DecompAttn | 24.34% |
| Guess-all | 25.02% |
| DGEM-OpenIE | 26.41% |
| BiDAF | 26.54% |

Despite both the question and candidate (A) mentioning "drive/driving", the document-based model fails to make the correct prediction. We can see that the retrieved facts from ConceptNet help from different perspectives. The retrieved fact {"driving","HasPrerequisite","license"} directly connects the question to candidate (A), and both {"license","Synonym","permit"} and {"driver","RelatedTo","car"} directly connect candidate (A) to the passage. In addition, we also calculate a score for the question-passage pair, where the indirect relation between {"driving","permit"} could be further used as side information for the prediction.
[Figure 6: An example from SemEval benefiting from both direct and indirect commonsense relations. Question: "Why did they take the driving lesson". Passage: "I was finally able to get my driving permit and it was time for my first driving lesson ... We drove for a few miles before going back to the school". Candidate answers omitted.]

We further make comparisons by implementing different strategies to use the commonsense knowledge from ConceptNet. We implement three baselines as follows.
The first baseline is TransE (Bordes et al. 2013), which is a simple yet effective method for KB completion that learns vector embeddings for both entities and relations on a knowledge base. We re-implement and train TransE model on ConceptNet. The commonsense-based score could be calculated by a dot-product between the embeddings of two concepts.
The second baseline is Pointwise Mutual Information (PMI), which has been used for commonsense inference (Lin, Sun, and Han 2017). Both TransE and PMI can be viewed as models pretrained from ConceptNet. The difference is that PMI scores are computed directly from the co-occurrence frequency of concepts in the knowledge base, without learning an embedding vector for each concept.
The third baseline is Key-Value Memory Network (KV-MemNet) (Miller et al. 2016), which has been used in commonsense inference (Mihaylov and Frank 2018). It first retrieves supporting evidence from the external KB, then regards the knowledge as a memory and uses it with a key-value memory network strategy. We implement this by encoding a set of commonsense facts into a joint representation with a KV-MemNet, and then train the document-based model enhanced by the KV-MemNet component.
From Table 4 we can see that learning direct and indirect connections based on contexts from word-level constituents and neighbors from the knowledge graph performs better than TransE, which was originally designed for KB completion. PMI performs well; however, its performance is limited by the information it can take into account, i.e., word count information. The comparison between KV-MemNet and our approach further reveals the effectiveness of pretraining.
Discussion
We analyze the wrongly predicted instances from both datasets and find that the majority of errors fall into the following groups. The first type of error, which is also the dominant one, is caused by failing to highlight the most useful concept among all the retrieved ones. The usefulness of a concept should be measured by its relevance to the question, its relevance to the document, and whether introducing it helps distinguish between candidate answers. For example, the question "Where was the table set" is asked about a document talking about dinner, with two candidate answers "On the coffee table" and "At their house". Although the retrieved concepts for the first candidate answer are also relevant, they are not relevant to the question type "where". We believe this problem could be alleviated by incorporating a context-aware module that models the importance of a retrieved concept in a particular instance, and combining it with the pretrained model to make the final prediction.
The second type of error is caused by the ambiguity of the entity/concept to be linked to the external knowledge base. For example, supposing the document talks about computer science and machine learning, the concept "Michael Jordan" in the question should be linked to the machine learning expert rather than the basketball player. Achieving this requires an entity/concept disambiguation model whose input also considers the question and the passage.
Moreover, the current system fails to handle difficult questions which need logical reasoning, such as "How long do the eggs cook for" and "How many people went to the movie together". We believe that deep question understanding, such as parsing a question based on a predefined grammar and operators in a semantic parsing manner (Liang 2016), is required to handle these questions, which is a very promising direction, and we leave it to future work.
Conclusion
In this work we address commonsense-based question answering tasks. We present a simple and effective way to pretrain models that measure relations between concepts. Each concept is represented based on its internal information (i.e. the words it contains) and external context (i.e. its neighbors in the knowledge graph). We use ConceptNet as the external commonsense knowledge base, and apply the pretrained models to two question answering tasks (ARC and SemEval) in the same way. Results show that the pretrained models are complementary to standard document-based neural network approaches and yield further improvements through model combination. Model analysis shows that our system can discover useful evidence from an external commonsense knowledge base. In the future, we plan to address the issues raised in the discussion, including incorporating a context-aware module for concept ranking and considering logical reasoning operations. We also plan to apply the approach to other challenging datasets that require commonsense reasoning (Zellers et al. 2018).
| 4,688 |
1809.03214
|
2890375627
|
Making the right decision in traffic is a challenging task that is highly dependent on individual preferences as well as the surrounding environment. Therefore it is hard to model solely based on expert knowledge. In this work we use Deep Reinforcement Learning to learn maneuver decisions based on a compact semantic state representation. This ensures a consistent model of the environment across scenarios as well as a behavior adaptation function, enabling on-line changes of desired behaviors without re-training. The input for the neural network is a simulated object list similar to that of Radar or Lidar sensors, superimposed by a relational semantic scene description. The state as well as the reward are extended by a behavior adaptation function and a parameterization respectively. With little expert knowledge and a set of mid-level actions, it can be seen that the agent is capable of adhering to traffic rules and learns to drive safely in a variety of situations.
|
Initially most behavior planners were handcrafted state machines, made up of a variety of modules handling different driving tasks. During the DARPA Urban Challenge, Boss (CMU), for example, used five different modules to conduct on-road driving. The responsibilities of these modules ranged from lane selection and merge planning to distance keeping @cite_10 . Other participants such as Odin (Virginia Tech) or Talos (MIT) developed very similar behavior generators @cite_1 @cite_27 .
|
{
"abstract": [
"The DARPA Urban Challenge required robotic vehicles to travel more than 90 km through an urban environment without human intervention and included situations such as stop intersections, traffic merges, parking, and roadblocks. Team VictorTango separated the problem into three parts: base vehicle, perception, and planning. A Ford Escape outfitted with a custom drive-by-wire system and computers formed the basis for Odin. Perception used laser scanners, global positioning system, and a priori knowledge to identify obstacles, cars, and roads. Planning relied on a hybrid deliberative-reactive architecture to analyze the situation, select the appropriate behavior, and plan a safe path. All vehicle modules communicated using the JAUS (Joint Architecture for Unmanned Systems) standard. The performance of these components in the Urban Challenge is discussed and successes noted. The result of VictorTango's work was successful completion of the Urban Challenge and a third-place finish. © 2008 Wiley Periodicals, Inc.",
"",
"Boss is an autonomous vehicle that uses on-board sensors (global positioning system, lasers, radars, and cameras) to track other vehicles, detect static obstacles, and localize itself relative to a road model. A three-layer planning system combines mission, behavioral, and motion planning to drive in urban environments. The mission planning layer considers which street to take to achieve a mission goal. The behavioral layer determines when to change lanes and precedence at intersections and performs error recovery maneuvers. The motion planning layer selects actions to avoid obstacles while making progress toward local goals. The system was developed from the ground up to address the requirements of the DARPA Urban Challenge using a spiral system development process with a heavy emphasis on regular, regressive system testing. During the National Qualification Event and the 85-km Urban Challenge Final Event, Boss demonstrated some of its capabilities, qualifying first and winning the challenge. © 2008 Wiley Periodicals, Inc."
],
"cite_N": [
"@cite_1",
"@cite_27",
"@cite_10"
],
"mid": [
"2127005930",
"",
"2121806728"
]
}
|
Adaptive Behavior Generation for Autonomous Driving using Deep Reinforcement Learning with Compact Semantic States
|
While sensors are improving at a staggering pace and actuators as well as control theory are well up to par with the challenging task of autonomous driving, it is yet to be seen how a robot can devise decisions that navigate it safely in a heterogeneous environment that is partially made up of humans, who do not always make rational decisions or have known cost functions.
Early approaches for maneuver decisions focused on predefined rules embedded in large state machines, each requiring thoughtful engineering and expert knowledge [1], [2], [3].
Recent work focuses on more complex models with additional domain knowledge to predict and generate maneuver decisions [4]. Some approaches explicitly model the interdependency between the actions of traffic participants [5] as well as address their replanning capabilities [6].
With the large variety of challenges that vehicles with a higher degree of autonomy need to face, the limitations of rule- and model-based approaches devised from human expert knowledge, which proved successful in the past, become apparent.
At least since the introduction of AlphaZero, which discovered the same game-playing strategies as humans did in Chess and Go, but also learned entirely unknown strategies, it is clear that human expert knowledge is overvalued [7], [8]. Hence, it is only reasonable to apply the same techniques to the task of behavior planning in autonomous driving, relying on data-driven instead of model-based approaches.
[Fig. 1: The initial traffic scene is transformed into a compact semantic state representation s and used as input for the reinforcement learning (RL) agent. The agent estimates the action a with the highest return (Q-value) and executes it, e.g., changing lanes. Afterwards a reward r is collected and a new state s' is reached. The transition (s, a, r, s') is stored in the agent's replay memory.]
The contributions of this work are twofold. First, we employ a compact semantic state representation based on the most significant relations between other entities and the ego vehicle. This representation depends on neither the road geometry nor the number of surrounding vehicles, making it suitable for a variety of traffic scenarios. Second, using a parameterization and a behavior adaptation function, we demonstrate the ability to train agents with a changeable desired behavior, adaptable on-line without retraining.
The remainder of this work is structured as follows: In Section II we give a brief overview of the research on behavior generation in the automated driving domain and deep reinforcement learning. A detailed description of our approach, methods and framework follows in Section III and Section IV respectively. In Section V we present the evaluation of our trained agents. Finally, we discuss our results in Section VI.
III. APPROACH
We employ a deep reinforcement learning approach to generate adaptive behavior for autonomous driving. A reinforcement learning process is commonly modeled as an MDP [28] $(S, A, R, \delta, T)$, where $S$ is the set of states, $A$ the set of actions, $R: S \times A \times S \to \mathbb{R}$ the reward function, $\delta: S \times A \to S$ the state transition model and $T$ the set of terminal states. At timestep $i$, an agent in state $s \in S$ can choose an action $a \in A$ according to a policy $\pi$ and will progress into the successor state $s'$, receiving reward $r$. This is defined as a transition $t = (s, a, s', r)$.
The aim of reinforcement learning is to maximize the future discounted return $G_i = \sum_{n=i}^{\infty} \gamma^{\,n-i}\, r_n$. A DQN uses Q-Learning [29] to learn Q-values for each action given input state $s$ based on past transitions. The predicted Q-values of the DQN are used to adapt the policy $\pi$ and therefore change the agent's behavior. A schematic of this process is depicted in Fig. 1.
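The discounted return can be computed backwards over an episode, as sketched below; the discount factor 0.9 matches the value used later in training, while the reward sequence is a toy example.

```python
# G_i = sum_n gamma^(n-i) * r_n, accumulated from the end of the episode.
def discounted_return(rewards, gamma=0.9):
    G = 0.0
    for r in reversed(rewards):
        G = r + gamma * G
    return G

print(discounted_return([1.0, 0.0, -1.0]))  # 1.0 + 0.9*0.0 + 0.81*(-1.0) = 0.19
```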
For the input state representation we adapt the ontologybased concept from [10] focusing on relations with other traffic participants as well as the road topology. We design the state representation to use high level preprocessed sensory inputs such as object lists provided by common Radar and Lidar sensors and lane boundaries from visual sensors or map data. To generate the desired behavior the reward is comprised of different factors with varying priorities. In the following the different aspects are described in more detail.
A. Semantic Entity-Relationship Model
A traffic scene τ is described by a semantic entity-relationship model, consisting of all scene objects and the relations between them. We define it as the tuple (E, R), where:
• E = {e_0, e_1, ..., e_n}: set of scene objects (entities).
• R = {r_0, r_1, ..., r_m}: set of relations.
The scene objects contain all static and dynamic objects, such as vehicles, pedestrians, lane segments, signs and traffic lights.
In this work we focus on vehicles V ⊂ E, lane segments L ⊂ E and three relation types: vehicle-vehicle relations, vehicle-lane relations and lane-lane relations. Using these entities and relations, an entity-relationship representation of a traffic scene can be created, as depicted in Fig. 3. Every entity and relation holds several properties or attributes of the scene objects, such as absolute positions or relative velocities. This scene description combines low-level attributes with high-level relational knowledge in a generic way. It is thus applicable to any traffic scene and vehicle sensor setup, making it a beneficial state representation.
But the representation is of varying size and includes more aspects than are relevant for a given driving task. In order to use it as the input to a neural network, we transform it into a fixed-size relational grid that includes only the most relevant relations.
B. Relational Grid
We define a relational grid, centered at the ego vehicle v ego ∈ V, see Fig. 2. The rows correspond to the relational lane topology, whereas the columns correspond to the vehicle topology on these lanes.
To define the size of the relational grid, a vehicle scope Λ is introduced that captures the lateral and longitudinal dimensions, defined by the parameters Λ_lateral, Λ_ahead, Λ_behind ∈ ℕ_0. The relational grid ensures a consistent representation of the environment, independent of the road geometry or the number of surrounding vehicles.
[Fig. 4: the relational grid around v_ego with surrounding vehicles v_1, ..., v_6; per-vehicle features include Δs_i, Δṡ_i, Δd_i and the heading φ_i, together with lane topology features.]
The resulting input state s ∈ S is depicted in Fig. 4 and fed into a DQN.
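A hypothetical sketch of how such a grid could be assembled from an object list follows; the slot assignment, the feature tuple and the dict layout are illustrative assumptions, not the authors' implementation.

```python
# Rows correspond to lanes relative to the ego lane, columns to vehicle
# slots behind/ahead of the ego vehicle; empty slots keep a placeholder so
# the DQN input size stays constant.
EMPTY = (0.0, 0.0, 0.0, 0.0)   # e.g. (delta_s, delta_s_dot, delta_d, phi)

def relational_grid(vehicles, lat=2, ahead=2, behind=1):
    rows, cols = 2 * lat + 1, ahead + behind
    grid = [[EMPTY] * cols for _ in range(rows)]
    for v in vehicles:
        lane, slot = v["rel_lane"], v["slot"]   # relative to the ego vehicle
        if -lat <= lane <= lat and -behind <= slot < ahead:
            grid[lane + lat][slot + behind] = v["features"]
    # Flatten to a fixed-length input state for the network.
    return [x for row in grid for cell in row for x in cell]

state = relational_grid([{"rel_lane": 1, "slot": 0,
                          "features": (12.0, -1.5, 0.3, 0.0)}])
print(len(state))   # 5 lanes * 3 slots * 4 features = 60
```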
C. Action Space
The vehicle's action space is defined by a set of semantic actions that is deemed sufficient for most on-road driving tasks, excluding special cases such as U-turns. The longitudinal movement of the vehicle is controlled by the actions accelerate and decelerate. While executing these actions, the ego vehicle keeps its lane. Lateral movement is generated by the actions lane change left and lane change right, respectively. Only a single action is executed at a time, and actions are executed in their entirety; the vehicle cannot prematurely abort an action. The default action results in no change of velocity, lateral alignment or heading.
D. Adaptive Desired Behavior through Reward Function
With the aim to generate adaptive behavior we extend the reward function R(s, a) by a parameterization θ. This parameterization is used in the behavior adaptation function Ω(τ, θ), so that the agent is able to learn different desired behaviors without the need to train a new model for varying parameter values.
Furthermore, the desired driving behavior consists of several individual goals, modeled by separate rewards. We rank these reward functions by three priorities: collision avoidance has the highest priority, rewards associated with traffic rules are important but to a lesser extent, and rewards connected to the driving style have the lowest priority.
The overall reward function R(s, a, θ) can be expressed as follows:
$$R(s, a, \theta) = \begin{cases} R_{collision}(s, \theta) & s \in S_{collision} \\ R_{rules}(s, \theta) & s \in S_{rules} \\ R_{driving\,style}(s, a, \theta) & \text{else} \end{cases} \qquad (1)$$
The subset S collision ⊂ S consists of all states s describing a collision state of the ego vehicle v ego and another vehicle v i . In these states the agent only receives the immediate reward without any possibility to earn any other future rewards. Additionally, attempting a lane change to a nonexistent adjacent lane is also treated as a collision.
The state dependent evaluation of the reward factors facilitates the learning process. As the reward for a state is independent of rewards with lower priority, the eligibility trace is more concise for the agent being trained. For example, driving at the desired velocity does not mitigate the reward for collisions.
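The prioritized evaluation can be sketched as follows; the state flags and the three reward callables are placeholders for the concrete terms defined in Section IV, not the actual implementation.

```python
# Prioritized reward of Eq. (1): a collision state masks all lower-priority
# terms, a rule-violating state masks the driving-style reward.
def reward(s, a, theta, r_collision, r_rules, r_style):
    if s["collision"]:
        return r_collision(s, theta)
    if s["rule_violation"]:
        return r_rules(s, theta)
    return r_style(s, a, theta)

r = reward({"collision": False, "rule_violation": True}, "keep_lane",
           {"t": 1.0}, lambda s, t: -1.0, lambda s, t: -0.1,
           lambda s, a, t: 0.05)
print(r)   # -0.1: the rule penalty, independent of driving-style rewards
```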
IV. EXPERIMENTS
A. Framework
While our concept is able to handle data from many preprocessing methods used in autonomous vehicles, we tested the approach with the traffic simulation SUMO [30]. A schematic overview of the framework is depicted in Fig. 5. We use SUMO in our setup as it allows the initialization and execution of various traffic scenarios with adjustable road layout, traffic density and driving behavior of the vehicles. To achieve this, we extend TensorForce [31] with a highly configurable interface to the SUMO environment. TensorForce is a reinforcement learning library based on TensorFlow [32], which enables the deployment of various customizable DRL methods, including DQN.
[Fig. 6: To examine the agent's compliance with traffic rules, it is trained and evaluated on two different traffic scenarios. In (a) the agent has the obligation to drive on the right most lane and must not pass others from the right, amongst other constraints. In (b) the agent is allowed to accelerate while on the on-ramp and also might overtake vehicles on its left, but it has to leave the on-ramp before it ends.]
This setup allows us to deploy agents using various DRL methods, state representations and rewards in multiple traffic scenarios. In our experiments we used a vehicle scope with Λ_behind = 1 and Λ_ahead = Λ_lateral = 2. This allows the agent to always perceive all lanes of a 3-lane highway and increases its potential anticipation.
B. Network
In this work we use the DQN approach introduced by Mnih et al. [15], as it has shown its capability to successfully learn behavior policies for a range of different tasks. While we use the general learning algorithm described in [15], including experience replay and a secondary target network, our actual network architecture differs from theirs. The network of Mnih et al. was designed for a visual state representation of the environment. In that case, a series of convolutional layers is commonly used to learn a suitable low-dimensional feature set from the high-dimensional sensor input. This set of features is usually further processed in fully-connected network layers.
Since the state representation in this work already consists of selected features, the learning of a low-dimensional feature set using convolutional layers is not necessary. Therefore we use a network with solely fully-connected layers, see Tab. I.
The size of the input layer depends on the number of features in the state representation. On the output layer there is a neuron for each action. The given value for each action is its estimated Q-value.
C. Training
During training the agents are driving on one or more traffic scenarios in SUMO. An agent is trained for a maximum of 2 million timesteps, each generating a transition consisting of the observed state, the selected action, the subsequent state and the received reward. The transitions are stored in the replay memory, which holds up to 500,000 transitions. After reaching a threshold of at least 50,000 transitions in memory, a batch of 32 transitions is randomly selected to update the network's weights every fourth timestep. We discount future rewards by a factor of 0.9 during the weight update. The target network is updated every 50,000th step.
To allow for exploration early in the training, an ε-greedy policy is followed. With probability ε, the action to be executed is selected randomly; otherwise the action with the highest estimated Q-value is chosen. ε is initialized as 1 and decreased linearly over the course of 500,000 timesteps until it reaches a minimum of 0.1, reducing exploration in favor of exploitation. As optimization method for our DQN we use the RMSProp algorithm [33] with a learning rate of 10^{-5} and a decay of 0.95.
The training process is segmented into episodes. If an agent is trained on multiple scenarios, the scenarios alternate after the end of an episode. To ensure the agent experiences a wide range of different scenes, it is started with a randomized departure time, lane, velocity and θ in the selected scenario at the beginning and after every reset of the simulation. In a similar vein, it is important that the agent is able to observe a broad spectrum of situations in the scenario early in the training. Therefore, should the agent reach a collision state s ∈ S collision by either colliding with another vehicle or attempting to drive off the road, the current episode is finished with a terminal state. Afterwards, a new episode is started immediately without reseting the simulation or changing the agent's position or velocity. Since we want to avoid learning an implicit goal state at the end of a scenario's course, the simulation is reset if a maximum amount of 200 timesteps per episode has passed or the course's end has been reached and the episode ends with a non-terminal state.
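For reference, the stated training hyperparameters and the linear ε schedule are collected in one compact sketch; the variable names are ours, while the values are taken from the text above.

```python
CONFIG = {
    "max_steps": 2_000_000, "replay_capacity": 500_000,
    "replay_start": 50_000, "batch_size": 32, "update_every": 4,
    "gamma": 0.9, "target_update": 50_000,
    "optimizer": "RMSProp", "lr": 1e-5, "decay": 0.95,
}

def epsilon(step, start=1.0, end=0.1, decay_steps=500_000):
    # Linear epsilon-greedy exploration schedule.
    frac = min(step / decay_steps, 1.0)
    return start + frac * (end - start)

print(epsilon(0), epsilon(250_000), epsilon(1_000_000))  # 1.0 0.55 0.1
```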
D. Scenarios
Experiments are conducted using two different scenarios, see Fig. 6. One is a straight multi-lane highway scenario. The other is a merging scenario on a highway with an on-ramp.
To generate the desired adaptive behavior, parameterized reward functions are defined (see Eq. 1). We base $R_{rules}$ on German traffic rules, such as the obligation to drive on the right most lane ($r_{keepRight}$), the prohibition of overtaking on the right ($r_{passRight}$) and keeping a minimum distance to vehicles in front ($r_{safeDistance}$). A special case is the acceleration lane, where the agent is allowed to pass on the right and is not required to stay on the right most lane; instead, the agent is not allowed to enter the acceleration lane ($r_{notEnter}$). Similarly, $R_{driving\,style}$ entails driving at a desired velocity ($r_{velocity}$), ranging from 80 km/h to 115 km/h in the highway scenario and from 40 km/h to 80 km/h in the merging scenario. The desired velocity in each training episode is defined by $\theta_v$, which is sampled uniformly over the scenario's velocity range. Additionally, $R_{driving\,style}$ aims to avoid unnecessary lane and velocity changes ($r_{action}$).
With these constraints in mind, the parameterized reward functions are implemented as follows, to produce the desired behavior.
$$R_{collision}(s, \theta) = \theta_t \cdot r_{collision} \qquad (2)$$
$$R_{rules}(s, \theta) = r_{passRight}(s, \theta_p) + r_{notEnter}(s, \theta_n) + r_{safeDistance}(s, \theta_s) + r_{keepRight}(s, \theta_k) \qquad (3)$$
$$R_{driving\,style}(s, a, \theta) = r_{action}(a, \theta_a) + r_{velocity}(s, \theta_v) \qquad (4)$$
To enable different velocity preferences, the behavior adaptation function Ω returns the difference between the desired and the actual velocity of v ego .
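A minimal sketch of the velocity-related behavior adaptation follows; only the difference term Ω follows the text, while the quadratic penalty shape and the weight are assumptions of this sketch.

```python
def omega(ego_speed, theta_v):
    # Omega(tau, theta): deviation from the desired velocity theta_v.
    return theta_v - ego_speed

def r_velocity(ego_speed, theta_v, weight=0.01):
    # Assumed penalty shape: quadratic in the velocity deviation.
    return -weight * omega(ego_speed, theta_v) ** 2

print(r_velocity(ego_speed=25.0, theta_v=30.0))  # -0.25
```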
V. EVALUATION
During evaluation we trained an agent g_H only on the highway scenario and an agent g_M only on the merging scenario. In order to show the versatility of our approach, we additionally trained an agent g_C on both the highway and the merging scenario (see Tab. II). Due to the nature of our compact semantic state representation, we are able to achieve this without further modifications. The agents are evaluated during and after training by running the respective scenarios 100 times. To assess the capabilities of the trained agents using the concept described in Section III, we introduce the following metrics.
Collision Rate [%]: The collision rate denotes the average number of collisions over all test runs. In contrast to training, a run is terminated if the agent collides. As this is such a critical measure, it acts as the most expressive gauge of the agent's performance.
Avg. Distance between Collisions [km]:
The average distance travelled between collisions is used to remove the bias of the episode length and the vehicle's speed.
Rule Violations [%]: Relative duration during which the agent is not keeping a safe distance or is overtaking on right.
Lane Distribution [%]: The lane distribution is an additional weak indicator for the agent's compliance with the traffic rules.
Avg. Speed [m/s]: The average speed of the agent does not only indicate how fast the agent drives, but also displays how accurately the agent matches its desired velocity.
The results of the agents trained on the different scenarios are shown in Tab. II. The agents generally achieve the desired behavior. An example of an overtaking maneuver is presented in Fig. 9. During training the collision rate of g H decreases to a decent level (see Fig. 7). Agent g M takes more training iterations to reduce its collision rate to a reasonable level, as it not only has to avoid other vehicles, but also needs to leave the on-ramp. Additionally, g M successfully learns to accelerate to its desired velocity on the on-ramp. But for higher desired velocities this causes difficulties leaving the ramp or braking in time in case the middle lane is occupied. This effect increases the collision rate in the latter half of the training process. The relative duration of rule violations by g H reduces over the course of the training, but stagnates at approximately 2% (see Fig. 8). A potential cause is our strict definition of when an overtaking on the right occurs. The agent almost never performs a full overtaking maneuver from the right, but might drive faster than another vehicle on the left hand side, which will already be counted towards our metric. For g M the duration of rule violations is generally shorter, starting low, peaking and then also stagnating similarly to g H . This is explained due to overtaking on the right not being considered on the acceleration lane. The peak emerges as a result of the agent leaving the lane more often at this point.
The lane distribution of g H (see Tab. II) demonstrates that the agent most often drives on the right lane of the highway, to a lesser extent on the middle lane and only seldom on the left lane. This reflects the desired behavior of adhering to the obligation of driving on the right most lane and only using the other lanes for overtaking slower vehicles. In the merging scenario this distribution is less informative since the task does not provide the same opportunities for lane changes. To measure the speed deviation of the agents, additional test runs with fixed values for the desired velocity were performed. The results are shown in Tab. III. As can be seen, the agents adapt their behavior, as an increase in the desired velocity raises the average speed of the agents. In tests with other traffic participants, the average speed is expectedly lower than the desired velocity, as the agents often have to slow down and wait for an opportunity to overtake. Especially in the merging scenario the agent is unable to reach higher velocities due to these circumstances. During runs on an empty highway scenario, the difference between the average and desired velocity diminishes.
Although g H and g M outperform it on their individual tasks, g C achieves satisfactory behavior on both. Especially, it is able to learn task specific knowledge such as overtaking in the acceleration lane of the on-ramp while not overtaking from the right on the highway.
A video of our agents' behavior is provided online (http://url.fzi.de/behavior-iv2018).
VI. CONCLUSIONS
In this work two main contributions have been presented. First, we introduced a compact semantic state representation that is applicable to a variety of traffic scenarios. Using a relational grid, our representation is independent of road topology, traffic constellation and sensor setup.
Second, we proposed a behavior adaptation function which enables changing desired driving behavior online without the need to retrain the agent. This eliminates the requirement to generate new models for different driving style preferences or other varying parameter values. Agents trained with this approach performed well on different traffic scenarios, i.e. highway driving and highway merging. Due to the design of our state representation and behavior adaptation, we were able to develop a single model applicable to both scenarios. The agent trained on the combined model was able to successfully learn scenario specific behavior.
One of the major goals for future work, kept in mind while designing the presented concept, is the transfer from the simulation environment to real-world driving tasks. A possible option is to use the trained networks as a heuristic in MCTS methods, similar to [27]. Alternatively, our approach can be used in combination with model-driven systems to plan or evaluate driving behavior.
To achieve this transfer to real-world application, we will apply our state representation to further traffic scenarios, e.g., intersections. Additionally, we will extend the capabilities of the agents by adopting more advanced reinforcement learning techniques.
| 3,573 |
1809.03214
|
2890375627
|
Making the right decision in traffic is a challenging task that is highly dependent on individual preferences as well as the surrounding environment. Therefore it is hard to model solely based on expert knowledge. In this work we use Deep Reinforcement Learning to learn maneuver decisions based on a compact semantic state representation. This ensures a consistent model of the environment across scenarios as well as a behavior adaptation function, enabling on-line changes of desired behaviors without re-training. The input for the neural network is a simulated object list similar to that of Radar or Lidar sensors, superimposed by a relational semantic scene description. The state as well as the reward are extended by a behavior adaptation function and a parameterization respectively. With little expert knowledge and a set of mid-level actions, it can be seen that the agent is capable of adhering to traffic rules and learns to drive safely in a variety of situations.
|
Due to the limitations of state machines, current research has expanded on the initial efforts by creating more complex and formal models: a mixture of POMDPs, stochastic non-linear MPC and domain knowledge can be used to generate lane change decisions in a variety of traffic scenarios @cite_33 . By capturing the mutual dependency of maneuver decisions between different agents, planning can be conducted with foresight @cite_13 @cite_18 . While @cite_13 plans only the next maneuver, focusing on the reduction of collision probabilities between all traffic participants, @cite_34 explicitly addresses longer planning horizons and the replanning capabilities of others.
|
{
"abstract": [
"",
"This paper presents a novel cooperative-driving prediction and planning framework for dynamic environments based on the methods of game theory. The proposed algorithm can be used for highly automated driving on highways or as a sophisticated prediction module for advanced driver-assistance systems with no need for intervehicle communication. The main contribution of this paper is a model-based interaction-aware motion prediction of all vehicles in a scene. In contrast to other state-of-the-art approaches, the system also models the replanning capabilities of all drivers. With that, the driving strategy is able to capture complex interactions between vehicles, thus planning maneuver sequences over longer time horizons. It also enables an accurate prediction of traffic for the next immediate time step. The prediction model is supported by an interpretation of what other drivers intend to do, how they interact with traffic, and the ongoing observation. As part of the prediction loop, the proposed planning strategy incorporates the expected reactions of all traffic participants, offering cooperative and robust driving decisions. By means of experimental results under simulated highway scenarios, the validity of the proposed concept and its real-time capability is demonstrated.",
"In this work, a framework for motion prediction of vehicles and safety assessment of traffic scenes is presented. The developed framework can be used for driver assistant systems as well as for autonomous driving applications. In order to assess the safety of the future trajectories of the vehicle, these systems require a prediction of the future motion of all traffic participants. As the traffic participants have a mutual influence on each other, the interaction of them is explicitly considered in this framework, which is inspired by an optimization problem. Taking the mutual influence of traffic participants into account, this framework differs from the existing approaches which consider the interaction only insufficiently, suffering reliability in real traffic scenes. For motion prediction, the collision probability of a vehicle performing a certain maneuver, is computed. Based on the safety evaluation and the assumption that drivers avoid collisions, the prediction is realized. Simulation scenarios and real-world results show the functionality.",
"Recently, automated driving has more and more been transformed from an exciting vision into hands on reality by prototypes. While drivers are used to assistance and maybe even automation for driving within a lane, it is exciting to dare a step ahead: Deciding and executing tactical maneuvers like lane changes in automated vehicles without any human interaction. In this paper, we present our approach for tactical behavior planning for lane changes. We present a way to tackle perception uncertainties and how to achieve provident, prediction-based behavior planning. For this, we introduce a novel framework to plan in high-dimensional, mixed-integer state spaces in real-time. Our approach is evaluated not only in simulation, but also in real traffic. The implementation has recently been demonstrated to the public in the Audi A7 piloted driving concept vehicle, driving from Stanford to the Consumer Electronics Show (CES) 2015 in Las Vegas."
],
"cite_N": [
"@cite_18",
"@cite_34",
"@cite_13",
"@cite_33"
],
"mid": [
"",
"2344985987",
"2134239466",
"1919962963"
]
}
|
Adaptive Behavior Generation for Autonomous Driving using Deep Reinforcement Learning with Compact Semantic States
|
While sensors are improving at a staggering pace, and actuators as well as control theory are well up to par with the challenging task of autonomous driving, it is yet to be seen how a robot can devise decisions that navigate it safely in a heterogeneous environment that is partially made up of humans who do not always take rational decisions or have known cost functions.
Early approaches for maneuver decisions focused on predefined rules embedded in large state machines, each requiring thoughtful engineering and expert knowledge [1], [2], [3].
Recent work focuses on more complex models with additional domain knowledge to predict and generate maneuver decisions [4]. Some approaches explicitly model the interdependency between the actions of traffic participants [5] as well as address their replanning capabilities [6].
With the large variety of challenges that vehicles with a higher degree of autonomy need to face, the limitations of rule- and model-based approaches devised by human expert knowledge that proved successful in the past become apparent.
At least since the introduction of AlphaZero, which discovered the same game-playing strategies in Chess and Go as humans did in the past, but also learned entirely unknown strategies, it is clear that human expert knowledge is overvalued [7], [8]. Hence, it is only reasonable to apply the same techniques to the task of behavior planning in autonomous driving, relying on data-driven instead of model-based approaches.
Fig. 1: The initial traffic scene is transformed into a compact semantic state representation s and used as input for the reinforcement learning (RL) agent. The agent estimates the action a with the highest return (Q-value) and executes it, e.g., changing lanes. Afterwards a reward r is collected and a new state s′ is reached. The transition (s, a, r, s′) is stored in the agent's replay memory.
The contributions of this work are twofold. First, we employ a compact semantic state representation that is based on the most significant relations between other entities and the ego vehicle. This representation depends neither on the road geometry nor on the number of surrounding vehicles, making it suitable for a variety of traffic scenarios. Second, using parameterization and a behavior adaptation function, we demonstrate the ability to train agents with a changeable desired behavior that is adaptable online without retraining.
The remainder of this work is structured as follows: In Section II we give a brief overview of the research on behavior generation in the automated driving domain and deep reinforcement learning. A detailed description of our approach, methods and framework follows in Section III and Section IV respectively. In Section V we present the evaluation of our trained agents. Finally, we discuss our results in Section VI.
III. APPROACH
We employ a deep reinforcement learning approach to generate adaptive behavior for autonomous driving. A reinforcement learning process is commonly modeled as an MDP [28] (S, A, R, δ, T), where S is the set of states, A the set of actions, R : S × A × S → ℝ the reward function, δ : S × A → S the state transition model and T the set of terminal states. At timestep i an agent in state s ∈ S can choose an action a ∈ A according to a policy π and will progress into the successor state s′, receiving reward r. This is defined as a transition t = (s, a, s′, r).
The aim of reinforcement learning is to maximize the future discounted return G_i = Σ_{n=i}^{∞} γ^{n−i} r_n. A DQN uses Q-Learning [29] to learn Q-values for each action given input state s based on past transitions. The predicted Q-values of the DQN are used to adapt the policy π and thereby change the agent's behavior. A schematic of this process is depicted in Fig. 1.
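To make these quantities concrete, the following minimal NumPy sketch (variable names are ours, not the paper's implementation) computes the discounted return of a reward sequence and the bootstrapped target a DQN regresses towards:
```python
import numpy as np

def discounted_returns(rewards, gamma=0.9):
    """G_i = sum_{n=i} gamma^(n-i) * r_n, computed for every timestep i."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for i in reversed(range(len(rewards))):
        running = rewards[i] + gamma * running
        returns[i] = running
    return returns

def dqn_target(reward, next_q_values, gamma=0.9, terminal=False):
    """Q-learning target r + gamma * max_a' Q(s', a'); terminal states
    (e.g. collisions) receive only the immediate reward."""
    return reward if terminal else reward + gamma * float(np.max(next_q_values))
```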
For the input state representation we adapt the ontology-based concept from [10], focusing on relations with other traffic participants as well as the road topology. We design the state representation to use high-level preprocessed sensory inputs such as object lists provided by common Radar and Lidar sensors, and lane boundaries from visual sensors or map data. To generate the desired behavior, the reward is composed of different factors with varying priorities. In the following, the different aspects are described in more detail.
A. Semantic Entity-Relationship Model
A traffic scene τ is described by a semantic entity-relationship model, consisting of all scene objects and relations between them. We define it as the tuple (E, R), where
• E = {e_0, e_1, ..., e_n}: set of scene objects (entities),
• R = {r_0, r_1, ..., r_m}: set of relations.
The scene objects contain all static and dynamic objects, such as vehicles, pedestrians, lane segments, signs and traffic lights.
In this work we focus on vehicles V ⊂ E, lane segments L ⊂ E and the three relation types vehicle-vehicle, vehicle-lane and lane-lane relations. Using these entities and relations, an entity-relationship representation of a traffic scene can be created as depicted in Fig. 3. Every entity and relation holds several properties or attributes of the scene objects, such as absolute positions or relative velocities. This scene description combines low-level attributes with high-level relational knowledge in a generic way. It is thus applicable to any traffic scene and vehicle sensor setup, making it a beneficial state representation.
But the representation is of varying size and includes more aspects than are relevant for a given driving task. In order to use this representation as the input to a neural network, we transform it into a fixed-size relational grid that includes only the most relevant relations.
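As an illustration, such an entity-relationship model could be held in simple containers like the following sketch (the field names are assumptions for illustration, not the authors' data model):
```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A scene object e, e.g. a vehicle or a lane segment."""
    id: int
    kind: str                                        # "vehicle", "lane", "sign", ...
    attributes: dict = field(default_factory=dict)   # e.g. {"position": (x, y)}

@dataclass
class Relation:
    """A typed relation r between two entities."""
    source: int                                      # id of the first entity
    target: int                                      # id of the second entity
    kind: str                                        # "vehicle-vehicle", "vehicle-lane", "lane-lane"
    attributes: dict = field(default_factory=dict)   # e.g. {"relative_velocity": dv}

# A traffic scene tau is then the tuple (E, R):
scene = ([Entity(0, "vehicle"), Entity(1, "lane")],
         [Relation(0, 1, "vehicle-lane")])
```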
B. Relational Grid
We define a relational grid, centered at the ego vehicle v ego ∈ V, see Fig. 2. The rows correspond to the relational lane topology, whereas the columns correspond to the vehicle topology on these lanes.
To define the size of the relational grid, a vehicle scope Λ is introduced that captures the lateral and longitudinal dimensions, defined by the parameters Λ_lateral ∈ N_0 (the number of lanes considered to each side of the ego vehicle) and Λ_ahead, Λ_behind ∈ N_0 (the number of vehicles considered ahead of and behind the ego vehicle per lane). The relational grid ensures a consistent representation of the environment, independent of the road geometry or the number of surrounding vehicles.
Fig. 2: Relational grid centered on v_ego; the surrounding vehicles v_1, ..., v_6 are mapped to grid cells described by lane topology features and the per-vehicle features ∆s_i, ∆ṡ_i, ∆d_i and φ_i.
The resulting input state s ∈ S is depicted in Fig. 4 and fed into a DQN.
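The following sketch shows one way such a grid could be filled from an object list; the vehicle scope values match Section IV, while the per-vehicle feature set and attribute names are assumptions based on Fig. 2 rather than the authors' code:
```python
import numpy as np

LAT, AHEAD, BEHIND = 2, 2, 1   # vehicle scope used in the experiments (Sec. IV)
N_FEATURES = 4                 # assumed per-vehicle features: ∆s, ∆ṡ, ∆d, φ

def relational_grid(ego, vehicles):
    """Fixed-size (lanes x slots x features) grid centered on the ego vehicle.

    ego and each vehicle are assumed to expose .lane, .s (longitudinal
    position), .v (speed), .d (lateral alignment) and .phi (heading).
    """
    grid = np.zeros((2 * LAT + 1, BEHIND + AHEAD, N_FEATURES))
    for rel_lane in range(-LAT, LAT + 1):
        on_lane = [u for u in vehicles if u.lane == ego.lane + rel_lane]
        behind = sorted((u for u in on_lane if u.s < ego.s), key=lambda u: ego.s - u.s)
        ahead = sorted((u for u in on_lane if u.s >= ego.s), key=lambda u: u.s - ego.s)
        for col, u in enumerate(behind[:BEHIND]):      # nearest vehicles behind
            grid[rel_lane + LAT, col] = (u.s - ego.s, u.v - ego.v, u.d, u.phi)
        for col, u in enumerate(ahead[:AHEAD]):        # nearest vehicles ahead
            grid[rel_lane + LAT, BEHIND + col] = (u.s - ego.s, u.v - ego.v, u.d, u.phi)
    return grid.flatten()      # the flat, fixed-size input state s fed to the DQN
```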
C. Action Space
The vehicle's action space is defined by a set of semantic actions that is deemed sufficient for most on-road driving tasks, excluding special cases such as U-turns. The longitudinal movement of the vehicle is controlled by the actions accelerate and decelerate. While executing these actions, the ego vehicle keeps its lane. Lateral movement is generated by the actions lane change left and lane change right respectively. Only a single action is executed at a time, and actions are executed in their entirety; the vehicle is not able to prematurely abort an action. The default action results in no change of either velocity, lateral alignment or heading.
D. Adaptive Desired Behavior through Reward Function
With the aim of generating adaptive behavior, we extend the reward function R(s, a) by a parameterization θ. This parameterization is used in the behavior adaptation function Ω(τ, θ), so that the agent is able to learn different desired behaviors without the need to train a new model for varying parameter values.
Furthermore, the desired driving behavior consists of several individual goals, modeled by separate rewards. We rank these reward functions by three different priorities: collision avoidance has the highest priority, rewards associated with traffic rules are important to a lesser extent, and rewards connected to the driving style are prioritized least.
The overall reward function R(s, a, θ) can be expressed as follows:
R(s, a, θ) =
  R_collision(s, θ)           if s ∈ S_collision
  R_rules(s, θ)               if s ∈ S_rules
  R_driving_style(s, a, θ)    else                            (1)
The subset S_collision ⊂ S consists of all states s describing a collision state of the ego vehicle v_ego and another vehicle v_i. In these states the agent only receives the immediate reward, without any possibility of earning further rewards. Additionally, attempting a lane change to a nonexistent adjacent lane is also treated as a collision.
The state-dependent evaluation of the reward factors facilitates the learning process. As the reward for a state is independent of rewards with lower priority, the eligibility trace is more concise for the agent being trained. For example, driving at the desired velocity does not mitigate the reward for collisions.
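A compact sketch of this prioritized evaluation follows; the predicates and reward components are passed in as arguments, since their names here are illustrative rather than the authors' API:
```python
def prioritized_reward(state, action, theta,
                       in_collision, in_rule_violation,
                       r_collision, r_rules, r_driving_style):
    """Evaluate only the highest-priority reward class that applies (Eq. 1).

    Lower-priority rewards are masked entirely, so e.g. driving at the
    desired velocity cannot mitigate the penalty for a collision.
    """
    if in_collision(state):        # s in S_collision, incl. invalid lane changes
        return r_collision(state, theta)
    if in_rule_violation(state):   # s in S_rules
        return r_rules(state, theta)
    return r_driving_style(state, action, theta)
```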
IV. EXPERIMENTS
A. Framework
While our concept is able to handle data from many preprocessing methods used in autonomous vehicles, we tested the approach with the traffic simulation SUMO [30]. A schematic overview of the framework is depicted in Fig. 5. We use SUMO in our setup as it allows the initialization and execution of various traffic scenarios with adjustable road layout, traffic density and driving behavior of the vehicles. To achieve this, we extend TensorForce [31] with a highly configurable interface to the SUMO environment. TensorForce is a reinforcement learning library based on TensorFlow [32], which enables the deployment of various customizable DRL methods, including DQN. To examine the agent's compliance with traffic rules, it is trained and evaluated on two different traffic scenarios (see Fig. 6). In (a) the agent has the obligation to drive on the rightmost lane and must not pass others from the right, amongst other constraints. In (b) the agent is allowed to accelerate while on the on-ramp and may also overtake vehicles on its left, but it has to leave the on-ramp before it ends.
This setup allows us to deploy agents using various DRL methods, state representations and rewards in multiple traffic scenarios. In our experiments we used a vehicle scope with Λ_behind = 1 and Λ_ahead = Λ_lateral = 2. This allows the agent to always perceive all lanes of a 3-lane highway and increases its potential anticipation.
B. Network
In this work we use the DQN approach introduced by Mnih et al. [15], as it has shown its capability to successfully learn behavior policies for a range of different tasks. While we use the general learning algorithm described in [15], including the usage of experience replay and a secondary target network, our actual network architecture differs from theirs. The network from Mnih et al. was designed for a visual state representation of the environment. In that case, a series of convolutional layers is commonly used to learn a suitable low-dimensional feature set from this kind of high-dimensional sensor input. This set of features is usually further processed in fully-connected network layers.
Since the state representation in this work already consists of selected features, the learning of a low-dimensional feature set using convolutional layers is not necessary. Therefore we use a network with solely fully-connected layers, see Tab. I.
The size of the input layer depends on the number of features in the state representation. On the output layer there is a neuron for each action. The given value for each action is its estimated Q-value.
C. Training
During training the agents are driving on one or more traffic scenarios in SUMO. An agent is trained for a maximum of 2 million timesteps, each generating a transition consisting of the observed state, the selected action, the subsequent state and the received reward. The transitions are stored in the replay memory, which holds up to 500,000 transitions. After reaching a threshold of at least 50,000 transitions in memory, a batch of 32 transitions is randomly selected to update the network's weights every fourth timestep. We discount future rewards by a factor of 0.9 during the weight update. The target network is updated every 50,000th step.
To allow for exploration early on in the training, an ε-greedy policy is followed. With a probability of ε, the action to be executed is selected randomly; otherwise the action with the highest estimated Q-value is chosen. The variable ε is initialized as 1, but decreased linearly over the course of 500,000 timesteps until it reaches a minimum of 0.1, reducing the exploration in favor of exploitation. As optimization method for our DQN we use the RMSProp algorithm [33] with a learning rate of 10^−5 and a decay of 0.95.
The training process is segmented into episodes. If an agent is trained on multiple scenarios, the scenarios alternate after the end of an episode. To ensure the agent experiences a wide range of different scenes, it is started with a randomized departure time, lane, velocity and θ in the selected scenario at the beginning and after every reset of the simulation. In a similar vein, it is important that the agent is able to observe a broad spectrum of situations in the scenario early in the training. Therefore, should the agent reach a collision state s ∈ S_collision, by either colliding with another vehicle or attempting to drive off the road, the current episode is finished with a terminal state. Afterwards, a new episode is started immediately without resetting the simulation or changing the agent's position or velocity. Since we want to avoid learning an implicit goal state at the end of a scenario's course, the simulation is reset if a maximum of 200 timesteps per episode has passed or the course's end has been reached, and the episode ends with a non-terminal state.
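The linear exploration schedule described above can be written compactly; the constants below are taken from the text, while the function names are ours:
```python
import random

def epsilon(step, eps_start=1.0, eps_min=0.1, anneal_steps=500_000):
    """Linearly anneal the exploration rate over the first 500k timesteps."""
    frac = min(step / anneal_steps, 1.0)
    return eps_start + frac * (eps_min - eps_start)

def select_action(q_values, step):
    """Epsilon-greedy: random action with probability epsilon, else argmax Q."""
    if random.random() < epsilon(step):
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```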
D. Scenarios
Experiments are conducted using two different scenarios, see Fig. 6. One is a straight multi-lane highway scenario. The other is a merging scenario on a highway with an on-ramp.
To generate the desired adaptive behavior, parameterized reward functions are defined (see Eq. 1). We base R_rules on German traffic rules such as the obligation to drive on the rightmost lane (r_keepRight), the prohibition of overtaking on the right (r_passRight) as well as keeping a minimum distance to vehicles in front (r_safeDistance). A special case is the acceleration lane, where the agent is allowed to pass on the right and is not required to stay on the rightmost lane; instead the agent is not allowed to enter the acceleration lane (r_notEnter). Similarly, R_driving_style entails driving at a desired velocity (r_velocity), ranging from 80 km/h to 115 km/h on the highway and from 40 km/h to 80 km/h in the merging scenario. The desired velocity in each training episode is defined by θ_v, which is sampled uniformly over the scenario's velocity range. Additionally, R_driving_style aims to avoid unnecessary lane and velocity changes (r_action).
With these constraints in mind, the parameterized reward functions are implemented as follows, to produce the desired behavior.
R_collision(s, θ) = θ_t · r_collision                                                              (2)
R_rules(s, θ) = r_passRight(s, θ_p) + r_notEnter(s, θ_n) + r_safeDistance(s, θ_s) + r_keepRight(s, θ_k)   (3)
R_driving_style(s, a, θ) = r_action(a, θ_a) + r_velocity(s, θ_v)                                   (4)
To enable different velocity preferences, the behavior adaptation function Ω returns the difference between the desired and the actual velocity of v_ego.
V. EVALUATION
During evaluation we trained an agent g_H only on the highway scenario and an agent g_M only on the merging scenario. In order to show the versatility of our approach, we additionally trained an agent g_C on both the highway and the merging scenario (see Tab. II). Due to the nature of our compact semantic state representation, we are able to achieve this without further modifications. The agents are evaluated during and after training by running the respective scenarios 100 times. To assess the capabilities of the trained agents following the concept presented in Section III, we introduce the following metrics.
Collision Rate [%]: The collision rate denotes the average number of collisions over all test runs. In contrast to the training, a run is terminated if the agent collides. As it is such a critical measure, it acts as the most expressive gauge of the agent's performance.
Avg. Distance between Collisions [km]: The average distance travelled between collisions is used to remove the bias of the episode length and the vehicle's speed.
Rule Violations [%]: The relative duration during which the agent is not keeping a safe distance or is overtaking on the right.
Lane Distribution [%]: The lane distribution is an additional weak indicator for the agent's compliance with the traffic rules.
Avg. Speed [m/s]: The average speed of the agent does not only indicate how fast the agent drives, but also displays how accurately the agent matches its desired velocity.
The results of the agents trained on the different scenarios are shown in Tab. II. The agents generally achieve the desired behavior. An example of an overtaking maneuver is presented in Fig. 9. During training, the collision rate of g_H decreases to a decent level (see Fig. 7). Agent g_M takes more training iterations to reduce its collision rate to a reasonable level, as it not only has to avoid other vehicles but also needs to leave the on-ramp. Additionally, g_M successfully learns to accelerate to its desired velocity on the on-ramp, but for higher desired velocities this causes difficulties in leaving the ramp or braking in time in case the middle lane is occupied. This effect increases the collision rate in the latter half of the training process. The relative duration of rule violations by g_H reduces over the course of the training, but stagnates at approximately 2% (see Fig. 8). A potential cause is our strict definition of when an overtaking on the right occurs: the agent almost never performs a full overtaking maneuver from the right, but might drive faster than another vehicle on the left hand side, which already counts towards our metric. For g_M the duration of rule violations is generally shorter, starting low, peaking and then also stagnating similarly to g_H. This is explained by overtaking on the right not being counted on the acceleration lane; the peak emerges as a result of the agent leaving the lane more often at this point.
The lane distribution of g_H (see Tab. II) demonstrates that the agent most often drives on the right lane of the highway, to a lesser extent on the middle lane and only seldom on the left lane. This reflects the desired behavior of adhering to the obligation of driving on the rightmost lane and only using the other lanes for overtaking slower vehicles. In the merging scenario this distribution is less informative, since the task does not provide the same opportunities for lane changes. To measure the speed deviation of the agents, additional test runs with fixed values for the desired velocity were performed. The results are shown in Tab. III. As can be seen, the agents adapt their behavior: an increase in the desired velocity raises the average speed of the agents. In tests with other traffic participants, the average speed is expectedly lower than the desired velocity, as the agents often have to slow down and wait for an opportunity to overtake. Especially in the merging scenario the agent is unable to reach higher velocities due to these circumstances. During runs on an empty highway scenario, the difference between the average and desired velocity diminishes.
Although g_H and g_M outperform it on their individual tasks, g_C achieves satisfactory behavior on both. In particular, it is able to learn task-specific knowledge such as overtaking in the acceleration lane of the on-ramp while not overtaking from the right on the highway.
A video of our agents' behavior is provided online (http://url.fzi.de/behavior-iv2018).
VI. CONCLUSIONS
In this work two main contributions have been presented. First, we introduced a compact semantic state representation that is applicable to a variety of traffic scenarios. Using a relational grid, our representation is independent of road topology, traffic constellation and sensor setup.
Second, we proposed a behavior adaptation function which enables changing desired driving behavior online without the need to retrain the agent. This eliminates the requirement to generate new models for different driving style preferences or other varying parameter values. Agents trained with this approach performed well on different traffic scenarios, i.e. highway driving and highway merging. Due to the design of our state representation and behavior adaptation, we were able to develop a single model applicable to both scenarios. The agent trained on the combined model was able to successfully learn scenario specific behavior.
One of the major goals for future work, which we kept in mind while designing the presented concept, is the transfer from the simulation environment to real-world driving tasks. A possible option is to use the trained networks as a heuristic in MCTS methods, similar to [27]. Alternatively, our approach can be used in combination with model-driven systems to plan or evaluate driving behavior.
To achieve this transfer to real-world application, we will apply our state representation to further traffic scenarios, e.g., intersections. Additionally, we will extend the capabilities of the agents by adopting more advanced reinforcement learning techniques.
| 3,573 |
1809.03214
|
2890375627
|
Making the right decision in traffic is a challenging task that is highly dependent on individual preferences as well as the surrounding environment. Therefore it is hard to model solely based on expert knowledge. In this work we use Deep Reinforcement Learning to learn maneuver decisions based on a compact semantic state representation. This ensures a consistent model of the environment across scenarios as well as a behavior adaptation function, enabling on-line changes of desired behaviors without re-training. The input for the neural network is a simulated object list similar to that of Radar or Lidar sensors, superimposed by a relational semantic scene description. The state as well as the reward are extended by a behavior adaptation function and a parameterization respectively. With little expert knowledge and a set of mid-level actions, it can be seen that the agent is capable of adhering to traffic rules and learns to drive safely in a variety of situations.
|
In recent years, deep reinforcement learning (DRL) has been successfully used to learn policies for various challenges. Silver et al. used DRL in conjunction with supervised learning on human game data to train the policy networks of their program AlphaGo @cite_26 ; @cite_23 and @cite_31 present overviews of RL and DRL respectively. In @cite_7 and @cite_25 , agents achieve superhuman performance in their respective domains solely using a self-play reinforcement learning algorithm which utilizes Monte Carlo Tree Search (MCTS) to accelerate the training. Mnih et al. proposed the deep Q-network (DQN) @cite_14 @cite_15 , which was able to learn policies for a plethora of different Atari 2600 games and reach or surpass human level of performance. The DQN approach offers high generalizability and versatility in tasks with high-dimensional state spaces and has been extended in various works @cite_6 @cite_29 @cite_8 . Especially actor-critic approaches have shown huge success in learning complex policies and are also able to learn behavior policies in domains with a continuous action space @cite_2 @cite_11 .
|
{
"abstract": [
"",
"We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.",
"",
"An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.",
"In recent years there have been many successes of using deep representations in reinforcement learning. Still, many of these applications use conventional architectures, such as convolutional networks, LSTMs, or auto-encoders. In this paper, we present a new neural network architecture for model-free reinforcement learning. Our dueling network represents two separate estimators: one for the state value function and one for the state-dependent action advantage function. The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm. Our results show that this architecture leads to better policy evaluation in the presence of many similar-valued actions. Moreover, the dueling architecture enables our RL agent to outperform the state-of-the-art on the Atari 2600 domain.",
"The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.",
"Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.",
"Reinforcement learning offers to robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors. Conversely, the challenges of robotic problems provide both inspiration, impact, and validation for developments in reinforcement learning. The relationship between disciplines has sufficient promise to be likened to that between physics and mathematics. In this article, we attempt to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots. We highlight both key challenges in robot reinforcement learning as well as notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our paper lies on the choice between model-based and model-free as well as between value-function-based and policy-search methods. By analyzing a simple problem in some detail we demonstrate how reinforcement learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research.",
"We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.",
"",
"The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.",
"In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this is the first scalable trust region natural gradient method for actor-critic methods. It is also a method that learns non-trivial tasks in continuous control as well as discrete control policies directly from raw pixel inputs. We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. With the proposed methods, we are able to achieve higher rewards and a 2- to 3-fold improvement in sample efficiency on average, compared to previous state-of-the-art on-policy actor-critic methods. Code is available at this https URL"
],
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_7",
"@cite_15",
"@cite_8",
"@cite_29",
"@cite_6",
"@cite_23",
"@cite_2",
"@cite_31",
"@cite_25",
"@cite_11"
],
"mid": [
"",
"1757796397",
"",
"2145339207",
"2173564293",
"2952523895",
"2201581102",
"1977655452",
"2260756217",
"",
"2772709170",
"2749928749"
]
}
|
Adaptive Behavior Generation for Autonomous Driving using Deep Reinforcement Learning with Compact Semantic States
|
While sensors are improving at a staggering pace, and actuators as well as control theory are well up to par with the challenging task of autonomous driving, it is yet to be seen how a robot can devise decisions that navigate it safely in a heterogeneous environment that is partially made up of humans who do not always take rational decisions or have known cost functions.
Early approaches for maneuver decisions focused on predefined rules embedded in large state machines, each requiring thoughtful engineering and expert knowledge [1], [2], [3].
Recent work focuses on more complex models with additional domain knowledge to predict and generate maneuver decisions [4]. Some approaches explicitly model the interdependency between the actions of traffic participants [5] as well as address their replanning capabilities [6].
With the large variety of challenges that vehicles with a higher degree of autonomy need to face, the limitations of rule- and model-based approaches devised by human expert knowledge that proved successful in the past become apparent.
At least since the introduction of AlphaZero, which discovered the same game-playing strategies in Chess and Go as humans did in the past, but also learned entirely unknown strategies, it is clear that human expert knowledge is overvalued [7], [8]. Hence, it is only reasonable to apply the same techniques to the task of behavior planning in autonomous driving, relying on data-driven instead of model-based approaches.
Fig. 1: The initial traffic scene is transformed into a compact semantic state representation s and used as input for the reinforcement learning (RL) agent. The agent estimates the action a with the highest return (Q-value) and executes it, e.g., changing lanes. Afterwards a reward r is collected and a new state s′ is reached. The transition (s, a, r, s′) is stored in the agent's replay memory.
The contributions of this work are twofold. First, we employ a compact semantic state representation that is based on the most significant relations between other entities and the ego vehicle. This representation depends neither on the road geometry nor on the number of surrounding vehicles, making it suitable for a variety of traffic scenarios. Second, using parameterization and a behavior adaptation function, we demonstrate the ability to train agents with a changeable desired behavior that is adaptable online without retraining.
The remainder of this work is structured as follows: In Section II we give a brief overview of the research on behavior generation in the automated driving domain and deep reinforcement learning. A detailed description of our approach, methods and framework follows in Section III and Section IV respectively. In Section V we present the evaluation of our trained agents. Finally, we discuss our results in Section VI.
III. APPROACH
We employ a deep reinforcement learning approach to generate adaptive behavior for autonomous driving. A reinforcement learning process is commonly modeled as an MDP [28] (S, A, R, δ, T), where S is the set of states, A the set of actions, R : S × A × S → ℝ the reward function, δ : S × A → S the state transition model and T the set of terminal states. At timestep i an agent in state s ∈ S can choose an action a ∈ A according to a policy π and will progress into the successor state s′, receiving reward r. This is defined as a transition t = (s, a, s′, r).
The aim of reinforcement learning is to maximize the future discounted return G_i = Σ_{n=i}^{∞} γ^{n−i} r_n. A DQN uses Q-Learning [29] to learn Q-values for each action given input state s based on past transitions. The predicted Q-values of the DQN are used to adapt the policy π and thereby change the agent's behavior. A schematic of this process is depicted in Fig. 1.
For the input state representation we adapt the ontology-based concept from [10], focusing on relations with other traffic participants as well as the road topology. We design the state representation to use high-level preprocessed sensory inputs such as object lists provided by common Radar and Lidar sensors, and lane boundaries from visual sensors or map data. To generate the desired behavior, the reward is composed of different factors with varying priorities. In the following, the different aspects are described in more detail.
A. Semantic Entity-Relationship Model
A traffic scene τ is described by a semantic entity-relationship model, consisting of all scene objects and relations between them. We define it as the tuple (E, R), where
• E = {e_0, e_1, ..., e_n}: set of scene objects (entities),
• R = {r_0, r_1, ..., r_m}: set of relations.
The scene objects contain all static and dynamic objects, such as vehicles, pedestrians, lane segments, signs and traffic lights.
In this work we focus on vehicles V ⊂ E, lane segments L ⊂ E and the three relation types vehicle-vehicle, vehicle-lane and lane-lane relations. Using these entities and relations, an entity-relationship representation of a traffic scene can be created as depicted in Fig. 3. Every entity and relation holds several properties or attributes of the scene objects, such as absolute positions or relative velocities. This scene description combines low-level attributes with high-level relational knowledge in a generic way. It is thus applicable to any traffic scene and vehicle sensor setup, making it a beneficial state representation.
But the representation is of varying size and includes more aspects than are relevant for a given driving task. In order to use this representation as the input to a neural network, we transform it into a fixed-size relational grid that includes only the most relevant relations.
B. Relational Grid
We define a relational grid, centered at the ego vehicle v ego ∈ V, see Fig. 2. The rows correspond to the relational lane topology, whereas the columns correspond to the vehicle topology on these lanes.
To define the size of the relational grid, a vehicle scope Λ is introduced that captures the lateral and longitudinal dimensions, defined by the parameters Λ_lateral ∈ N_0 (the number of lanes considered to each side of the ego vehicle) and Λ_ahead, Λ_behind ∈ N_0 (the number of vehicles considered ahead of and behind the ego vehicle per lane). The relational grid ensures a consistent representation of the environment, independent of the road geometry or the number of surrounding vehicles.
Fig. 2: Relational grid centered on v_ego; the surrounding vehicles v_1, ..., v_6 are mapped to grid cells described by lane topology features and the per-vehicle features ∆s_i, ∆ṡ_i, ∆d_i and φ_i.
The resulting input state s ∈ S is depicted in Fig. 4 and fed into a DQN.
C. Action Space
The vehicle's action space is defined by a set of semantic actions that is deemed sufficient for most on-road driving tasks, excluding special cases such as U-turns. The longitudinal movement of the vehicle is controlled by the actions accelerate and decelerate. While executing these actions, the ego vehicle keeps its lane. Lateral movement is generated by the actions lane change left and lane change right respectively. Only a single action is executed at a time, and actions are executed in their entirety; the vehicle is not able to prematurely abort an action. The default action results in no change of either velocity, lateral alignment or heading.
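A sketch of this action set as an enumeration (the identifiers are ours):
```python
from enum import Enum, auto

class Action(Enum):
    DEFAULT = auto()            # no change of velocity, lateral alignment or heading
    ACCELERATE = auto()         # longitudinal, keeps the current lane
    DECELERATE = auto()         # longitudinal, keeps the current lane
    LANE_CHANGE_LEFT = auto()   # lateral, executed in its entirety
    LANE_CHANGE_RIGHT = auto()  # lateral, executed in its entirety
```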
D. Adaptive Desired Behavior through Reward Function
With the aim of generating adaptive behavior, we extend the reward function R(s, a) by a parameterization θ. This parameterization is used in the behavior adaptation function Ω(τ, θ), so that the agent is able to learn different desired behaviors without the need to train a new model for varying parameter values.
Furthermore, the desired driving behavior consists of several individual goals, modeled by separate rewards. We rank these reward functions by three different priorities: collision avoidance has the highest priority, rewards associated with traffic rules are important to a lesser extent, and rewards connected to the driving style are prioritized least.
The overall reward function R(s, a, θ) can be expressed as follows:
R(s, a, θ) =
  R_collision(s, θ)           if s ∈ S_collision
  R_rules(s, θ)               if s ∈ S_rules
  R_driving_style(s, a, θ)    else                            (1)
The subset S_collision ⊂ S consists of all states s describing a collision state of the ego vehicle v_ego and another vehicle v_i. In these states the agent only receives the immediate reward, without any possibility of earning further rewards. Additionally, attempting a lane change to a nonexistent adjacent lane is also treated as a collision.
The state-dependent evaluation of the reward factors facilitates the learning process. As the reward for a state is independent of rewards with lower priority, the eligibility trace is more concise for the agent being trained. For example, driving at the desired velocity does not mitigate the reward for collisions.
IV. EXPERIMENTS
A. Framework
While our concept is able to handle data from many preprocessing methods used in autonomous vehicles, we tested the approach with the traffic simulation SUMO [30]. A schematic overview of the framework is depicted in Fig. 5. We use SUMO in our setup as it allows the initialization and execution of various traffic scenarios with adjustable road layout, traffic density and driving behavior of the vehicles. To achieve this, we extend TensorForce [31] with a highly configurable interface to the SUMO environment. TensorForce is a reinforcement learning library based on TensorFlow [32], which enables the deployment of various customizable DRL methods, including DQN. To examine the agent's compliance with traffic rules, it is trained and evaluated on two different traffic scenarios (see Fig. 6). In (a) the agent has the obligation to drive on the rightmost lane and must not pass others from the right, amongst other constraints. In (b) the agent is allowed to accelerate while on the on-ramp and may also overtake vehicles on its left, but it has to leave the on-ramp before it ends.
This setup allows us to deploy agents using various DRL methods, state representations and rewards in multiple traffic scenarios. In our experiments we used a vehicle scope with Λ_behind = 1 and Λ_ahead = Λ_lateral = 2. This allows the agent to always perceive all lanes of a 3-lane highway and increases its potential anticipation.
B. Network
In this work we use the DQN approach introduced by Mnih et al. [15], as it has shown its capability to successfully learn behavior policies for a range of different tasks. While we use the general learning algorithm described in [15], including the usage of experience replay and a secondary target network, our actual network architecture differs from theirs. The network from Mnih et al. was designed for a visual state representation of the environment. In that case, a series of convolutional layers is commonly used to learn a suitable low-dimensional feature set from this kind of high-dimensional sensor input. This set of features is usually further processed in fully-connected network layers.
Since the state representation in this work already consists of selected features, the learning of a low-dimensional feature set using convolutional layers is not necessary. Therefore we use a network with solely fully-connected layers, see Tab. I.
The size of the input layer depends on the number of features in the state representation. On the output layer there is a neuron for each action. The given value for each action is its estimated Q-value.
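Since the architecture is a plain stack of fully-connected layers, it could be sketched in a few lines of Keras; note that the paper itself uses TensorForce, and the hidden layer sizes below are placeholders, as Tab. I is not reproduced here:
```python
import tensorflow as tf

def build_dqn(n_features, n_actions, hidden=(256, 128)):
    """Fully-connected Q-network: one input per state feature, one linear
    output per action giving its estimated Q-value."""
    model = tf.keras.Sequential([tf.keras.Input(shape=(n_features,))])
    for units in hidden:
        model.add(tf.keras.layers.Dense(units, activation="relu"))
    model.add(tf.keras.layers.Dense(n_actions))   # Q-value estimates
    model.compile(
        optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-5, rho=0.95),
        loss="mse",
    )
    return model
```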
C. Training
During training the agents are driving on one or more traffic scenarios in SUMO. An agent is trained for a maximum of 2 million timesteps, each generating a transition consisting of the observed state, the selected action, the subsequent state and the received reward. The transitions are stored in the replay memory, which holds up to 500,000 transitions. After reaching a threshold of at least 50,000 transitions in memory, a batch of 32 transitions is randomly selected to update the network's weights every fourth timestep. We discount future rewards by a factor of 0.9 during the weight update. The target network is updated every 50,000th step.
To allow for exploration early on in the training, an ε-greedy policy is followed. With a probability of ε, the action to be executed is selected randomly; otherwise the action with the highest estimated Q-value is chosen. The variable ε is initialized as 1, but decreased linearly over the course of 500,000 timesteps until it reaches a minimum of 0.1, reducing the exploration in favor of exploitation. As optimization method for our DQN we use the RMSProp algorithm [33] with a learning rate of 10^−5 and a decay of 0.95.
The training process is segmented into episodes. If an agent is trained on multiple scenarios, the scenarios alternate after the end of an episode. To ensure the agent experiences a wide range of different scenes, it is started with a randomized departure time, lane, velocity and θ in the selected scenario at the beginning and after every reset of the simulation. In a similar vein, it is important that the agent is able to observe a broad spectrum of situations in the scenario early in the training. Therefore, should the agent reach a collision state s ∈ S_collision, by either colliding with another vehicle or attempting to drive off the road, the current episode is finished with a terminal state. Afterwards, a new episode is started immediately without resetting the simulation or changing the agent's position or velocity. Since we want to avoid learning an implicit goal state at the end of a scenario's course, the simulation is reset if a maximum of 200 timesteps per episode has passed or the course's end has been reached, and the episode ends with a non-terminal state.
D. Scenarios
Experiments are conducted using two different scenarios, see Fig. 6. One is a straight multi-lane highway scenario. The other is a merging scenario on a highway with an on-ramp.
To generate the desired adaptive behavior, parameterized reward functions are defined (see Eq. 1). We base R_rules on German traffic rules such as the obligation to drive on the rightmost lane (r_keepRight), the prohibition of overtaking on the right (r_passRight) as well as keeping a minimum distance to vehicles in front (r_safeDistance). A special case is the acceleration lane, where the agent is allowed to pass on the right and is not required to stay on the rightmost lane; instead the agent is not allowed to enter the acceleration lane (r_notEnter). Similarly, R_driving_style entails driving at a desired velocity (r_velocity), ranging from 80 km/h to 115 km/h on the highway and from 40 km/h to 80 km/h in the merging scenario. The desired velocity in each training episode is defined by θ_v, which is sampled uniformly over the scenario's velocity range. Additionally, R_driving_style aims to avoid unnecessary lane and velocity changes (r_action).
With these constraints in mind, the parameterized reward functions are implemented as follows, to produce the desired behavior.
R_collision(s, θ) = θ_t · r_collision                                                              (2)
R_rules(s, θ) = r_passRight(s, θ_p) + r_notEnter(s, θ_n) + r_safeDistance(s, θ_s) + r_keepRight(s, θ_k)   (3)
R_driving_style(s, a, θ) = r_action(a, θ_a) + r_velocity(s, θ_v)                                   (4)
To enable different velocity preferences, the behavior adaptation function Ω returns the difference between the desired and the actual velocity of v_ego.
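A sketch of the driving-style components together with Ω; the weighting and shaping below are assumptions, since the paper only fixes the structure of Eqs. 2-4:
```python
def omega(ego_speed, theta_v):
    """Behavior adaptation: deviation between desired and actual velocity."""
    return theta_v - ego_speed

def r_velocity(ego_speed, theta_v, weight=0.1):
    """Penalize deviation from the desired velocity theta_v (assumed shaping)."""
    return -weight * abs(omega(ego_speed, theta_v))

def r_action(action, theta_a):
    """Penalize unnecessary lane and velocity changes."""
    return -theta_a if action != "default" else 0.0

def r_driving_style(ego_speed, action, theta):
    """Eq. 4: driving-style reward as the sum of its components."""
    return r_action(action, theta["a"]) + r_velocity(ego_speed, theta["v"])
```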
V. EVALUATION
During evaluation we trained an agent g_H only on the highway scenario and an agent g_M only on the merging scenario. In order to show the versatility of our approach, we additionally trained an agent g_C on both the highway and the merging scenario (see Tab. II). Due to the nature of our compact semantic state representation, we are able to achieve this without further modifications. The agents are evaluated during and after training by running the respective scenarios 100 times. To assess the capabilities of the trained agents following the concept presented in Section III, we introduce the following metrics.
Collision Rate [%]: The collision rate denotes the average number of collisions over all test runs. In contrast to the training, a run is terminated if the agent collides. As it is such a critical measure, it acts as the most expressive gauge of the agent's performance.
Avg. Distance between Collisions [km]: The average distance travelled between collisions is used to remove the bias of the episode length and the vehicle's speed.
Rule Violations [%]: The relative duration during which the agent is not keeping a safe distance or is overtaking on the right.
Lane Distribution [%]: The lane distribution is an additional weak indicator for the agent's compliance with the traffic rules.
Avg. Speed [m/s]: The average speed of the agent does not only indicate how fast the agent drives, but also displays how accurately the agent matches its desired velocity.
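For completeness, these metrics could be aggregated from logged test runs as in the following sketch (the log fields are assumptions, not the authors' evaluation code):
```python
def evaluate(runs):
    """Each run is assumed to log: collided (bool), distance_km (float),
    mean_speed (m/s), rule_violation_steps and total_steps (ints)."""
    n = len(runs)
    collisions = sum(r["collided"] for r in runs)
    total_km = sum(r["distance_km"] for r in runs)
    return {
        "collision_rate_pct": 100.0 * collisions / n,
        "avg_km_between_collisions": total_km / max(collisions, 1),
        "rule_violations_pct": 100.0 * sum(r["rule_violation_steps"] for r in runs)
                                     / sum(r["total_steps"] for r in runs),
        "avg_speed_mps": sum(r["mean_speed"] for r in runs) / n,
    }
```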
The results of the agents trained on the different scenarios are shown in Tab. II. The agents generally achieve the desired behavior. An example of an overtaking maneuver is presented in Fig. 9. During training, the collision rate of g_H decreases to a decent level (see Fig. 7). Agent g_M takes more training iterations to reduce its collision rate to a reasonable level, as it not only has to avoid other vehicles but also needs to leave the on-ramp. Additionally, g_M successfully learns to accelerate to its desired velocity on the on-ramp, but for higher desired velocities this causes difficulties in leaving the ramp or braking in time in case the middle lane is occupied. This effect increases the collision rate in the latter half of the training process. The relative duration of rule violations by g_H reduces over the course of the training, but stagnates at approximately 2% (see Fig. 8). A potential cause is our strict definition of when an overtaking on the right occurs: the agent almost never performs a full overtaking maneuver from the right, but might drive faster than another vehicle on the left hand side, which already counts towards our metric. For g_M the duration of rule violations is generally shorter, starting low, peaking and then also stagnating similarly to g_H. This is explained by overtaking on the right not being counted on the acceleration lane; the peak emerges as a result of the agent leaving the lane more often at this point.
The lane distribution of g_H (see Tab. II) demonstrates that the agent most often drives on the right lane of the highway, to a lesser extent on the middle lane and only seldom on the left lane. This reflects the desired behavior of adhering to the obligation of driving on the rightmost lane and only using the other lanes for overtaking slower vehicles. In the merging scenario this distribution is less informative, since the task does not provide the same opportunities for lane changes. To measure the speed deviation of the agents, additional test runs with fixed values for the desired velocity were performed. The results are shown in Tab. III. As can be seen, the agents adapt their behavior: an increase in the desired velocity raises the average speed of the agents. In tests with other traffic participants, the average speed is expectedly lower than the desired velocity, as the agents often have to slow down and wait for an opportunity to overtake. Especially in the merging scenario the agent is unable to reach higher velocities due to these circumstances. During runs on an empty highway scenario, the difference between the average and desired velocity diminishes.
Although g_H and g_M outperform it on their individual tasks, g_C achieves satisfactory behavior on both. In particular, it is able to learn task-specific knowledge such as overtaking in the acceleration lane of the on-ramp while not overtaking from the right on the highway.
A video of our agents' behavior is provided online (http://url.fzi.de/behavior-iv2018).
VI. CONCLUSIONS
In this work two main contributions have been presented. First, we introduced a compact semantic state representation that is applicable to a variety of traffic scenarios. Using a relational grid, our representation is independent of road topology, traffic constellation and sensor setup.
Second, we proposed a behavior adaptation function which enables changing desired driving behavior online without the need to retrain the agent. This eliminates the requirement to generate new models for different driving style preferences or other varying parameter values. Agents trained with this approach performed well on different traffic scenarios, i.e. highway driving and highway merging. Due to the design of our state representation and behavior adaptation, we were able to develop a single model applicable to both scenarios. The agent trained on the combined model was able to successfully learn scenario specific behavior.
One of the major goals for future work, which we kept in mind while designing the presented concept, is the transfer from the simulation environment to real-world driving tasks. A possible option is to use the trained networks as a heuristic in MCTS methods, similar to [27]. Alternatively, our approach can be used in combination with model-driven systems to plan or evaluate driving behavior.
To achieve this transfer to real-world application, we will apply our state representation to further traffic scenarios, e.g., intersections. Additionally, we will extend the capabilities of the agents by adopting more advanced reinforcement learning techniques.
| 3,573 |
1809.03214
|
2890375627
|
Making the right decision in traffic is a challenging task that is highly dependent on individual preferences as well as the surrounding environment. Therefore it is hard to model solely based on expert knowledge. In this work we use Deep Reinforcement Learning to learn maneuver decisions based on a compact semantic state representation. This ensures a consistent model of the environment across scenarios as well as a behavior adaptation function, enabling on-line changes of desired behaviors without re-training. The input for the neural network is a simulated object list similar to that of Radar or Lidar sensors, superimposed by a relational semantic scene description. The state as well as the reward are extended by a behavior adaptation function and a parameterization respectively. With little expert knowledge and a set of mid-level actions, it can be seen that the agent is capable of adhering to traffic rules and learns to drive safely in a variety of situations.
|
In the domain of autonomous driving, DRL has been used to directly control the movements of simulated vehicles to solve tasks like lane-keeping @cite_20 @cite_4 . In these approaches, the agents receive sensor input from the respective simulation environments and are trained to determine a suitable steering angle to keep the vehicle within its driving lane. The focus thereby lies mostly on low-level control.
|
{
"abstract": [
"We present a reinforcement learning approach using Deep Q-Networks to steer a vehicle in a 3D physics simulation. Relying solely on camera image input the approach directly learns steering the vehicle in an end-to-end manner. The system is able to learn human driving behavior without the need of any labeled training data. An action-based reward function is proposed, which is motivated by a potential use in real world reinforcement learning scenarios. Compared to a naive distance-based reward function, it improves the overall driving behavior of the vehicle agent. The agent is even able to reach comparable to human driving performance on a previously unseen track in our simulation environment.",
"Reinforcement learning is considered to be a strong AI paradigm which can be used to teach machines through interaction with the environment and learning from their mistakes. Despite its perceived utility, it has not yet been successfully applied in automotive applications. Motivated by the successful demonstrations of learning of Atari games and Go by Google DeepMind, we propose a framework for autonomous driving using deep reinforcement learning. This is of particular relevance as it is difficult to pose autonomous driving as a supervised learning problem due to strong interactions with the environment including other vehicles, pedestrians and roadworks. As it is a relatively new area of research for autonomous driving, we provide a short overview of deep reinforcement learning and then describe our proposed framework. It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios. It also integrates the recent work on attention models to focus on relevant information, thereby reducing the computational complexity for deployment on embedded hardware. The framework was tested in an open source 3D car racing simulator called TORCS. Our simulation results demonstrate learning of autonomous maneuvering in a scenario of complex road curvatures and simple interaction of other vehicles."
],
"cite_N": [
"@cite_4",
"@cite_20"
],
"mid": [
"2739529772",
"2583993537"
]
}
|
Adaptive Behavior Generation for Autonomous Driving using Deep Reinforcement Learning with Compact Semantic States
|
While sensors are improving at a staggering pace and actuators as well as control theory are well up to par with the challenging task of autonomous driving, it is yet to be seen how a robot can devise decisions that navigate it safely in a heterogeneous environment that is partially made up of humans who do not always take rational decisions or have known cost functions.
Early approaches for maneuver decisions focused on predefined rules embedded in large state machines, each requiring thoughtful engineering and expert knowledge [1], [2], [3].
Recent work focuses on more complex models with additional domain knowledge to predict and generate maneuver decisions [4]. Some approaches explicitly model the interdependency between the actions of traffic participants [5] as well as address their replanning capabilities [6].
With the large variety of challenges that vehicles with a higher degree of autonomy need to face, the limitations of rule-and model-based approaches devised by human expert knowledge that proved successful in the past become apparent.
At least since the introduction of AlphaZero, which discovered the same game-playing strategies as humans did in Chess and Go, but also learned entirely unknown strategies, it is clear that human expert knowledge is overvalued [7], [8]. Hence, it is only reasonable to apply the same techniques to the task of behavior planning in autonomous driving, relying on data-driven instead of model-based approaches.

[Fig. 1: The initial traffic scene is transformed into a compact semantic state representation s and used as input for the reinforcement learning (RL) agent. The agent estimates the action a with the highest return (Q-value) and executes it, e.g., changing lanes. Afterwards a reward r is collected and a new state s' is reached. The transition (s, a, r, s') is stored in the agent's replay memory.]
The contributions of this work are twofold. First, we employ a compact semantic state representation, which is based on the most significant relations between other entities and the ego vehicle. This representation is neither dependent on the road geometry nor the number of surrounding vehicles, suitable for a variety of traffic scenarios. Second, using parameterization and a behavior adaptation function we demonstrate the ability to train agents with a changeable desired behavior, adaptable on-line, not requiring new training.
The remainder of this work is structured as follows: In Section II we give a brief overview of the research on behavior generation in the automated driving domain and deep reinforcement learning. A detailed description of our approach, methods and framework follows in Section III and Section IV respectively. In Section V we present the evaluation of our trained agents. Finally, we discuss our results in Section VI.
III. APPROACH
We employ a deep reinforcement learning approach to generate adaptive behavior for autonomous driving. A reinforcement learning process is commonly modeled as an MDP [28] (S, A, R, δ, T) where S is the set of states, A the set of actions, R : S × A × S → ℝ the reward function, δ : S × A → S the state transition model and T the set of terminal states. At timestep i an agent in state s ∈ S can choose an action a ∈ A according to a policy π and will progress into the successor state s′, receiving reward r. This is defined as a transition t = (s, a, s′, r).
The aim of reinforcement learning is to maximize the future discounted return G_i = Σ_{n=i}^{∞} γ^{n−i} r_n. A DQN uses Q-Learning [29] to learn Q-values for each action given input state s based on past transitions. The predicted Q-values of the DQN are used to adapt the policy π and therefore change the agent's behavior. A schematic of this process is depicted in Fig. 1.
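To make the Q-learning step concrete, the following minimal sketch computes the standard DQN targets y = r + γ · max_a′ Q_target(s′, a′) for a batch of transitions. The `q_target_net` callable, the transition layout and the terminal handling are our illustrative assumptions, not the authors' code.

```python
import numpy as np

def dqn_targets(transitions, q_target_net, gamma=0.9):
    """Compute Q-learning targets y = r + gamma * max_a' Q_target(s', a')
    for a batch of transitions t = (s, a, s', r, terminal)."""
    targets = []
    for s, a, s_next, r, terminal in transitions:
        if terminal:
            y = r  # no future return can be earned after a terminal state
        else:
            y = r + gamma * np.max(q_target_net(s_next))
        targets.append((s, a, y))  # regress Q(s, a) towards y
    return targets
```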
For the input state representation we adapt the ontology-based concept from [10], focusing on relations with other traffic participants as well as the road topology. We design the state representation to use high level preprocessed sensory inputs such as object lists provided by common Radar and Lidar sensors and lane boundaries from visual sensors or map data. To generate the desired behavior, the reward is comprised of different factors with varying priorities. In the following, the different aspects are described in more detail.
A. Semantic Entity-Relationship Model
A traffic scene τ is described by a semantic entity-relationship model, consisting of all scene objects and relations between them. We define it as the tuple (E, R), where
• E = {e_0, e_1, ..., e_n}: the set of scene objects (entities).
• R = {r_0, r_1, ..., r_m}: the set of relations.
The scene objects comprise all static and dynamic objects, such as vehicles, pedestrians, lane segments, signs and traffic lights.
In this work we focus on vehicles V ⊂ E, lane segments L ⊂ E and the three relation types vehicle-vehicle relations, vehicle-lane relations and lane-lane relations. Using these entities and relations, an entity-relationship representation of a traffic scene can be created as depicted in Fig. 3. Every entity and relation holds several properties or attributes of the scene objects, such as absolute positions or relative velocities. This scene description combines low level attributes with high level relational knowledge in a generic way. It is thus applicable to any traffic scene and vehicle sensor setup, making it a beneficial state representation.
But the representation is of varying size and includes more aspects than are relevant for a given driving task. In order to use this representation as the input to a neural network we transform it to a fixed-size relational grid that includes only the most relevant relations.
B. Relational Grid
We define a relational grid, centered at the ego vehicle v ego ∈ V, see Fig. 2. The rows correspond to the relational lane topology, whereas the columns correspond to the vehicle topology on these lanes.
To define the size of the relational grid, a vehicle scope Λ is introduced that captures the lateral and longitudinal dimensions, defined by the following parameters:
• Λ_lateral ∈ ℕ_0: the number of lanes to the left and to the right of the ego vehicle's lane covered by the grid.
• Λ_ahead ∈ ℕ_0 and Λ_behind ∈ ℕ_0: the number of vehicle slots ahead of and behind the ego vehicle on each lane.
The relational grid ensures a consistent representation of the environment, independent of the road geometry or the number of surrounding vehicles.

[Fig. 2: Relational grid centered at the ego vehicle v_ego with surrounding vehicles v_1–v_6; occupied cells hold relational features such as the longitudinal distance ∆s_i, the relative velocity ∆ṡ_i, the lateral alignment ∆d_i and the heading φ_i, together with lane topology features.]
The resulting input state s ∈ S is depicted in Fig. 4 and fed into a DQN.
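As a rough illustration of how such a fixed-size grid could be assembled from an object list, consider the sketch below. The dictionary-based vehicle records, the reduced feature set (longitudinal distance and relative velocity only) and all names are our assumptions, not the authors' implementation.

```python
import numpy as np

def build_relational_grid(ego, vehicles, lam_lateral=2, lam_ahead=2, lam_behind=1):
    """Fill a (lanes x slots x features) grid centered at the ego vehicle.
    Rows index relative lanes in [-lam_lateral, +lam_lateral]; columns index
    vehicle slots per lane (lam_behind behind, lam_ahead ahead).
    Empty cells keep a default value of 0."""
    rows, cols = 2 * lam_lateral + 1, lam_behind + lam_ahead
    grid = np.zeros((rows, cols, 2))
    for d_lane in range(-lam_lateral, lam_lateral + 1):
        lane = [v for v in vehicles if v["lane"] - ego["lane"] == d_lane]
        behind = sorted((v for v in lane if v["s"] <= ego["s"]),
                        key=lambda v: -v["s"])[:lam_behind]
        ahead = sorted((v for v in lane if v["s"] > ego["s"]),
                       key=lambda v: v["s"])[:lam_ahead]
        row = d_lane + lam_lateral
        for col, v in enumerate(behind):
            grid[row, col] = (v["s"] - ego["s"], v["v"] - ego["v"])
        for col, v in enumerate(ahead, start=lam_behind):
            grid[row, col] = (v["s"] - ego["s"], v["v"] - ego["v"])
    return grid.flatten()  # flat feature vector, suitable as DQN input

# e.g.: state = build_relational_grid({"lane": 1, "s": 0.0, "v": 25.0},
#                                     [{"lane": 1, "s": 40.0, "v": 22.0}])
```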
C. Action Space
The vehicle's action space is defined by a set of semantic actions that is deemed sufficient for most on-road driving tasks, excluding special cases such as U-turns. The longitudinal movement of the vehicle is controlled by the actions accelerate and decelerate. While executing these actions, the ego vehicle keeps its lane. Lateral movement is generated by the actions lane change left and lane change right respectively. Only a single action is executed at a time and actions are executed in their entirety; the vehicle is not able to prematurely abort an action. The default action results in no change of either velocity, lateral alignment or heading.
D. Adaptive Desired Behavior through Reward Function
With the aim to generate adaptive behavior we extend the reward function R(s, a) by a parameterization θ. This parameterization is used in the behavior adaptation function Ω(τ, θ), so that the agent is able to learn different desired behaviors without the need to train a new model for varying parameter values.
Furthermore, the desired driving behavior consists of several individual goals, modeled by separate rewards. We rank these reward functions by three different priorities: collision avoidance has the highest priority, rewards associated with traffic rules are important but rank below it, and rewards connected to the driving style have the lowest priority.
The overall reward function R(s, a, θ) can be expressed as follows:

R(s, a, θ) =
    R_collision(s, θ)           if s ∈ S_collision,
    R_rules(s, θ)               if s ∈ S_rules,
    R_driving_style(s, a, θ)    otherwise.    (1)
The subset S collision ⊂ S consists of all states s describing a collision state of the ego vehicle v ego and another vehicle v i . In these states the agent only receives the immediate reward without any possibility to earn any other future rewards. Additionally, attempting a lane change to a nonexistent adjacent lane is also treated as a collision.
The state-dependent evaluation of the reward factors facilitates the learning process. As the reward for a state is independent of rewards with lower priority, the eligibility trace is more concise for the agent being trained. For example, driving at the desired velocity does not mitigate the penalty for collisions.
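A compact sketch of this prioritized evaluation follows; the predicate functions and sub-rewards are hypothetical placeholders we introduce for illustration.

```python
def reward(s, a, theta, in_collision, violates_rule,
           r_collision, r_rules, r_driving_style):
    """Only the highest applicable priority level is paid out, so e.g.
    matching the desired velocity can never offset a collision penalty."""
    if in_collision(s):              # s in S_collision, incl. off-road lane changes
        return theta["t"] * r_collision
    if violates_rule(s):             # s in S_rules
        return r_rules(s, theta)
    return r_driving_style(s, a, theta)
```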
IV. EXPERIMENTS
A. Framework
While our concept is able to handle data from many preprocessing methods used in autonomous vehicles, we tested the approach with the traffic simulation SUMO [30]. A schematic overview of the framework is depicted in Fig. 5. We use SUMO in our setup as it allows the initialization and execution of various traffic scenarios with adjustable road layout, traffic density and driving behavior of the vehicles. To achieve this, we extend TensorForce [31] with a highly configurable interface to the SUMO environment. TensorForce is a reinforcement learning library based on TensorFlow [32], which enables the deployment of various customizable DRL methods, including DQN.

[Fig. 6: To examine the agent's compliance with traffic rules, it is trained and evaluated on two different traffic scenarios. In (a) the agent has the obligation to drive on the right most lane and must not pass others from the right, amongst other constraints. In (b) the agent is allowed to accelerate while on the on-ramp and also might overtake vehicles on its left, but it has to leave the on-ramp before it ends.]
This setup allows us to deploy agents using various DRL methods, state representations and rewards in multiple traffic scenarios. In our experiments we used a vehicle scope with Λ behind = 1 and Λ ahead = Λ lateral = 2. This allows the agent to always perceive all lanes of a 3 lane highway and increases its potential anticipation.
B. Network
In this work we use the DQN approach introduced by Mnih et al. [15], as it has shown its capability to successfully learn behavior policies for a range of different tasks. While we use the general learning algorithm described in [15], including the usage of experience replay and a secondary target network, our actual network architecture differs from theirs. The network from Mnih et al. was designed for a visual state representation of the environment. In that case, a series of convolutional layers is commonly used to learn a suitable low-dimensional feature set from this kind of high-dimensional sensor input. This set of features is usually further processed in fully-connected network layers.
Since the state representation in this work already consists of selected features, the learning of a low-dimensional feature set using convolutional layers is not necessary. Therefore we use a network with solely fully-connected layers, see Tab. I.
The size of the input layer depends on the number of features in the state representation. On the output layer there is a neuron for each action. The given value for each action is its estimated Q-value.
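As a sketch, such a purely fully-connected Q-network could be built as follows; since Tab. I is not reproduced in this text, the hidden layer sizes are placeholders we chose, not the authors' configuration.

```python
import tensorflow as tf

def build_q_network(num_features, num_actions, hidden=(256, 128)):
    """Fully connected DQN: input is the flattened semantic state,
    output is one estimated Q-value per semantic action."""
    inputs = tf.keras.Input(shape=(num_features,))
    x = inputs
    for units in hidden:
        x = tf.keras.layers.Dense(units, activation="relu")(x)
    q_values = tf.keras.layers.Dense(num_actions)(x)  # linear output layer
    return tf.keras.Model(inputs=inputs, outputs=q_values)
```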
C. Training
During training the agents are driving on one or more traffic scenarios in SUMO. An agent is trained for a maximum of 2 million timesteps, each generating a transition consisting of the observed state, the selected action, the subsequent state and the received reward. The transitions are stored in the replay memory, which holds up to 500,000 transitions. After reaching a threshold of at least 50,000 transitions in memory, a batch of 32 transitions is randomly selected to update the network's weights every fourth timestep. We discount future rewards by a factor of 0.9 during the weight update. The target network is updated every 50,000th step.
To allow for exploration early on in the training, an ε-greedy policy is followed. With a probability of ε, the action to be executed is selected randomly; otherwise the action with the highest estimated Q-value is chosen. The variable ε is initialized as 1, but decreased linearly over the course of 500,000 timesteps until it reaches a minimum of 0.1, reducing exploration in favor of exploitation. As optimization method for our DQN we use the RMSProp algorithm [33] with a learning rate of 10^−5 and a decay of 0.95.
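The exploration schedule described above amounts to a simple linear anneal; a minimal sketch (function and variable names are ours):

```python
import random

def epsilon(step, eps_start=1.0, eps_min=0.1, decay_steps=500_000):
    """Linearly annealed exploration rate for the epsilon-greedy policy."""
    frac = min(step / decay_steps, 1.0)
    return eps_start + frac * (eps_min - eps_start)

def select_action(q_values, actions, step):
    if random.random() < epsilon(step):
        return random.choice(actions)         # explore
    return actions[int(q_values.argmax())]    # exploit highest Q-value
```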
The training process is segmented into episodes. If an agent is trained on multiple scenarios, the scenarios alternate after the end of an episode. To ensure the agent experiences a wide range of different scenes, it is started with a randomized departure time, lane, velocity and θ in the selected scenario at the beginning and after every reset of the simulation. In a similar vein, it is important that the agent is able to observe a broad spectrum of situations in the scenario early in the training. Therefore, should the agent reach a collision state s ∈ S_collision by either colliding with another vehicle or attempting to drive off the road, the current episode is finished with a terminal state. Afterwards, a new episode is started immediately without resetting the simulation or changing the agent's position or velocity. Since we want to avoid learning an implicit goal state at the end of a scenario's course, the simulation is reset if a maximum of 200 timesteps per episode has passed or the course's end has been reached, and the episode ends with a non-terminal state.
D. Scenarios
Experiments are conducted using two different scenarios, see Fig. 6. One is a straight multi-lane highway scenario. The other is a merging scenario on a highway with an on-ramp.
To generate the desired adaptive behavior, parameterized reward functions are defined (see Eq. 1). We base R_rules on German traffic rules such as the obligation to drive on the right most lane (r_keepRight), prohibiting overtaking on the right (r_passRight) as well as keeping a minimum distance to vehicles in front (r_safeDistance). A special case is the acceleration lane, where the agent is allowed to pass on the right and is not required to stay on the right most lane. Instead the agent is not allowed to enter the acceleration lane (r_notEnter). Similarly, R_driving_style entails driving at a desired velocity (r_velocity), ranging from 80 km/h to 115 km/h on the highway and from 40 km/h to 80 km/h in the merging scenario. The desired velocity in each training episode is defined by θ_v, which is sampled uniformly over the scenario's velocity range. Additionally, R_driving_style aims to avoid unnecessary lane and velocity changes (r_action).
With these constraints in mind, the parameterized reward functions are implemented as follows, to produce the desired behavior.
R_collision(s, θ) = θ_t · r_collision    (2)

R_rules(s, θ) = r_passRight(s, θ_p) + r_notEnter(s, θ_n) + r_safeDistance(s, θ_s) + r_keepRight(s, θ_k)    (3)

R_driving_style(s, a, θ) = r_action(a, θ_a) + r_velocity(s, θ_v)    (4)
To enable different velocity preferences, the behavior adaptation function Ω returns the difference between the desired and the actual velocity of v ego .
V. EVALUATION
During evaluation we trained an agent g H only on the highway scenario and an agent g M only on the merging scenario. In order to show the versatility of our approach, we additionally trained an agent g C both on the highway as well as the merging scenario (see Tab. II). Due to the nature of our compact semantic state representation we are able to achieve this without further modifications. The agents are evaluated during and after training by running the respective scenarios 100 times. To assess the capabilities of the trained agents using the concept mentioned in Section III, we introduce the following metrics.
Collision Rate [%]: The collision rate denotes the average number of collisions over all test runs. In contrast to the training, a run is terminated if the agent collides. As this is such a critical measure, it acts as the most expressive gauge of the agent's performance.
Avg. Distance between Collisions [km]:
The average distance travelled between collisions is used to remove the bias of the episode length and the vehicle's speed.
Rule Violations [%]: The relative duration during which the agent is not keeping a safe distance or is overtaking on the right.
Lane Distribution [%]: The lane distribution is an additional weak indicator for the agent's compliance with the traffic rules.
Avg. Speed [m/s]: The average speed of the agent does not only indicate how fast the agent drives, but also displays how accurately the agent matches its desired velocity.
The results of the agents trained on the different scenarios are shown in Tab. II. The agents generally achieve the desired behavior. An example of an overtaking maneuver is presented in Fig. 9. During training the collision rate of g H decreases to a decent level (see Fig. 7). Agent g M takes more training iterations to reduce its collision rate to a reasonable level, as it not only has to avoid other vehicles, but also needs to leave the on-ramp. Additionally, g M successfully learns to accelerate to its desired velocity on the on-ramp. For higher desired velocities, however, this causes difficulties in leaving the ramp or in braking in time when the middle lane is occupied. This effect increases the collision rate in the latter half of the training process. The relative duration of rule violations by g H reduces over the course of the training, but stagnates at approximately 2% (see Fig. 8). A potential cause is our strict definition of when an overtaking on the right occurs. The agent almost never performs a full overtaking maneuver from the right, but it might drive faster than another vehicle on its left hand side, which already counts towards our metric. For g M the duration of rule violations is generally shorter, starting low, peaking and then stagnating similarly to g H . This is explained by the fact that overtaking on the right is not counted on the acceleration lane. The peak emerges as a result of the agent leaving the lane more often at this point.
The lane distribution of g H (see Tab. II) demonstrates that the agent most often drives on the right lane of the highway, to a lesser extent on the middle lane and only seldom on the left lane. This reflects the desired behavior of adhering to the obligation of driving on the right most lane and only using the other lanes for overtaking slower vehicles. In the merging scenario this distribution is less informative since the task does not provide the same opportunities for lane changes. To measure the speed deviation of the agents, additional test runs with fixed values for the desired velocity were performed. The results are shown in Tab. III. As can be seen, the agents adapt their behavior, as an increase in the desired velocity raises the average speed of the agents. In tests with other traffic participants, the average speed is expectedly lower than the desired velocity, as the agents often have to slow down and wait for an opportunity to overtake. Especially in the merging scenario the agent is unable to reach higher velocities due to these circumstances. During runs on an empty highway scenario, the difference between the average and desired velocity diminishes.
Although g H and g M outperform it on their individual tasks, g C achieves satisfactory behavior on both. In particular, it is able to learn task-specific knowledge such as overtaking in the acceleration lane of the on-ramp while not overtaking from the right on the highway.
A video of our agents' behavior is provided online. 1
VI. CONCLUSIONS
In this work two main contributions have been presented. First, we introduced a compact semantic state representation that is applicable to a variety of traffic scenarios. Using a relational grid, our representation is independent of road topology, traffic constellation and sensor setup.

1 http://url.fzi.de/behavior-iv2018
Second, we proposed a behavior adaptation function which enables changing desired driving behavior online without the need to retrain the agent. This eliminates the requirement to generate new models for different driving style preferences or other varying parameter values. Agents trained with this approach performed well on different traffic scenarios, i.e. highway driving and highway merging. Due to the design of our state representation and behavior adaptation, we were able to develop a single model applicable to both scenarios. The agent trained on the combined model was able to successfully learn scenario specific behavior.
One of the major goals for future work we kept in mind while designing the presented concept is the transfer from the simulation environment to real world driving tasks. A possible option is to use the trained networks as a heuristic in MCTS methods, similar to [27]. Alternatively, our approach can be used in combination with model-driven systems to plan or evaluate driving behavior.
To achieve this transfer to real-world application, we will apply our state representation to further traffic scenarios, e.g., intersections. Additionally, we will extend the capabilities of the agents by adopting more advanced reinforcement learning techniques.
| 3,573 |
1809.03214
|
2890375627
|
Making the right decision in traffic is a challenging task that is highly dependent on individual preferences as well as the surrounding environment. Therefore it is hard to model solely based on expert knowledge. In this work we use Deep Reinforcement Learning to learn maneuver decisions based on a compact semantic state representation. This ensures a consistent model of the environment across scenarios as well as a behavior adaptation function, enabling on-line changes of desired behaviors without re-training. The input for the neural network is a simulated object list similar to that of Radar or Lidar sensors, superimposed by a relational semantic scene description. The state as well as the reward are extended by a behavior adaptation function and a parameterization respectively. With little expert knowledge and a set of mid-level actions, it can be seen that the agent is capable of adhering to traffic rules and learns to drive safely in a variety of situations.
|
Since it can be problematic to model multi-agent scenarios as a Markov Decision Process (MDP) due to the unpredictable behavior of other agents, one possibility is to decompose the problem into learning a cost function for driving trajectories @cite_12 . To make learning faster and more data efficient, expert knowledge can be incorporated by restricting certain actions during the training process @cite_22 . Additionally, there is the option to handle task and motion planning by learning low-level controls for lateral as well as longitudinal maneuvers from a predefined set and a high-level maneuver policy @cite_0 .
|
{
"abstract": [
"",
"In this paper, we consider the problem of autonomous lane changing for self driving vehicles in a multi-lane, multi-agent setting. We present a framework that demonstrates a more structured and data efficient alternative to end-to-end complete policy learning on problems where the high-level policy is hard to formulate using traditional optimization or rule based methods but well designed low-level controllers are available. Our framework uses deep reinforcement learning solely to obtain a high-level policy for tactical decision making, while still maintaining a tight integration with the low-level controller, thus getting the best of both worlds. We accomplish this with Q-masking, a technique with which we are able to incorporate prior knowledge, constraints, and information from a low-level controller, directly in to the learning process thereby simplifying the reward function and making learning faster and data efficient. We provide preliminary results in a simulator and show our approach to be more efficient than a greedy baseline, and more successful and safer than human driving.",
"Reinforcement learning has steadily improved and outperform human in lots of traditional games since the resurgence of deep neural network. However, these success is not easy to be copied to autonomous driving because the state spaces in real world are extreme complex and action spaces are continuous and fine control is required. Moreover, the autonomous driving vehicles must also keep functional safety under the complex environments. To deal with these challenges, we first adopt the deep deterministic policy gradient (DDPG) algorithm, which has the capacity to handle complex state and action spaces in continuous domain. We then choose The Open Racing Car Simulator (TORCS) as our environment to avoid physical damage. Meanwhile, we select a set of appropriate sensor information from TORCS and design our own rewarder. In order to fit DDPG algorithm to TORCS, we design our network architecture for both actor and critic inside DDPG paradigm. To demonstrate the effectiveness of our model, We evaluate on different modes in TORCS and show both quantitative and qualitative results."
],
"cite_N": [
"@cite_0",
"@cite_22",
"@cite_12"
],
"mid": [
"",
"2787196707",
"2902243259"
]
}
|
Adaptive Behavior Generation for Autonomous Driving using Deep Reinforcement Learning with Compact Semantic States
|
While sensors are improving at a staggering pace and actuators as well as control theory are well up to par with the challenging task of autonomous driving, it is yet to be seen how a robot can devise decisions that navigate it safely in a heterogeneous environment that is partially made up of humans who do not always take rational decisions or have known cost functions.
Early approaches for maneuver decisions focused on predefined rules embedded in large state machines, each requiring thoughtful engineering and expert knowledge [1], [2], [3].
Recent work focuses on more complex models with additional domain knowledge to predict and generate maneuver decisions [4]. Some approaches explicitly model the interdependency between the actions of traffic participants [5] as well as address their replanning capabilities [6].
With the large variety of challenges that vehicles with a higher degree of autonomy need to face, the limitations of rule-and model-based approaches devised by human expert knowledge that proved successful in the past become apparent.
At least since the introduction of AlphaZero, which discovered the same game-playing strategies as humans did in Chess and Go, but also learned entirely unknown strategies, it is clear that human expert knowledge is overvalued [7], [8]. Hence, it is only reasonable to apply the same techniques to the task of behavior planning in autonomous driving, relying on data-driven instead of model-based approaches.

[Fig. 1: The initial traffic scene is transformed into a compact semantic state representation s and used as input for the reinforcement learning (RL) agent. The agent estimates the action a with the highest return (Q-value) and executes it, e.g., changing lanes. Afterwards a reward r is collected and a new state s' is reached. The transition (s, a, r, s') is stored in the agent's replay memory.]
The contributions of this work are twofold. First, we employ a compact semantic state representation, which is based on the most significant relations between other entities and the ego vehicle. This representation is neither dependent on the road geometry nor the number of surrounding vehicles, suitable for a variety of traffic scenarios. Second, using parameterization and a behavior adaptation function we demonstrate the ability to train agents with a changeable desired behavior, adaptable on-line, not requiring new training.
The remainder of this work is structured as follows: In Section II we give a brief overview of the research on behavior generation in the automated driving domain and deep reinforcement learning. A detailed description of our approach, methods and framework follows in Section III and Section IV respectively. In Section V we present the evaluation of our trained agents. Finally, we discuss our results in Section VI.
III. APPROACH
We employ a deep reinforcement learning approach to generate adaptive behavior for autonomous driving. A reinforcement learning process is commonly modeled as an MDP [28] (S, A, R, δ, T) where S is the set of states, A the set of actions, R : S × A × S → ℝ the reward function, δ : S × A → S the state transition model and T the set of terminal states. At timestep i an agent in state s ∈ S can choose an action a ∈ A according to a policy π and will progress into the successor state s′, receiving reward r. This is defined as a transition t = (s, a, s′, r).
The aim of reinforcement learning is to maximize the future discounted return G_i = Σ_{n=i}^{∞} γ^{n−i} r_n. A DQN uses Q-Learning [29] to learn Q-values for each action given input state s based on past transitions. The predicted Q-values of the DQN are used to adapt the policy π and therefore change the agent's behavior. A schematic of this process is depicted in Fig. 1.
For the input state representation we adapt the ontology-based concept from [10], focusing on relations with other traffic participants as well as the road topology. We design the state representation to use high level preprocessed sensory inputs such as object lists provided by common Radar and Lidar sensors and lane boundaries from visual sensors or map data. To generate the desired behavior, the reward is comprised of different factors with varying priorities. In the following, the different aspects are described in more detail.
A. Semantic Entity-Relationship Model
A traffic scene τ is described by a semantic entity-relationship model, consisting of all scene objects and relations between them. We define it as the tuple (E, R), where
• E = {e_0, e_1, ..., e_n}: the set of scene objects (entities).
• R = {r_0, r_1, ..., r_m}: the set of relations.
The scene objects comprise all static and dynamic objects, such as vehicles, pedestrians, lane segments, signs and traffic lights.
In this work we focus on vehicles V ⊂ E, lane segments L ⊂ E and the three relation types vehicle-vehicle relations, vehicle-lane relations and lane-lane relations. Using these entities and relations, an entity-relationship representation of a traffic scene can be created as depicted in Fig. 3. Every entity and relation holds several properties or attributes of the scene objects, such as absolute positions or relative velocities. This scene description combines low level attributes with high level relational knowledge in a generic way. It is thus applicable to any traffic scene and vehicle sensor setup, making it a beneficial state representation.
But the representation is of varying size and includes more aspects than are relevant for a given driving task. In order to use this representation as the input to a neural network we transform it to a fixed-size relational grid that includes only the most relevant relations.
B. Relational Grid
We define a relational grid, centered at the ego vehicle v ego ∈ V, see Fig. 2. The rows correspond to the relational lane topology, whereas the columns correspond to the vehicle topology on these lanes.
To define the size of the relational grid, a vehicle scope Λ is introduced that captures the lateral and longitudinal dimensions, defined by the following parameters:
• Λ_lateral ∈ ℕ_0: the number of lanes to the left and to the right of the ego vehicle's lane covered by the grid.
• Λ_ahead ∈ ℕ_0 and Λ_behind ∈ ℕ_0: the number of vehicle slots ahead of and behind the ego vehicle on each lane.
The relational grid ensures a consistent representation of the environment, independent of the road geometry or the number of surrounding vehicles.

[Fig. 2: Relational grid centered at the ego vehicle v_ego with surrounding vehicles v_1–v_6; occupied cells hold relational features such as the longitudinal distance ∆s_i, the relative velocity ∆ṡ_i, the lateral alignment ∆d_i and the heading φ_i, together with lane topology features.]
The resulting input state s ∈ S is depicted in Fig. 4 and fed into a DQN.
C. Action Space
The vehicle's action space is defined by a set of semantic actions that is deemed sufficient for most on-road driving tasks, excluding special cases such as U-turns. The longitudinal movement of the vehicle is controlled by the actions accelerate and decelerate. While executing these actions, the ego vehicle keeps its lane. Lateral movement is generated by the actions lane change left and lane change right respectively. Only a single action is executed at a time and actions are executed in their entirety; the vehicle is not able to prematurely abort an action. The default action results in no change of either velocity, lateral alignment or heading.
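For illustration, this semantic action set could be written down as a plain enumeration (the naming is ours):

```python
from enum import Enum

class SemanticAction(Enum):
    """Mid-level maneuvers; each is executed in its entirety
    before the agent chooses again."""
    DEFAULT = 0             # keep velocity, lateral alignment and heading
    ACCELERATE = 1          # longitudinal, lane-keeping
    DECELERATE = 2          # longitudinal, lane-keeping
    LANE_CHANGE_LEFT = 3
    LANE_CHANGE_RIGHT = 4
```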
D. Adaptive Desired Behavior through Reward Function
With the aim to generate adaptive behavior we extend the reward function R(s, a) by a parameterization θ. This parameterization is used in the behavior adaptation function Ω(τ, θ), so that the agent is able to learn different desired behaviors without the need to train a new model for varying parameter values.
Furthermore, the desired driving behavior consists of several individual goals, modeled by separate rewards. We rank these reward functions by three different priorities: collision avoidance has the highest priority, rewards associated with traffic rules are important but rank below it, and rewards connected to the driving style have the lowest priority.
The overall reward function R(s, a, θ) can be expressed as follows:

R(s, a, θ) =
    R_collision(s, θ)           if s ∈ S_collision,
    R_rules(s, θ)               if s ∈ S_rules,
    R_driving_style(s, a, θ)    otherwise.    (1)
The subset S collision ⊂ S consists of all states s describing a collision state of the ego vehicle v ego and another vehicle v i . In these states the agent only receives the immediate reward without any possibility to earn any other future rewards. Additionally, attempting a lane change to a nonexistent adjacent lane is also treated as a collision.
The state-dependent evaluation of the reward factors facilitates the learning process. As the reward for a state is independent of rewards with lower priority, the eligibility trace is more concise for the agent being trained. For example, driving at the desired velocity does not mitigate the penalty for collisions.
IV. EXPERIMENTS
A. Framework
While our concept is able to handle data from many preprocessing methods used in autonomous vehicles, we tested the approach with the traffic simulation SUMO [30]. A schematic overview of the framework is depicted in Fig. 5. We use SUMO in our setup as it allows the initialization and execution of various traffic scenarios with adjustable road layout, traffic density and driving behavior of the vehicles. To achieve this, we extend TensorForce [31] with a highly configurable interface to the SUMO environment. TensorForce is a reinforcement learning library based on TensorFlow [32], which enables the deployment of various customizable DRL methods, including DQN.

[Fig. 6: To examine the agent's compliance with traffic rules, it is trained and evaluated on two different traffic scenarios. In (a) the agent has the obligation to drive on the right most lane and must not pass others from the right, amongst other constraints. In (b) the agent is allowed to accelerate while on the on-ramp and also might overtake vehicles on its left, but it has to leave the on-ramp before it ends.]
This setup allows us to deploy agents using various DRL methods, state representations and rewards in multiple traffic scenarios. In our experiments we used a vehicle scope with Λ behind = 1 and Λ ahead = Λ lateral = 2. This allows the agent to always perceive all lanes of a 3 lane highway and increases its potential anticipation.
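A quick sanity check of the resulting grid dimensions (our arithmetic, using a column convention of Λ_behind + Λ_ahead slots per lane):

```python
def grid_shape(lam_lateral, lam_ahead, lam_behind):
    """Rows cover the ego lane plus lam_lateral lanes per side;
    columns cover lam_behind slots behind and lam_ahead slots ahead."""
    return 2 * lam_lateral + 1, lam_behind + lam_ahead

rows, cols = grid_shape(lam_lateral=2, lam_ahead=2, lam_behind=1)
assert (rows, cols) == (5, 3)  # 5 relative lanes cover a 3-lane highway
                               # from any ego lane position
```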
B. Network
In this work we use the DQN approach introduced by Mnih et al. [15], as it has shown its capability to successfully learn behavior policies for a range of different tasks. While we use the general learning algorithm described in [15], including the usage of experience replay and a secondary target network, our actual network architecture differs from theirs. The network from Mnih et al. was designed for a visual state representation of the environment. In that case, a series of convolutional layers is commonly used to learn a suitable low-dimensional feature set from this kind of high-dimensional sensor input. This set of features is usually further processed in fully-connected network layers.
Since the state representation in this work already consists of selected features, the learning of a low-dimensional feature set using convolutional layers is not necessary. Therefore we use a network with solely fully-connected layers, see Tab. I.
The size of the input layer depends on the number of features in the state representation. On the output layer there is a neuron for each action. The given value for each action is its estimated Q-value.
C. Training
During training the agents are driving on one or more traffic scenarios in SUMO. An agent is trained for a maximum of 2 million timesteps, each generating a transition consisting of the observed state, the selected action, the subsequent state and the received reward. The transitions are stored in the replay memory, which holds up to 500,000 transitions. After reaching a threshold of at least 50,000 transitions in memory, a batch of 32 transitions is randomly selected to update the network's weights every fourth timestep. We discount future rewards by a factor of 0.9 during the weight update. The target network is updated every 50,000th step.
To allow for exploration early on in the training, an ε-greedy policy is followed. With a probability of ε, the action to be executed is selected randomly; otherwise the action with the highest estimated Q-value is chosen. The variable ε is initialized as 1, but decreased linearly over the course of 500,000 timesteps until it reaches a minimum of 0.1, reducing exploration in favor of exploitation. As optimization method for our DQN we use the RMSProp algorithm [33] with a learning rate of 10^−5 and a decay of 0.95.
The training process is segmented into episodes. If an agent is trained on multiple scenarios, the scenarios alternate after the end of an episode. To ensure the agent experiences a wide range of different scenes, it is started with a randomized departure time, lane, velocity and θ in the selected scenario at the beginning and after every reset of the simulation. In a similar vein, it is important that the agent is able to observe a broad spectrum of situations in the scenario early in the training. Therefore, should the agent reach a collision state s ∈ S_collision by either colliding with another vehicle or attempting to drive off the road, the current episode is finished with a terminal state. Afterwards, a new episode is started immediately without resetting the simulation or changing the agent's position or velocity. Since we want to avoid learning an implicit goal state at the end of a scenario's course, the simulation is reset if a maximum of 200 timesteps per episode has passed or the course's end has been reached, and the episode ends with a non-terminal state.
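In pseudocode form, the episode handling described above might look like the following sketch; the env/agent interface is an assumption of ours, not the authors' TensorForce setup.

```python
MAX_EPISODE_STEPS = 200

def train(env, agent, total_steps=2_000_000):
    env.reset_randomized()   # random departure time, lane, velocity and theta
    episode_steps = 0
    for step in range(total_steps):
        s = env.observe()
        a = agent.act(s, step)
        s_next, r, collided, course_done = env.step(a)
        agent.memory.add((s, a, s_next, r, collided))
        episode_steps += 1
        if collided:
            episode_steps = 0          # terminal state; new episode, no reset
        elif episode_steps >= MAX_EPISODE_STEPS or course_done:
            env.reset_randomized()     # non-terminal end of episode
            episode_steps = 0
        if len(agent.memory) >= 50_000 and step % 4 == 0:
            agent.update(batch_size=32)      # train on a random replay batch
        if step % 50_000 == 0:
            agent.sync_target_network()
    return agent
```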
D. Scenarios
Experiments are conducted using two different scenarios, see Fig. 6. One is a straight multi-lane highway scenario. The other is a merging scenario on a highway with an on-ramp.
To generate the desired adaptive behavior, parameterized reward functions are defined (see Eq. 1). We base R_rules on German traffic rules such as the obligation to drive on the right most lane (r_keepRight), prohibiting overtaking on the right (r_passRight) as well as keeping a minimum distance to vehicles in front (r_safeDistance). A special case is the acceleration lane, where the agent is allowed to pass on the right and is not required to stay on the right most lane. Instead the agent is not allowed to enter the acceleration lane (r_notEnter). Similarly, R_driving_style entails driving at a desired velocity (r_velocity), ranging from 80 km/h to 115 km/h on the highway and from 40 km/h to 80 km/h in the merging scenario. The desired velocity in each training episode is defined by θ_v, which is sampled uniformly over the scenario's velocity range. Additionally, R_driving_style aims to avoid unnecessary lane and velocity changes (r_action).
With these constraints in mind, the parameterized reward functions are implemented as follows, to produce the desired behavior.
R_collision(s, θ) = θ_t · r_collision    (2)

R_rules(s, θ) = r_passRight(s, θ_p) + r_notEnter(s, θ_n) + r_safeDistance(s, θ_s) + r_keepRight(s, θ_k)    (3)

R_driving_style(s, a, θ) = r_action(a, θ_a) + r_velocity(s, θ_v)    (4)
To enable different velocity preferences, the behavior adaptation function Ω returns the difference between the desired and the actual velocity of v ego .
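A one-line sketch of this behavior adaptation function, with the scene and parameter layout assumed by us:

```python
def omega(scene, theta):
    """Behavior adaptation: expose the deviation from the desired
    velocity theta_v, so one trained model covers all preferences."""
    return theta["v"] - scene["ego_velocity"]
```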
V. EVALUATION
During evaluation we trained an agent g H only on the highway scenario and an agent g M only on the merging scenario. In order to show the versatility of our approach, we additionally trained an agent g C both on the highway as well as the merging scenario (see Tab. II). Due to the nature of our compact semantic state representation we are able to achieve this without further modifications. The agents are evaluated during and after training by running the respective scenarios 100 times. To assess the capabilities of the trained agents using the concept mentioned in Section III, we introduce the following metrics.
Collision Rate [%]: The collision rate denotes the average number of collisions over all test runs. In contrast to the training, a run is terminated if the agent collides. As this is such a critical measure, it acts as the most expressive gauge of the agent's performance.
Avg. Distance between Collisions [km]:
The average distance travelled between collisions is used to remove the bias of the episode length and the vehicle's speed.
Rule Violations [%]: The relative duration during which the agent is not keeping a safe distance or is overtaking on the right.
Lane Distribution [%]: The lane distribution is an additional weak indicator for the agent's compliance with the traffic rules.
Avg. Speed [m/s]: The average speed of the agent does not only indicate how fast the agent drives, but also displays how accurately the agent matches its desired velocity.
The results of the agents trained on the different scenarios are shown in Tab. II. The agents generally achieve the desired behavior. An example of an overtaking maneuver is presented in Fig. 9. During training the collision rate of g H decreases to a decent level (see Fig. 7). Agent g M takes more training iterations to reduce its collision rate to a reasonable level, as it not only has to avoid other vehicles, but also needs to leave the on-ramp. Additionally, g M successfully learns to accelerate to its desired velocity on the on-ramp. For higher desired velocities, however, this causes difficulties in leaving the ramp or in braking in time when the middle lane is occupied. This effect increases the collision rate in the latter half of the training process. The relative duration of rule violations by g H reduces over the course of the training, but stagnates at approximately 2% (see Fig. 8). A potential cause is our strict definition of when an overtaking on the right occurs. The agent almost never performs a full overtaking maneuver from the right, but it might drive faster than another vehicle on its left hand side, which already counts towards our metric. For g M the duration of rule violations is generally shorter, starting low, peaking and then stagnating similarly to g H . This is explained by the fact that overtaking on the right is not counted on the acceleration lane. The peak emerges as a result of the agent leaving the lane more often at this point.
The lane distribution of g H (see Tab. II) demonstrates that the agent most often drives on the right lane of the highway, to a lesser extent on the middle lane and only seldom on the left lane. This reflects the desired behavior of adhering to the obligation of driving on the right most lane and only using the other lanes for overtaking slower vehicles. In the merging scenario this distribution is less informative since the task does not provide the same opportunities for lane changes. To measure the speed deviation of the agents, additional test runs with fixed values for the desired velocity were performed. The results are shown in Tab. III. As can be seen, the agents adapt their behavior, as an increase in the desired velocity raises the average speed of the agents. In tests with other traffic participants, the average speed is expectedly lower than the desired velocity, as the agents often have to slow down and wait for an opportunity to overtake. Especially in the merging scenario the agent is unable to reach higher velocities due to these circumstances. During runs on an empty highway scenario, the difference between the average and desired velocity diminishes.
Although g H and g M outperform it on their individual tasks, g C achieves satisfactory behavior on both. In particular, it is able to learn task-specific knowledge such as overtaking in the acceleration lane of the on-ramp while not overtaking from the right on the highway.
A video of our agents' behavior is provided online. 1
VI. CONCLUSIONS
In this work two main contributions have been presented. First, we introduced a compact semantic state representation that is applicable to a variety of traffic scenarios. Using a relational grid, our representation is independent of road topology, traffic constellation and sensor setup.

1 http://url.fzi.de/behavior-iv2018
Second, we proposed a behavior adaptation function which enables changing desired driving behavior online without the need to retrain the agent. This eliminates the requirement to generate new models for different driving style preferences or other varying parameter values. Agents trained with this approach performed well on different traffic scenarios, i.e. highway driving and highway merging. Due to the design of our state representation and behavior adaptation, we were able to develop a single model applicable to both scenarios. The agent trained on the combined model was able to successfully learn scenario specific behavior.
One of the major goals for future work we kept in mind while designing the presented concept is the transfer from the simulation environment to real world driving tasks. A possible option is to use the trained networks as a heuristic in MCTS methods, similar to [27]. Alternatively, our approach can be used in combination with model-driven systems to plan or evaluate driving behavior.
To achieve this transfer to real-world application, we will apply our state representation to further traffic scenarios, e.g., intersections. Additionally, we will extend the capabilities of the agents by adopting more advanced reinforcement learning techniques.
| 3,573 |
1809.03200
|
2949556825
|
Urban traffic scenarios often require a high degree of cooperation between traffic participants to ensure safety and efficiency. Observing the behavior of others, humans infer whether or not others are cooperating. This work aims to extend the capabilities of automated vehicles, enabling them to cooperate implicitly in heterogeneous environments. Continuous actions allow for arbitrary trajectories and hence are applicable to a much wider class of problems than existing cooperative approaches with discrete action spaces. Based on cooperative modeling of other agents, Monte Carlo Tree Search (MCTS) in conjunction with Decoupled-UCT evaluates the action-values of each agent in a cooperative and decentralized way, respecting the interdependence of actions among traffic participants. The extension to continuous action spaces is addressed by incorporating novel MCTS-specific enhancements for efficient search space exploration. The proposed algorithm is evaluated under different scenarios, showing that the algorithm is able to achieve effective cooperative planning and generate solutions egocentric planning fails to identify.
|
Other approaches are not explicitly cooperative; however, they do capture the interdependencies of actions, as they evaluate the threat resulting from different maneuver combinations and hence predict the future motions of vehicles @cite_4 and are able to generate proactive cooperative driving actions @cite_26 .
|
{
"abstract": [
"This paper presents a novel cooperative-driving prediction and planning framework for dynamic environments based on the methods of game theory. The proposed algorithm can be used for highly automated driving on highways or as a sophisticated prediction module for advanced driver-assistance systems with no need for intervehicle communication. The main contribution of this paper is a model-based interaction-aware motion prediction of all vehicles in a scene. In contrast to other state-of-the-art approaches, the system also models the replanning capabilities of all drivers. With that, the driving strategy is able to capture complex interactions between vehicles, thus planning maneuver sequences over longer time horizons. It also enables an accurate prediction of traffic for the next immediate time step. The prediction model is supported by an interpretation of what other drivers intend to do, how they interact with traffic, and the ongoing observation. As part of the prediction loop, the proposed planning strategy incorporates the expected reactions of all traffic participants, offering cooperative and robust driving decisions. By means of experimental results under simulated highway scenarios, the validity of the proposed concept and its real-time capability is demonstrated.",
"In this work, a framework for motion prediction of vehicles and safety assessment of traffic scenes is presented. The developed framework can be used for driver assistant systems as well as for autonomous driving applications. In order to assess the safety of the future trajectories of the vehicle, these systems require a prediction of the future motion of all traffic participants. As the traffic participants have a mutual influence on each other, the interaction of them is explicitly considered in this framework, which is inspired by an optimization problem. Taking the mutual influence of traffic participants into account, this framework differs from the existing approaches which consider the interaction only insufficiently, suffering reliability in real traffic scenes. For motion prediction, the collision probability of a vehicle performing a certain maneuver, is computed. Based on the safety evaluation and the assumption that drivers avoid collisions, the prediction is realized. Simulation scenarios and real-world results show the functionality."
],
"cite_N": [
"@cite_26",
"@cite_4"
],
"mid": [
"2344985987",
"2134239466"
]
}
|
Decentralized Cooperative Planning for Automated Vehicles with Continuous Monte Carlo Tree Search
|
While the capabilities of automated vehicles are evolving, they still lack an essential component that distinguishes them from human drivers in their behavior: the ability to cooperate (implicitly) with others. Unlike today's automated vehicles, human drivers include the (subtle) actions and intentions of other drivers in their decisions. Thus they are able to demand or offer cooperative behavior even without explicit communication. In recent years many research projects have addressed cooperative driving. Yet, the focus to date has been on explicit cooperation, which requires communication between vehicles or between vehicles and infrastructure ([1], [2], [3]).
In the foreseeable future, not all vehicles will have the necessary technical equipment to enable communication between vehicles and the infrastructure, nor will algorithms be standardized to such an extent that communicated environmental information and behavioral decisions are considered uniformly. Hence, automated vehicles should be able to cooperate with other traffic participants even without communication.
Automated vehicles nowadays conduct non-cooperative planning, neglecting the interdependence in decision-making. In general this leads to sub-optimal plans and in the worst case to situations that can only be mitigated by emergency actions such as braking. If the perception and decision making by the individual road user is taken into account, trajectory sequences can be planned with foresight and safety-critical traffic situations can be prevented.

[Fig. 1: Phases of Monte Carlo Tree Search (selection, expansion, simulation, backpropagation) for a passing maneuver; the selection phase descends the tree by selecting auspicious future states until a state is encountered that has untried actions left. After the expansion of the state, a simulation of subsequent actions is run until the planning horizon is reached. The result is backpropagated to all states along the selected path. Ultimately this process converges to the optimal policy.]
Existing frameworks that integrate prediction and planning to model interdependencies and achieve cooperative decision making ([4], [5], [6], [7]) are restricted to a discrete set of actions. However, the multitude of possible traffic scenarios in urban environments requires a holistic solution without a priori discretization of the action space.
Thus this work develops a cooperative, situation-independent selection of possible actions for each traffic participant, addressing situations with a high degree of interaction between road users, e.g., road constrictions due to parked vehicles or situations that require merging into moving traffic.
The problem of cooperative trajectory planning is modeled as a multi-agent Markov decision process (MDP) with simultaneous decision making by all agents [8]. With the help of Monte Carlo Tree Search (MCTS) the optimal policy is inferred. Monte Carlo Tree Search, a special reinforcement learning method [9], has shown great potential on multiple occasions when facing problems with large branching factors. AlphaGo, the Go software that reached super-human performance, is the most prominent example ([10], [11]). MCTS improves value estimates of actions by conducting model-based simulations until a terminal state is reached and uses backpropagation to update the nodes along the taken action sequence. These improved value estimates are essential to direct the selection and expansion phases towards promising areas of the search space. Browne et al. [12] present a thorough overview of MCTS and its extensions. An example for the domain of automated driving is depicted in Fig. 1.
The resulting algorithm, DeCoC-MCTS, plans decentralized cooperative trajectories sampling continuous action spaces. The problem of decentralized simultaneous decision making is modeled as a matrix game and solved by Decoupled-UCT (a variant of MCTS, [13], [7]), removing dependencies on the decisions of others. In order to cope with the combinatorial explosion resulting from continuous action spaces, we enhance the plain MCTS with semantic move groups, kernel updates, and guided exploration. The resulting algorithm is able to perform cooperative maneuvers in tight spaces that are common in urban environments. This paper is structured as follows: First, a brief overview of research on cooperative automated driving is given in Section II. Section III introduces the terminology and defines the problem formally. The general approach to the problem and enhancements to the plain MCTS are presented in Section IV. Last, DeCoC-MCTS's extensions for continuous action spaces are evaluated and its capabilities are compared to other planning methods.
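To make the Decoupled-UCT selection concrete, the following Python sketch shows how each agent can pick its next action from its own node statistics only, without seeing the other agents' choices; the joint action is simply the tuple of individual picks. This is an illustrative sketch, not the authors' implementation: the statistics layout (`n` visit counts, `q` cumulated values) and the exploration constant are assumptions.

```python
import math

def uct(stats, parent_visits, c=1.41):
    """UCT value of one action, computed from one agent's own statistics."""
    if stats["n"] == 0:
        return float("inf")  # force every action to be visited at least once
    return stats["q"] / stats["n"] + c * math.sqrt(math.log(parent_visits) / stats["n"])

def decoupled_uct_select(per_agent_stats):
    """Each agent maximizes UCT over its own action statistics independently;
    the joint action is the tuple of the individual selections."""
    joint = []
    for agent_stats in per_agent_stats:  # agent_stats: {action: {"n": ..., "q": ...}}
        parent_visits = sum(s["n"] for s in agent_stats.values()) or 1
        joint.append(max(agent_stats, key=lambda a: uct(agent_stats[a], parent_visits)))
    return tuple(joint)
```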
III. PROBLEM STATEMENT
The problem of cooperative trajectory planning is formulated as a decentralized Markov Decision Process (Dec-MDP). Agents independently choose an action in each time step without knowing the decisions taken by others. Each agent collects an immediate reward and the system is transferred to the next state. Being a cooperative multi-agent system, the state transition as well as the reward depend on all agents' actions.
The Dec-MDP is described by the tuple $\langle \Upsilon, S, A, T, R, \gamma \rangle$, presented in [7].
• $\Upsilon$ denotes the set of agents, indexed by $i \in \{1, 2, \ldots, n\}$.
• $S^i$ denotes the state space of an agent; $S = \times S^i$ represents the joint state space of $\Upsilon$.
• $A^i$ denotes the action space of an agent; $A = \times A^i$ represents the joint action space of $\Upsilon$.
• $T : S \times A \times S \to [0, 1]$ is the transition function $P(s' \mid s, a)$, specifying the probability of a transition from state $s$ to state $s'$ given the joint action $a$ chosen by each agent independently.
• $R : S \times A \times S \to \mathbb{R}$ is the reward function, with $r(s, s', a)$ denoting the resulting reward of the joint action $a$.
• $\gamma \in [0, 1]$ denotes a discount factor controlling the influence of future rewards on the current state.
The superscript $i$ is used to indicate that a parameter relates to a specific agent $i$. The joint policy $\Pi = \langle \pi^1, \ldots, \pi^n \rangle$ is a solution to the cooperative decision problem. An individual policy $\pi^i$ for a single agent, i.e., a mapping from the state to the probability of each available action, is given by $\pi^i : S^i \times A^i \to [0, 1]$.
It is the aim of each agent to maximize its expected cumulative reward in the MDP, starting from its current state: $G = \sum_t \gamma^t r(s_t, s_{t+1}, a_t)$, where $t$ denotes the time and $G$ the return, representing the cumulated discounted reward. $V(s)$ is called the state-value function, given by $V^\pi(s) = \mathbb{E}[G \mid s, \pi]$. Similarly, the action-value function $Q(s, a)$ is defined as $Q^\pi(s, a) = \mathbb{E}[G \mid s, a]$, representing the expected return of choosing action $a$ in state $s$.
The optimal policy starting from state $s$ is defined as $\pi^* = \arg\max_\pi V^\pi(s)$. The state-value function is optimal under the optimal policy, $V^* = V^{\pi^*}$; the same is true for the action-value function, $Q^* = Q^{\pi^*}$. The optimal policy is found by maximizing over $Q^*(s, a)$:
$$\pi^*(a \mid s) = \begin{cases} 1 & \text{if } a = \arg\max_{a \in A} Q^*(s, a) \\ 0 & \text{otherwise} \end{cases} \quad (1)$$
The optimal policies can easily be derived once $Q^*$ has been determined. Hence the goal is to learn the optimal action-value function $Q^*(s, a)$ for a given state-action combination.
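As a small illustration of eq. (1), the greedy policy can be read directly off a learned action-value function. The dictionary-based Q-table below is a hypothetical layout chosen purely for the example.

```python
def greedy_policy(q_table, state, actions):
    """Eq. (1): put probability 1 on the action maximizing Q*(s, a)."""
    best = max(actions, key=lambda a: q_table[(state, a)])
    return {a: (1.0 if a == best else 0.0) for a in actions}

# Illustrative usage with made-up values
q_table = {("s0", "keep"): 0.4, ("s0", "left"): 0.7, ("s0", "right"): 0.1}
policy = greedy_policy(q_table, "s0", ["keep", "left", "right"])  # left gets prob. 1
```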
IV. APPROACH
While discrete actions, as opposed to classical trajectory planning, allow planning over longer periods of time [7], their resolution is insufficient to plan detailed maneuvers. Due to the multitude of possible traffic scenarios in urban environments, a solution with heuristic a priori discretization of the action space of road users is not suitable ([2], [5], [6], [7]). The planning of safe maneuvers within such a dynamic environment can only take place if the trajectory planning is equipped with a situation-independent selection of possible actions for each road user. In order to address driving situations with a high degree of interaction between the road users involved, such as road constrictions due to parked vehicles, we employ an MCTS-based approach similar to ([7], [5]) but extend it to continuous action spaces. By extending the method, any desired trajectory sequence can be generated in order to solve complex scenarios through cooperative behavior.
The following subsections first explain the action space and the validation of actions. Based on the actions and the resulting state, the cooperative reward is described next. The last subsections shed some light on the enhancements for continuous action spaces that enable a structured exploration and thus quicker convergence.
A. Action Space
Actions are applied to the vehicle's state, which is given by its position, velocity, and acceleration in longitudinal and lateral direction as well as its heading. An action is defined as a pair of values with a longitudinal velocity change $\Delta v_{\text{longitudinal}}$ and a lateral position change $\Delta y_{\text{lateral}}$. The value pair describes the desired change of state to be achieved during the action duration $\Delta T = t_1 - t_0$. Using this pair of values in combination with the following initial and terminal conditions for the longitudinal as well as lateral direction, quintic polynomials are solved to generate jerk-minimizing trajectories for the respective direction [17].
The initial conditions for the longitudinal as well as lateral position, velocity and acceleration are determined by the current state. The terminal conditions are defined with:
$$\ddot{x}(t_1) = 0 \quad (2)$$
$$\dot{y}(t_1) = 0 \quad (3)$$
$$\ddot{y}(t_1) = 0 \quad (4)$$
The initial and terminal conditions leave three free constraints: $\dot{x}(t_1)$, $y(t_1)$, and $x(t_1)$. The desired velocity change in longitudinal direction $\Delta v_{\text{longitudinal}}$, (5), as well as the lateral offset of the trajectory $\Delta y_{\text{lateral}}$, (6), are parameterized and introduced as dimensions of the action space of each agent.
$$\dot{x}(t_1) = \dot{x}(t_0) + \Delta v_{\text{longitudinal}} \quad (5)$$
$$y(t_1) = y(t_0) + \Delta y_{\text{lateral}} \quad (6)$$
The last free unknown is the distance covered in longitudinal direction, (7). In order to describe a physically feasible maneuver, it is defined as
$$x(t_1) = \frac{\dot{x}(t_0) + \dot{x}(t_1)}{2} \, \Delta T \quad (7)$$
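The boundary conditions (2)-(7) fully determine one quintic polynomial per direction. Below is a minimal sketch of the coefficient solve, assuming a linear system over position, velocity, and acceleration constraints at $t_0$ and $t_1$; the concrete numbers in the usage example are illustrative only.

```python
import numpy as np

def quintic_coefficients(t0, t1, x0, v0, a0, x1, v1, a1):
    """Solve for c in x(t) = c0 + c1*t + ... + c5*t^5 given position,
    velocity, and acceleration at t0 and t1 (jerk-minimizing trajectory)."""
    def rows(t):
        return [
            [1, t, t**2,   t**3,    t**4,    t**5],   # position
            [0, 1, 2*t,  3*t**2,  4*t**3,  5*t**4],   # velocity
            [0, 0, 2,    6*t,    12*t**2, 20*t**3],   # acceleration
        ]
    A = np.array(rows(t0) + rows(t1), dtype=float)
    b = np.array([x0, v0, a0, x1, v1, a1], dtype=float)
    return np.linalg.solve(A, b)

# Longitudinal example: action delta_v over duration dt, terminal acceleration
# zero per eq. (2), distance from the trapezoidal rule of eq. (7).
dt, v_start, delta_v = 2.0, 10.0, 1.5
v_end = v_start + delta_v                  # eq. (5)
x_end = 0.5 * (v_start + v_end) * dt       # eq. (7)
coeffs = quintic_coefficients(0.0, dt, 0.0, v_start, 0.0, x_end, v_end, 0.0)
```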
B. Action Validation
The resulting trajectory needs to be drivable for a front axle-controlled vehicle and should be directly trackable by a trajectory controller. Hence, we need to conduct an action validation, using kinematic as well as physical boundary conditions so that the simulation adheres to the constraints.
The continuity of the curvature, the steering angle, and the vehicle-dependent permissible minimal radii ensure that the resulting trajectories are drivable. In addition, the dynamic limits of power transmission bound the maximum and minimum acceleration of the vehicle.
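A validation step along these lines might look as follows. The concrete limit values are illustrative placeholders, not the paper's parameters, and the inputs are assumed to be arrays sampled along the candidate trajectory.

```python
import numpy as np

def trajectory_feasible(curvature, steering_angle, accel,
                        kappa_max=0.2, delta_max=0.6, a_min=-8.0, a_max=3.0):
    """Check kinematic and dynamic limits on values sampled along a candidate
    trajectory. All limit values here are illustrative placeholders."""
    curvature, steering_angle, accel = map(np.asarray, (curvature, steering_angle, accel))
    if np.any(np.abs(curvature) > kappa_max):       # vehicle-dependent minimal radius
        return False
    if np.any(np.abs(steering_angle) > delta_max):  # steering-angle bound
        return False
    if np.any((accel < a_min) | (accel > a_max)):   # power-transmission limits
        return False
    return True
```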
C. Cooperative Reward Calculation
The immediate individual reward based on the behavior of an agent is calculated using (8). It considers the states reached and the actions taken, as well as a validation reward.
$$r^i = r^i_{\text{state}} + r^i_{\text{action}} + r^i_{\text{validation}} \quad (8)$$
The state reward $r^i_{\text{state}}$ is determined given the divergence between the current and the desired state. A desired state is characterized by a longitudinal velocity $v_{\text{des}}$ and a lane index $k_{\text{des}}$. To ensure that the agent drives in the middle of a lane, deviations $\Delta y$ from the lane's center line inflict penalties.
Actions always result in negative rewards (costs). They create a balance between the goal of minimizing the deviation from the desired state as quickly as possible and the most economical way to achieve this. Currently, $r^i_{\text{action}}$ considers only basic properties such as the longitudinal and lateral acceleration, (9) and (10), as well as lane changes, (11). However, these can easily be extended to capture additional safety, efficiency, and comfort related aspects of the generated trajectories. The order of importance is adjusted with the respective weights.
$$C_{\ddot{x}} = w_{\ddot{x}} \int_{t_0}^{t_1} (\ddot{x}(t))^2 \, dt \quad (9)$$
$$C_{\ddot{y}} = w_{\ddot{y}} \int_{t_0}^{t_1} (\ddot{y}(t))^2 \, dt \quad (10)$$
$$C_{k\pm} = w_{k\pm} \quad (11)$$
The last term is the action validation reward, see (12). It evaluates whether a state and an action are valid, i.e., whether they lie inside the drivable environment and adhere to the kinematic as well as physical constraints, and whether a state-action combination is collision free.
$$r^i_{\text{validation}} = r^i_{\text{invalid state}} + r^i_{\text{invalid action}} + r^i_{\text{collision}} \quad (12)$$
To achieve cooperative behavior, a cooperative reward $r^i_{\text{coop}}$ is defined. The cooperative reward of an agent $i$ is the sum of its own rewards, see (8), and the sum of the rewards of all other agents multiplied by a cooperation factor $\lambda^i$, see (13), ([5], [7]). The cooperation factor determines the agent's willingness to cooperate with other agents (from $\lambda^i = 0$, egoistic, to $\lambda^i = 1$, fully cooperative).
$$r^i_{\text{coop}} = r^i + \lambda^i \sum_{j=0,\, j \neq i}^{n} r^j \quad (13)$$
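Eq. (13) translates directly into code; this sketch assumes the per-agent scalar rewards have already been computed via eq. (8).

```python
def cooperative_reward(rewards, i, coop_factor):
    """Eq. (13): own reward plus the weighted sum of all other agents' rewards.
    coop_factor = 0 is egoistic, coop_factor = 1 fully cooperative."""
    others = sum(r for j, r in enumerate(rewards) if j != i)
    return rewards[i] + coop_factor * others

rewards = [-1.0, -0.2, -0.5]   # illustrative per-agent rewards (eq. 8)
r_coop = cooperative_reward(rewards, i=0, coop_factor=0.5)  # -1.35
```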
D. Progressive Widening
In the basic version of the Monte Carlo Tree Search, agents are modeled with discrete action spaces of constant size. Since each action must be visited at least once when using the UCT algorithm [18], the method cannot simply be applied to continuous action spaces. Using Progressive Unpruning [19] or Progressive Widening [20] the number of discrete actions within the action space can be gradually increased at runtime. A larger number of available actions increases the branching factor of the search tree.
At the beginning, the agent receives an initial, discrete set of available actions in each state. If a state has been visited sufficiently often, it gets progressively widened by adding another action to the action space. Note that progressive widening is conducted on a per-agent basis. The criterion for progressive widening is defined as:
$$N(A(s)) \ge C_{PW} \cdot n(s)^{\alpha_{PW}} \quad (14)$$
The number of possible actions $N(A(s))$ in state $s$ therefore depends directly on the visit count $n(s)$ of the state.
The parameters $C_{PW}$ and $\alpha_{PW} \in [0, 1]$ must be adapted empirically to the respective application. The simplest way to add new actions is to select a random action from the continuous action space. More advanced approaches use the information available in the current state, so that promising areas within the action space can be identified and new actions can be added ([21], [22]).
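A sketch of the widening step is shown below, under the common convention that a state is widened while its action count is below $C_{PW} \cdot n(s)^{\alpha_{PW}}$; the node layout and the sampling ranges for $(\Delta v, \Delta y)$ are assumptions for illustration.

```python
import random

class Node:
    def __init__(self):
        self.visits = 0
        self.actions = []   # discrete set of available (dv, dy) actions

def maybe_widen(node, c_pw=1.0, alpha_pw=0.5):
    """Progressive widening (eq. 14): grow the discrete action set with the
    visit count; the new action is drawn uniformly at random, the simplest
    strategy named in the text."""
    if len(node.actions) < c_pw * max(node.visits, 1) ** alpha_pw:
        dv = random.uniform(-3.0, 3.0)   # illustrative velocity-change range
        dy = random.uniform(-2.0, 2.0)   # illustrative lateral-offset range
        node.actions.append((dv, dy))
```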
E. Semantic Action Grouping
The following section introduces the concept of semantic action grouping. The approach taken is similar to the concepts of ( [23], [24]).
The goal of grouping similar actions is to reduce the computational complexity and increase the robustness of the algorithm. The grouping of actions also provides a natural way to incorporate heuristic application knowledge.
Each action is uniquely assigned to a state-dependent action group using a criterion. During the update phase, the characteristic values of the groups are calculated on the basis of the characteristic values of the group members. In the selection strategy, the best action group according to UCT is first selected using the group statistics before a specific action is identified within the group. The action groups serve as filters for determining relevant areas of the action space. Within the action groups, the information of the individual actions is generalized to the group-specific values, but remains unaffected. By using an unambiguous assignment function from the action space to the group action space it is ensured that the action groups do not overlap.
Semantic action groups describe state-dependent, discrete areas within an agent's action space. These areas divide the action space into nine regions based on the semantic description of the agent's future state, as depicted in Fig. 2.
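The exact nine areas are defined in Fig. 2, which is not reproduced here; the sketch below assumes the natural 3×3 partition of {decelerate, hold, accelerate} × {left, keep, right}, which should be read as a plausible reconstruction rather than the paper's definition.

```python
def semantic_group(dv, dy, v_eps=0.1, y_eps=0.1):
    """Map an action (dv, dy) to one of nine semantic groups. The 3x3
    partition and the dead-band thresholds are assumptions; the paper
    defines the areas via the semantic description of the future state."""
    lon = "accelerate" if dv > v_eps else ("decelerate" if dv < -v_eps else "hold")
    lat = "left" if dy > y_eps else ("right" if dy < -y_eps else "keep")
    return (lon, lat)
```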
F. Similarity Update
As shown in Fig. 3, actions can generate similar ego states independently of the semantic description of their subsequent state and the resulting action group. For example, the difference between the next states of actions I and II is a different lane assignment. Although this describes different semantic states, the lateral position deviates only slightly. In contrast, action III is assigned to the same action group as action I, but differs much more with regard to the lateral deviation. To overcome the restrictions of non-overlapping semantic groups, the values of an agent's actions are generalized to similar actions in the update step, regardless of the action groupings and the resulting boundaries within the action space. During the backpropagation phase of MCTS, the similarity between the current action $a$ and all previously explored actions of the agent at the expanded state is determined using a distance measure. This measure of similarity is then used to weight the result of the current sample and select similar states to update. The similarity measure is calculated as
$$K(a, a') = \exp\left(-\gamma \, \|a - a'\|^2\right) \quad \forall a \in A_{\text{exp}}(s) \quad (15)$$
The radial basis function yields a symmetric similarity measure in the value range $K(a, a') \in [0, 1]$. The different dimensions of the action space can also be weighted differently; since both dimensions are of the same order of magnitude, the same weighting is used for both. Due to the continuous nature of the kernel, the visit count of states is no longer an integer but a floating point number.
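A sketch of the kernel-weighted backpropagation step follows: each previously explored action is updated in proportion to its similarity (eq. 15) with the action just simulated. The statistics layout is an assumption; note how the visit counts become fractional.

```python
import math

def similarity_update(explored, action, value, gamma=1.0):
    """Kernel-weighted backpropagation: every previously explored action of
    the agent is updated in proportion to its similarity (eq. 15) with the
    action just simulated, so visit counts become floats."""
    for a, stats in explored.items():     # stats: {"n": float, "q": float}
        w = math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, action)))
        stats["n"] += w
        stats["q"] += w * value
```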
G. Guided Search
In [21] a heuristic, the so-called Blind Value (BV), is introduced: $n$ randomly drawn actions of the theoretical action space $A$ are evaluated, and the action with the resulting maximum Blind Value, see (16), is added to the locally available action space of the state. Blind Values make use of the information gathered on all actions explored from a given state in the past to select promising areas of exploration.
$$a^* = \arg\max_{a' \in A_{\text{rnd}}} \text{BV}(a', \rho, A_{\text{exp}}) \quad (16)$$
The basic idea is to initially select actions that have a large distance to previous actions, in order to foster exploration and avoid local optima. However, as the visit count increases, actions with a short distance to actions with high UCT values are preferred. To regularize the sum of both criteria, the statistical properties of the previous data points (mean value and standard deviation) are used.
The blind value as a measure of the attractiveness of an action is calculated with
$$\text{BV}(a', \rho, A_{\text{exp}}) = \min_{a \in A_{\text{exp}}} \left[ \text{UCT}(a) + \rho \cdot K(a, a') \right] \quad (17)$$
where
$$\rho = \frac{\sigma_{a \in A_{\text{exp}}}(\text{UCT}(a))}{\sigma_{a' \in A_{\text{rnd}}}(K(0, a'))} \quad (18)$$
• $0$ : center of the action space
• $K(a, a')$ : distance function for two actions
• $A_{\text{exp}}$ : set of discrete, previously explored actions
• $A_{\text{rnd}}$ : set of discrete, randomly selected actions
so that the current statistics of the agent and its action space are taken into account.
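Putting eqs. (16)-(18) together, the following sketch selects the next action to add via Blind Values. The candidate ranges and the kernel bandwidth are illustrative assumptions, and `explored_uct` is assumed to be non-empty.

```python
import math
import random
import statistics

def kernel(a, b, gamma=1.0):
    """Radial basis function of eq. (15), here reused as the distance measure."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def blind_value_action(explored_uct, n_candidates=20):
    """Pick the next action to add (eqs. 16-18). `explored_uct` maps
    already-explored actions to their UCT values and must be non-empty."""
    candidates = [(random.uniform(-3, 3), random.uniform(-2, 2))
                  for _ in range(n_candidates)]            # A_rnd
    center = (0.0, 0.0)                                    # center of action space
    sigma_uct = statistics.pstdev(explored_uct.values())
    sigma_k = statistics.pstdev(kernel(center, c) for c in candidates)
    rho = sigma_uct / sigma_k if sigma_k > 0 else 1.0      # eq. (18)
    def bv(c):                                             # eq. (17)
        return min(u + rho * kernel(a, c) for a, u in explored_uct.items())
    return max(candidates, key=bv)                         # eq. (16)
```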
V. EVALUATION
The evaluation is conducted using a simulation. The developed algorithm is evaluated under two different scenarios, namely bottleneck and merge-in (see Fig. 4). First, the enhancements with regard to continuous actions are evaluated. Second, we demonstrate that our algorithm can achieve effective cooperative planning and generate solutions that egocentric planning fails to identify. A video of the algorithm in execution can be found online¹.
A. Search Space Exploration
To get a better understanding of the effects of the enhancements and their combinations, the exploration of the agent's action space is evaluated for the root node. This is done using the bottleneck scenario, depicted in Fig. 4a.
Shortly after the start of the scenario, the green agent changes to the left for all variants of the enhancements. This point is used to analyze the exploration of the action space as an example. The actions available to the green agent are shown in Fig. 5. The action with the highest action-value is selected for execution (red triangle).
The basic MCTS algorithm explores the search space randomly (Fig. 5a); there is no connection between the distribution of samples and the selected action. In comparison, the samples concentrate in a few regions of the action space (Fig. 5b); other areas of the action space are neglected, and the selected action lies in an unexplored area of the action space.

[Fig. 4: (a) Bottleneck scenario; gray vehicles are parked, the red and green vehicles can only pass at the same time if the red vehicle cooperates by moving to the right. (b) Merge-in scenario; the gray vehicle is parked, the green vehicle can either accelerate, passing first, merge in between the red and the blue vehicle, or decelerate to pass last.]
Using semantic action groups, the result clearly differs (cf. Fig. 5c): the exploration is more structured. The number of samples in the action space is heavily reduced. Within each semantic action group, actions are randomly distributed. The action groups for lane changes to the left are visited more frequently, and one of these action groups entails the final selection.
If semantic action groups are additionally combined with the guided search (see Fig. 5d), samples accumulate in the area of a lane change to the left. The selected action is close to the highest action density.
Adding the similarity update to the previous configuration, a similar behavior can be observed (Fig. 5e). In addition, there is a strong focus on the target region in the form of a high density of possible actions. The selected action is contained in this densely explored area.
The combination of all extensions achieves a structured exploration, requiring fewer samples than the basic algorithm due to its higher efficiency, indicating the potential of the introduced methods.
B. Scenarios
Using the scenarios depicted in Fig. 4, DeCoC-MCTS is evaluated under two different predictions: that other vehicles do not cooperate and keep their velocity (constant velocity assumption), and that other vehicles will cooperate. We conduct a qualitative evaluation based on the velocity deviation of all vehicles in a scene, where lower overall deviations indicate more efficient driving and a higher total utility.
The first is the defensive constant velocity assumption that is common in the vast majority of prediction and planning frameworks. This means that the red and blue vehicles keep their velocity and thus do not cooperate. Fig. 6 and Fig. 7 show the velocity and position graphs for the respective scenarios. The green agent $g_0$ needs to decelerate heavily in order to let the red vehicle pass before it can accelerate again to its desired velocity. A similar behavior with a greater deceleration is required for the merge-in scenario, where the green agent nearly has to come to a full stop.
If the agent assumes that the other agents do not follow the naive constant velocity assumption, but rather take the perception and decision making by all traffic participants into consideration, a more efficient cooperative driving style can be observed. The respective velocity and position graphs are depicted in Fig. 8 and Fig. 9. During the bottleneck scenario, both vehicles decelerate only slightly and pass each other at the road constriction. During the merge-in, the blue vehicle accelerates slightly while the red vehicle decelerates to allow the green vehicle to merge in.
It was shown that the constant velocity assumption can lead to suboptimal solutions, which can be avoided by taking the interdependence of actions into account, reaching superior solutions with regard to efficiency.
VI. CONCLUSIONS
This paper proposes a method to plan decentralized cooperative continuous trajectories for multiple agents to increase the efficiency of automated driving, especially in urban scenarios and tight spaces. Due to continuous action spaces, arbitrary trajectory combinations can be planned, distinguishing this method from other cooperative planning approaches. In order to handle the combinatorial explosion introduced by the continuous action space, we enhanced the MCTS-based algorithm with guided search, semantic action groups, and similarity updates. While these enhancements show promising results with regard to the exploration of the search space, current research focuses on tuning the methods, proving scenario-independent applicability, and conducting in-depth quantitative analyses.
VII. ACKNOWLEDGEMENTS
We wish to thank the German Research Foundation (DFG) for funding the project Cooperatively Interacting Automobiles (CoInCar) within which the research leading to this contribution was conducted. The information as well as views presented in this publication are solely the ones expressed by the authors.
| 4,055 |