aid
string
mid
string
abstract
string
related_work
string
ref_abstract
dict
title
string
text_except_rw
string
total_words
int64
1907.01885
2963359379
As the availability and the inter-connectivity of RDF datasets grow, so does the necessity to understand the structure of the data. Understanding the topology of RDF graphs can guide and inform the development of, e.g., synthetic dataset generators, sampling methods, index structures, or query optimizers. In this work, we propose two resources: (i) a software framework (Resource URL of the framework: https://doi.org/10.5281/zenodo.2109469) able to acquire, prepare, and perform a graph-based analysis on the topology of large RDF graphs, and (ii) results of a graph-based analysis of 280 datasets (Resource URL of the datasets: https://doi.org/10.5281/zenodo.1214433) from the LOD Cloud with values for 28 graph measures computed with the framework. We present a preliminary analysis based on the proposed resources and point out implications for synthetic dataset generators. Finally, we identify a set of measures that can be used to characterize graphs in the Semantic Web.
This category includes studies about the general structure of RDF graphs at the instance, schema, and metadata levels. @cite_11 present the status of RDF datasets in the LOD Cloud in terms of size, linking, vocabulary usage, and metadata. LODStats @cite_3 and the large-scale approach DistLODStats @cite_9 report on statistics about RDF datasets on the web, including the number of triples, RDF terms, and properties per entity, and the usage of vocabularies across datasets. Loupe @cite_18 is an online tool that reports on the usage of classes and properties in RDF datasets. Fernández et al. @cite_6 define measures to describe the relatedness between nodes and edges using subject-object, subject-predicate, and predicate-object ratios. @cite_0 study the distribution of RDF terms, classes, instances, and datatypes to measure the quality of public RDF data. In summary, the study of RDF-specific properties of publicly available RDF datasets has been extensively covered and is currently supported by online services and tools such as LODStats and Loupe. Therefore, in addition to these works, we focus on analyzing graph invariants in RDF datasets.
{ "abstract": [ "The Linked Data initiative continues to grow making more datasets available; however, discovering the type of data contained in a dataset, its structure, and the vocabularies used still remains a challenge hindering the querying and reuse. VoID descriptions provide a starting point but a more detailed analysis is required to unveil the implicit vocabulary usage such as common data patterns. Such analysis helps the selection of datasets, the formulation of effective queries, or the identification of quality issues. Loupe is an online tool for inspecting datasets by looking at both implicit data patterns as well as explicit vocabulary definitions in data. This demo paper presents the dataset inspection capabilities of Loupe.", "Over the last years, the Semantic Web has been growing steadily. Today, we count more than 10,000 datasets made available online following Semantic Web standards. Nevertheless, many applications, such as data integration, search, and interlinking, may not take the full advantage of the data without having a priori statistical information about its internal structure and coverage. In fact, there are already a number of tools, which offer such statistics, providing basic information about RDF datasets and vocabularies. However, those usually show severe deficiencies in terms of performance once the dataset size grows beyond the capabilities of a single machine. In this paper, we introduce a software component for statistical calculations of large RDF datasets, which scales out to clusters of machines. More specifically, we describe the first distributed in-memory approach for computing 32 different statistical criteria for RDF datasets using Apache Spark. The preliminary results show that our distributed approach improves upon a previous centralized approach we compare against and provides approximately linear horizontal scale-up. The criteria are extensible beyond the 32 default criteria, is integrated into the larger SANSA framework and employed in at least four major usage scenarios beyond the SANSA community.", "The publication of semantic web data, commonly represented in Resource Description Framework (RDF), has experienced outstanding growth over the last few years. Data from all fields of knowledge are shared publicly and interconnected in active initiatives such as Linked Open Data. However, despite the increasing availability of applications managing large-scale RDF information such as RDF stores and reasoning tools, little attention has been given to the structural features emerging in real-world RDF data. Our work addresses this issue by proposing specific metrics to characterise RDF data. We specifically focus on revealing the redundancy of each data set, as well as common structural patterns. We evaluate the proposed metrics on several data sets, which cover a wide range of designs and models. Our findings provide a basis for more efficient RDF data structures, indexes and compressors.", "One of the major obstacles for a wider usage of web data is the difficulty to obtain a clear picture of the available datasets. In order to reuse, link, revise or query a dataset published on the Web it is important to know the structure, coverage and coherence of the data. In order to obtain such information we developed LODStats --- a statement-stream-based approach for gathering comprehensive statistics about datasets adhering to the Resource Description Framework (RDF). 
LODStats is based on the declarative description of statistical dataset characteristics. Its main advantages over other approaches are a smaller memory footprint and significantly better performance and scalability. We integrated LODStats with the CKAN dataset metadata registry and obtained a comprehensive picture of the current state of a significant part of the Data Web.", "Over a decade after RDF has been published as a W3C recommendation, publishing open and machine-readable content on the Web has recently received a lot more attention, including from corporate and governmental bodies; notably thanks to the Linked Open Data community, there now exists a rich vein of heterogeneous RDF data published on the Web (the so-called \"Web of Data\") accessible to all. However, RDF publishers are prone to making errors which compromise the effectiveness of applications leveraging the resulting data. In this paper, we discuss common errors in RDF publishing, their consequences for applications, along with possible publisher-oriented approaches to improve the quality of structured, machine-readable and open data on the Web.", "The central idea of Linked Data is that data publishers support applications in discovering and integrating data by complying to a set of best practices in the areas of linking, vocabulary usage, and metadata provision. In 2011, the State of the LOD Cloud report analyzed the adoption of these best practices by linked datasets within different topical domains. The report was based on information that was provided by the dataset publishers themselves via the datahub.io Linked Data catalog. In this paper, we revisit and update the findings of the 2011 State of the LOD Cloud report based on a crawl of the Web of Linked Data conducted in April 2014. We analyze how the adoption of the different best practices has changed and present an overview of the linkage relationships between datasets in the form of an updated LOD cloud diagram, this time not based on information from dataset providers, but on data that can actually be retrieved by a Linked Data crawler. Among others, we find that the number of linked datasets has approximately doubled between 2011 and 2014, that there is increased agreement on common vocabularies for describing certain types of entities, and that provenance and license metadata is still rarely provided by the data sources." ], "cite_N": [ "@cite_18", "@cite_9", "@cite_6", "@cite_3", "@cite_0", "@cite_11" ], "mid": [ "2401813704", "2891796645", "2568135881", "1805882648", "109245773", "140982117" ] }
A Software Framework and Datasets for the Analysis of Graph Measures on RDF Graphs
Since its first version in 2007, the Linked Open Data Cloud (LOD Cloud) has grown by a factor of 100, containing 1,163 datasets in the latest version of August 2017. In various knowledge domains, like Government, Life Sciences, and Natural Science, it has been a prominent example of, and a reference for, the success of interlinking and accessing open datasets that are described following the Resource Description Framework (RDF). RDF provides a graph-based data model where statements are modelled as triples. Furthermore, a set of RDF triples composes a directed and labelled graph, where subjects and objects can be defined as vertices while predicates correspond to edges.

Previous empirical studies on the characteristics of real-world RDF graphs have focused on general properties of the graphs [18], or on analyses at the instance or schema level of such datasets [5,14]. Examples of such statistics are dataset size, property and vocabulary usage, datatypes used, or the average length of string literals. In terms of the topology of RDF graphs, previous works report on network measures mainly focusing on in- and out-degree distributions, reciprocity, and path lengths [2,8,9,21]. Nonetheless, the results of these studies are limited to a small fraction of the RDF datasets currently available. Conducting recurrent, systematic analyses on a large set of RDF graph topologies is beneficial in many research areas. For instance:

Synthetic Dataset Generation. One goal of benchmark suites is to emulate real-world datasets and queries with characteristics from a particular domain or with application-specific characteristics. Beyond parameters like the dataset size, typically interpreted as the number of triples, taking into consideration reliable statistics about the network topology, for instance basic graph and degree-based measures, enables synthetic dataset generators to more appropriately emulate datasets at large scale, contributing to solving the dataset scaling problem [20].

Graph Sampling. At the same time, graph sampling techniques try to find a representative sample of an original dataset with respect to different aspects. Questions that arise in this field are (1) how to obtain a (minimal) representative sample, (2) which sampling method to use, and (3) how to scale up measurements of the sample [13]. Apart from qualitative aspects, like classes, properties, instances, and used vocabularies and ontologies, topological characteristics of the original RDF graph should also be considered. To this end, primitive measures of the graphs, like the maximum in-, out-, and average degree of vertices, reciprocity, density, etc., may be consulted to achieve more accurate results.

Profiling and Evolution. Due to its distributed and dynamic nature, monitoring the development of the LOD Cloud has been a challenge for some time, documented through a range of techniques for profiling datasets [3]. Apart from the number of datasets in the LOD Cloud, the aspects of its linkage (linking into other datasets) and connectivity (linking within one dataset) are of particular interest. From the graph perspective, the creation of new links has an immediate impact on the characteristics of the graph. For this reason, graph measures may help to monitor changes and the impact of changes in datasets.

To support graph-based tasks in the aforementioned areas, we first propose an open source framework which is capable of acquiring RDF datasets and of efficiently preparing and computing graph measures over large RDF graphs.
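To make the triples-to-graph mapping described above concrete, the following minimal Python sketch (toy data and names are ours, not the framework's) treats subjects and objects as vertices, predicates as edge labels, and keeps parallel edges:

    # Illustrative triples; not from the framework's test data.
    triples = [
        ("ex:Roma", "rdfs:label", '"Roma"'),
        ("ex:Roma", "ex:locatedIn", "ex:Italy"),
        ("ex:Roma", "ex:partOf", "ex:Italy"),  # parallel edge: same (s, o) pair
    ]

    vertices = set()
    edges = []  # multiset of (s, o, p); parallel edges are preserved
    for s, p, o in triples:
        vertices.update((s, o))
        edges.append((s, o, p))

    print(len(vertices), len(edges))  # 3 vertices, 3 edges (one parallel pair)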
The framework is built upon state-of-the-art third-party libraries and published under the MIT license. The proposed framework reports on network measures and graph invariants, which can be categorized into five groups: i) basic graph measures, ii) degree-based measures, iii) centrality measures, iv) edge-based measures, and v) descriptive statistical measures. Second, we provide a collection of 280 datasets prepared with the framework, together with a report of 28 graph-based measures per dataset about the graph topology, also computed with our framework. In this work, we present an analysis of graph measures over the aforementioned collection. This analysis involves over 11.3 billion RDF triples from nine knowledge domains, i.e., Cross Domain, Geography, Government, Life Sciences, Linguistics, Media, Publications, Social Networking, and User Generated. Finally, we conduct a correlation analysis among the studied invariants to identify a representative set of graph measures to characterize RDF datasets from a graph perspective. In summary, the contributions of our work are:

- A framework to acquire RDF datasets and compute graph measures (§3).
- Results of a graph-based analysis of 280 RDF datasets from the LOD Cloud. For each dataset, the collection includes 28 graph measures computed with the framework (§4).
- An analysis of graph measures on real-world RDF datasets (§5.1).
- A study to identify graph measures that characterize RDF datasets (§5.2).

A Framework for Graph-based Analysis on RDF Data

This section introduces the first resource published with this paper: the software framework. The main purpose of the framework is to prepare and perform a graph-based analysis on the graph topology of RDF datasets. One of the main challenges for the framework is to scale up to large graphs and to a high number of datasets, i.e., to compute graph measures efficiently over current RDF graphs (hundreds of millions of edges) and in parallel over many datasets at once. The necessary steps to overcome these challenges are described in the following.

Functionality

The framework relies on the following methodology to systematically acquire and analyze RDF datasets. Figure 1 depicts the main steps of the framework's processing pipeline. In the following, we describe steps 1-4 from Figure 1.

Data Acquisition

The framework acquires RDF data dumps available online. Online availability is not mandatory to perform the analysis, as the pipeline also runs with data dumps available offline. For convenience, when operating on many datasets, one may load an initial list of datasets together with their names, available formats, and URLs into a local database (see Section 4.1). Configuration details and database init-scripts can be found in the source code repository. Once acquired, the framework is capable of dealing with the following artifacts:

- Packed data dumps. Various formats are supported, including bz2, 7zip, tar.gz, etc. This is achieved by utilizing the unix tool dtrx.
- Archives, which contain a hierarchy of files and folders, are scanned for files containing RDF data. Other files, e.g., xls or txt, are ignored.
- Files with a serialization other than N-Triples are transformed (if necessary). The list of supported formats is currently limited to the most common ones for RDF data, which are N-Triples, RDF/XML, Turtle, N-Quads, and Notation3. This is achieved by utilizing rapper, as sketched below.
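A minimal sketch of this normalization step, assuming the rapper CLI from the Raptor library is installed; the function name, file handling, and the fixed input format are our own illustration, and the framework's actual invocation may differ:

    import subprocess
    from pathlib import Path

    def to_ntriples(src: Path, input_format: str = "turtle") -> Path:
        """Convert an RDF file to N-Triples by shelling out to rapper."""
        dst = src.with_suffix(".nt")
        with dst.open("w") as out:
            # rapper -i <format> -o ntriples <file> prints N-Triples to stdout
            subprocess.run(
                ["rapper", "-i", input_format, "-o", "ntriples", str(src)],
                stdout=out,
                check=True,
            )
        return dst

    # Usage (illustrative): to_ntriples(Path("dump.ttl"), "turtle")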
Preparation of the Graph Structure

In order to deal with large RDF graphs, our aim is to create a processing pipeline that is as automated and reliable as possible while focusing on performance. The graph structure is created from an edgelist, which is the result of this preparation step. One line in the edgelist constitutes one edge in the graph, i.e., a relation between a pair of vertices: the subject s and object o of an RDF triple. In addition, the line contains the predicate p of the RDF triple, so that it is stored as an attribute of the edge. This attribute can be accessed during graph analysis and processing. To ease the creation of this edgelist with edge attributes, we utilized the N-Triples format; thus, a triple s p o becomes s o p in the edgelist. By this means, the framework is able to prepare several datasets in parallel.

In order to reduce the usage of hard-disk space and also of main memory during the creation of the graph structure, we make use of an efficient state-of-the-art non-cryptographic hashing function to encode the actual values of the RDF triples. For example, the RDF triple

<http://data.linkedopendata.it/musei/resource/Roma> <http://www.w3.org/2000/01/rdf-schema#label> "Roma" .

is turned into the hashed edgelist representation

43f2f4f2e41ae099 c9643559faeed68e 02325f53aeba2f02

Besides the fact that this hashing strategy can reduce space by a factor of up to 12, it has the advantage, compared to a simple integer representation, that it facilitates the comparison between edgelists of different RDF datasets. For example, one could examine which resource URIs are the most frequently used across all datasets. The framework provides a script to de-reference hashes, in order to find, for instance, the resource URI of the vertex with maximum degree.
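A compact sketch of this hashing step. The text does not name the hash function here; xxHash is our assumption, chosen only because its 64-bit digests match the 16-character hex hashes in the example above, and whether URI delimiters are included in the hashed string is likewise an assumption:

    import xxhash  # assumed choice of non-cryptographic 64-bit hash

    def hash_term(term: str) -> str:
        # 64-bit digest -> 16 hex characters, as in the example above
        return xxhash.xxh64(term).hexdigest()

    def triple_to_edge(s: str, p: str, o: str) -> str:
        # N-Triples order s p o becomes s o p in the edgelist,
        # so the predicate can be attached as an edge attribute
        return f"{hash_term(s)} {hash_term(o)} {hash_term(p)}"

    print(triple_to_edge(
        "<http://data.linkedopendata.it/musei/resource/Roma>",
        "<http://www.w3.org/2000/01/rdf-schema#label>",
        '"Roma"',
    ))  # three 16-hex-char columns: <s-hash> <o-hash> <p-hash>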
Graph Creation

As the graph analysis library we used graph-tool, an efficient library for statistical analysis of graphs. In graph-tool, core data structures and algorithms are implemented in C++/C, while the library itself can be used from Python. graph-tool comes with many pre-defined implementations for graph analysis, e.g., degree distributions, as well as more advanced algorithms such as PageRank or the clustering coefficient. Further, some values may be stored as attributes of vertices or edges in the graph structure. The library's internal graph structure may be serialized as a compressed binary object for future re-use. It can be reloaded by graph-tool with much higher performance than the original edgelist. Our framework instantiates the graph from the prepared edgelist or binary representation and operates on the graph object provided by the graph-tool library. As with dataset preparation, the framework can handle multiple computations of graph measures in parallel.

Graph Measures

In this section, we present the statistical measures computed by the framework, grouped into five dimensions: basic graph measures, degree-based measures, centrality measures, edge-based measures, and descriptive statistical measures. The computation of some measures is carried out with graph-tool (e.g., PageRank), while others are computed by our framework (e.g., degree of centralization). In the following, we introduce the graph notation used throughout the paper. A graph G is a pair of finite sets (V, E), with V denoting the set of all vertices (RDF subject and object resources). E is a multiset of (labelled) edges in the graph G, since in RDF a pair of subject and object resources may be described with more than one predicate. E.g., in the graph { s p1 o. s p2 o }, E has two pairs of vertices, i.e., E = {(s, o)_1, (s, o)_2 | s, o ∈ V}. RDF predicates are considered additional edge labels, which may also occur as individual vertices in the same graph G. Newman [15] presents a more detailed introduction to networks and structural network analysis.

Basic Graph Measures

We report on the total number of vertices |V| = n and the number of edges |E| = m of a graph. Some works in the literature refer to these values as size and volume, respectively. The number of vertices and edges usually varies drastically across knowledge domains. By their nature, RDF graphs contain a fraction of edges that share the same pair of source and target vertices (as in the example above). In our work, m_p represents the number of parallel edges, i.e., m_p = |{e ∈ E | count(e, E) > 1}|, with count(e, E) being a function that returns the number of times e is contained in E. Based on this measure, we also compute the total number of edges without counting parallel edges, denoted m_u. It is computed by subtracting m_p from the total number of edges m, i.e., m_u = m − m_p.

Degree-based Measures

The degree of a vertex v ∈ V, denoted d(v), corresponds to the total number of incoming and outgoing edges of v, i.e., d(v) = |{(u, v) ∈ E or (v, u) ∈ E | u ∈ V}|. For directed graphs, as is the case for RDF datasets, it is common to distinguish between in- and out-degree, i.e., d_in(v) = |{(u, v) ∈ E | u ∈ V}| and d_out(v) = |{(v, u) ∈ E | u ∈ V}|, respectively. In social network analysis, vertices with a high out-degree are said to be "influential", whereas vertices with a high in-degree are called "prestigious". To identify these vertices in RDF graphs, we compute the maximum total-, in-, and out-degree of the graph's vertices, i.e., d_max = max d(v), d_max,in = max d_in(v), and d_max,out = max d_out(v), over all v ∈ V, respectively. In addition, we compute the graph's average total-, in-, and out-degree, denoted z, z_in, and z_out, respectively. These measures can be important in research on RDF data management, for instance, where the (average) degree of a vertex (database table record) has a significant impact on query evaluation, since queries on dense graphs can be more costly in terms of execution time [17]. Another degree-based measure supported by the framework is the h-index, known from citation networks [11]. It is an indicator for the importance of a vertex, similar to a centrality measure (see Section 3.2). A value of h means that there are h vertices whose degree is greater than or equal to h. A high value of a graph's h-index could indicate a "dense" graph whose vertices are more "prestigious". We compute this network measure for the directed graph (using only the in-degree of vertices), denoted h_d, and for the undirected graph (using in- and out-degree of vertices), denoted h_u.
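The graph h-index defined above can be computed directly from a degree sequence. The following sketch is our own illustration, not the framework's implementation:

    def graph_h_index(degrees: list[int]) -> int:
        """Largest h such that at least h vertices have degree >= h."""
        degrees = sorted(degrees, reverse=True)
        h = 0
        for i, d in enumerate(degrees, start=1):
            if d >= i:
                h = i  # the i highest-degree vertices all have degree >= i
            else:
                break
        return h

    # h_d uses only in-degrees; h_u uses total (in + out) degrees.
    in_degrees = [5, 4, 4, 2, 1, 0]
    print(graph_h_index(in_degrees))  # 3: three vertices have degree >= 3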
Centrality Measures

In social network analysis, the concept of point centrality is used to express the importance of nodes in a network. There are many interpretations of the term "importance", and correspondingly many centrality measures [15]. Comparing centrality measures with the fill p shows that the higher the density of the graph, the higher the centrality values of its vertices. Point centrality uses the degree of a vertex, d(v). To indicate that it is a centrality measure, the literature sometimes normalizes this value by the total number of vertices. We compute the maximum value of this measure, denoted C_D,max = d_max. Another centrality measure we compute is PageRank [16]. For each RDF graph, we identified the vertex with the highest PageRank value, denoted PR_max. Besides point centrality, there is also the measure of graph centralization [10], known from social network analysis. This measure may also be seen as an indicator for the type of the graph, in that it expresses the degree of inequality and concentration of vertices, as found in a perfect star-shaped graph, which is maximally centralized and unequal with regard to its degree distribution. The centralization of a graph with regard to the degree is defined as:

C_D = Σ_{v ∈ V} (d_max − d(v)) / ((|V| − 1) · (|V| − 2))    (1)

where C_D denotes the graph centralization measure using degree [10]. In contrast to social networks, RDF graphs usually contain many parallel edges between vertices (see the next subsection). Thus, for this measure to make sense, we used the number of unique edges in the graph, m_u.

Edge-based Measures

We compute the "density" or "connectance" of a graph, called fill and denoted p. It can also be interpreted as the probability that an edge is present between two randomly chosen vertices. The density is computed as the ratio of all edges to the total number of all possible edges. We use the formula for a directed graph with possible loops, in accordance with the definition of RDF graphs, using m and m_u, i.e., p = m / n² and p_u = m_u / n². Further, we analyze the fraction of bidirectional connections between vertices in the graph, i.e., pairs of vertices forward-connected by some edge which are also backward-connected by some other edge. The value of reciprocity, denoted y, is expressed as a percentage, i.e., y = m_bi / m, with m_bi = |{(u, v) ∈ E | ∃(v, u) ∈ E}|. A high value means that many connections between vertices are bidirectional; this value is expected to be high in citation or social networks. Another important group of measures described by the graph topology is related to paths. A path is a sequence of edges one can follow between two vertices. As there can be more than one path, the diameter is defined as the longest shortest path between two vertices of the network [15], denoted δ. This is a valuable measure when storing an RDF dataset in a relational database, as it affects join cardinality estimations, depending on the type of schema implementation for the graph set. The diameter is usually very time-consuming to compute, since all possible paths have to be considered. Thus, we used the pseudo-diameter algorithm to estimate this value for our datasets.
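The edge-based measures above follow directly from the edge multiset. The sketch below is our own literal rendering of the stated formulas (for C_D, the text notes that degrees should be taken over the unique edges, m_u); the framework computes these with graph-tool instead:

    from collections import Counter

    def edge_measures(n, edges, degrees):
        """Edge-based measures from an edge multiset of (s, o) pairs."""
        m = len(edges)
        counts = Counter(edges)                         # multiplicity per (s, o)
        m_p = sum(c for c in counts.values() if c > 1)  # parallel edges, per the stated formula
        m_u = m - m_p                                   # edges without parallel ones
        p, p_u = m / n**2, m_u / n**2                   # fill, directed with loops
        m_bi = sum(1 for (u, v) in edges if (v, u) in counts)
        y = m_bi / m                                    # reciprocity
        d_max = max(degrees.values())
        c_d = sum(d_max - d for d in degrees.values()) / ((n - 1) * (n - 2))
        return {"m_p": m_p, "m_u": m_u, "p": p, "p_u": p_u, "y": y, "C_D": c_d}

    # Illustrative call: 3 vertices, one duplicated (s, o) pair
    print(edge_measures(3, [("a", "b"), ("a", "b"), ("b", "c")],
                        {"a": 2, "b": 3, "c": 1}))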
Descriptive Statistical Measures

Descriptive statistical measures are important to describe distributions of a set of values, in our scenario, values of graph measures. In statistics, it is common to compute the variance σ² and the standard deviation σ in order to express the degree of dispersion of a distribution. We do this for the in- and out-degree distributions of the graphs, denoted σ²_in, σ²_out and σ_in, σ_out, respectively. Furthermore, the coefficient of variation cv is consulted to have a comparable measure for distributions with different mean values. cv_in and cv_out are obtained by dividing the corresponding standard deviation σ_in or σ_out by the mean z_in or z_out, respectively, multiplied by 100. cv can also be utilized to analyze the type of a distribution with regard to a set of values. For example, a low value of cv_out means a constant influence of vertices in the graph (a homogeneous group), whereas a high value of cv_in means a high prominence of some vertices in the graph (a heterogeneous group).

Further, the type of degree distribution is an often-considered characteristic of graphs. Some domains and datasets exhibit degree distributions that follow a power-law function, which means that the number of vertices with degree k behaves proportionally to k^−α, for some α ∈ R. Such networks are called scale-free. The literature has found that values in the range of 2 < α < 3 are typical for many real-world networks [15]. This scale-free behaviour also applies to some datasets and measures of RDF datasets [6,8]. However, reasoning about whether a distribution follows a power law can be technically challenging [1], and computing an exponent α that falls into a certain range of values is not sufficient. We compute the exponent for the total- and in-degree distributions [1], denoted α and α_in, respectively. In addition, to support the analysis of power-law distributions, the framework produces plots for both distributions; a power-law distribution appears as a line in a log-log plot. Determining the function that fits the distribution may be of high value for algorithms, in order to estimate the selectivity of vertices and attributes in graphs. The structure and size of synthetically created datasets, for instance, can be controlled with these measures. Also, a clear power-law distribution allows for high compression rates of RDF datasets [8].
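The cited fitting methodology [1] is implemented by the powerlaw Python package; whether the framework uses this exact package is our assumption, and the degree sequence below is illustrative only:

    import powerlaw  # implements the Clauset et al. fitting approach [1]

    # Illustrative degree sequence; in practice, take it from the graph object.
    total_degrees = [1, 1, 1, 1, 2, 2, 2, 3, 3, 5, 8, 13, 34, 55]

    fit = powerlaw.Fit(total_degrees, discrete=True)  # degrees are integers
    print(fit.power_law.alpha, fit.power_law.xmin)    # exponent alpha and d_min

    # As noted above, alpha alone is not sufficient: compare the power law
    # against an alternative heavy-tailed candidate before claiming scale-freeness.
    R, p_value = fit.distribution_compare("power_law", "lognormal")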
Availability, Sustainability and Maintenance

The software framework is published under the MIT license on GitHub. The repository contains all code and comprehensive documentation to install the framework, prepare an RDF dataset, and run the analysis. The main part of the code implements most of the measures as an extendable list of Python functions. Future features and bugfixes will be published under a minor or bugfix release, v0.x.x, respectively. The source code is frequently maintained and debugged, since it is actively used in other research projects at our institute (see Section 6). It is citable via a registered DOI obtained from Zenodo. Both web services, GitHub and Zenodo, provide search interfaces, which also makes the code findable on the web.

RDF Datasets for the Analysis of Graph Measures

We conducted a systematic graph-based analysis of a large group of datasets which were part of the last LOD Cloud 2017, as a case study for the framework introduced in the previous Section 3. The results of the graph-based analysis, with 28 graph-based measures, are the second resource published with this paper. To facilitate browsing of the data, we provide a website. It contains all 280 datasets that were analyzed, grouped by topics (as in the LOD Cloud), together with links (a) to the original metadata obtained from DataHub, and (b) to a downloadable version of the serialized graph structure used for the analysis. This section describes the data acquisition process (cf. Sections 4.1 and 4.2) and how the datasets and the results of the analysis can be accessed (cf. Section 4.3). Table 1 summarizes the number of processed datasets and their sizes. Of the total number of 1,163 potentially available datasets in the last LOD Cloud 2017, 280 datasets were in fact analyzed. This was mainly due to two requirements: (i) RDF media type statements that were actually correct for the datasets, and (ii) the availability of data dumps provided by the services. In order not to stress SPARQL endpoints with the transfer of large amounts of data, only datasets that provide downloadable dumps were considered in this experiment.

Data Acquisition

To dereference RDF datasets, we relied on the metadata (the so-called datapackage) available at DataHub, which specifies URLs and media types for the corresponding data provider of a dataset. We obtained the metadata for all datasets (step A in Figure 1) and manually mapped the media types from the datapackage to the corresponding official media type statements given in the specifications. For instance, rdf, xml/rdf, or rdf/xml was mapped to application/rdf+xml, and similar. Other media type statements, like html json ld ttl rdf xml or rdf xml turtle html, were ignored, since they are ambiguous. This way, we obtained the URLs of 890 RDF datasets (step B in Figure 1). After that, we checked whether the dumps were available by performing HTTP HEAD requests on the URLs (see the sketch below). At the time of the experiment, this returned 486 potential RDF dataset dumps to download. For the unavailable URLs, we verified the status of those datasets with http://stats.lod2.eu. After these manual preparation steps, the data dumps could be downloaded with the framework (step 1 in Figure 1). The framework needs to transform all formats into N-Triples (cf. Section 3.1). From here, the number of datasets prepared for the analysis was further reduced to 280. The reasons were: (1) corrupt downloads, (2) wrong file media type statements, and (3) syntax errors or formats other than those expected during the transformation process. This number seems low compared to the total number of available datasets in the LOD Cloud, although it is reasonable compared to a recent study on the LOD Cloud 2014 [4].
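The availability check and the media type normalization can be sketched as follows; the mapping dict is a hypothetical excerpt (the full manual mapping is not reproduced here), and requests is our choice of HTTP client:

    import requests

    # Hypothetical excerpt of the manual mapping from datapackage media
    # type statements to official media types (ambiguous ones are dropped).
    MEDIA_TYPE_MAP = {
        "rdf": "application/rdf+xml",
        "xml/rdf": "application/rdf+xml",
        "rdf/xml": "application/rdf+xml",
        "ttl": "text/turtle",
        "nt": "application/n-triples",
    }

    def dump_available(url: str) -> bool:
        """HEAD request to test whether a data dump URL is reachable."""
        try:
            r = requests.head(url, allow_redirects=True, timeout=10)
            return r.status_code == 200
        except requests.RequestException:
            return False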
Execution Environment

Operating system, database installation, datasets, and client software all resided on one server during the analysis. The analysis was made on a Dell PowerEdge R720 rack server with two Intel(R) Xeon(R) E5-2600 processors with 16 cores each, 192 GB of main memory, and 5 TB of main storage. The operating system was Linux, Debian 7.11, kernel version 3.2.0.5. The framework was configured to download and prepare the RDF data dumps in parallel, limited to 28 concurrent processes, since the transformation processes require some hard-disk IO. Around 2 TB of hard-disk space was required to finish the preparation. The analysis of the graphs requires more main memory, thus it was conducted with only 12 concurrent processes. Serialized as binary objects, all 280 datasets required around 38 GB. Table 2 lists example times for dataset preparation and analysis in our environment.

Availability, Sustainability and Maintenance

The results of the analysis of the 280 datasets, with 28 graph-based measures and degree distribution plots per dataset, can be examined and downloaded via the registered DOI. The aforementioned website is automatically generated from the results. It contains all 280 datasets that were analyzed, grouped by topic domains (as in the LOD Cloud), together with links (a) to the original metadata obtained from DataHub and (b) to a downloadable version of the serialized graph structure used at the time of the analysis (as described in Section 3.1). As an infrastructure institute for the Social Sciences, we will regularly load data from the LOD Cloud and (re-)calculate the measures for the obtained datasets. This is part of a linking strategy in which linking candidates for our datasets shall be identified. Datasets and results of future analyses will be made available to the community for further research.

Preliminary Analysis and Discussion

This section presents some results and observations about RDF graph topologies in the LOD Cloud, obtained from analyzing 280 datasets with the framework, as described in the previous Section 4. The interested reader is encouraged to look up individual values in the measures section of a dataset on the website of the project. In the following, we present our main observations on basic graph measures, degree-based measures, and degree distribution statistics.

Observations about Graph Topologies in the LOD Cloud

Basic Graph Measures. Figure 2 shows the average degree of all analyzed datasets. In all domains but Geography and Government, it seems that the average degree is not affected by the volume of the graph (the number of edges). Datasets in the Geography and Government domains exhibit an increasing linear relationship with respect to the volume. Some outliers with high values can be observed across all domains, especially in Geography, Life Sciences, and Publications. The highest value over all datasets can be found in the Life Sciences domain.

Degree-based Measures. Figure 3 shows the results on the h-index. We would like to address some (a) domain-specific and (b) dataset-specific observations. Regarding (a), we can see that, in general, the h-index grows exponentially with the size of the graph (note the log-scaled y-axis). Some datasets in the Government, Life Sciences, and Publications domains have high values for the h-index: 8,128; 6,839; and 5,309, respectively. Cross Domain exhibits the highest h-index values on average, with dbpedia-en having the highest value of 11,363. Recalling the definition, this means that there are 11,363 vertices in the graph with at least 11,363 edges each, which is surprising. Compared to other domains, datasets in the Linguistics domain have a fairly low h-index, with 115 on average (other domains are at least three times higher). Regarding (b), dataset-specific phenomena can be observed in the Linguistics domain. There seem to be two groups with very different values, apparently due to datasets with very different graph topologies. In this domain, universal-dependencies-treebank is present with 63 datasets and apertium-rdf with 22 datasets. Looking at the actual values for these groups of datasets, we can see that apertium-rdf datasets are 6x larger in size (vertices) and 2.6x larger in volume (edges) than universal-dependencies-treebank. The average degree of the first group is half the value of the second group (5.43 vs. 11.62). However, their size and volume seem to have no effect on the values of the h-index.
The first group of datasets has an almost constant h-index value (the lower group of dots in the figure), which is 10x smaller on average than that of the universal-dependencies-treebank datasets (the upper group of dots). This, obviously, is not a domain-specific but rather a dataset-specific phenomenon.

Degree Distribution Statistics. Researchers have found scale-free networks and graphs in many datasets [6,15], with a power-law exponent value of 2 < α < 3. We can confirm this for many of the analyzed datasets. As described in Section 3.2, it is generally not sufficient to decide whether a distribution fits a power-law function just by determining the value of α. Exemplary plots created by the framework for graphs of different sizes are presented in Figures 4a and 4b. These graphs reveal a scale-free behaviour with 2 < α < 3 for their degree distributions. Figure 4c is an example of a degree distribution not following a power-law function. For a detailed study of the distributions, please find plots for all analyzed datasets on the website of our project. Looking at the actual data for all datasets, we observed that, in general, the values of the exponent α and of d_min vary a lot across domains. Furthermore, many datasets exhibit a scale-free behaviour on the total-degree distribution, but not on the in-degree distribution, and vice versa. It is hard to tell whether scale-free behaviour is a characteristic of a certain domain. We came to the conclusion that this is a dataset-specific phenomenon. However, the Publications domain has the highest share of datasets with 2 < α < 3 for the total- and in-degree distributions, i.e., 62% and 74%, respectively.

Effective Measures for RDF Graph Analysis

Regarding the aforementioned use case of synthetic dataset generation, one goal of benchmark suites is to emulate real-world datasets with characteristics from a particular domain. A typical usage of benchmark suites is the study of the runtime performance of common (domain-specific) queries at large scale. Some of them have been criticized for not necessarily generating meaningful results, because their datasets and queries are artificial, with little relation to real datasets [7]. Recent works propose a paradigm shift from domain-specific benchmarks, which utilize a predefined schema and domain-specific data, towards designing application-specific benchmarks [17,20]. We have observed such discrepancies in the Linguistics domain, for instance (cf. Section 5.1). For both approaches, the results of our framework could facilitate the development of more accurate benchmarks, by combining topological measures, like the ones obtained with the framework presented in this paper, with measures that describe statistics of vocabulary usage, for instance. This raises the question of which measures are essential for graph characterization. We noticed that many measures rely on the degree of a vertex. A Pearson correlation test on the results of the analysis of the datasets from Section 4 shows that n, m, m_u, and m_p correlate strongly with both h-index measures and with the standard descriptive statistical measures. The degree of centralization and the degree centrality correlate with d_max, d_max,in, and d_max,out. Both findings are intuitive. Measures that show almost no correlation are the fill p, the reciprocity y, the pseudo-diameter δ, and the power-law exponent α (cf. Figure 5).
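Such a correlation analysis can be reproduced along these lines; the CSV file and column layout are our own illustration of the per-dataset measure table, not the framework's actual output format:

    import pandas as pd

    # One row per dataset, one column per graph measure (illustrative schema).
    df = pd.read_csv("measures.csv")  # columns like: n, m, m_u, m_p, h_d, ...

    corr = df.corr(method="pearson")  # pairwise Pearson correlation matrix

    # Measures that correlate weakly with everything else are good candidates
    # for a minimal characterizing set (cf. fill, reciprocity, delta, alpha).
    weakly_correlated = corr.abs().mean().sort_values().head(8)
    print(weakly_correlated)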
Hence, regardless of the group of measures and the use case of interest, we conclude that the following minimal set of graph measures can be considered in order to characterize an RDF dataset: n, m, d_max, z, the fill p, the reciprocity y, the pseudo-diameter δ, and the power-law exponent α.

Conclusions and Future Work

In this paper, we first introduced a software framework to acquire and prepare RDF datasets. By this means, one can conduct recurrent, systematic, and efficient analyses of their graph topologies. Second, we provided the results of the analysis conducted on 280 datasets from the LOD Cloud 2017, together with the datasets prepared by our framework. We have motivated our work with usage scenarios in at least three research areas in the Semantic Web: synthetic dataset generation, graph sampling, and dataset profiling. In a preliminary analysis of the results, we reported on observations in the groups of basic graph measures, degree-based measures, and degree distribution statistics. We have found that (1) the average degree across all domains is approximately 8, and (2) disregarding some exceptional datasets, the average degree does not depend on the volume of the graphs (the number of edges). Furthermore, (3) due to the way datasets are modelled, there are domain- and dataset-specific phenomena, e.g., an h-index that is constant with the size of the graph on the one hand, and an exponentially growing h-index on the other.

We can think of various activities for future work. We would like to address the question of what actually causes domain- and dataset-specific irregularities, and derive implications for dataset modelling tasks. Further, we would like to investigate correlation analyses of graph-based measures with measures for the quality of RDF datasets or for data-driven tasks like query processing. For this reason, the experiment will be repeated on a more up-to-date version of the datasets in the LOD Cloud. In the next version, we are planning to publish a SPARQL endpoint to query the datasets and the measures from the graph-based analyses.
5,428
1907.01885
2963359379
In the area of structural network analysis, it is common to study the distribution of certain graph measures in order to characterize a graph. RDF datasets have also been subject to these studies. The study by @cite_20 reveals that the power-law distribution is prevalent across graph invariants in RDF graphs obtained from 1.7 million documents. Also, the small-world phenomenon, known from experiments on social networks, was studied within the Semantic Web @cite_19 . More recently, Fernández et al. @cite_6 have studied the structural features of real-world RDF data. Fernández et al. also propose measures in terms of in- and out-degrees for subjects, objects, and predicates, and analyze the structure of @math RDF graphs from different knowledge domains. Most of these works focus on studying different in- and out-degree distributions and are limited to a rather small collection of RDF datasets. Moreover, the work by @cite_14 analyzes further relevant graph invariants in RDF graphs, including the @math index and reciprocity. Other work has applied graph-based metrics to synthetic RDF datasets. Complementary to these works, we present a study on @math RDF datasets from the LOD Cloud and analyze their structure based on the average degree, the @math -index, and the power-law exponent.
{ "abstract": [ "In this paper, we describe a comprehensive analysis of graph-theoretical properties of online social networks based on the Friend-of-a-Friend (FOAF) ontology. Of particular interest for this work were properties related to the small-world phenomenon. More than 1.6 million of the FOAF documents collected on the Semantic Web met our requirements and were analyzed in depth. Most FOAF documents are created and published by social networking services, blog hosting services, or combinations of the two as a matter of routine; only a fractional amount are maintained by individuals. Although the FOAF ontology defines unique identifiers for persons in theory, retrieval and particularly fusion of personal information is difficult and error-prone in practice. Nevertheless, we identified the largest strongly connected components of various community networks based on FOAF documents and analyzed them in regard to the small-world phenomenon. Interestingly, all components examined exhibited a characteristic path length comparable to the smallest length achievable for a graph of the respective size, and the clustering coefficient was much greater than expected for an equivalent random graph; along with power law degree distributions, both are typical features of small-world graphs.", "We present Graphium Chrysalis, a tool to visualize the main graph invariants that characterize RDF graphs, i.e., graph properties that are independent of the graph representation such as, vertex and edge counts, in- and out-degree distribution, and in-coming and out-going (h )-index. Graph invariants characterize a graph and impact on the cost of the core graph-based tasks, e.g., graph traversal and sub-graph pattern matching, affecting time and space complexity of main RDF reasoning and query processing tasks. During the demonstration of Graphium Chrysalis, attendees will be able to observe and analyze the invariants that describe graphs of existing RDF benchmarks. Additionally, we will show the expressiveness power of state-of-the-art graph database engine APIs (e.g., Neo4j or Sparksee) (Sparksee was previously known as DEX), when main graph invariants are computed against RDF graphs.", "The publication of semantic web data, commonly represented in Resource Description Framework (RDF), has experienced outstanding growth over the last few years. Data from all fields of knowledge are shared publicly and interconnected in active initiatives such as Linked Open Data. However, despite the increasing availability of applications managing large-scale RDF information such as RDF stores and reasoning tools, little attention has been given to the structural features emerging in real-world RDF data. Our work addresses this issue by proposing specific metrics to characterise RDF data. We specifically focus on revealing the redundancy of each data set, as well as common structural patterns. We evaluate the proposed metrics on several data sets, which cover a wide range of designs and models. Our findings provide a basis for more efficient RDF data structures, indexes and compressors.", "Semantic Web languages are being used to represent, encode and exchange semantic data in many contexts beyond the Web – in databases, multiagent systems, mobile computing, and ad hoc networking environments. The core paradigm, however, remains what we call the Web aspect of the Semantic Web – its use by independent and distributed agents who publish and consume data on the World Wide Web. 
To better understand this central use case, we have harvested and analyzed a collection of Semantic Web documents from an estimated ten million available on the Web. Using a corpus of more than 1.7 million documents comprising over 300 million RDF triples, we describe a number of global metrics, properties and usage patterns. Most of the metrics, such as the size of Semantic Web documents and the use frequency of Semantic Web terms, were found to follow a power law distribution." ], "cite_N": [ "@cite_19", "@cite_14", "@cite_6", "@cite_20" ], "mid": [ "92788038", "244060230", "2568135881", "2132573925" ] }
Since its first version in 2007, the Linked Open Data Cloud (LOD Cloud) has increased by the factor of 100, containing 1, 163 data sets in the last version of August 2017 6 . In various knowledge domains, like Government, Life Sciences, and Natural Science, it has been a prominent example and a reference for the success of the possibility to interlink and access open datasets that are described following the Resource Description Framework (RDF). RDF provides a graphbased data model where statements are modelled as triples. Furthermore, a set of RDF triples compose a directed and labelled graph, where subjects and objects can be defined as vertices while predicates correspond to edges. Previous empirical studies on the characteristics of real-world RDF graphs have focused on general properties of the graphs [18], or analyses on the instance or schema level of such data sets [5,14]. Examples of statistics are dataset size, property and vocabulary usage, data types used or average length of string literals. In terms of the topology of RDF graphs, previous works report on network measures mainly focusing on in-and out-degree distributions, reciprocity, and path lengths [2,8,9,21]. Nonetheless, the results of these studies are limited to a small fraction of the RDF datasets currently available. Conducting recurrent systematical analyses on a large set of RDF graph topologies is beneficial in many research areas. For instance: Synthetic Dataset Generation. One goal of benchmark suites is to emulate real-world datasets and queries with characteristics from a particular domain or application-specific characteristics. Beyond parameters like the dataset size that is typically interpreted as the number of triples, taking into consideration reliable statistics about the network topology, basic graph and degree-based measures for instance, enables synthetic dataset generators to more appropriately emulate datasets at large-scale, contributing to solve the dataset scaling problem [20]. Graph Sampling. At the same time, graph sampling techniques try to find a representative sample from an original dataset, with respect to different aspects. Questions that arise in this field are (1) how to obtain a (minimal) representative sample, (2) which sampling method to use, and (3) how to scale up measurements of the sample [13]. Apart from qualitative aspects, like classes, properties, instances, and used vocabularies and ontologies, also topological characteristics of the original RDF graph should be considered. To this end, primitive measures of the graphs, like the max in-, out-and average-degree of vertices, reciprocity, density, etc., may be consulted to achieve more accurate results. Profiling and Evolution. Due to its distributed and dynamic nature, monitoring the development of the LOD Cloud has been a challenge for some time, documented through a range of techniques for profiling datasets [3]. Apart from the number of datasets in the LOD Cloud, the aspect of its linkage (linking into other datasets) and connectivity (linking within one dataset) is of particular interest. From the graph perspective, the creation of new links has immediate impact on the characteristics of the graph. For this reason, graph measures may help to monitor changes and the impact of changes in datasets. To support graph-based tasks in the aforementioned areas, first, we propose an open source framework which is capable of acquiring RDF datasets, efficiently preparing and computing graph measures over large RDF graphs. 
The framework is built upon state-of-the-art third-party libraries and published under MIT license. The proposed framework reports on network measures and graph invariants, which can be categorized in five groups: i) basic graph measures, ii) degree-based measures, iii) centrality measures, iv) edge-based measures, and v) descriptive statistical measures. Second, we provide a collection of 280 datasets prepared with the framework and a report on 28 graph-based measures per dataset about the graph topology also computed with our framework. In this work, we present an analysis of graph measures over the aforementioned collection. This analysis involves over 11.3 billion RDF triples from nine knowledge domains, i.e., Cross Domain, Geography, Government, Life Sciences, Linguistics, Media, Publications, Social Networking, and User Generated. Finally, we conduct a correlation analysis among the studied invariants to identify a representative set of graph measures to characterize RDF datasets from a graph perspective. In summary, the contributions of our work are: -A framework to acquire RDF datasets and compute graph measures ( § 3). -Results of a graph-based analysis of 280 RDF datasets from the LOD Cloud. For each dataset, the collection includes 28 graph measures computed with the framework ( § 4). -An analysis of graph measures on real-world RDF datasets ( § 5.1). -A study to identify graph measures that characterize RDF datasets ( § 5.2). A Framework for Graph-based Analysis on RDF Data This section introduces the first resource published with this paper: the software framework. The main purpose of the framework is to prepare and perform a graph-based analysis on the graph topology of RDF datasets. One of the main challenges of the framework is to scale up to large graphs and to a high number of datasets, i.e., to compute graph metrics efficiently over current RDF graphs (hundreds of millions of edges) and in parallel with many datasets at once. The necessary steps to overcome these challenges are described in the following. Functionality The framework relies on the following methodology to systematically acquire and analyze RDF datasets. Figure 1 depicts the main steps of our processing pipeline of the framework. In the following, we describe steps 1-4 from Figure 1. Data Acquisition The framework acquires RDF data dumps available online. Online availability is not mandatory to perform the analysis, as the pipeline runs with data dumps available offline. For convenience reasons, when operating on many datasets, one may load an initial list of datasets together with their names, available formats, and URLs into a local database (see Section 4.1). One can find configuration details and database init-scripts in the source code repository 4 . Once acquired, the framework is capable of dealing with the following artifacts: -Packed data dumps. Various formats are supported, including bz2, 7zip, tar.gz, etc. This is achieved by utilizing the unix-tool dtrx. -Archives, which contain a hierarchy of files and folders, will get scanned for files containing RDF data. Other files will be ignored, e.g. xls, txt, etc. -Any files with a different serialization than N-Triples are transformed (if necessary). The list of supported formats 7 is currently limited to the most common ones for RDF data, which are N-Triples, RDF/XML, Turtle, N-Quads, and Notation3. This is achieved by utilizing rapper 8 . 
Preparation of the Graph Structure In order to deal with large RDF graphs, our aim is to create a as much automated and reliable processing pipeline as possible that focuses on performance. The graph structure is created from an edgelist, which is the result of this preparation step. One line in the edgelist constitutes one edge in the graph, which is a relation between a pair of vertices, the subject s and object o of an RDF triple. The line contains the predicate p of an RDF triple in addition, so that it is stored as an attribute of the edge. This attribute can be accessed during graph analysis and processing. To ease the creation of this edgelist with edge attributes, we utilized the N-Triples format, thus, a triple s p o becomes s o p in the edgelist. By this means, the framework is able to prepare several datasets in parallel. In order to reduce the usage of hard-disk space and also main memory during the creation process of the graph structure, we make use of an efficient state-ofthe-art non-cryptographic hashing function 9 to encode actual values of the RDF triples. For example, the RDF triple <http://data.linkedopendata.it/musei/resource/Roma> <http://www.w3.org/2000/01/rdf-schema#label> "Roma" . is turned into the hashed edgelist representation 43f2f4f2e41ae099 c9643559faeed68e 02325f53aeba2f02 Besides the fact that this hashing strategy can reduce space by the factor of up to 12, compared to simple integer representation it has the advantage that it facilitates the comparison between edgelists of different RDF datasets. One could examine which resource URIs are the most frequently used across all datasets. The framework provides a script to de-reference hashes, in order to find a resource URI for the vertex with maximum degree, for instance. Graph Creation As graph analysis library we used graph-tool 10 , an efficient library for statistical analysis of graphs. In graph-tool, core data structures and algorithms are implemented in C ++ /C, while the library itself can be used with Python. graph-tool comes with a lot of pre-defined implementations for graph analysis, e.g., degree distributions or more advanced implementations on graphs like PageRank or clustering coefficient. Further, some values may be stored as attributes of vertices or edges in the graph structure. The library's internal graph-structure may be serialized as a compressed binary object for future re-use. It can be reloaded by graph-tool with much higher performance than the original edgelist. Our framework instantiates the graph from the prepared edgelist or binary representation and operates on the graph object provided by the graph-tool library. As with dataset preparation, the framework can handle multiple computations of graph measures in parallel. Graph Measures In this section, we present statistical measures that are computed in the framework grouped into five dimensions: basic graph measures, degree-based measures, centrality measures, edge-based measures, and descriptive statistical measures. The computation of some metrics are carried out with graph-tool (e.g., PageRank), and others are computed by our framework (e.g., degree of centralization). In the following, we introduce the graph notation used throughout the paper. A graph G is a pair of finite sets (V , E), with V denoting the set of all vertices (RDF subject and object resources). E is a multiset of (labeled) edges in the graph G, since in RDF a pair of subject and object resources may be described with more than one predicate. E.g. 
Graph Measures In this section, we present the statistical measures computed by the framework, grouped into five dimensions: basic graph measures, degree-based measures, centrality measures, edge-based measures, and descriptive statistical measures. The computation of some measures is carried out with graph-tool (e.g., PageRank), while others are computed by our framework (e.g., degree of centralization). In the following, we introduce the graph notation used throughout the paper. A graph G is a pair of finite sets (V, E), with V denoting the set of all vertices (RDF subject and object resources). E is a multiset of (labeled) edges in G, since in RDF a pair of subject and object resources may be described with more than one predicate. For example, in the graph { s p1 o. s p2 o }, E contains two parallel edges between the same pair of vertices, i.e., E = {(s, o)_1, (s, o)_2 | s, o ∈ V}. RDF predicates are considered as additional edge labels; they may also occur as individual vertices in the same graph G. Newman [15] presents a more detailed introduction to networks and structural network analysis.

Basic Graph Measures We report the total number of vertices |V| = n and the number of edges |E| = m of a graph. Some works in the literature refer to these values as size and volume, respectively. The number of vertices and edges usually varies drastically across knowledge domains. By their nature, RDF graphs contain a fraction of edges that share the same pair of source and target vertices (as in the example above). In our work, m_p denotes the number of parallel edges, i.e., m_p = |{e ∈ E | count(e, E) > 1}|, with count(e, E) being a function that returns the number of times e is contained in E. Based on this measure, we also compute the total number of edges without counting parallel edges, denoted m_u, obtained by subtracting m_p from the total number of edges m, i.e., m_u = m − m_p.

Degree-based Measures The degree of a vertex v ∈ V, denoted d(v), is the total number of incoming and outgoing edges of v, i.e., d(v) = |{(u, v) ∈ E or (v, u) ∈ E | u ∈ V}|. For directed graphs, as is the case for RDF graphs, it is common to distinguish between in- and out-degree, i.e., d_in(v) = |{(u, v) ∈ E | u ∈ V}| and d_out(v) = |{(v, u) ∈ E | u ∈ V}|, respectively. In social network analysis, vertices with a high out-degree are said to be "influential", whereas vertices with a high in-degree are called "prestigious". To identify these vertices in RDF graphs, we compute the maximum total-, in-, and out-degree of the graph's vertices, i.e., d_max = max_{v∈V} d(v), d_max,in = max_{v∈V} d_in(v), and d_max,out = max_{v∈V} d_out(v), respectively. In addition, we compute the graph's average total-, in-, and out-degree, denoted z, z_in, and z_out, respectively. These measures can be important in research on RDF data management, for instance, where the (average) degree of a vertex (database table record) has a significant impact on query evaluation, since queries on dense graphs can be more costly in terms of execution time [17]. Another degree-based measure supported by the framework is the h-index, known from citation networks [11]. It is an indicator for the importance of a vertex, similar to a centrality measure (see Section 3.2). A value of h means that there are h vertices whose degree is greater than or equal to h. A high h-index of a graph can be an indicator of a "dense" graph whose vertices are more "prestigious". We compute this network measure for the directed graph (using only the in-degree of vertices), denoted h_d, and for the undirected graph (using in- and out-degree of vertices), denoted h_u.
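The degree-based measures above reduce to simple array operations once the graph is loaded. The following sketch, assuming a graph-tool Graph g as built in the previous section, computes the maxima, the average degree, and both h-index variants:

```python
import numpy as np

def h_index(degrees) -> int:
    # Largest h such that at least h vertices have degree >= h.
    d = np.sort(np.asarray(degrees))[::-1]
    return int(np.sum(d >= np.arange(1, len(d) + 1)))

def degree_measures(g):
    """Degree-based measures for a graph-tool graph g."""
    d_in = g.degree_property_map("in").a
    d_out = g.degree_property_map("out").a
    d_tot = d_in + d_out
    return {
        "d_max": int(d_tot.max()),
        "d_max_in": int(d_in.max()),
        "d_max_out": int(d_out.max()),
        "z": float(d_tot.mean()),       # average total degree
        "h_d": h_index(d_in),           # directed variant (in-degree only)
        "h_u": h_index(d_tot),          # undirected variant
    }
```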
Centrality Measures In social network analysis, the concept of point centrality is used to express the importance of nodes in a network. There are many interpretations of the term "importance", and accordingly many measures of centrality [15]. Comparing centrality measures with the fill p shows that the denser the graph, the higher the centrality values of its vertices. Point centrality uses the degree of a vertex, d(v); to indicate that it is a centrality measure, the literature sometimes normalizes this value by the total number of vertices. We compute the maximum value of this measure, denoted C_D,max = d_max. Another centrality measure we compute is PageRank [16]. For each RDF graph, we identify the vertex with the highest PageRank value, denoted PR_max. Besides point centrality, there is also the measure of graph centralization [10], known from social network analysis. This measure may also be seen as an indicator for the type of the graph, in that it expresses the degree of inequality and concentration of vertices, as found in a perfectly star-shaped graph, which is maximally centralized and unequal with regard to its degree distribution. The centralization of a graph with respect to degree is defined as

C_D = Σ_{v∈V} (d_max − d(v)) / ((|V| − 1) · (|V| − 2)),   (1)

where C_D denotes the graph centralization measure using degree [10]. In contrast to social networks, RDF graphs usually contain many parallel edges between vertices (see next subsection). Thus, for this measure to be meaningful, we use the number of unique edges in the graph, m_u.

Edge-based Measures We compute the "density" or "connectance" of a graph, called fill and denoted p. It can be interpreted as the probability that an edge is present between two randomly chosen vertices, and is computed as the ratio of the number of edges to the total number of possible edges. We use the formula for a directed graph with possible loops, in accordance with the definition of RDF graphs, using m and m_u, i.e., p = m / n² and p_u = m_u / n². Further, we analyze the fraction of bidirectional connections between vertices in the graph, i.e., pairs of vertices forward-connected by some edge that are also backward-connected by some other edge. This value of reciprocity, denoted y, is expressed as a percentage, i.e., y = m_bi / m, with m_bi = |{(u, v) ∈ E | ∃(v, u) ∈ E}|. A high value means that many connections between vertices are bidirectional; this value is expected to be high in citation or social networks. Another important group of measures described by the graph topology is related to paths. A path is a sequence of edges one can follow between two vertices. As there can be more than one path, the diameter is defined as the longest shortest path between two vertices of the network [15], denoted δ. This is a valuable measure when storing an RDF dataset in a relational database, as it affects join cardinality estimations, depending on the type of schema implementation for the dataset. The diameter is usually very time-consuming to compute, since all possible paths have to be considered; we therefore use the pseudo-diameter algorithm 11 to estimate the value for our datasets.

Descriptive Statistical Measures Descriptive statistical measures are important to describe distributions of a set of values, in our scenario, values of graph measures. In statistics, it is common to compute the variance σ² and standard deviation σ in order to express the degree of dispersion of a distribution. We do this for the in- and out-degree distributions of the graphs, denoted σ²_in, σ²_out and σ_in, σ_out, respectively. Furthermore, the coefficient of variation cv is used to obtain a comparable measure for distributions with different means: cv_in and cv_out are obtained by dividing the corresponding standard deviation σ_in or σ_out by the mean z_in or z_out, respectively, times 100. cv can also be used to analyze the type of distribution of a set of values. For example, a low value of cv_out indicates constant influence of vertices in the graph (a homogeneous group), whereas a high value of cv_in indicates high prominence of some vertices in the graph (a heterogeneous group).
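The dispersion measures follow directly from their definitions; a minimal numpy sketch (again illustrative, not the framework's code) is:

```python
import numpy as np

def dispersion(degrees):
    """Variance, standard deviation, and coefficient of variation
    (in percent) of a degree distribution."""
    degrees = np.asarray(degrees, dtype=float)
    var, std, mean = degrees.var(), degrees.std(), degrees.mean()
    cv = std / mean * 100
    return var, std, cv

# e.g. dispersion(g.degree_property_map("in").a) -> sigma^2_in, sigma_in, cv_in
```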
Further, the type of degree distribution is an often considered characteristic of graphs. Some domains and datasets exhibit degree distributions that follow a power-law function, which means that the number of vertices with degree k is proportional to k^−α, for some α ∈ ℝ. Such networks are called scale-free. The literature has found that values in the range 2 < α < 3 are typical for many real-world networks [15]. Scale-free behaviour also applies to some datasets and measures of RDF datasets [6,8]. However, deciding whether a distribution follows a power law can be technically challenging [1], and computing an exponent α that falls into a certain range of values is not sufficient. We compute the exponent for the total- and in-degree distributions [1], denoted α and α_in, respectively. In addition, to support the analysis of power-law distributions, the framework produces plots for both distributions; a power-law distribution appears as a straight line in a log-log plot. Determining the function that fits the distribution may be of high value for algorithms, e.g., to estimate the selectivity of vertices and attributes in graphs. The structure and size of datasets produced by synthetic dataset generators, for instance, can be controlled with these measures. Also, a clear power-law distribution allows for high compression rates of RDF datasets [8].

Availability, Sustainability and Maintenance The software framework is published under the MIT license on GitHub 4 . The repository contains all code and comprehensive documentation to install the framework, prepare an RDF dataset, and run the analysis. The main part of the code implements most of the measures as an extendable list of Python functions. Future features and bugfixes will be published as minor or bugfix releases, v0.x.x. The source code is frequently maintained and debugged, since it is actively used in other research projects at our institute (see Section 6). It is citable via a registered DOI obtained from Zenodo. Both web services, GitHub and Zenodo, provide search interfaces, which also makes the code findable on the web.

RDF Datasets for the Analysis of Graph Measures We conducted a systematic graph-based analysis of a large group of datasets that were part of the LOD Cloud 2017 12 , as a case study for the framework introduced in the previous Section 3. The results of the graph-based analysis, with 28 graph-based measures per dataset, are the second resource 5 published with this paper. To facilitate browsing of the data we provide a website 13 . It contains all 280 analyzed datasets, grouped by topics (as in the LOD Cloud), together with links (a) to the original metadata obtained from DataHub, and (b) to a downloadable version of the serialized graph structure used for the analysis. This section describes the data acquisition process (cf. Sections 4.1 and 4.2) and how the datasets and the results of the analysis can be accessed (cf. Section 4.3). Table 1 summarizes the number of processed datasets and their sizes. Of the 1,163 datasets potentially available in the LOD Cloud 2017, 280 were actually analyzed.
This was mainly due to two requirements: (i) correct RDF media type statements for the datasets, and (ii) the availability of data dumps provided by the services. In order not to stress SPARQL endpoints with transfers of large amounts of data, only datasets providing downloadable dumps were considered in this experiment.

Data Acquisition To dereference RDF datasets we relied on the metadata (the so-called datapackage) available at DataHub, which specifies URLs and media types for the corresponding data provider of a dataset 14 . We obtained the metadata for all datasets (step A in Figure 1) and manually mapped the media types found in the datapackage to their corresponding official media type statements as given in the specifications. For instance, rdf, xml rdf, or rdf xml was mapped to application/rdf+xml, and similar. Other media type statements such as html json ld ttl rdf xml or rdf xml turtle html were ignored, since they are ambiguous. This way, we obtained the URLs of 890 RDF datasets (step B in Figure 1). After that, we checked whether the dumps were available by performing HTTP HEAD requests on the URLs. At the time of the experiment, this returned 486 potential RDF dataset dumps to download. For the unavailable URLs, we verified the status of those datasets with http://stats.lod2.eu. After these manual preparation steps, the data dumps could be downloaded with the framework (step 1 in Figure 1). The framework then needs to transform all formats into N-Triples (cf. Section 3.1). From here, the number of datasets prepared for the analysis was further reduced to 280. The reasons were: (1) corrupt downloads, (2) wrong file media type statements, and (3) syntax errors or formats other than those expected during the transformation process. This number seems low compared to the total number of available datasets in the LOD Cloud, although it is in line with a recent study of the LOD Cloud 2014 [4].
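The availability check just described can be sketched as follows; the snippet is illustrative (the framework's actual implementation may differ) and assumes the requests package is installed:

```python
import requests

def dump_available(url: str, timeout: float = 10.0) -> bool:
    """Pre-filter a dataset dump URL with an HTTP HEAD request."""
    try:
        r = requests.head(url, allow_redirects=True, timeout=timeout)
        return r.status_code == 200
    except requests.RequestException:
        return False

urls = [...]  # URLs collected from the DataHub metadata
candidates = [u for u in urls if dump_available(u)]
```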
Execution Environment The operating system, database installation, datasets, and client software all resided on one server during the analysis. The analysis was conducted on a Dell PowerEdge R720 rack server with two Intel(R) Xeon(R) E5-2600 processors with 16 cores each, 192GB of main memory, and 5TB of main storage. The operating system was Linux, Debian 7.11, kernel version 3.2.0.5. The framework was configured to download and prepare the RDF data dumps in parallel, limited to 28 concurrent processes, since the transformation processes require some hard-disk IO. Around 2TB of hard-disk space was required to finish the preparation. The analysis of the graphs requires more main memory, so it was conducted with only 12 concurrent processes. Serialized as binary objects, all 280 datasets required around 38GB. Table 2 shows examples of times for dataset preparation and analysis in our environment.

Availability, Sustainability and Maintenance The results of the analysis of the 280 datasets, with 28 graph-based measures and degree distribution plots per dataset, can be examined and downloaded via the registered DOI 5 . The aforementioned website 13 is automatically generated from the results. It contains all 280 analyzed datasets, grouped by topic domains (as in the LOD Cloud), together with links (a) to the original metadata obtained from DataHub and (b) to a downloadable version of the serialized graph structure used at the time of analysis (as described in Section 3.1). As an infrastructure institute for the social sciences, we will regularly load data from the LOD Cloud and (re-)calculate the measures for the obtained datasets. This is part of a linking strategy in which linking candidates for our datasets are to be identified 15 . Datasets and results of future analyses will be made available to the community for further research.

Preliminary Analysis and Discussion This section presents results and observations about RDF graph topologies in the LOD Cloud, obtained from analyzing 280 datasets with the framework, as described in the previous Section 4. The interested reader is encouraged to look up individual values in the measures section of a dataset on the website of the project 13 . In the following, we present our main observations on basic graph measures, degree-based measures, and degree distribution statistics.

Observations about Graph Topologies in the LOD Cloud

Basic Graph Measures Figure 2 shows the average degree of all analyzed datasets. In all domains except Geography and Government, the average degree does not appear to be affected by the volume of the graph (number of edges). Datasets in the Geography and Government domains show an increasing linear relationship with respect to the volume. Some outliers with high values can be observed across all domains, especially in Geography, Life Sciences, and Publications. The highest value over all datasets can be found in the Life Sciences domain.

Degree-based Measures Figure 3 shows the results for the h-index; we address (a) domain-specific and (b) dataset-specific observations. Regarding (a), we can see that, in general, the h-index grows exponentially with the size of the graph (note the log-scaled y-axis). Some datasets in the Government, Life Sciences, and Publications domains have high h-index values: 8,128; 6,839; and 5,309, respectively. Cross Domain exhibits the highest h-index values on average, with dbpedia-en having the highest value of 11,363. Repeating the definition, this means that there are 11,363 vertices in the graph with at least 11,363 edges each, which is surprising. Compared to other domains, datasets in the Linguistics domain have a fairly low h-index, with 115 on average (other domains are at least 3 times higher). Regarding (b), dataset-specific phenomena can be observed in the Linguistics domain. There seem to be two groups with very different values, obviously due to datasets with very different graph topologies. In this domain, universal-dependencies-treebank is present with 63 datasets and apertium-rdf with 22 datasets. Looking at the actual values for these groups of datasets, we can see that apertium-rdf datasets are 6x larger in size (vertices) and 2.6x larger in volume (edges) than universal-dependencies-treebank. The average degree in the first group is half the value of the second group (5.43 vs. 11.62). However, their size and volume seem to have no effect on the values of the h-index.
The first group of datasets has an almost constant h-index (the lower group of dots in the figure), which is 10x smaller on average than that of the universal-dependencies-treebank datasets (the upper group of dots). This, obviously, is not a domain-specific but rather a dataset-specific phenomenon.

Degree Distribution Statistics Researchers have found scale-free networks and graphs in many datasets [6,15], with a power-law exponent 2 < α < 3. We can confirm this for many of the analyzed datasets. As described in Section 3.2, it is generally not sufficient to decide whether a distribution fits a power-law function just by determining the value of α. Exemplary plots created by the framework for graphs of different sizes are presented in Figures 4a and 4b. These graphs reveal scale-free behaviour with 2 < α < 3 for their degree distributions. Figure 4c is an example of a degree distribution not following a power-law function. For a detailed study of the distributions, please find plots for all analyzed datasets on the website of our project 13 . Looking at the actual data for all datasets, we observed that, in general, the values of the exponent α and of d_min vary a lot across domains. Furthermore, many datasets exhibit scale-free behaviour in the total-degree distribution but not in the in-degree distribution, and vice versa. It is hard to tell whether scale-free behaviour is characteristic of a certain domain; we came to the conclusion that this is a dataset-specific phenomenon. However, the Publications domain has the highest share of datasets with 2 < α < 3 for the total- and in-degree distributions, i.e., 62% and 74%, respectively.

Effective Measures for RDF Graph Analysis Regarding the aforementioned use case of synthetic dataset generation, one goal of benchmark suites is to emulate real-world datasets with characteristics from a particular domain. A typical usage of benchmark suites is the study of runtime performance of common (domain-specific) queries at large scale. Some of them have been criticized for not necessarily generating meaningful results, since datasets and queries are artificial with little relation to real datasets [7]. Recent works propose a paradigm shift from domain-specific benchmarks, which utilize a predefined schema and domain-specific data, towards application-specific benchmarks [17,20]. We have observed such discrepancies in the Linguistics domain, for instance (cf. Section 5.1). For both approaches, the results of our framework could facilitate the development of more accurate benchmarks, e.g., by combining topological measures, like the ones obtained with the framework presented in this paper, with measures that describe statistics of vocabulary usage. This raises the question of which measures are essential for graph characterization. We noticed that many measures rely on the degree of a vertex. A Pearson correlation test on the results of the analysis of datasets from Section 4 shows that n, m, m_u, and m_p correlate strongly with both h-index measures and with the standard descriptive statistical measures. The degree of centralization and the degree centrality correlate with d_max, d_max,in, and d_max,out. Both findings are intuitive. Measures that show almost no correlation are the fill p, the reciprocity y, the pseudo-diameter δ, and the power-law exponent α (cf. Figure 5). Hence, regardless of the group of measures and the use case of interest, we conclude that the following minimal set of graph measures can be used to characterize an RDF dataset: n, m, d_max, z, fill p, reciprocity y, pseudo-diameter δ, and the power-law exponent α.
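Such a correlation test could be reproduced along the following lines; the file name and column names are hypothetical placeholders for the published results.

```python
import pandas as pd

# One row per dataset, one column per graph measure (illustrative names).
df = pd.read_csv("measures.csv")
corr = df.corr(method="pearson")

# Measures that correlate only weakly with the rest are candidates
# for a minimal characterizing set:
print(corr.loc[["fill", "reciprocity", "pseudo_diameter", "alpha"]].round(2))
```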
Conclusions and Future Work In this paper, we first introduced a software framework to acquire and prepare RDF datasets, by which means one can conduct recurrent, systematic, and efficient analyses of their graph topologies. Second, we provided the results of the analysis conducted on 280 datasets from the LOD Cloud 2017, together with the datasets prepared by our framework. We motivated our work with usage scenarios in at least three research areas in the Semantic Web: synthetic dataset generation, graph sampling, and dataset profiling. In a preliminary analysis of the results, we reported on observations in the groups of basic graph measures, degree-based measures, and degree distribution statistics. We found that (1) the average degree across all domains is approximately 8, and (2) apart from some exceptional datasets, the average degree does not depend on the volume of the graphs (number of edges). Furthermore, (3) due to the way datasets are modelled, there are domain- and dataset-specific phenomena, e.g., an h-index that is constant with the size of the graph on the one hand, and an exponentially growing h-index on the other. We can think of various activities for future work. We would like to address the question of what actually causes domain- and dataset-specific irregularities and derive implications for dataset modelling tasks. Further, we would like to investigate correlations of graph-based measures with measures of the quality of RDF datasets or with data-driven tasks like query processing. For this reason, the experiment will be repeated on a more up-to-date version of the datasets in the LOD Cloud. In the next version, we are planning to publish a SPARQL endpoint to query the datasets and the measures from the graph-based analyses.
5,428
1812.05418
2905153446
In this work, we propose a domain flow generation (DLOW) approach to model the domain shift between two domains by generating a continuous sequence of intermediate domains flowing from one domain to the other. The benefits of our DLOW model are two-fold. First, it is able to transfer source images into different styles in the intermediate domains. The transferred images smoothly bridge the gap between the source and target domains, thus easing the domain adaptation task. Second, when multiple target domains are provided in the training phase, our DLOW model can learn to generate new styles of images that are unseen in the training data. We implement our DLOW model based on the state-of-the-art CycleGAN. A domainness variable is introduced to guide the model to generate the desired intermediate domain images. In the inference phase, a flow of images in various styles can be obtained by varying the domainness variable. We demonstrate the effectiveness of our approach for both the cross-domain semantic segmentation and the style generalization tasks on benchmark datasets.
Our work is partially inspired by SGF @cite_29 and GFK @cite_55 , which have shown that the intermediate domains between the source and target domains are useful for addressing the domain adaptation problem. They represented each domain as a subspace and then connected the subspaces on a Grassmannian manifold to model intermediate domains. Different from them, we model the intermediate domains by directly translating images at the pixel level. This allows us to easily improve existing deep domain adaptation models by using the translated images as training data. Moreover, our model can also be applied to image-level domain generalization by generating mixed-style images.
{ "abstract": [ "In real-world applications of visual recognition, many factors — such as pose, illumination, or image quality — can cause a significant mismatch between the source domain on which classifiers are trained and the target domain to which those classifiers are applied. As such, the classifiers often perform poorly on the target domain. Domain adaptation techniques aim to correct the mismatch. Existing approaches have concentrated on learning feature representations that are invariant across domains, and they often do not directly exploit low-dimensional structures that are intrinsic to many vision datasets. In this paper, we propose a new kernel-based method that takes advantage of such structures. Our geodesic flow kernel models domain shift by integrating an infinite number of subspaces that characterize changes in geometric and statistical properties from the source to the target domain. Our approach is computationally advantageous, automatically inferring important algorithmic parameters without requiring extensive cross-validation or labeled data from either domain. We also introduce a metric that reliably measures the adaptability between a pair of source and target domains. For a given target domain and several source domains, the metric can be used to automatically select the optimal source domain to adapt and avoid less desirable ones. Empirical studies on standard datasets demonstrate the advantages of our approach over competing methods.", "Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets." ], "cite_N": [ "@cite_55", "@cite_29" ], "mid": [ "2149466042", "2128053425" ] }
DLOW: Domain Flow for Adaptation and Generalization
The domain shift problem is drawing more and more attention in recent years [19,58,50,48,13,6]. In particular, two tasks are of interest to the computer vision community. One is the domain adaptation problem, where the goal is to learn a model for a given task from a label-rich data domain (i.e., the source domain) that performs well in a label-scarce data domain (i.e., the target domain). The other is the image translation problem, where the goal is to transfer images from the source domain such that they mimic the image style of a target domain. Generally, most existing works focus on the target domain only. They aim to learn models that fit the target data distribution well, e.g., achieving good classification accuracy in the target domain, or transferring source images into the target style. In this work, we are instead interested in the intermediate domains between the source and target domains. The benefits of our DLOW approach are two-fold. First, the intermediate domains are helpful to bridge the distribution gap between the two domains. By translating images into intermediate domains, the translated images can be employed to ease the domain adaptation task. We show that traditional domain adaptation methods can be boosted to achieve better performance in the target domain with intermediate domain images. Moreover, the obtained models also exhibit good generalization ability on new datasets that are unseen in the training phase, benefiting from the diversity of the intermediate domain images. Second, our DLOW model can be used for style generalization. Traditional image-to-image translation works [58,25,27,35] focus on learning a deterministic one-to-one mapping that transfers a source image into the target style. In contrast, our DLOW model allows us to translate a source image into an intermediate domain that is related to multiple target domains. For example, when performing photo-to-painting transfer, instead of obtaining a Monet or Van Gogh style, our DLOW model can produce a painting with mixed styles of Van Gogh, Monet, etc. Such a mixture can be customized arbitrarily in the inference phase by simply adjusting an input vector that encodes the relatedness to the different domains. We implement our DLOW model based on CycleGAN [58], which is one of the state-of-the-art unpaired image-to-image translation methods. We augment CycleGAN with an additional input, the domainness variable. On the one hand, the domainness variable is injected into the translation network by using a conditional instance normalization layer to affect the style of the output images. On the other hand, it is also used as a weight on the discriminators to balance the relatedness of the output images to the source and target domains. For multiple target domains, the domainness variable is extended to a vector containing the relatedness to all target domains. We evaluate our DLOW model on two tasks: mixed-style image translation and domain adaptation. For the first task, we show that our learnt model is able to translate a source image into an arbitrary mixture of multiple styles. For the second task, we further improve state-of-the-art cross-domain semantic segmentation methods by using the translated images in the intermediate domains as training data. Extensive results on benchmark datasets demonstrate the effectiveness of our proposed model.
Domain Flow Generation In this section, we introduce the domain flow generation (DLOW) model for translating source images into intermediate domains that bridge the source and target domains.

Problem Statement In the domain shift problem, we are given a source domain S and a target domain T containing samples from two different distributions P_S and P_T, respectively. Denoting by x^s ∈ S a source domain sample and by x^t ∈ T a target domain sample, we have x^s ∼ P_S, x^t ∼ P_T, and P_S ≠ P_T. Such a distribution mismatch usually leads to a significant performance drop when applying a model trained on S to the new target domain T. Many works have been proposed to address the domain shift for different vision applications. A group of recent works aims to reduce the distribution difference at the feature level by learning domain-invariant features [11,16,28,14], while others work at the image level to transfer source images such that they mimic the target domain style [58,35,59,24,1,6]. In this work, we also propose to address the domain shift problem at the image level. However, different from existing works that transfer source images into only the target domain, we transfer them into all intermediate domains that connect the source and target domains. This is partially motivated by previous works [16,14], which have shown that the intermediate domains between source and target domains are useful for addressing the domain adaptation problem. In what follows, we first briefly review the conventional image-to-image translation model CycleGAN. Then, we formulate the intermediate domain adaptation problem based on a distance between data distributions. Next, we develop our DLOW model based on the CycleGAN model. We then show the benefits of our DLOW model for two applications: 1) improving existing domain adaptation models with the images generated by the DLOW model, and 2) transferring images into arbitrarily mixed styles when there are multiple target domains.

The CycleGAN Model We build our model upon the state-of-the-art CycleGAN model [58], which was proposed for unpaired image-to-image translation. Formally, the CycleGAN model learns two mappings between S and T, i.e., G_ST: S → T, which transfers images in S into the style of T, and G_TS: T → S, which acts in the inverse direction. We take the S → T direction as an example to explain CycleGAN. To transfer source images into the target style while preserving their semantics, CycleGAN employs an adversarial training module and a reconstruction module. The adversarial training module aligns the image distributions of the two domains, such that the style of the mapped images matches the target domain. Denote by D_T the discriminator, which attempts to distinguish translated images from target images. The objective function of the adversarial training module can then be written as

min_{G_ST} max_{D_T} E_{x^t∼P_T}[log D_T(x^t)] + E_{x^s∼P_S}[log(1 − D_T(G_ST(x^s)))].   (1)

Moreover, the reconstruction module ensures that the mapped image G_ST(x^s) preserves the semantic content of the original image x^s. This is realized by enforcing a cycle consistency loss such that G_ST(x^s) is able to recover x^s when being mapped back to the source style, i.e.,

min_{G_ST} E_{x^s∼P_S}[‖G_TS(G_ST(x^s)) − x^s‖₁].   (2)

Similar modules are applied to the T → S direction. By jointly optimizing all modules, the CycleGAN model is able to transfer source images into the target style and vice versa.
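For concreteness, Eqs. (1) and (2) could be computed along the following lines in PyTorch. This is a condensed sketch, not the paper's released code: G_ST, G_TS, and D_T are assumed to be nn.Modules, and D_T is assumed to output probabilities in (0, 1).

```python
import torch
import torch.nn.functional as F

def cyclegan_losses(G_ST, G_TS, D_T, x_s, x_t, lambda_cyc=10.0):
    """One-direction CycleGAN losses, Eqs. (1) and (2)."""
    fake_t = G_ST(x_s)
    # Adversarial loss, Eq. (1): discriminator vs. generator terms.
    loss_d = -(torch.log(D_T(x_t)).mean() +
               torch.log(1 - D_T(fake_t.detach())).mean())
    loss_g_adv = -torch.log(D_T(fake_t)).mean()
    # Cycle consistency, Eq. (2): recover x_s after the round trip.
    loss_cyc = F.l1_loss(G_TS(fake_t), x_s)
    return loss_d, loss_g_adv + lambda_cyc * loss_cyc
```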
Modeling Intermediate Domains In our task, we aim to translate the source images not only into the target domain, but also into all intermediate domains that connect the source and target domains. Let us denote an intermediate domain by M^(z), where z ∈ [0, 1] is a domainness variable encoding the relatedness of the intermediate domain to the source and target domains, and let P_M^(z) denote its data distribution. There are many possible paths connecting the source and target domains. As shown in Fig 2, assuming there is a manifold of domains, a domain with a given data distribution can be seen as a point on this manifold. We expect the domain flow M^(z) to be the shortest geodesic path connecting S and T. Moreover, for any z, the distance from S to M^(z) should be proportional, by the value of z, to the distance between S and T; that is, we expect

dist(P_S, P_M^(z)) / dist(P_T, P_M^(z)) = z / (1 − z),   (3)

where dist(·, ·) is a valid distance measure between two distributions. Thus, generating an intermediate domain M^(z) for a given z amounts to finding the point satisfying Eq. (3) that is closest to S and T, which leads to minimizing the following loss:

L = (1 − z) · dist(P_S, P_M^(z)) + z · dist(P_T, P_M^(z)).   (4)

As shown in [2], many types of distances have been exploited for image generation and image translation. The adversarial loss in Eq. (1) can be seen as a lower bound of the Jensen-Shannon divergence; we also use it for measuring the distribution distance in this work.

The DLOW Model We now develop our DLOW model to generate intermediate domains. Given a source image x^s ∼ P_S and a domainness parameter z ∈ [0, 1], our task is to transfer x^s into the intermediate domain M^(z) with the distribution P_M^(z) that minimizes the objective in Eq. (4). We take the S → T direction as an example; the other direction is handled analogously. In our DLOW model, the generator G_ST no longer transfers x^s directly to the target domain T, but moves x^s towards it, with the interval of this move controlled by the domainness variable z. Denoting by Z = [0, 1] the domain of z, the generator in our DLOW model can be represented as G_ST(x^s, z): S × Z → M^(z), where the input is the joint space of S and Z.

Adversarial Loss: As discussed in Section 3.3, we employ the adversarial loss as the distribution distance to control the relatedness of an intermediate domain to the source and target domains. Specifically, we introduce two discriminators: D_S(x), which distinguishes M^(z) from S, and D_T(x), which distinguishes M^(z) from T. The adversarial losses between M^(z) and S and T can then be written, respectively, as

L_adv(G_ST, D_S) = E_{x^s∼P_S}[log D_S(x^s)] + E_{x^s∼P_S}[log(1 − D_S(G_ST(x^s, z)))],   (5)
L_adv(G_ST, D_T) = E_{x^t∼P_T}[log D_T(x^t)] + E_{x^s∼P_S}[log(1 − D_T(G_ST(x^s, z)))].   (6)

By using the above losses to model dist(P_S, P_M^(z)) and dist(P_T, P_M^(z)) in Eq. (4), we arrive at the following loss:

L_adv = (1 − z) · L_adv(G_ST, D_S) + z · L_adv(G_ST, D_T).   (7)
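A sketch of the domainness-weighted adversarial loss of Eqs. (5)-(7), under the same assumptions as the CycleGAN sketch above (discriminators output probabilities):

```python
import torch

def dlow_adv_loss(G_ST, D_S, D_T, x_s, x_t, z):
    """Domainness-weighted adversarial loss, Eqs. (5)-(7)."""
    x_m = G_ST(x_s, z)                     # image in the intermediate domain M(z)
    # Eq. (5): adversarial loss between M(z) and the source domain.
    l_s = (torch.log(D_S(x_s)) + torch.log(1 - D_S(x_m))).mean()
    # Eq. (6): adversarial loss between M(z) and the target domain.
    l_t = (torch.log(D_T(x_t)) + torch.log(1 - D_T(x_m))).mean()
    # Eq. (7): the domainness z balances the two terms.
    return (1 - z) * l_s + z * l_t
```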
Image Cycle Consistency Loss: Similarly as in CycleGAN, we apply a cycle consistency loss to ensure that the semantic content is well preserved in the translated images. Let us denote by G_TS(x^t, z): T × Z → M^(1−z) the generator for the other direction, which transfers a sample x^t from the target domain towards the source domain by an interval of z. Since G_TS acts inversely to G_ST, we can use it to recover x^s from the translated version G_ST(x^s, z), which gives the following loss:

L_cyc = E_{x^s∼P_S}[‖G_TS(G_ST(x^s, z), z) − x^s‖₁].   (8)

Domainness Cycle Consistency Loss: To guarantee that the translated image G_ST(x^s, z) correctly encodes the information of the domainness parameter z, we introduce a regressor R_S: M^(z) → z to reconstruct the domainness parameter. In particular, R_S is expected to output 0 for source images, 1 for target images, and z for images in M^(z). Using the cross-entropy loss for source and target images and the square loss for z, the domainness cycle consistency loss is

L_dns = −E_{x^t∼P_T}[log R_S(x^t)] − E_{x^s∼P_S}[log(1 − R_S(x^s))] + E_{x^s∼P_S}[(R_S(G_ST(x^s, z)) − z)²].   (9)

Full Objective: Integrating the losses defined above, the full objective is

L = L_adv + λ₁ L_cyc + λ₂ L_dns,   (10)

where λ₁ and λ₂ are hyper-parameters used to balance the adversarial loss, the image cycle consistency loss, and the domainness cycle consistency loss in the training process. A similar loss is defined for the other direction, T → S. Due to the usage of the adversarial loss L_adv, training is performed in an alternating manner: we first minimize the full objective with regard to the generators and regressors, and then maximize it with regard to the discriminators.

Implementation: We illustrate the network structure of the S → T direction of our DLOW model in Fig 3. A figure of the complete model is provided in the Appendix. First, the domainness parameter z is taken as input by the generator G_ST. This is implemented with the Conditional Instance Normalization (CN) layer [1,23]: we first use one deconvolution layer to map the domainness parameter z to a vector of dimension (1, 16, 1, 1), and then use this vector as input for the CN layer. Moreover, the domainness parameter plays the role of weighting the discriminators to balance the relatedness of the generated images to the different domains. It is also used as input in the image cycle consistency module, as well as a label for the domainness cycle consistency module. During the training phase, we randomly generate the domainness parameter z for each input image. As inspired by [x], we force the domainness parameter z to obey the beta distribution, i.e., f(z; α, β) = (1/B(α, β)) · z^(α−1) (1 − z)^(β−1), where β is fixed to 1 and α is a function of the training step, α = e^((t−0.5T)/(0.25T)), with t being the current iteration and T the total number of iterations. In this way, z tends to be sampled with small values at the beginning of training and gradually shifts to larger values towards the end, which gives slightly more stable training than uniform sampling.
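The sampling schedule is simple to implement; a minimal sketch with numpy:

```python
import numpy as np

def sample_domainness(t: int, T: int) -> float:
    """Sample z ~ Beta(alpha(t), 1): mass concentrates near 0 early in
    training (alpha < 1) and drifts towards 1 as t approaches T (alpha > 1)."""
    alpha = np.exp((t - 0.5 * T) / (0.25 * T))
    return float(np.random.beta(alpha, 1.0))
```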
Boosting Domain Adaptation Models By sampling a domainness parameter z_i from the uniform distribution U(0, 1) for each source image, we obtain a translated dataset S̃ = {(x̃^s_i, y_i)}_{i=1}^n, where x̃^s_i = G_ST(x^s_i, z_i) is the translated version of x^s_i. The images in S̃ spread along the domain flow from the source to the target domain and therefore become much more diverse. Using S̃ as training data is helpful to learn domain-invariant models for computer vision tasks. In Section 4.1, we demonstrate that a model trained on S̃ achieves good performance for the cross-domain semantic segmentation problem. Moreover, the translated dataset S̃ can also be used to boost existing adversarial-training-based domain adaptation approaches. Images in S̃ fill the gap between the source and target domains and thus ease the domain adaptation task. Taking semantic segmentation as an example, a typical approach appends a discriminator to the segmentation model, which is used to distinguish source and target samples; optimizing the discriminator and the segmentation model with the adversarial training strategy makes the segmentation model more domain-invariant. As shown in Fig 4, we replace the source dataset S with its translated version S̃ and apply a weight √(1 − z_i) to the adversarial loss. The motivation is as follows: for each sample x̃^s_i, if the domainness z_i is high, the sample is close to the target domain and the weight of the adversarial loss can be reduced; otherwise, the loss weight should be increased.

Style Generalization Most existing image-to-image translation works learn a deterministic mapping between two domains; after learning the model, source images can only be translated into one fixed style. In contrast, our DLOW model takes a random z to translate images into various styles. When multiple target domains are provided, it is also able to transfer a source image into a mixture of different target styles. In other words, we are able to generalize to an unseen intermediate domain that is related to the existing domains. In particular, suppose we have K target domains, denoted T_1, …, T_K. Accordingly, the domainness variable z is expanded to a K-dimensional vector z = [z_1, …, z_K] with Σ_{k=1}^K z_k = 1, where each element z_k represents the relatedness to the k-th target domain. To map an image from the source domain to the intermediate domain defined by z, we optimize the following objective:

L = Σ_{k=1}^K z_k · dist(P_M, P_{T_k}),   s.t. Σ_{k=1}^K z_k = 1,   (11)

where P_M is the distribution of the intermediate domain and P_{T_k} is the distribution of T_k. The network structure can easily be adjusted from our DLOW model to optimize the above objective; we leave the details to the Appendix due to space limitations.
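With the adversarial distance surrogate used throughout, Eq. (11) could be sketched as follows; one discriminator per target domain is assumed, each outputting probabilities, and G accepts the domainness vector as a second input.

```python
import torch

def multi_target_adv_loss(G, discriminators, x_s, target_batches, z_vec):
    """Weighted multi-target adversarial objective, Eq. (11);
    z_vec is the domainness vector with sum(z_vec) == 1."""
    x_m = G(x_s, z_vec)
    loss = 0.0
    for z_k, D_k, x_tk in zip(z_vec, discriminators, target_batches):
        # Adversarial surrogate for dist(P_M, P_Tk).
        loss = loss + z_k * (torch.log(D_k(x_tk)) +
                             torch.log(1 - D_k(x_m))).mean()
    return loss
```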
Experiments In this section, we demonstrate the benefits of our DLOW model on two tasks. In the first task, we address the domain adaptation problem and train our DLOW model to generate intermediate domain samples to boost domain adaptation performance. In the second task, we consider the style generalization problem and train our DLOW model to transfer images into new styles that are unseen in the training data.

Domain Adaptation and Generalization

Experiments Setup For the domain adaptation problem, we follow [20,19,5,61] and conduct experiments on urban scene semantic segmentation, learning from synthetic data for a real-world scenario. The GTA5 dataset [45] is used as the source domain, and the Cityscapes dataset [7] as the target domain. Moreover, we also evaluate the generalization ability of the learnt segmentation models to unseen domains, for which we take the KITTI [12], WildDash [55], and BDD100K [54] datasets as additional unseen datasets for evaluation. Cityscapes consists of urban scene images taken in European cities. We use its 2,993 training images without annotation as unlabeled target samples in the training phase, and its 500 validation images, densely labelled with 19 classes, for evaluation. GTA5 consists of 24,966 densely labelled synthetic frames generated from the computer game whose scenes are based on the city of Los Angeles; the annotations are compatible with Cityscapes. KITTI consists of images taken in the mid-size city of Karlsruhe; we use 200 densely labeled validation images compatible with Cityscapes. WildDash covers images from different sources, different environments (place, weather, time, and so on), and different camera characteristics; we use its 70 labeled validation images with Cityscapes-compatible annotations. BDD100K is a driving dataset covering diverse images taken in the US, whose label maps use the training indices specified in Cityscapes; we use 1,000 densely labeled images for validation in our experiment. In this task, we first train our DLOW model using the GTA5 dataset as the source domain and Cityscapes as the target domain. Then, we generate a translated GTA5 dataset with the learnt DLOW model: each source image is fed into DLOW with a random domainness variable z. The translated GTA5 dataset contains exactly the same number of images as the original one, but the styles of the images randomly drift from the synthetic style to the real style. We then use the translated GTA5 dataset as the new source domain for training segmentation models. We implement our DLOW model based on Augmented CycleGAN [1] and CyCADA [19]. Following their setup, all images are resized to 1024 × 1024 and the crop size is set to 400 × 400. When training the DLOW model, the image cycle loss weight is 10 and the domainness cycle loss weight is 1. The learning rate is fixed at 0.0002. For the segmentation network, we use the AdaptSegNet [50] model, which is based on DeepLab-v2 [4] with ResNet-101 [17] as the backbone network. The training images are resized to 1280 × 720. We follow exactly the same training policy as in AdaptSegNet.
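As a concrete illustration of the domainness-dependent loss weighting from Section 3.5 that is applied to the segmentation model's adversarial term in the experiments below, a hedged sketch (the discriminator is assumed to produce patch logits; this is not the released training code):

```python
import math
import torch
import torch.nn.functional as F

def weighted_adv_term(d_logits, z):
    """Adversarial alignment term for one translated source image with
    domainness z; samples already close to the target (large z) receive
    a smaller weight sqrt(1 - z)."""
    target = torch.ones_like(d_logits)    # "indistinguishable from target"
    bce = F.binary_cross_entropy_with_logits(d_logits, target)
    return math.sqrt(1.0 - z) * bce
```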
Moreover, "DLOW(z = 1)" is a special case of our model that directly translates source images into the target domain, which unsurprisingly gives results comparable to the CyCADA-pixel method (40.7% vs. 41.0%). By further using intermediate domain images, our DLOW model improves the segmentation result from 40.7% to 42.3%, which demonstrates that intermediate domain images are helpful for learning a more robust domain-invariant segmentation model. In the second setting, we use intermediate domain images to improve a feature-level domain adaptation model. We conduct experiments based on the AdaptSegNet method [50], which is open source and has reported state-of-the-art results for GTA5→Cityscapes. It consists of multiple levels of adversarial training, and we augment each level with the loss weight discussed in Section 3.5. The results are reported in Table 2. "Original" denotes the AdaptSegNet model trained using GTA5 as the source domain, for which the results are obtained using the released pretrained model. "DLOW" is AdaptSegNet trained on the dataset translated with our DLOW model. From the first column, we observe that the intermediate domain images improve the AdaptSegNet model by 2.5%, from 42.3 to 44.8. More interestingly, the AdaptSegNet model trained with DLOW-translated images also exhibits excellent domain generalization ability when applied to unseen domains: it achieves significantly better results than the original AdaptSegNet model on the KITTI, WildDash, and BDD100K datasets, as reported in the second to fourth columns, respectively. This shows that intermediate domain images are useful to improve the model's cross-domain generalization ability.

Style Generalization We conduct the style generalization experiment on the Photo to Artworks dataset [58], which consists of real photographs (6,853 images) and artworks from Monet (1,074 images), Cezanne (584 images), Van Gogh (401 images), and Ukiyo-e (1,433 images). We use the real photographs as the source domain and the remaining sets as four target domains. As discussed in Section 3.6, the domainness variable in this experiment is expanded to a 1 × 4 vector [z_1, z_2, z_3, z_4] satisfying Σ_{k=1}^4 z_k = 1; qualitative results are shown in Fig 6. These results show that our DLOW model can translate a photo image into the corresponding artwork styles. When varying the values of the domainness vector, we can also produce new styles related to the different painting styles, which demonstrates the good generalization ability of our model to unseen domains. Note that, different from the works [57,23], we do not need any reference image in the test phase, and the domainness vector can be changed instantly to generate images in different new styles. We provide more examples in the Appendix.
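At inference time, the style mixing described above amounts to choosing a domainness vector. A hypothetical snippet illustrating this (g, photo, and save are assumed placeholders for a trained generator, an input image tensor, and an output helper):

```python
import torch

styles = ["Monet", "VanGogh", "Cezanne", "Ukiyoe"]

# Pure "seen" target styles: one-hot domainness vectors.
for k, name in enumerate(styles):
    z = torch.zeros(4)
    z[k] = 1.0
    save(g(photo, z), f"{name}.png")

# An "unseen" style: an equal mixture of all four painters.
z = torch.full((4,), 0.25)
save(g(photo, z), "mixture.png")
```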
Secondly, our DLOW model also exhibits excellent style generalization ability for image translation, and we are able to transfer images into a new style that is unseen in the training data. Extensive experiments on benchmark datasets have verified the effectiveness of our proposed model.

Appendix In this Appendix, we provide additional information on: • the complete pipeline of our proposed DLOW model, • the detailed network structure of our DLOW model for style generalization with four target domains, • more examples of style generalization.

Pipeline of the DLOW Model In Section 3.4 of the main paper, we introduced the three modules of the DLOW model, taking the direction S → T as an example; the modules were illustrated separately for clarity. In this Appendix, we present the complete pipeline for a better illustration of our DLOW model. Combining the three modules of the direction S → T and supplementing the structure of the direction T → S, the complete model is shown in Fig 7. Taking the direction S → T as an example (cf. Fig 7a), the orange dotted box shows the adversarial loss module, which was illustrated in Fig 3a of the main paper. Correspondingly, the green dotted box and the purple dotted box are the image reconstruction module and the domainness reconstruction module, which were illustrated in Fig 3b and Fig 3c of the main paper, respectively. The structure of the other direction, T → S, is presented in Fig 7b and is symmetric to the direction S → T.

Network Structure for Style Generalization In Section 3.6 of the main paper, we explained that our DLOW model can be adapted for style generalization when multiple target domains are available; we present the details here. The network structure of our DLOW model for style generalization is shown in Fig 8, where we have four target domains, each representing an image style. For the direction S → T, shown in Fig 8a, the style generalization model consists of three modules: the adversarial module, the image reconstruction module, and the domainness reconstruction module. For each target domain T_i, there is one corresponding discriminator D_{T_i} measuring the distribution distance between the source domain S and the target domain T_i. Accordingly, the domainness variable z is expanded to a 4-dimensional vector z = [z_1, …, z_4], and the output of the regressor is expanded to multiple dimensions to reconstruct the domainness vector. For the other direction, T → S, shown in Fig 8b, the adversarial module is similar to that of the direction S → T; however, the image reconstruction module differs slightly, since the image reconstruction loss is weighted by the domainness vector z.

Additional Results for Style Generalization We provided two examples of style generalization in Fig 6 of the main paper; here we provide further results in Fig 9, Fig 10, and Fig 11. The images with red bounding boxes are translated images in the four target domains, i.e., Monet, Van Gogh, Cezanne, and Ukiyo-e. These can be considered the "seen" styles. Our model gives translation results similar to those of the CycleGAN model for each target domain, with the difference that we only need one unified model for all four target domains, whereas CycleGAN requires training four separate models.
Moreover, the images with green bounding boxes are mixed-style images of their neighboring target styles, and the image in the center is a mixed-style image of all four target styles; these are new styles never seen in the training data. We can observe that our DLOW model generalizes well across the different styles and produces new image styles smoothly, which demonstrates the good domain generalization ability of our model.
4,911
1812.05282
2904221347
Metric graphs are meaningful objects for modeling complex structures that arise in many real-world applications, such as road networks, river systems, earthquake faults, blood vessels, and filamentary structures in galaxies. To study metric graphs in the context of comparison, we are interested in determining the relative discriminative capabilities of two topology-based distances between a pair of arbitrary finite metric graphs: the persistence distortion distance and the intrinsic Čech distance. We explicitly show how to compute the intrinsic Čech distance between two metric graphs based solely on knowledge of the shortest systems of loops for the graphs. Our main theorem establishes an inequality between the intrinsic Čech and persistence distortion distances in the case when one of the graphs is a bouquet graph and the other is arbitrary. The relationship also holds when both graphs are constructed via wedge sums of cycles and edges.
Well-known methods for comparing graphs using distance measures include combinatorial (e.g., graph edit distance @cite_14 ) and spectral (e.g., eigenvalue decomposition @cite_4 ) approaches. Graph edit distance minimizes the cost of transforming one graph into another via a set of elementary operations such as node/edge insertions/deletions, while spectral approaches optimize objective functions based on properties of the graph spectra.
{ "abstract": [ "Graph data have become ubiquitous and manipulating them based on similarity is essential for many applications. Graph edit distance is one of the most widely accepted measures to determine similarities between graphs and has extensive applications in the fields of pattern recognition, computer vision etc. Unfortunately, the problem of graph edit distance computation is NP-Hard in general. Accordingly, in this paper we introduce three novel methods to compute the upper and lower bounds for the edit distance between two graphs in polynomial time. Applying these methods, two algorithms AppFull and AppSub are introduced to perform different kinds of graph search on graph databases. Comprehensive experimental studies are conducted on both real and synthetic datasets to examine various aspects of the methods for bounding graph edit distance. Result shows that these methods achieve good scalability in terms of both the number of graphs and the size of graphs. The effectiveness of these algorithms also confirms the usefulness of using our bounds in filtering and searching of graphs.", "An approximate solution to the weighted-graph-matching problem is discussed for both undirected and directed graphs. The weighted-graph-matching problem is that of finding the optimum matching between two weighted graphs, which are graphs with weights at each arc. The proposed method uses an analytic instead of a combinatorial or iterative approach to the optimum matching problem. Using the eigendecompositions of the adjacency matrices (in the case of the undirected-graph-matching problem) or Hermitian matrices derived from the adjacency matrices (in the case of the directed-graph-matching problem), a matching close to the optimum can be found efficiently when the graphs are sufficiently close to each other. Simulation results are given to evaluate the performance of the proposed method. >" ], "cite_N": [ "@cite_14", "@cite_4" ], "mid": [ "2032338144", "2108182844" ] }
The Relationship Between the Intrinsic Čech and Persistence Distortion Distances for Metric Graphs
When working with graph-like data equipped with a notion of distance, a very useful means of capturing existing geometric and topological relationships within the data is via a metric graph. Given an ordinary graph G = (V, E) and a length function on the edges, one may view G as a metric space with the shortest path metric in any geometric realization. Metric graphs are used to model a variety of real-world data sets, such as road networks, river systems, earthquake faults, blood vessels, and filamentary structures in galaxies [1,24,25]. Given these practical applications, it is natural to ask how to compare two metric graphs in a meaningful way. Such a comparison is important to understand the stability of these structures in the noisy setting. One way to do this is to check whether there is a bijection between the two input graphs as part of a graph isomorphism problem [3]. Another way is to define, compute, and compare various distances on the space of graphs. In this paper, we are interested in determining the discriminative capabilities of two distances that arise from computational topology: the persistence distortion distance and the intrinsic Čech distance. If two distances d 1 and d 2 on the space of metric graphs satisfy an inequality d 1 (G 1 , G 2 ) ≤ c · d 2 (G 1 , G 2 ) (for some constant c > 0 and any pair of graphs G 1 and G 2 ), this means that d 2 has greater discriminative capacity for differentiating between two input graphs. For instance, if d 1 (G 1 , G 2 ) = 0 and d 2 (G 1 , G 2 ) > 0, then d 2 has a better discriminative power than d 1 . Related work Well-known methods for comparing graphs using distance measures include combinatorial (e.g., graph edit distance [27]) and spectral (e.g., eigenvalue decomposition [26]) approaches. Graph edit distance minimizes the cost of transforming one graph to another via a set of elementary operators such as node/edge insertions/deletions, while spectral approaches optimize objective functions based on properties of the graph spectra. Recently, several distances for comparing metric graphs have been proposed based on ideas from computational topology. In the case of a special type of metric graph called a Reeb graph, these distances include: the functional distortion distance [4], the combinatorial edit distance [15], the interleaving distance [12], and its variant in the setting of merge trees [19]. In particular, the functional distortion distance can be considered as a variation of the Gromov-Hausdorff distance between two metric spaces [4]. The interleaving distance is defined via algebraic topology and utilizes the equivalence between Reeb graphs and cosheaves [12]. For metric graphs in general, both the persistence distortion distance [13] and the intrinsic Čech distance [10] take into consideration the structure of metric graphs, independent of their geometric embeddings, by treating them as continuous metric spaces. In [21], Oudot and Solomon point out that since compact geodesic spaces can be approximated by finite metric graphs in the Gromov-Hausdorff sense [6] (see also the recent work of Mémoli and Okutan [18]), one can study potentially complicated length spaces by studying the persistence distortion of a sequence of approximating graphs. In the context of comparing the relative discriminative capabilities of these distances, Bauer, Ge, and Wang [4] show that the functional distortion distance between two Reeb graphs is bounded from below by the bottleneck distance between the persistence diagrams of the Reeb graphs.
Bauer, Munch, and Wang [5] establish a strong equivalence between the functional distortion distance and the interleaving distance on the space of all Reeb graphs, which implies the two distances are within a constant factor of one another. Carrière and Oudot [9] consider the intrinsic versions of the aforementioned distances and prove that they are all globally equivalent. They also establish a lower bound for the bottleneck distance in terms of a constant multiple of the functional distortion distance. In [13], Dey, Shi, and Wang show that the persistence distortion distance is stable with respect to changes to input metric graphs as measured by the Gromov-Hausdorff distance. In other words, the persistence distortion distance is bounded above by a constant factor of the Gromov-Hausdorff distance. Furthermore, the intrinsic Čech distance is also bounded from above by the Gromov-Hausdorff distance for general metric spaces [10]. Our contribution The main focus of this paper is relating two specific topological distances between general metric graphs G 1 and G 2 : the intrinsic Čech distance and the persistence distortion distance. Both of these can be viewed as distances between topological signatures or summaries of G 1 and G 2 . Indeed, in the case of the intrinsic Čech distance, a metric graph (G, d G ) is mapped to the persistence diagram Dg 1 IC G induced by the so-called intrinsic Čech filtration IC G , and we may think of Dg 1 IC G as the signature of G. The intrinsic Čech distance d IC (G 1 , G 2 ) between two metric graphs G 1 and G 2 is the bottleneck distance between these signatures, denoted d B (Dg 1 IC G 1 , Dg 1 IC G 2 ). For the persistence distortion distance, each metric graph G is mapped to a set Φ(G) of persistence diagrams, which is the signature of the graph G in this case. The persistence distortion distance d P D (G 1 , G 2 ) between G 1 and G 2 is measured by the Hausdorff distance between these image sets or signatures. See Section 2 for the definition of Φ, along with more detailed definitions of these two distances. Our objective is to determine the relative discriminative capacities of such signatures. We conjecture that the persistence distortion distance is more discriminative than the intrinsic Čech distance. Conjecture 1. d IC ≤ c · d P D for some constant c > 0. It is known from [16] that Dg 1 IC G depends only on the lengths of the shortest system of loops in G, and thus the persistence distortion distance appears to be more discriminative, intuitively. We show in Section 3 that the intrinsic Čech distance between two arbitrary finite metric graphs is determined solely by the difference in these shortest cycle lengths; see Theorem 5 for a precise statement. This further implies that the intrinsic Čech distance between two arbitrary metric trees is always 0. In contrast, the persistence distortion distance takes relative positions of loops as well as branches into account, and is nonzero in the case of two trees. In other words, the conjecture holds for metric trees. We make progress toward proving the conjecture in greater generality in this paper. Theorem 11 establishes an inequality between the intrinsic Čech and persistence distortion distances for two finite metric graphs in the case when one of the graphs is a bouquet graph and the other is arbitrary. In this case, the constant c = 1/2 so that the inequality is sharper than what is conjectured.
The theorem and proof appear in Section 4, and we conclude that section by proving that Conjecture 1 also holds when both graphs are constructed by taking wedge sums of cycles and edges. While this does not yet prove the conjecture for arbitrary metric graphs, our work provides the first non-trivial relationship between these two meaningful topological distances. Our proofs also provide insights on the map Φ from a metric graph into the space of persistence diagrams as utilized in the definition of the persistence distortion distance. This map Φ is of interest itself; indeed, see the recent study of this map in [21]. In general, we believe that this direction of establishing qualitative understanding of topological signatures and their corresponding distances is interesting and valuable for use in applications. We leave the proof of the conjecture for arbitrary metric graphs as an open problem and give a brief discussion on some future directions in Section 5. Persistent homology and metric graphs We begin with a brief summary of persistent homology and how it can be utilized in the context of metric graphs. For background on homology and simplicial complexes, we refer the reader to [17,20], and for further details on persistent homology, see, e.g., [7,14]. In persistent homology, one studies the changing homology of an increasing sequence of subspaces of a topological space X. One (typical) way to obtain a filtration of X is to take a continuous function f : X → R and construct the sublevel set filtration, ∅ = X a 0 ⊆ X a 1 ⊆ . . . ⊆ X a m = X, by writing X a i = f −1 ((−∞, a i ]) for the sublevel set defined by the value a i . The inclusions {X a i → X a j } 0≤i<j≤m induce the persistence module H k (X a 0 ) → H k (X a 1 ) → . . . → H k (X a m ) in any homological dimension k by applying the homology functor with coefficients in some field. Another way to obtain a filtration is to build a sequence of simplicial complexes on a set of points using, for instance, the intrinsic Čech filtration [10] discussed in Section 2.2. Elements of each homology group may then be tracked through the filtration and recorded in a persistence diagram, with one diagram for each k. A persistence diagram is a multiset of points (a i , a j ) in the extended plane (R ∪ {±∞}) 2 , where each point (a i , a j ) corresponds to a homological element that appears for the first time (is "born") at H k (X a i ) and which disappears ("dies") at H k (X a j ). A persistence diagram also includes the infinitely many points along the diagonal line y = x. The usual mantra for persistence is that points close to the diagonal are likely to represent noise, while points further from the diagonal may encode more robust topological features. In this paper, we are interested in summarizing the topological structure of a finite metric graph, specifically in homological dimension k = 1. Given a graph G = (V, E), where V and E denote the vertex and edge sets, respectively, as well as a length function, length : E → R ≥0 , on edges in E, a finite metric graph (|G|, d G ) is a metric space where |G| is a geometric realization of G and d G is defined as in [13]. Namely, if e and |e| denote an edge and its image in the geometric realization, we define α : [0, length(e)] → |e| to be the arclength parametrization, so that d G (u, v) = |α −1 (v) − α −1 (u)| for any u, v ∈ |e|. This definition may then be extended to any two points in |G| by restricting a given path from one point to another to edges in G, adding up these lengths, then taking the distance to be the minimum length of any such path. In this way, all points along an edge are points in a metric graph, not just the original graph's vertices.
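As an illustrative aside (ours, not part of the original paper), the shortest-path metric d G can be approximated numerically by subdividing each weighted edge so that points in edge interiors become graph nodes. The following minimal Python sketch assumes the networkx library; the node names, the helper subdivide, and the step parameter are all our own choices.

import networkx as nx

def subdivide(G, step=0.01):
    """Return a copy of G where each edge of length L is split into
    roughly L/step segments; interior points of edges become new nodes."""
    H = nx.Graph()
    for u, v, data in G.edges(data=True):
        L = data["length"]
        k = max(1, round(L / step))
        prev = u
        for i in range(1, k):
            mid = (u, v, i)  # interior point at distance i * L / k from u
            H.add_edge(prev, mid, weight=L / k)
            prev = mid
        H.add_edge(prev, v, weight=L / k)
    return H

# Example: a triangle with edge lengths 3, 4, and 5 (one loop of length 12).
G = nx.Graph()
G.add_edge("a", "b", length=3.0)
G.add_edge("b", "c", length=4.0)
G.add_edge("a", "c", length=5.0)

H = subdivide(G)
print(nx.shortest_path_length(H, "a", "c", weight="weight"))  # ~5.0, the direct edge

Distances between interior points are obtained the same way, using the tuple-valued interior nodes created by subdivide.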
A system of loops of G refers to a set of cycles whose associated homology classes form a minimal generating set for the 1-dimensional (singular) homology group of G. The length-sequence of a system of loops is the sequence of lengths of elements in this set listed in non-decreasing order. Thus, a system of loops of G is shortest if its length-sequence is lexicographically smallest among all possible systems of loops of G. One particular class of metric graphs we will be working with are bouquet graphs. These are metric graphs containing a single vertex with a number of self-loops of various lengths attached to it. Intrinsic Čech and persistence distortion distances In this section, we recall the distances between metric graphs that are being explored in this work. We note that both are actually pseudo-distances because it can be the case that d(G 1 , G 2 ) = 0 when G 1 ≠ G 2 . However, for ease of exposition, we will refer to them simply as distances in this paper. Both rely on the bottleneck distance on the space of persistence diagrams, a version of which we now state. Definition 2. Let X and Y be persistence diagrams with µ : X → Y a bijection. The bottleneck distance between X and Y is d B (X, Y ) := inf µ:X→Y sup x∈X ||x − µ(x)|| 1 . Although this definition differs from the standard version of the bottleneck distance, which uses ||x − µ(x)|| ∞ rather than ||x − µ(x)|| 1 , the two are related via the inequalities ||x|| ∞ ≤ ||x|| 1 ≤ 2||x|| ∞ . Next, let (G, d G ) be a metric graph with geometric realization |G|. Define the intrinsic ball B(x, a i ) = {y ∈ |G| : d G (x, y) ≤ a i } for any x ∈ |G|, as well as the uncountable open cover U a i = {B(x, a i ) : x ∈ |G|}. We use Čech(a i ) to denote the nerve of the cover U a i , referred to as the intrinsic Čech complex. See Figure 1 for an illustration. Then {Čech(a i ) → Čech(a j )} 0≤a i <a j is the intrinsic Čech filtration inducing the intrinsic Čech persistence module {H k (Čech(a i )) → H k (Čech(a j ))} 0≤a i <a j in any dimension k, and the corresponding persistence diagram is denoted Dg k IC G . The following intrinsic Čech distance definition comes from [10]. Here, we work with dimension k = 1. Figure 1: A finite subset of the infinite cover at a fixed radius (left) and its corresponding nerve (right). Definition 3. Given two metric graphs (G 1 , d G 1 ) and (G 2 , d G 2 ), their intrinsic Čech distance is d IC (G 1 , G 2 ) := d B (Dg 1 IC G 1 , Dg 1 IC G 2 ).
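For very small diagrams, the bottleneck distance of Definition 2 can be evaluated by brute force. The sketch below (ours, purely illustrative and exponential in the diagram size) handles matchings to the diagonal with the standard padding trick: each diagram is extended with the diagonal projections of the other diagram's points, and pairs of padded diagonal points cost nothing.

from itertools import permutations

def l1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def diag(p):
    m = (p[0] + p[1]) / 2.0  # a nearest diagonal point in the 1-norm
    return (m, m)

def bottleneck_l1(X, Y):
    A = [(p, False) for p in X] + [(diag(q), True) for q in Y]
    B = [(q, False) for q in Y] + [(diag(p), True) for p in X]
    best = float("inf")
    for perm in permutations(range(len(B))):
        cost = 0.0
        for i, j in enumerate(perm):
            (p, p_diag), (q, q_diag) = A[i], B[j]
            if not (p_diag and q_diag):  # diagonal-to-diagonal pairs are free
                cost = max(cost, l1(p, q))
        best = min(best, cost)
    return best

print(bottleneck_l1([(0.2, 2.0)], [(0.5, 1.0)]))  # 1.3: direct matching beats the diagonal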
The persistence distortion distance was first introduced in [13]. Given a base point v ∈ |G|, define the geodesic distance function f v : |G| → R where f v (x) = d G (v, x). Then Dg(f v ) is the union of the 0- and 1-dimensional extended persistence diagrams for f v (see [11] for the details of extended persistence). Equivalently, it is the 0-dimensional levelset zigzag persistence diagram induced by f v [8]. Define Φ : |G| → SpDg, Φ(v) = Dg(f v ), where SpDg denotes the space of persistence diagrams for all points v ∈ |G|. The set Φ(|G|) ⊂ SpDg is the persistence distortion of the metric graph G. Definition 4. Given two metric graphs (G 1 , d G 1 ) and (G 2 , d G 2 ), their persistence distortion distance is d P D (G 1 , G 2 ) := d H (Φ(|G 1 |), Φ(|G 2 |)), where d H denotes the Hausdorff distance. In other words, d P D (G 1 , G 2 ) = max{ sup D 1 ∈Φ(|G 1 |) inf D 2 ∈Φ(|G 2 |) d B (D 1 , D 2 ), sup D 2 ∈Φ(|G 2 |) inf D 1 ∈Φ(|G 1 |) d B (D 1 , D 2 ) }. Note that the diagram Dg(f v ) contains both 0- and 1-dimensional persistence points, but only points of the same dimension are matched under the bottleneck distance. In this paper, we will only focus on the points in the 1-dimensional extended persistence diagrams for the persistence distortion distance computation.
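A rough numerical approximation of Definition 4 (ours, under the assumption that one samples only finitely many base points, so Φ(|G|) is replaced by a finite set of diagrams) is the Hausdorff distance between the two finite sets, with the bottleneck_l1 sketch above as the ground metric:

def hausdorff(S1, S2, d):
    sup1 = max(min(d(x, y) for y in S2) for x in S1)
    sup2 = max(min(d(y, x) for x in S1) for y in S2)
    return max(sup1, sup2)

# S1, S2: lists of (1-dimensional) diagrams, one per sampled base point.
S1 = [[(0.0, 1.0)], [(0.0, 2.0)]]
S2 = [[(0.0, 1.5)]]
print(hausdorff(S1, S2, bottleneck_l1))  # 0.5

A finite sample of base points only approximates Φ(|G|); the definition itself ranges over all points of the geometric realization.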
Calculating the intrinsic Čech distance In this section, we show that the intrinsic Čech distance between two metric graphs may be easily computed from knowing the shortest systems of loops for the graphs. We begin with a theorem that characterizes the bottleneck distance between two sets of points in the extended plane. Theorem 5. Let D 1 = {(0, a 1 ), . . . , (0, a n )} and D 2 = {(0, b 1 ), . . . , (0, b n )} be two persistence diagrams with 0 ≤ a 1 ≤ · · · ≤ a n and 0 ≤ b 1 ≤ · · · ≤ b n , respectively. Then d B (D 1 , D 2 ) = max 1≤i≤n |a i − b i |. Proof. To simplify notation, we use the convention that for all i = 1, . . . , n, (0, a i ) = a i , (0, b i ) = b i , and (0, 0) = 0. Let µ be any matching of points in D 1 and D 2 , where each point a i in D 1 is either matched to a unique point b j in D 2 or to the nearest neighbor in the diagonal (and similarly for D 2 ). Assume that C µ is the cost of the matching µ, i.e., the maximum distance between two matched points. Now, let µ* be the matching such that µ*(a i ) = b i for all 0 ≤ i ≤ n. By construction, the cost of this matching is C µ* = max 1≤i≤n |a i − b i |. We claim that the matching cost of µ* is less than or equal to that of µ, i.e., C µ* ≤ C µ . If this is the case, then µ* is the optimal bottleneck matching and therefore d B (D 1 , D 2 ) = C µ* . To show this, we look at where the matchings µ and µ* differ. Note that since all of the off-diagonal points in D 1 and D 2 lie on the y-axis, any such point matched to the diagonal under µ may simply be matched to (0, 0) since this will yield the same value in the 1-norm. Now, starting with b 1 , let j be the first index where µ(a j ) ≠ b j . Then, we have two cases: (1) µ(a k ) = b j for some k > j (i.e., b j is matched with some a k ≠ a j ); or (2) µ(0) = b j (i.e., b j is matched with the diagonal, or equivalently, to 0). We show that in either case, matching b j with a j instead does not increase the cost of the matching. In the first case, let us also assume that µ(a j ) = b l for some l > j (the situation where µ(a j ) = 0 will be taken care of in the second case). Then, max{|a j − b j |, |a k − b l |} ≤ max{|a j − b l |, |a k − b j |}. That is, if we were to instead pair a j with b j and a k with b l , the cost of the matching would be lower. This can be seen by working through a case analysis on the relative order of a j , a k , b j , and b l along the y-axis. Intuitively, we can think of a j , a k , b j , and b l as the four corners of a trapezoid as in Figure 2. The diagonals of the trapezoid represent the distances under the matching µ, while the legs of the trapezoid represent the distances when we pair a j with b j and a k with b l . The maximum of the lengths of the legs will always be less than the maximum of the lengths of the diagonals. Adjusting the lengths of the top and bottom bases (which amounts to changing the order of a j , a k , b j , and b l along the y-axis) does not change this fact. Therefore, matching b j with a j instead of a k does not increase the cost of the matching. In the second case, if b j is matched to 0, there must be some a k with k ≥ j that is matched to 0, as well. If we were to instead match b j to a k , this does not increase the cost of the matching since max{b j , a k } ≥ |a k − b j | (i.e., the original cost is greater than the new cost). After this rematching, b j is no longer matched to 0 and this reverts to the first case. Similarly, if a j is matched to 0, it may be rematched in a similar manner. By looking at all the pairings where µ and µ* differ (in increasing order of indices), pairing a i with b i instead of µ(a i ) (and similarly, pairing b i with a i rather than what it was paired with under µ) always results in the same or lower cost matching. Therefore, C µ* ≤ C µ for all matchings µ; hence, d B (D 1 , D 2 ) = C µ* = max 1≤i≤n |a i − b i |. To see how this applies to the computation of the intrinsic Čech distance between two metric graphs, let G 1 be a metric graph with a shortest system of m loops of lengths 0 < 2t 1 ≤ · · · ≤ 2t m , and let G 2 be a metric graph with a shortest system of n loops of lengths 0 < 2s 1 ≤ · · · ≤ 2s n . Without loss of generality, suppose n ≥ m. From [16], the 1-dimensional intrinsic Čech persistence diagrams of G 1 and G 2 are the multisets of points Dg 1 IC G 1 = {(0, t 1 /2), . . . , (0, t m /2)} and Dg 1 IC G 2 = {(0, s 1 /2), . . . , (0, s n /2)}. In order to apply Theorem 5, we add n − m copies of the point (0, 0) at the start of the list of points in Dg 1 IC G 1 , i.e., let Dg 1 IC G 1 = {(0, t̄ 1 /2), . . . , (0, t̄ n /2)}, where t̄ 1 = · · · = t̄ n−m = 0, t̄ n−m+1 = t 1 , . . . , and t̄ n = t m . Corollary 6. Let G 1 and G 2 be as above. Then d IC (G 1 , G 2 ) = max 1≤i≤n |s i − t̄ i |/2.
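Corollary 6 reduces the intrinsic Čech distance to simple arithmetic on the loop lengths. A small sketch of this computation (ours; the padding mirrors the proof above):

def d_ic(loop_lengths_1, loop_lengths_2):
    """Intrinsic Cech distance from the lengths 2t_i and 2s_i of the
    shortest systems of loops, per Corollary 6."""
    t = sorted(l / 2.0 for l in loop_lengths_1)  # half-lengths t_i
    s = sorted(l / 2.0 for l in loop_lengths_2)  # half-lengths s_i
    n = max(len(s), len(t))
    if n == 0:
        return 0.0  # two trees: both diagrams are empty
    t = [0.0] * (n - len(t)) + t  # pad the shorter list with zeros
    s = [0.0] * (n - len(s)) + s
    return max(abs(si - ti) for si, ti in zip(s, t)) / 2.0

# A bouquet with loops of lengths 2 and 6 vs. a single loop of length 6:
print(d_ic([2.0, 6.0], [6.0]))  # |1 - 0| / 2 = 0.5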
4 Relating the intrinsic Čech and persistence distortion distances for a bouquet graph and an arbitrary graph Feasible regions in persistence diagrams Our eventual goal for our main theorem (Theorem 11) is to estimate a lower bound for the persistence distortion distance between metric graphs G 1 = (V 1 , E 1 ) and G 2 = (V 2 , E 2 ) so that we can compare it with the intrinsic Čech distance between them, given in Corollary 6. A fundamental part of this process relies on the notion of a feasible region for a point in a given persistence diagram lying on the y-axis. Definition 7. The feasible region for a point s := (0, s) ∈ R 2 is defined as F s = {z = (z 1 , z 2 ) : 0 ≤ z 1 ≤ z 2 , s ≤ z 2 ≤ z 1 + s}. An illustration of a feasible region is shown in Figure 3. (Figure 3: the feasible region F s of a point s = (0, s), together with points w on the y-axis illustrating Cases 1, 2.1, and 2.2 of the proof below.) The following lemma establishes an important property of feasible regions that will be used later in the proof of the main theorem. Lemma 8. Let s = (0, s) and t = (0, t) be points on the y-axis, and let z ∈ F s . Then ||z − t|| 1 ≥ ||s − t|| 1 . Proof. We proceed with a simple case analysis using the definition of F s . Let z = (z 1 , z 2 ). Case 1: Assume s ≥ t so that ||s − t|| 1 = s − t. By the definition of F s , we have z 2 ≥ s and thus ||z − t|| 1 = z 1 + z 2 − t ≥ z 1 + s − t ≥ s − t = ||s − t|| 1 . Case 2.1: If s < t, then ||s − t|| 1 = t − s. If t ≤ z 2 , then since z 1 ≥ z 2 − s and z 2 ≥ s, ||z − t|| 1 = z 1 + z 2 − t ≥ (z 2 − s) + z 2 − t ≥ t − s + t − t = t − s = ||s − t|| 1 . Case 2.2: If s < t but t > z 2 , then since z 2 ≤ z 1 + s, it follows that ||z − t|| 1 = z 1 + t − z 2 ≥ z 1 + t − (z 1 + s) = t − s = ||s − t|| 1 . The lemma now follows.
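Definition 7 and Lemma 8 are easy to probe numerically. A tiny sketch (ours) that tests membership in F s and randomly checks the inequality of Lemma 8:

import random

def in_feasible_region(z, s):
    z1, z2 = z
    return 0.0 <= z1 <= z2 and s <= z2 <= z1 + s

random.seed(0)
s = 1.5
for _ in range(10000):
    z = (random.uniform(0.0, 5.0), random.uniform(0.0, 5.0))
    t = random.uniform(0.0, 5.0)
    if in_feasible_region(z, s):
        # ||z - t||_1 >= ||s - t||_1 with s = (0, s) and t = (0, t)
        assert z[0] + abs(z[1] - t) >= abs(s - t)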
Properties of the geodesic distance function for an arbitrary metric graph Let G = (V, E) be an arbitrary metric graph with shortest system of loops of lengths 2s 1 , · · · , 2s n . Fix an arbitrary base point v ∈ |G| and consider Dg(f v ), as defined in Section 2.2. Let T v denote the shortest path tree in G rooted at v. We consider the base point v ∈ |G| to be a graph node of G; that is, we add it to V if necessary. We further assume that the graph G is "generic" in the sense that there do not exist two or more shortest paths from the base point v to any graph node of G in V . For any input metric graph G, we can perturb it to be one that is generic within arbitrarily small Gromov-Hausdorff distance. For simplicity, when v is fixed, we shall omit v in our notation and speak of the persistence diagram D := Dg(f v ), the function f := f v , and the shortest path tree T := T v . We present three straightforward observations, the first of which follows immediately from the definition of the shortest path tree and the Extreme Value Theorem. Observation 1. The shortest path tree T of G has |V | − 1 edges, and there are |E| − |V | + 1 non-tree edges. For each non-tree edge e ∈ E \ T , there exists a unique u ∈ e such that f (u) is a local maximum value of f . Note that every feature in the persistence diagram D must be born at a point in the graph that is an up-fork, i.e., a point coupled with a pair of adjacent directions along which the function f is increasing. Since there are no local minimum points of f (except for v itself), these must be vertices in the graph of degree at least 3 (see, e.g., [21]). Observation 2. The death-time of each point in D is the value f (u e ) of the local maximum u e within a unique non-tree edge e ∈ E \ T ; that is, the points of D are in one-to-one correspondence with the non-tree edges of G. The final observation relates to points belonging to cycles in G that yield local maximum values of f (see [2]). To delve further into this, let {γ 1 , . . . , γ n } denote the elements of the shortest system of loops for G listed in order of non-decreasing loop length. Observation 3. Let u e be the local maximum within a non-tree edge e, with corresponding persistence point (p e , f (u e )) ∈ D. Then for any cycle γ containing u e , lowest(γ) := min x∈γ f (x) ≤ p e . Lemma 9. Let γ = γ i 1 + γ i 2 + · · · + γ i m , with i 1 ≤ i 2 ≤ · · · ≤ i m , be a cycle in G written as a sum of elements of the shortest system of loops, and let u be the point in γ with the largest local maximum value of f . Then f (u) ≥ s i m . Proof. Since each γ i k (1 ≤ k ≤ m) is an element of the shortest system of loops for G and i 1 ≤ i 2 ≤ . . . ≤ i m , this implies that s i 1 ≤ · · · ≤ s i m , where 2s i k is the length of cycle γ i k in the shortest system of loops of G. Assume instead that f (u) < s i m . Now, γ in G must contain at least one non-tree edge as it is a cycle. Let e 1 , . . . , e ℓ = e be all non-tree edges of G with largest function value at most f (u). Assume they contain maximum points u 1 , . . . , u ℓ = u, respectively, where the edges and maxima are sorted in order of increasing function value of f . For two points x, y ∈ |T |, let α(x, y) denote the unique tree path from x to y within the shortest path tree. For each j ∈ {1, . . . , ℓ}, let e j = (e 0 j , e 1 j ) and let c j denote the cycle c j = α(v, e 1 j ) • e j • α(e 0 j , v). By assumption, since u ℓ = u is the point in γ with the largest local maximum value of f and f (u) < s i m , it follows that the length of every cycle c j is less than s i m . However, the set of cycles {c 1 , . . . , c ℓ } form a basis for the subgraph of G spanned by all edges containing only points of function value at most f (u). Therefore, we may represent γ as a linear combination of cycles from the set {c 1 , . . . , c ℓ }, i.e., γ may be decomposed into shorter cycles, each of length less than s i m = length(γ i m )/2. This is a contradiction to the fact that γ i 1 , . . . , γ i m are elements of the shortest system of loops for G. Hence, we conclude that f (u) ≥ s i m . An example that illustrates the proof of Lemma 9 is shown in Figure 5.
Later we will use the following simpler version of Lemma 9, where γ is a single element of the shortest system of loops. Corollary 10. Let γ be an element of the shortest system of loops for G with a length 2s, and let u denote the point in any edge of γ with the largest maximum value of f . Then f (u) ≥ s. The main theorem and its proof We are now ready to establish a comparison of the intrinsic Čech and persistence distortion distances between a bouquet metric graph and an arbitrary metric graph. Theorem 11. Let G 1 and G 2 be finite metric graphs such that G 1 is a bouquet graph and G 2 is arbitrary. Then d IC (G 1 , G 2 ) ≤ (1/2) · d P D (G 1 , G 2 ). Proof. Let G 1 be a bouquet graph consisting of m cycles of lengths 0 < 2t 1 ≤ . . . ≤ 2t m , all sharing one common point o ∈ |G 1 |. Let G 2 be an arbitrary metric graph with shortest system of loops consisting of n loops of lengths 2s 1 , · · · , 2s n listed in non-decreasing order. In what follows, we suppose n ≥ m; the case when m ≥ n proceeds similarly. As before, we obtain a padded sequence of length n, 2t̄ 1 ≤ 2t̄ 2 ≤ · · · ≤ 2t̄ n (where t̄ 1 = · · · = t̄ n−m = 0, t̄ n−m+1 = t 1 , · · · , and t̄ n = t m ). Let f and g denote the geodesic distance functions on G 1 and G 2 , respectively. First, as in Corollary 6, the intrinsic Čech distance between G 1 and G 2 , denoted by δ, is δ := d IC (G 1 , G 2 ) = max 1≤i≤n |s i − t̄ i |/2. (1) Second, note that the persistence diagram D 1 := Dg(f o ) with respect to the base point o is D 1 = {(0, t̄ 1 ), · · · , (0, t̄ n )} (of course, this may include some copies of (0, 0) if m < n). Next, fix an arbitrary base point v ∈ |G 2 | and consider the persistence diagram D 2 := Dg(g v ). Consider the abstract persistence diagram D′ := {(0, s 1 ), · · · , (0, s n )} = {s 1 , . . . , s n } that consists only of points on the y-axis at the s i values. Unless G 2 is also a bouquet graph, D′ is not necessarily in Φ(|G 2 |). Nevertheless, we will use this persistence diagram as a point of comparison and relate points in D 2 to D′. Notice that a consequence of Theorem 5 is that d B (D 1 , D′) = max 1≤i≤n |s i − t̄ i | = 2δ. (2) In order to accomplish our objective of relating points in D 2 with points in the ideal diagram D′, we need the following lemma relating to feasible regions, which were introduced in Section 4.1. Lemma 12. Let D″ = {z 1 , . . . , z n } be an arbitrary persistence diagram such that z i ∈ F s i . Then d B (D 1 , D′) ≤ d B (D 1 , D″). Proof. Consider the optimal bottleneck matching between D 1 and D″. According to Lemma 8, if the point t̄ j = (0, t̄ j ) ∈ D 1 is matched to z i ∈ D″ under this optimal matching, the matching of s i = (0, s i ) ∈ D′ to t̄ j will yield a smaller distance. In other words, the induced bottleneck matching between D 1 and D′, which is equal to 2δ, can only be smaller than d B (D 1 , D″). The outline of the remainder of the proof of Theorem 11 is as follows. Theorem 13 shows that one can assign points in D 2 to the points in D′ in such a way that the condition in Lemma 12 is satisfied. The fact that one can assign points in the fixed persistence diagram D 2 to the distinct feasible regions F s i relies on the series of structural observations and results in Section 4.2, along with an application of Hall's marriage theorem. Finally, the inequality in Lemma 12 and the definition of the persistence distortion distance imply that 2δ = d B (D 1 , D′) ≤ inf v∈|G 2 | d B (D 1 , D 2 ) ≤ d P D (G 1 , G 2 ), (3) which, together with (1), completes the proof of Theorem 11. The following theorem establishes the existence of a one-to-one correspondence between points in D′ and points in D 2 . The goal is to construct a bipartite graph G = (D′, D 2 , E), where there is an edge ê ∈ E from s i ∈ D′ to z ∈ D 2 if and only if z ∈ F s i . To prove the theorem, we invoke Hall's marriage theorem, which requires showing that for any subset S of points in D′, the number of neighbors of S in D 2 is at least |S|. Theorem 13. The graph G contains a perfect matching. Proof. For simplicity, let T = T v and g = g v . First, note that there is a one-to-one correspondence Ψ : E 2 \ T → D 2 between the set of non-tree edges in G 2 (each of which contains a unique maximum point of g) and the set of points in D 2 . In particular, from Observations 1 and 2, the death-time of each point in D 2 uniquely corresponds to a local maximum u e within a non-tree edge e of G 2 . Fix an arbitrary subset S ⊆ D′ with |S| = a. In order to apply Hall's marriage theorem, we must show that there are at least a neighbors of S in G. We achieve this via an iterative procedure which we now describe. The procedure begins at step k = 0 and will end after a iterations. Elements in S = {s i 1 , . . . , s i a } are processed in non-decreasing order of their values, which also means that i 1 < i 2 < · · · < i a . At the start of the k-th iteration, we will have processed the first k elements of S, denoted S k = {s i 1 , . . . , s i k }, where for each s̄ := s i h ∈ S k that we have processed (1 ≤ h ≤ k), we have maintained the following three invariances: Invariance 1: s̄ is associated to a unique edge e s̄ ∈ E 2 \ T containing a unique maximum u e s̄ such that Ψ(e s̄ ) ∈ D 2 is a neighbor of s̄. We say that e s̄ and u e s̄ are marked by s̄. Invariance 2: s̄ is also associated to a cycle γ̃ h = γ i h + Σ ℓ∈J h γ ℓ (where the sum ranges over all ℓ belonging to some index set J h ⊂ {1, . . . , i h − 1}), such that e s̄ contains the point in γ̃ h with the largest value of g. Invariance 3: height(γ̃ h ) ≤ s i h , where height(γ) = max x∈γ g(x) − min x∈γ g(x) represents the height (i.e., the maximal difference in the g function values) of a given loop γ.
Set S̄ k = S \ S k = {s i k+1 , . . . , s i a }, denoting the remaining elements from S to be processed. Our goal is to identify a new neighbor in D 2 for element s i k+1 from S̄ k satisfying the three invariances. Once we have done so, we will then set S k+1 = S k ∪ {s i k+1 } and move on to the next iteration in the procedure. Note that s i k+1 corresponds to an element γ i k+1 of the shortest system of loops for G 2 . Let e be the edge in γ i k+1 containing the maximum u e of highest g function value among all edges in γ i k+1 . There are now two possible cases to consider, and we will demonstrate how to obtain a new neighbor for s i k+1 in either case. In the first case, suppose u e is not yet marked by a previous element in S. In this case, e s i k+1 = e and γ̃ i k+1 = γ i k+1 . We claim that the point (p e , g(u e )) in the persistence diagram D 2 corresponding to the maximum u e is contained in the feasible region F s i k+1 . In other words, s i k+1 ≤ g(u e ) ≤ p e + s i k+1 . Indeed, by Lemma 9, s i k+1 ≤ g(u e ), and by Observation 3, g(u e ) − s i k+1 ≤ lowest(γ i k+1 ) ≤ p e , where lowest(γ i k+1 ) := min x∈γ i k+1 g(x). Thus, (p e , g(u e )) ∈ D 2 is a new neighbor for s i k+1 ∈ S since it is contained in F s i k+1 . Consequently, we mark e and u e by s i k+1 and continue with the next iteration. In the second case, the maximum point u e has already been marked by a previous element s j 1 ∈ S k and been associated to a cycle γ̃ j 1 . Observe that s j 1 ≤ s i k+1 since our procedure processes elements of S in non-decreasing order of their values (and thus j 1 < i k+1 ). We must now identify an edge other than e for s i k+1 satisfying the three invariance properties. To this end, let γ̄ 1 = γ i k+1 + γ̃ j 1 , and let e 1 be the edge containing the maximum in γ̄ 1 with largest function value. If e 1 is unmarked, we set e s i k+1 = e 1 . Otherwise, if e 1 is marked by some cycle γ̃ j 2 , we construct the loop γ̄ 2 = γ̄ 1 + γ̃ j 2 = γ i k+1 + γ̃ j 1 + γ̃ j 2 . We continue this process until we find γ̄ η = γ i k+1 + γ̃ j 1 + γ̃ j 2 + . . . + γ̃ j η such that the edge e η containing the point of maximum function value of γ̄ η is not marked. Once we arrive at this point, we set γ̃ i k+1 = γ̄ η and e s i k+1 = e η , so that the edge e η and corresponding maximum u e η are marked by s i k+1 . The reason that the procedure outlined above must indeed terminate is as follows. Each time a new γ̃ j ν is added to a cycle γ̄ ν−1 (for ν ∈ {1, . . . , η}), it is because the edge containing the maximum point of γ̄ ν−1 with largest function value is marked by s j ν . Note that j ν ≠ j β for ν ≠ β (as during the procedure, the edges e ν containing the maximum function value in the cycles γ̄ ν are all distinct), each j ν < i k+1 , and s j ν ∈ S k . Furthermore, Invariance 2 guarantees that γ̄ η cannot be empty, as each cycle γ̃ j ν can be written as a linear combination of elements in the shortest system of loops with indices at most j ν . As j ν < i k+1 , the cycle γ′ = γ̃ j 1 + γ̃ j 2 + . . . + γ̃ j η can be represented as a linear combination of basis cycles with indices strictly smaller than i k+1 .
In other words, γ i k+1 and γ′ must be linearly independent, and thus γ̄ η = γ i k+1 + γ′ cannot be empty. Again, j ν ≠ j β for ν ≠ β and each j ν < i k+1 , and thus it follows that after at most k iterations, we will obtain a cycle whose highest valued maximum and corresponding edge are not yet marked. Now, we must show that the three invariances are satisfied as a result of the process described in this second case. To begin, we point out that Invariance 2 holds by construction. Next, the following lemma establishes Invariance 3. Lemma 14. For γ̃ i k+1 = γ̄ η = γ i k+1 + γ̃ j 1 + γ̃ j 2 + . . . + γ̃ j η as above, height(γ̃ i k+1 ) ≤ s i k+1 . Proof. Set γ̄ 0 = γ i k+1 , and for ν ∈ {1, . . . , η}, set γ̄ ν = γ i k+1 + γ̃ j 1 + · · · + γ̃ j ν . Using induction, we will show that height(γ̄ ν ) ≤ s i k+1 for any ν ∈ {0, . . . , η}. The inequality obviously holds for ν = 0. Suppose it holds for all ν ≤ ρ < η, and consider ν = ρ + 1 where γ̄ ρ+1 = γ̄ ρ + γ̃ j ρ+1 . The cycle γ̃ j ρ+1 is added as the edge e ρ of γ̄ ρ containing the current maximum point of highest value of g has already been marked by s j ρ+1 with j ρ+1 < i k+1 . By Invariance 2, e ρ must also be the edge in γ̃ j ρ+1 containing the point of maximum g function value, which we denote by g(e ρ ). Therefore, after the addition of γ̄ ρ and γ̃ j ρ+1 , (i) highest(γ̄ ρ+1 ) := max x∈γ̄ ρ+1 g(x) ≤ g(e ρ ), and (ii) lowest(γ̄ ρ+1 ) := min x∈γ̄ ρ+1 g(x) ≥ min{lowest(γ̄ ρ ), lowest(γ̃ j ρ+1 )}. (4) By the induction hypothesis, height(γ̄ ρ ) ≤ s i k+1 , while by Invariance 3, height(γ̃ j ρ+1 ) ≤ s j ρ+1 ≤ s i k+1 . By (ii) of equation (4), it then follows that lowest(γ̄ ρ+1 ) ≥ min{g(e ρ ) − height(γ̄ ρ ), g(e ρ ) − height(γ̃ j ρ+1 )} ≥ g(e ρ ) − s i k+1 . Combining this with (i) of equation (4), we have that height(γ̄ ρ+1 ) ≤ s i k+1 . The lemma then follows by induction. Finally, we show that Invariance 1 also holds. Since γ̃ i k+1 = γ̄ η = γ i k+1 + γ′, with γ′ defined as above, by Lemma 9, we have that g(u e η ) ≥ s i k+1 . Suppose u e η is paired with some graph node w so that p e η = g(w). As the height of γ̃ i k+1 is at most s i k+1 (Lemma 14), combined with Observation 3, we have that g(u e η ) − s i k+1 ≤ lowest(γ̃ i k+1 ) ≤ p e η . This implies that the point (p e η , g(u e η )) ∈ F s i k+1 , establishing Invariance 1. We continue the process described above until k = a. At each iteration, when we process s i k , we add a new neighbor for elements in S. In the end, after processing all of the a elements in S, we find a neighbors for S, and the total number of neighbors in G of elements in S can only be larger. Since this holds for any subset S of D′, the condition for Hall's theorem is satisfied for the bipartite graph G. This implies that there exists a perfect matching in G, completing the proof of Theorem 13. Theorem 11 now follows from Lemma 12 and equation (1).
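The combinatorial core of Theorem 13 is checkable directly once the diagrams are in hand: build the bipartite containment graph of Definition 7 and test for a perfect matching. A sketch (ours) assuming the networkx library, with illustrative inputs rather than diagrams derived from actual metric graphs:

import networkx as nx

def has_perfect_matching(s_values, D2):
    """s_values: the y-axis values s_i of D'; D2: list of points (z1, z2)."""
    B = nx.Graph()
    left = [("s", i) for i in range(len(s_values))]
    right = [("z", j) for j in range(len(D2))]
    B.add_nodes_from(left)
    B.add_nodes_from(right)
    for i, s in enumerate(s_values):
        for j, (z1, z2) in enumerate(D2):
            if 0.0 <= z1 <= z2 and s <= z2 <= z1 + s:  # z in F_s (Definition 7)
                B.add_edge(("s", i), ("z", j))
    matching = nx.bipartite.maximum_matching(B, top_nodes=left)
    return all(u in matching for u in left)  # Hall's condition met for D'

print(has_perfect_matching([1.0, 2.0], [(0.5, 1.2), (0.0, 2.0)]))  # True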
However, each signature captures structural information about a graph and serves as a type of topological summary. Understanding the relationship between the intrinsic Čech and persistence distortion distances enables one to better understand the discriminative powers of such summaries.

We conjecture that the intrinsic Čech distance is less discriminative than the persistence distortion distance for general metric graphs G_1 and G_2, so that there exists a constant c ≥ 1 with d_IC(G_1, G_2) ≤ c · d_PD(G_1, G_2). This statement is trivially true in the case when both graphs are trees, as the intrinsic Čech distance is 0 while the persistence distortion distance is not. We establish a sharper version of the conjectured inequality in the case when one of the graphs is a bouquet graph and the other is arbitrary, as well as in the case when both graphs are obtained via wedges of cycles and edges. The methods of proof in Theorem 11 and Proposition 17 rely on explicitly knowing the forms of the persistence diagrams of the geodesic distance function in the case of a bouquet graph or a tree of loops. Therefore, these methods do not readily carry over to the most general setting of arbitrary metric graphs. Nevertheless, we believe that the relationship between the intrinsic Čech and persistence distortion distances should hold for arbitrary finite metric graphs. Intuitively, the intrinsic Čech signature only captures the sizes of the shortest loops in a metric graph, whereas the persistence distortion signature takes into consideration the relative positions of such loops and their interactions with one another.

As one example application relating the intrinsic Čech and persistence distortion summaries (and hence, distances), the work of Pirashvili et al. [22] considers how the topological structure of chemical compounds relates to solubility in water, which is of fundamental importance in modern drug discovery. Analysis with the topological tool mapper [23] reveals that compounds with a smaller number of cycles are more soluble. The number of cycles, as well as cycle lengths, is naturally encoded in the intrinsic Čech summary. In addition, these authors also use a discrete persistence distortion summary (where only the graph nodes, i.e., the atoms, serve as base points) to show that nearby compounds have similar levels of solubility. Although we conjecture that the intrinsic Čech distance is less discriminative than the persistence distortion distance, it might be sufficient in this particular analysis, since solubility is highly correlated with the number of cycles of a chemical compound, that is, with the intrinsic Čech summary [16]. It would be interesting to investigate other applications of the intrinsic Čech and persistence distortion summaries in the context of data sets modeled by metric graphs.

In addition, recall from the definition of the persistence distortion distance the map Φ : |G| → SpDg, Φ(v) = Dg(f_v). The map Φ is interesting in its own right. For instance, what can be said about the set Φ(|G|) in the space of persistence diagrams for a given G? Given only the set Φ(|G|) ⊂ SpDg, what information can one recover about the graph G? Oudot and Solomon [21] show that there is a dense subset of metric graphs (in the Gromov-Hausdorff topology, and indeed an open dense set in the so-called fibered topology) on which their barcode transform via the map Φ is globally injective up to isometry. They also prove its local injectivity on the space of metric graphs.
Another question of interest: how does the map Φ induce a stratification of the space of persistence diagrams? Finally, it would also be worthwhile to compare the discriminative capacities of the persistence distortion and intrinsic Čech distances to other graph distances, such as the interleaving and functional distortion distances in the special case of Reeb graphs.
19,665
1812.05282
2904221347
Metric graphs are meaningful objects for modeling complex structures that arise in many real-world applications, such as road networks, river systems, earthquake faults, blood vessels, and filamentary structures in galaxies. To study metric graphs in the context of comparison, we are interested in determining the relative discriminative capabilities of two topology-based distances between a pair of arbitrary finite metric graphs: the persistence distortion distance and the intrinsic Cech distance. We explicitly show how to compute the intrinsic Cech distance between two metric graphs based solely on knowledge of the shortest systems of loops for the graphs. Our main theorem establishes an inequality between the intrinsic Cech and persistence distortion distances in the case when one of the graphs is a bouquet graph and the other is arbitrary. The relationship also holds when both graphs are constructed via wedge sums of cycles and edges.
Recently, several distances for comparing metric graphs have been proposed based on ideas from computational topology. In the case of a special type of metric graph called a Reeb graph, these distances include: the functional distortion distance @cite_5, the combinatorial edit distance @cite_24, the interleaving distance @cite_3, and its variant in the setting of merge trees @cite_26. In particular, the functional distortion distance can be considered as a variation of the Gromov-Hausdorff distance between two metric spaces @cite_5. The interleaving distance is defined via algebraic topology and utilizes the equivalence between Reeb graphs and cosheaves @cite_3. For metric graphs in general, both the persistence distortion distance @cite_6 and the intrinsic Cech distance @cite_18 take into consideration the structure of metric graphs, independent of their geometric embeddings, by treating them as continuous metric spaces. In @cite_10, Oudot and Solomon point out that since compact geodesic spaces can be approximated by finite metric graphs in the Gromov-Hausdorff sense @cite_13 (see also the recent work of Mémoli and Okutan @cite_16), one can study potentially complicated length spaces by studying the persistence distortion of a sequence of approximating graphs.
{ "abstract": [ "", "", "", "Metric graphs are ubiquitous in science and engineering. For example, many data are drawn from hidden spaces that are graph-like, such as the cosmic web. A metric graph offers one of the simplest yet still meaningful ways to represent the non-linear structure hidden behind the data. In this paper, we propose a new distance between two finite metric graphs, called the persistence-distortion distance, which draws upon a topological idea. This topological perspective along with the metric space viewpoint provide a new angle to the graph matching problem. Our persistence-distortion distance has two properties not shared by previous methods: First, it is stable against the perturbations of the input graph metrics. Second, it is a continuous distance measure, in the sense that it is defined on an alignment of the underlying spaces of input graphs, instead of merely their nodes. This makes our persistence-distortion distance robust against, for example, different discretizations of the same underlying graph. Despite considering the input graphs as continuous spaces, that is, taking all points into account, we show that we can compute the persistence-distortion distance in polynomial time. The time complexity for the discrete case where only graph nodes are considered is much faster.", "", "Reeb graphs are structural descriptors that capture shape properties of a topological space from the perspective of a chosen function. In this work, we define a combinatorial distance for Reeb graphs of orientable surfaces in terms of the cost necessary to transform one graph into another by edit operations. The main contributions of this paper are the stability property and the optimality of this edit distance. More precisely, the stability result states that changes in the Reeb graphs, measured by the edit distance, are as small as changes in the functions, measured by the maximum norm. The optimality result states that the edit distance discriminates Reeb graphs better than any other distance for Reeb graphs of surfaces satisfying the stability property.", "We propose a metric for Reeb graphs, called the functional distortion distance. Under this distance, the Reeb graph is stable against small changes of input functions. At the same time, it remains discriminative at differentiating input functions. In particular, the main result is that the functional distortion distance between two Reeb graphs is bounded from below by the bottleneck distance between both the ordinary and extended persistence diagrams for appropriate dimensions. As an application of our results, we analyze a natural simplification scheme for Reeb graphs, and show that persistent features in Reeb graph remains persistent under simplification. Understanding the stability of important features of the Reeb graph under simplification is an interesting problem on its own right, and critical to the practical usage of Reeb graphs.", "A standard result in metric geometry is that every compact geodesic metric space can be approximated arbitrarily well by finite metric graphs in the Gromov-Hausdorff sense. It is well known that the first Betti number of the approximating graphs may blow up as the approximation gets finer. In our work, given a compact geodesic metric space @math , we define a sequence @math of non-negative real numbers by @math By construction, and the above result, this is a non-increasing sequence with limit @math . We study this sequence and its rates of decay with @math . 
We also identify a precise relationship between the sequence and the first Vietoris-Rips persistence barcode of @math . Furthermore, we specifically analyze @math and find upper and lower bounds based on hyperbolicity and other metric invariants. As a consequence of the tools we develop, our work also provides a Gromov-Hausdorff stability result for the Reeb construction on geodesic metric spaces with respect to the function given by distance to a reference point.", "Stable topological invariants are a cornerstone of persistence theory and applied topology, but their discriminative properties are often poorly-understood. In this paper we investigate the injectivity of a rich homology-based invariant first defined in dey2015comparing which we think of as embedding a metric graph in the barcode space." ], "cite_N": [ "@cite_13", "@cite_18", "@cite_26", "@cite_6", "@cite_3", "@cite_24", "@cite_5", "@cite_16", "@cite_10" ], "mid": [ "", "", "", "1511069251", "", "1581051117", "2962902468", "2890370060", "2807222275" ] }
The Relationship Between the Intrinsic Čech and Persistence Distortion Distances for Metric Graphs
When working with graph-like data equipped with a notion of distance, a very useful means of capturing existing geometric and topological relationships within the data is via a metric graph. Given an ordinary graph G = (V, E) and a length function on the edges, one may view G as a metric space with the shortest path metric in any geometric realization. Metric graphs are used to model a variety of real-world data sets, such as road networks, river systems, earthquake faults, blood vessels, and filamentary structures in galaxies [1, 24, 25]. Given these practical applications, it is natural to ask how to compare two metric graphs in a meaningful way. Such a comparison is also important for understanding the stability of these structures in noisy settings. One way to do this is to check whether there is a bijection between the two input graphs, as in the graph isomorphism problem [3]. Another way is to define, compute, and compare various distances on the space of graphs.

In this paper, we are interested in determining the discriminative capabilities of two distances that arise from computational topology: the persistence distortion distance and the intrinsic Čech distance. If two distances d_1 and d_2 on the space of metric graphs satisfy an inequality d_1(G_1, G_2) ≤ c · d_2(G_1, G_2) (for some constant c > 0 and any pair of graphs G_1 and G_2), this means that d_2 has greater discriminative capacity for differentiating between two input graphs. For instance, if d_1(G_1, G_2) = 0 and d_2(G_1, G_2) > 0, then d_2 has better discriminative power than d_1.

Related work

Well-known methods for comparing graphs using distance measures include combinatorial (e.g., graph edit distance [27]) and spectral (e.g., eigenvalue decomposition [26]) approaches. Graph edit distance minimizes the cost of transforming one graph into another via a set of elementary operations such as node/edge insertions/deletions, while spectral approaches optimize objective functions based on properties of the graph spectra.

Recently, several distances for comparing metric graphs have been proposed based on ideas from computational topology. In the case of a special type of metric graph called a Reeb graph, these distances include: the functional distortion distance [4], the combinatorial edit distance [15], the interleaving distance [12], and its variant in the setting of merge trees [19]. In particular, the functional distortion distance can be considered as a variation of the Gromov-Hausdorff distance between two metric spaces [4]. The interleaving distance is defined via algebraic topology and utilizes the equivalence between Reeb graphs and cosheaves [12]. For metric graphs in general, both the persistence distortion distance [13] and the intrinsic Čech distance [10] take into consideration the structure of metric graphs, independent of their geometric embeddings, by treating them as continuous metric spaces. In [21], Oudot and Solomon point out that since compact geodesic spaces can be approximated by finite metric graphs in the Gromov-Hausdorff sense [6] (see also the recent work of Mémoli and Okutan [18]), one can study potentially complicated length spaces by studying the persistence distortion of a sequence of approximating graphs.

In the context of comparing the relative discriminative capabilities of these distances, Bauer, Ge, and Wang [4] show that the functional distortion distance between two Reeb graphs is bounded from below by the bottleneck distance between the persistence diagrams of the Reeb graphs.
Bauer, Munch, and Wang [5] establish a strong equivalence between the functional distortion distance and the interleaving distance on the space of all Reeb graphs, which implies that the two distances are within a constant factor of one another. Carrière and Oudot [9] consider the intrinsic versions of the aforementioned distances and prove that they are all globally equivalent. They also establish a lower bound for the bottleneck distance in terms of a constant multiple of the functional distortion distance. In [13], Dey, Shi, and Wang show that the persistence distortion distance is stable with respect to changes to the input metric graphs as measured by the Gromov-Hausdorff distance. In other words, the persistence distortion distance is bounded above by a constant factor of the Gromov-Hausdorff distance. Furthermore, the intrinsic Čech distance is also bounded from above by the Gromov-Hausdorff distance for general metric spaces [10].

Our contribution

The main focus of this paper is relating two specific topological distances between general metric graphs G_1 and G_2: the intrinsic Čech distance and the persistence distortion distance. Both of these can be viewed as distances between topological signatures or summaries of G_1 and G_2. Indeed, in the case of the intrinsic Čech distance, a metric graph (G, d_G) is mapped to the persistence diagram Dg^1_IC(G) induced by the so-called intrinsic Čech filtration IC_G, and we may think of Dg^1_IC(G) as the signature of G. The intrinsic Čech distance d_IC(G_1, G_2) between two metric graphs G_1 and G_2 is the bottleneck distance between these signatures, denoted d_B(Dg^1_IC(G_1), Dg^1_IC(G_2)). For the persistence distortion distance, each metric graph G is mapped to a set Φ(G) of persistence diagrams, which is the signature of the graph G in this case. The persistence distortion distance d_PD(G_1, G_2) between G_1 and G_2 is measured by the Hausdorff distance between these image sets or signatures. See Section 2 for the definition of Φ, along with more detailed definitions of these two distances.

Our objective is to determine the relative discriminative capacities of such signatures. We conjecture that the persistence distortion distance is more discriminative than the intrinsic Čech distance.

Conjecture 1. d_IC ≤ c · d_PD for some constant c > 0.

It is known from [16] that Dg^1_IC(G) depends only on the lengths of the shortest system of loops in G, and thus the persistence distortion distance appears, intuitively, to be more discriminative. We show in Section 3 that the intrinsic Čech distance between two arbitrary finite metric graphs is determined solely by the differences in these shortest cycle lengths; see Theorem 5 for a precise statement. This further implies that the intrinsic Čech distance between two arbitrary metric trees is always 0. In contrast, the persistence distortion distance takes the relative positions of loops as well as branches into account, and is nonzero in the case of two trees. In other words, the conjecture holds for metric trees.

We make progress toward proving the conjecture in greater generality in this paper. Theorem 11 establishes an inequality between the intrinsic Čech and persistence distortion distances for two finite metric graphs in the case when one of the graphs is a bouquet graph and the other is arbitrary. In this case, the constant is c = 1/2, so the inequality is sharper than what is conjectured.
The theorem and proof appear in Section 4, and we conclude that section by proving that Conjecture 1 also holds when both graphs are constructed by taking wedge sums of cycles and edges. While this does not yet prove the conjecture for arbitrary metric graphs, our work provides the first non-trivial relationship between these two meaningful topological distances. Our proofs also provide insights on the map Φ from a metric graph into the space of persistence diagrams, as utilized in the definition of the persistence distortion distance. This map Φ is of interest in itself; indeed, see the recent study of this map in [21]. In general, we believe that this direction of establishing a qualitative understanding of topological signatures and their corresponding distances is interesting and valuable for use in applications. We leave the proof of the conjecture for arbitrary metric graphs as an open problem and give a brief discussion of some future directions in Section 5.

Persistent homology and metric graphs

We begin with a brief summary of persistent homology and how it can be utilized in the context of metric graphs. For background on homology and simplicial complexes, we refer the reader to [17, 20], and for further details on persistent homology, see, e.g., [7, 14].

In persistent homology, one studies the changing homology of an increasing sequence of subspaces of a topological space X. One typical way to obtain a filtration of X is to take a continuous function f : X → R and construct the sublevel set filtration ∅ = X_{a_0} ⊆ X_{a_1} ⊆ · · · ⊆ X_{a_m} = X, where X_{a_i} = f^{-1}(−∞, a_i] is the sublevel set defined by the value a_i. The inclusions {X_{a_i} ↪ X_{a_j}}_{0≤i<j≤m} induce the persistence module H_k(X_{a_0}) → H_k(X_{a_1}) → · · · → H_k(X_{a_m}) in any homological dimension k by applying the homology functor with coefficients in some field. Another way to obtain a filtration is to build a sequence of simplicial complexes on a set of points using, for instance, the intrinsic Čech filtration [10] discussed in Section 2.2. Elements of each homology group may then be tracked through the filtration and recorded in a persistence diagram, with one diagram for each k. A persistence diagram is a multiset of points (a_i, a_j) in the extended plane (R ∪ {±∞})^2, where each point (a_i, a_j) corresponds to a homological element that appears for the first time (is "born") at H_k(X_{a_i}) and disappears ("dies") at H_k(X_{a_j}). A persistence diagram also includes the infinitely many points along the diagonal line y = x. The usual mantra for persistence is that points close to the diagonal are likely to represent noise, while points farther from the diagonal may encode more robust topological features.

In this paper, we are interested in summarizing the topological structure of a finite metric graph, specifically in homological dimension k = 1. Given a graph G = (V, E), where V and E denote the vertex and edge sets, respectively, as well as a length function length : E → R_{≥0} on the edges in E, a finite metric graph (|G|, d_G) is a metric space where |G| is a geometric realization of G and d_G is defined as in [13]. Namely, if e and |e| denote an edge and its image in the geometric realization, we define α : [0, length(e)] → |e| to be the arclength parametrization, so that d_G(u, v) = |α^{-1}(v) − α^{-1}(u)| for any u, v ∈ |e|. This definition may then be extended to any two points in |G| by restricting a given path between the two points to edges in G, adding up the lengths of these restrictions, and taking the distance to be the minimum length over all such paths. In this way, all points along an edge are points of the metric graph, not just the original graph's vertices.
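Since everything below is phrased in terms of d_G, a small computational aside may help make the definition concrete. The following sketch is not part of the paper's development: it is one standard way to approximate geodesic distances from a base point by subdividing every edge and running Dijkstra's algorithm, and all function names and the subdivision scheme are illustrative choices of ours. Node-to-node distances come out exact for any number of subdivisions; the subdivision points merely stand in for interior points of edges.

    import heapq
    from itertools import count

    def geodesic_distances(edges, source, k=100):
        # Approximate f_v = d_G(v, .) on a metric graph given as a list of
        # (u, v, length) triples. Each edge is split into k equal pieces, so
        # interior points appear as auxiliary nodes ("sub", edge_index, i).
        graph = {}
        def add(a, b, w):
            graph.setdefault(a, []).append((b, w))
            graph.setdefault(b, []).append((a, w))
        for idx, (u, v, length) in enumerate(edges):
            prev = u
            for i in range(1, k):
                node = ("sub", idx, i)   # the point at arclength i * length / k
                add(prev, node, length / k)
                prev = node
            add(prev, v, length / k)
        dist, tie = {source: 0.0}, count()   # tie-breaker keeps heap entries comparable
        heap = [(0.0, next(tie), source)]
        while heap:
            d, _, x = heapq.heappop(heap)
            if d > dist.get(x, float("inf")):
                continue
            for y, w in graph.get(x, ()):
                if d + w < dist.get(y, float("inf")):
                    dist[y] = d + w
                    heapq.heappush(heap, (d + w, next(tie), y))
        return dist

For example, on the single self-loop [("o", "o", 6.0)], the farthest subdivision point from "o" sits at distance (essentially) 3.0, half the loop length, as one would expect.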
A system of loops of G refers to a set of cycles whose associated homology classes form a minimal generating set for the 1-dimensional (singular) homology group of G. The length-sequence of a system of loops is the sequence of lengths of its elements listed in non-decreasing order. A system of loops of G is then called shortest if its length-sequence is lexicographically smallest among all possible systems of loops of G. One particular class of metric graphs we will work with are bouquet graphs: metric graphs consisting of a single vertex with some number of self-loops of various lengths attached to it.

Intrinsic Čech and persistence distortion distances

In this section, we recall the distances between metric graphs that are being explored in this work. We note that both are actually pseudo-distances, because it can be the case that d(G_1, G_2) = 0 when G_1 ≠ G_2. However, for ease of exposition, we will refer to them simply as distances in this paper. Both rely on the bottleneck distance on the space of persistence diagrams, a version of which we now state.

Definition 2. Let X and Y be persistence diagrams with µ : X → Y a bijection. The bottleneck distance between X and Y is d_B(X, Y) := inf_{µ:X→Y} sup_{x∈X} ||x − µ(x)||_1.

Although this definition differs from the standard version of the bottleneck distance, which uses ||x − µ(x)||_∞ rather than ||x − µ(x)||_1, the two are related via the inequalities ||x||_∞ ≤ ||x||_1 ≤ 2||x||_∞.

Next, let (G, d_G) be a metric graph with geometric realization |G|. Define the intrinsic ball B(x, a_i) = {y ∈ |G| : d_G(x, y) ≤ a_i} for any x ∈ |G|, as well as the uncountable open cover U_{a_i} = {B(x, a_i) : x ∈ |G|}. We use Čech(a_i) to denote the nerve of the cover U_{a_i}, referred to as the intrinsic Čech complex. See Figure 1 for an illustration. Then {Čech(a_i) ↪ Čech(a_j)}_{0≤a_i<a_j} is the intrinsic Čech filtration, inducing the intrinsic Čech persistence module {H_k(Čech(a_i)) → H_k(Čech(a_j))}_{0≤a_i<a_j} in any dimension k; the corresponding persistence diagram is denoted Dg^k_IC(G). The following intrinsic Čech distance definition comes from [10]. Here, we work with dimension k = 1.

Figure 1: A finite subset of the infinite cover at a fixed radius (left) and its corresponding nerve (right).

Definition 3. Given two metric graphs (G_1, d_{G_1}) and (G_2, d_{G_2}), their intrinsic Čech distance is d_IC(G_1, G_2) := d_B(Dg^1_IC(G_1), Dg^1_IC(G_2)).

The persistence distortion distance was first introduced in [13]. Given a base point v ∈ |G|, define the geodesic distance function f_v : |G| → R, where f_v(x) = d_G(v, x). Then Dg(f_v) is the union of the 0- and 1-dimensional extended persistence diagrams for f_v (see [11] for the details of extended persistence). Equivalently, it is the 0-dimensional levelset zigzag persistence diagram induced by f_v [8]. Define Φ : |G| → SpDg, Φ(v) = Dg(f_v), where SpDg denotes the space of persistence diagrams. The set Φ(|G|) ⊂ SpDg is the persistence distortion of the metric graph G.

Definition 4. Given two metric graphs (G_1, d_{G_1}) and (G_2, d_{G_2}), their persistence distortion distance is d_PD(G_1, G_2) := d_H(Φ(|G_1|), Φ(|G_2|)), where d_H denotes the Hausdorff distance. In other words, d_PD(G_1, G_2) = max{ sup_{D_1∈Φ(|G_1|)} inf_{D_2∈Φ(|G_2|)} d_B(D_1, D_2), sup_{D_2∈Φ(|G_2|)} inf_{D_1∈Φ(|G_1|)} d_B(D_1, D_2) }.

Note that the diagram Dg(f_v) contains both 0- and 1-dimensional persistence points, but only points of the same dimension are matched under the bottleneck distance. In this paper, we will only focus on the points in the 1-dimensional extended persistence diagrams for the persistence distortion distance computation.

Calculating the intrinsic Čech distance

In this section, we show that the intrinsic Čech distance between two metric graphs may be easily computed from knowing the shortest systems of loops for the graphs. We begin with a theorem that characterizes the bottleneck distance between two sets of points on the y-axis of the extended plane.

Theorem 5. Let D_1 = {(0, a_1), . . . , (0, a_n)} and D_2 = {(0, b_1), . . . , (0, b_n)} be two persistence diagrams with 0 ≤ a_1 ≤ · · · ≤ a_n and 0 ≤ b_1 ≤ · · · ≤ b_n, respectively. Then d_B(D_1, D_2) = max_{1≤i≤n} |a_i − b_i|.

Proof. To simplify notation, we use the convention that for all i = 1, . . . , n, we write a_i for the point (0, a_i), b_i for (0, b_i), and 0 for (0, 0). Let µ be any matching of points in D_1 and D_2, where each point a_i in D_1 is either matched to a unique point b_j in D_2 or to its nearest neighbor on the diagonal (and similarly for D_2). Let C_µ denote the cost of the matching µ, i.e., the maximum distance between two matched points. Now, let µ* be the matching such that µ*(a_i) = b_i for all 1 ≤ i ≤ n. By construction, the cost of this matching is C_{µ*} = max_{1≤i≤n} |a_i − b_i|. We claim that the matching cost of µ* is less than or equal to that of µ, i.e., C_{µ*} ≤ C_µ. If this is the case, then µ* is an optimal bottleneck matching, and therefore d_B(D_1, D_2) = C_{µ*}.

To show this, we look at where the matchings µ and µ* differ. Note that since all of the off-diagonal points of D_1 and D_2 lie on the y-axis, any such point matched to the diagonal under µ may simply be matched to (0, 0), since this yields the same value in the 1-norm. Now, starting with b_1, let j be the first index where µ(a_j) ≠ b_j. Then we have two cases: (1) µ(a_k) = b_j for some k > j (i.e., b_j is matched with some a_k ≠ a_j); or (2) µ(0) = b_j (i.e., b_j is matched with the diagonal, or equivalently, with 0). We show that in either case, matching b_j with a_j instead does not increase the cost of the matching.

In the first case, let us also assume that µ(a_j) = b_l for some l > j (the situation where µ(a_j) = 0 will be taken care of in the second case). Then max{|a_j − b_j|, |a_k − b_l|} ≤ max{|a_j − b_l|, |a_k − b_j|}. That is, if we were to instead pair a_j with b_j and a_k with b_l, the cost of the matching would not increase. This can be seen by working through a case analysis on the relative order of a_j, a_k, b_j, and b_l along the y-axis. Intuitively, we can think of a_j, a_k, b_j, and b_l as the four corners of a trapezoid, as in Figure 2. The diagonals of the trapezoid represent the distances under the matching µ, while the legs of the trapezoid represent the distances when we pair a_j with b_j and a_k with b_l. The maximum of the lengths of the legs is always at most the maximum of the lengths of the diagonals.
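Theorem 5 and Corollary 6 translate directly into a few lines of code. The sketch below is ours (the function names and padding convention simply follow the discussion above); it computes the bottleneck distance between two diagrams supported on the y-axis, and hence the intrinsic Čech distance from the two length-sequences alone, using the fact from [16] that a shortest-system loop of length 2t contributes the point (0, t/2).

    def bottleneck_y_axis(deaths_1, deaths_2):
        # Theorem 5: sort, pad the shorter list with 0s (copies of (0, 0)),
        # and take the maximum coordinate-wise difference.
        a, b = sorted(deaths_1), sorted(deaths_2)
        n = max(len(a), len(b))
        a = [0.0] * (n - len(a)) + a
        b = [0.0] * (n - len(b)) + b
        return max((abs(x - y) for x, y in zip(a, b)), default=0.0)

    def intrinsic_cech_distance(loop_lengths_1, loop_lengths_2):
        # Corollary 6: a loop of length 2t yields the diagram point (0, t/2).
        return bottleneck_y_axis([L / 4.0 for L in loop_lengths_1],
                                 [L / 4.0 for L in loop_lengths_2])

For instance, intrinsic_cech_distance([6.0, 10.0], [8.0]) pads the second diagram to {(0, 0), (0, 2.0)}, compares it with {(0, 1.5), (0, 2.5)}, and returns 1.5.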
Adjusting the lengths of the top and bottom bases (which amounts to changing the order of a_j, a_k, b_j, and b_l along the y-axis) does not change this fact. Therefore, matching b_j with a_j instead of a_k does not increase the cost of the matching.

In the second case, if b_j is matched to 0, there must be some a_k with k ≥ j that is matched to 0 as well. If we were to instead match b_j to a_k, this does not increase the cost of the matching, since max{b_j, a_k} ≥ |a_k − b_j| (i.e., the original cost is at least the new cost). After this rematching, b_j is no longer matched to 0, and we are back in the first case. Similarly, if a_j is matched to 0, it may be rematched in the same manner.

By looking at all the pairings where µ and µ* differ (in increasing order of indices), pairing a_i with b_i instead of µ(a_i) (and similarly, pairing b_i with a_i rather than with its partner under µ) always results in a matching of the same or lower cost. Therefore, C_{µ*} ≤ C_µ for all matchings µ; hence, d_B(D_1, D_2) = C_{µ*} = max_{1≤i≤n} |a_i − b_i|.

To see how this applies to the computation of the intrinsic Čech distance between two metric graphs, let G_1 be a metric graph with a shortest system of m loops of lengths 0 < 2t_1 ≤ · · · ≤ 2t_m, and let G_2 be a metric graph with a shortest system of n loops of lengths 0 < 2s_1 ≤ · · · ≤ 2s_n. Without loss of generality, suppose n ≥ m. From [16], the 1-dimensional intrinsic Čech persistence diagrams of G_1 and G_2 are the multisets of points Dg^1_IC(G_1) = {(0, t_1/2), . . . , (0, t_m/2)} and Dg^1_IC(G_2) = {(0, s_1/2), . . . , (0, s_n/2)}. In order to apply Theorem 5, we add n − m copies of the point (0, 0) at the start of the list of points in Dg^1_IC(G_1), i.e., we let Dg^1_IC(G_1) = {(0, t̄_1/2), . . . , (0, t̄_n/2)}, where t̄_1 = · · · = t̄_{n−m} = 0, t̄_{n−m+1} = t_1, . . . , and t̄_n = t_m.

Corollary 6. Let G_1 and G_2 be as above. Then d_IC(G_1, G_2) = max_{1≤i≤n} |s_i − t̄_i| / 2.

4 Relating the intrinsic Čech and persistence distortion distances for a bouquet graph and an arbitrary graph

Feasible regions in persistence diagrams

Our eventual goal for the main theorem (Theorem 11) is to estimate a lower bound for the persistence distortion distance between metric graphs G_1 = (V_1, E_1) and G_2 = (V_2, E_2), so that we can compare it with the intrinsic Čech distance between them, given in Corollary 6. A fundamental part of this process relies on the notion of a feasible region for a point of a persistence diagram lying on the y-axis.

Definition 7. The feasible region of a point s := (0, s) ∈ R^2 is defined as F_s = {z = (z_1, z_2) : 0 ≤ z_1 ≤ z_2, s ≤ z_2 ≤ z_1 + s}.
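As a quick aid to intuition, Definition 7 is easy to test computationally. The helper below is a sketch of ours (the name is illustrative); it is also reused in a later sketch.

    def in_feasible_region(z, s):
        # F_s = { (z1, z2) : 0 <= z1 <= z2  and  s <= z2 <= z1 + s }
        z1, z2 = z
        return 0.0 <= z1 <= z2 and s <= z2 <= z1 + s

    print(in_feasible_region((2.0, 5.0), 4.0))   # True:  4 <= 5 <= 2 + 4
    print(in_feasible_region((0.5, 5.0), 4.0))   # False: 5 > 0.5 + 4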
An illustration of a feasible region is shown in Figure 3.

Figure 3: The feasible region F_s of a point s = (0, s), shown with a point z = (x, y) and the positions of the point t = (0, t) in the three cases of the proof of Lemma 8.

The following lemma establishes an important property of feasible regions that will be used later in the proof of the main theorem.

Lemma 8. Let s = (0, s) be a point on the y-axis, and let z ∈ F_s. Then for any point t = (0, t) with t ≥ 0, we have ||z − t||_1 ≥ ||s − t||_1.

Proof. We proceed with a simple case analysis using the definition of F_s. Let z = (z_1, z_2).

Case 1: Assume s ≥ t, so that ||s − t||_1 = s − t. By the definition of F_s, we have z_2 ≥ s, and thus ||z − t||_1 = z_1 + z_2 − t ≥ z_1 + s − t ≥ s − t = ||s − t||_1.

Case 2.1: If s < t, then ||s − t||_1 = t − s. If t ≤ z_2, then since z_1 ≥ z_2 − s and z_2 ≥ t, we have ||z − t||_1 = z_1 + z_2 − t ≥ (z_2 − s) + z_2 − t ≥ (t − s) + (t − t) = t − s = ||s − t||_1.

Case 2.2: If s < t but t > z_2, then since z_2 ≤ z_1 + s, it follows that ||z − t||_1 = z_1 + t − z_2 ≥ z_1 + t − (z_1 + s) = t − s = ||s − t||_1.

The lemma now follows.

Properties of the geodesic distance function for an arbitrary metric graph

Let G = (V, E) be an arbitrary metric graph with a shortest system of loops of lengths 2s_1, · · · , 2s_n. Fix an arbitrary base point v ∈ |G| and consider Dg(f_v), as defined in Section 2.2. Let T_v denote the shortest path tree in G rooted at v. We consider the base point v ∈ |G| to be a graph node of G; that is, we add it to V if necessary. We further assume that the graph G is "generic" in the sense that there do not exist two or more shortest paths from the base point v to any graph node of G in V. For any input metric graph G, we can perturb it to a generic one within arbitrarily small Gromov-Hausdorff distance. For simplicity, when v is fixed, we shall omit v from our notation and speak of the persistence diagram D := Dg(f_v), the function f := f_v, and the shortest path tree T := T_v.
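Computationally, the shortest path tree and the non-tree edges can be recovered from the distance values alone. The sketch below is ours and assumes the generic situation just described, in which every non-root node is reached by a unique "tight" edge.

    def classify_edges(edges, dist, eps=1e-9):
        # An edge (u, v, length) lies on the shortest path tree exactly when it
        # is tight, i.e., the distances of its endpoints differ by its length;
        # under genericity the tight edges are the |V| - 1 tree edges and the
        # rest are the |E| - |V| + 1 non-tree edges discussed below.
        tree, non_tree = [], []
        for u, v, length in edges:
            tight = (abs(dist[u] + length - dist[v]) < eps or
                     abs(dist[v] + length - dist[u]) < eps)
            (tree if tight else non_tree).append((u, v, length))
        return tree, non_tree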
We present three straightforward observations; the first follows immediately from the definition of the shortest path tree, and the second from the Extreme Value Theorem.

Observation 1. The shortest path tree T of G has |V| − 1 edges, and there are |E| − |V| + 1 non-tree edges.

Observation 2. For each non-tree edge e ∈ E \ T, there exists a unique u ∈ e such that f(u) is a local maximum value of f.

Note that every feature in the persistence diagram D must be born at a point of the graph that is an up-fork, i.e., a point coupled with a pair of adjacent directions along which the function f is increasing. Since there are no local minimum points of f (except for v itself), these must be vertices of the graph of degree at least 3 (see, e.g., [21]).
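The local maximum of Observation 2 also admits a closed form, recorded here as a small sketch of ours: moving into a non-tree edge from either endpoint, f increases with unit slope, so the two branches meet at a single interior point. We state the formula as an assumption of the sketch rather than as a result from the paper.

    def edge_max_value(dist, u, v, length):
        # On a non-tree edge, f grows from f(u) at one end and from f(v) at the
        # other; the maximum sits where dist[u] + x = dist[v] + (length - x),
        # giving the value (f(u) + f(v) + length) / 2.
        return 0.5 * (dist[u] + dist[v] + length)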
The final observation relates to points belonging to cycles in G that yield local maximum values of f (see [2]). If γ is a cycle in G and u_e is the point of γ attaining the largest local maximum value of f, contained in an edge e, we write (p_e, f(u_e)) for the corresponding point of D and lowest(γ) := min_{x∈γ} f(x).

Observation 3. With the notation above, lowest(γ) ≤ p_e.

To delve further into this, let {γ_1, . . . , γ_n} denote the elements of the shortest system of loops for G, listed in order of non-decreasing loop length.

Lemma 9. Let γ = γ_{i_1} + γ_{i_2} + · · · + γ_{i_m} be a nonempty sum of distinct elements of the shortest system of loops for G with i_1 ≤ i_2 ≤ · · · ≤ i_m, and let u denote the point of γ attaining the largest local maximum value of f. Then f(u) ≥ s_{i_m}.

Proof. Since each γ_{i_k} (1 ≤ k ≤ m) is an element of the shortest system of loops for G and i_1 ≤ i_2 ≤ · · · ≤ i_m, we have s_{i_1} ≤ · · · ≤ s_{i_m}, where 2s_{i_k} is the length of the cycle γ_{i_k} in the shortest system of loops of G.

Assume instead that f(u) < s_{i_m}. Now, γ must contain at least one non-tree edge, as it is a cycle. Let e_1, . . . , e_ℓ = e be all non-tree edges of G whose largest function value is at most f(u), and assume they contain the maximum points u_1, . . . , u_ℓ = u, respectively, where the edges and maxima are sorted in order of increasing function value of f. For two points x, y ∈ |T|, let α(x, y) denote the unique tree path from x to y within the shortest path tree. For each j ∈ {1, . . . , ℓ}, let e_j = (e_j^0, e_j^1) and let c_j denote the cycle c_j = α(v, e_j^1) ∘ e_j ∘ α(e_j^0, v). By assumption, since u_ℓ = u is the point of γ with the largest local maximum value of f and f(u) < s_{i_m}, it follows that the length of every cycle c_j is less than 2s_{i_m} (each c_j has length 2f(u_j) ≤ 2f(u)). However, the set of cycles {c_1, . . . , c_ℓ} forms a basis for the subgraph of G spanned by all edges containing only points of function value at most f(u). Therefore, we may represent γ as a linear combination of cycles from the set {c_1, . . . , c_ℓ}; i.e., γ may be decomposed into shorter cycles, each of length less than 2s_{i_m} = length(γ_{i_m}). This is a contradiction to the fact that γ_{i_1}, . . . , γ_{i_m} are elements of the shortest system of loops for G. Hence, we conclude that f(u) ≥ s_{i_m}.

An example that illustrates the proof of Lemma 9 is shown in Figure 5. Later we will use the following simpler version of Lemma 9, where γ is a single element of the shortest system of loops.

Corollary 10. Let γ be an element of the shortest system of loops for G with length 2s, and let u denote the point of γ attaining the largest local maximum value of f. Then f(u) ≥ s.
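As a tiny numerical sanity check of Corollary 10, reusing the hypothetical geodesic_distances and edge_max_value helpers sketched earlier, consider a bouquet graph with two loops of lengths 6 and 10 based at a node "o": each loop of length 2s attains its maximum of f exactly at height s, so the bound f(u) ≥ s holds with equality.

    edges = [("o", "o", 6.0), ("o", "o", 10.0)]
    dist = geodesic_distances(edges, "o")
    print(edge_max_value(dist, "o", "o", 6.0))    # 3.0, and indeed 3.0 >= s = 3
    print(edge_max_value(dist, "o", "o", 10.0))   # 5.0, and indeed 5.0 >= s = 5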
, s n } that consists only of points on the y-axis at the s i values. Unless G 2 is also a bouquet graph, D is not necessarily in Φ(|G 2 |). Nevertheless, we will use this persistence diagram as a point of comparison and relate points in D 2 to D . Notice that a consequence of Theorem 5 is that d B (D 1 , D ) = n max i=1 |s i − t i | = 2δ.(2) In order to accomplish our objective of relating points in D 2 with points in the ideal diagram D , we need the following lemma relating to feasible regions, which were introduced in Section 4.1. Lemma 12. Let D = {z 1 , . . . , z n } be an arbitrary persistence diagram such that z i ∈ F s i . Then d B (D 1 , D ) ≤ d B (D 1 , D ). Proof. Consider the optimal bottleneck matching between D 1 and D . According to Lemma 8, if the point t j = (0, t j ) ∈ D 1 is matched to z i ∈ D under this optimal matching, the matching of s i = (0, s i ) ∈ D to t j will yield a smaller distance. In other words, the induced bottleneck matching between D 1 and D , which is equal to 2δ, can only be smaller than d B (D 1 , D ). The outline of the remainder of the proof of Theorem 11 is as follows. Theorem 13 shows that one can assign points in D 2 to the points in D in such a way that the condition in Lemma 12 is satisfied. The fact that one can assign points in the fixed persistence diagram D 2 to the distinct feasible regions F s i relies on the series of structural observations and results in Section 4.2, along with an application of Hall's marriage theorem. Finally, the inequality in Lemma 12 and the definition of the persistence distortion distance imply that 2δ = d B (D 1 , D ) ≤ inf v∈|G 2 | d B (D 1 , D 2 ) ≤ d P D (G 1 , G 2 ),(3) which, together with (1), completes the proof of Theorem 11. The following theorem establishes the existence of a one-to-one correspondence between points in D and points in D 2 . The goal is to construct a bipartite graph G = (D , D 2 , E), where there is an edgeê ∈ E from s i ∈ D to z ∈ D 2 if and only if z ∈ F s i . To prove the theorem, we invoke Hall's marriage theorem, which requires showing that for any subset S of points in D , the number of neighbors of S in D 2 is at least |S|. Theorem 13. The graph G contains a perfect matching. Proof. For simplicity, let T = T v and g = g v . First, note that there is a one-to-one correspondence Ψ : E 2 \ T → D 2 between the set of non-tree edges in G 2 (each of which contains a unique maximum point of g) and the set of points in D 2 . In particular, from Observations 1 and 2, the death-time of each point in D 2 uniquely corresponds to a local maximum u e within a non-tree edge e of G 2 . Fix an arbitrary subset S ⊆ D with |S| = a. In order to apply Hall's marriage theorem, we must show that there are at least a neighbors of S in G. We achieve this via an iterative procedure which we now describe. The procedure begins at step k = 0 and will end after a iterations. Elements in S = {s i 1 , . . . , s ia } are processed in non-decreasing order of their values, which also means that i 1 < i 2 < · · · < i a . At the start of the k-th iteration, we will have processed the first k elements of S, denoted S k = {s i 1 , . . . , s i k }, where for each s := s i h ∈ S k that we have processed (1 ≤ h ≤ k), we have maintained the following three invariances: Invariance 1:s is associated to a unique edge es ∈ E 2 \ T containing a unique maximum u es such that Ψ(es) ∈ D 2 is a neighbor ofs. We say that es and u es are marked bys. 
Invariance 2: s̄ is also associated to a cycle γ̃_h = γ_{i_h} + Σ_{ℓ∈J_h} γ_ℓ (where the sum ranges over all ℓ belonging to some index set J_h ⊂ {1, ..., i_h − 1}), such that e_s̄ contains the point in γ̃_h with the largest value of g.

Invariance 3: height(γ̃_h) ≤ s_{i_h}, where height(γ) = max_{x∈γ} g(x) − min_{x∈γ} g(x) represents the height (i.e., the maximal difference in the g function values) of a given loop γ.

Set S̄_k = S \ S_k = {s_{i_{k+1}}, ..., s_{i_a}}, denoting the remaining elements from S to be processed. Our goal is to identify a new neighbor in D_2 for the element s_{i_{k+1}} from S̄_k satisfying the three invariances. Once we have done so, we will then set S_{k+1} = S_k ∪ {s_{i_{k+1}}} and move on to the next iteration in the procedure.

Note that s_{i_{k+1}} corresponds to an element γ_{i_{k+1}} of the shortest system of loops for G_2. Let e be the edge in γ_{i_{k+1}} containing the maximum u_e of highest g function value among all edges in γ_{i_{k+1}}. There are now two possible cases to consider, and we will demonstrate how to obtain a new neighbor for s_{i_{k+1}} in either case.

In the first case, suppose u_e is not yet marked by a previous element in S. In this case, e_{s_{i_{k+1}}} = e and γ̃_{i_{k+1}} = γ_{i_{k+1}}. We claim that the point (p_e, g(u_e)) in the persistence diagram D_2 corresponding to the maximum u_e is contained in the feasible region F_{s_{i_{k+1}}}. In other words, s_{i_{k+1}} ≤ g(u_e) ≤ p_e + s_{i_{k+1}}. Indeed, by Lemma 9, s_{i_{k+1}} ≤ g(u_e), and by Observation 3, g(u_e) − s_{i_{k+1}} ≤ lowest(γ_{i_{k+1}}) ≤ p_e, where lowest(γ_{i_{k+1}}) := min_{x∈γ_{i_{k+1}}} g(x). Thus, (p_e, g(u_e)) ∈ D_2 is a new neighbor for s_{i_{k+1}} ∈ S since it is contained in F_{s_{i_{k+1}}}. Consequently, we mark e and u_e by s_{i_{k+1}} and continue with the next iteration.

In the second case, the maximum point u_e has already been marked by a previous element s_{j_1} ∈ S_k and been associated to a cycle γ̃_{j_1}. Observe that s_{j_1} ≤ s_{i_{k+1}}, since our procedure processes elements of S in non-decreasing order of their values (and thus j_1 < i_{k+1}). We must now identify an edge other than e for s_{i_{k+1}} satisfying the three invariance properties. To this end, let γ^1 = γ_{i_{k+1}} + γ̃_{j_1}, and let e_1 be the edge containing the maximum in γ^1 with largest function value. If e_1 is unmarked, we set e_{s_{i_{k+1}}} = e_1. Otherwise, if e_1 is marked by some cycle γ̃_{j_2}, we construct the loop γ^2 = γ^1 + γ̃_{j_2} = γ_{i_{k+1}} + γ̃_{j_1} + γ̃_{j_2}. We continue this process until we find γ^η = γ_{i_{k+1}} + γ̃_{j_1} + γ̃_{j_2} + ... + γ̃_{j_η} such that the edge e_η containing the point of maximum function value of γ^η is not marked. Once we arrive at this point, we set γ̃_{i_{k+1}} = γ^η and e_{s_{i_{k+1}}} = e_η, so that the edge e_η and the corresponding maximum u_{e_η} are marked by s_{i_{k+1}}.

The reason that the procedure outlined above must indeed terminate is as follows. Each time a new γ̃_{j_ν} is added to a cycle γ^{ν−1} (for ν ∈ {1, ..., η}), it is because the edge containing the maximum point of γ^{ν−1} with largest function value is marked by s_{j_ν}. Note that j_ν ≠ j_β for ν ≠ β (as during the procedure, the edges e_ν containing the point of maximum function value in the cycles γ^ν are all distinct), each j_ν < i_{k+1}, and s_{j_ν} ∈ S_k. Furthermore, Invariance 2 guarantees that γ^η cannot be empty, as each cycle γ̃_{j_ν} can be written as a linear combination of elements in the shortest system of loops with indices at most j_ν. As j_ν < i_{k+1}, the cycle γ′ = γ̃_{j_1} + γ̃_{j_2} + ... + γ̃_{j_η} can be represented as a linear combination of basis cycles with indices strictly smaller than i_{k+1}.
In other words, γ_{i_{k+1}} and γ′ must be linearly independent, and thus γ^η = γ_{i_{k+1}} + γ′ cannot be empty. Again, j_ν ≠ j_β for ν ≠ β and each j_ν < i_{k+1}, and thus it follows that after at most k iterations, we will obtain a cycle whose highest valued maximum and corresponding edge are not yet marked.

Now, we must show that the three invariances are satisfied as a result of the process described in this second case. To begin, we point out that Invariance 2 holds by construction. Next, the following lemma establishes Invariance 3.

Lemma 14. For γ̃_{i_{k+1}} = γ^η = γ_{i_{k+1}} + γ̃_{j_1} + γ̃_{j_2} + ... + γ̃_{j_η} as above, height(γ̃_{i_{k+1}}) ≤ s_{i_{k+1}}.

Proof. Set γ^0 = γ_{i_{k+1}}, and for ν ∈ {1, ..., η}, set γ^ν = γ_{i_{k+1}} + γ̃_{j_1} + ... + γ̃_{j_ν}. Using induction, we will show that height(γ^ν) ≤ s_{i_{k+1}} for any ν ∈ {0, ..., η}. The inequality obviously holds for ν = 0. Suppose it holds for all ν ≤ ρ < η, and consider ν = ρ + 1, where γ^{ρ+1} = γ^ρ + γ̃_{j_{ρ+1}}. The cycle γ̃_{j_{ρ+1}} is added because the edge e_ρ of γ^ρ containing the current maximum point of highest value of g has already been marked by s_{j_{ρ+1}}, with j_{ρ+1} < i_{k+1}. By Invariance 2, e_ρ must also be the edge in γ̃_{j_{ρ+1}} containing the point of maximum g function value, which we denote by g(e_ρ). Therefore, after the addition of γ^ρ and γ̃_{j_{ρ+1}},

(i) highest(γ^{ρ+1}) := max_{x∈γ^{ρ+1}} g(x) ≤ g(e_ρ), and
(ii) lowest(γ^{ρ+1}) := min_{x∈γ^{ρ+1}} g(x) ≥ min{lowest(γ^ρ), lowest(γ̃_{j_{ρ+1}})}.    (4)

By the induction hypothesis, height(γ^ρ) ≤ s_{i_{k+1}}, while by Invariance 3, height(γ̃_{j_{ρ+1}}) ≤ s_{j_{ρ+1}} ≤ s_{i_{k+1}}. By (ii) of equation (4), it then follows that lowest(γ^{ρ+1}) ≥ min{g(e_ρ) − height(γ^ρ), g(e_ρ) − height(γ̃_{j_{ρ+1}})} ≥ g(e_ρ) − s_{i_{k+1}}. Combining this with (i) of equation (4), we have that height(γ^{ρ+1}) ≤ s_{i_{k+1}}. The lemma then follows by induction.

Finally, we show that Invariance 1 also holds. Since γ̃_{i_{k+1}} = γ^η = γ_{i_{k+1}} + γ′, with γ′ defined as above, Lemma 9 gives g(u_{e_η}) ≥ s_{i_{k+1}}. Suppose u_{e_η} is paired with some graph node w, so that p_{e_η} = g(w). As the height of γ̃_{i_{k+1}} is at most s_{i_{k+1}} (Lemma 14), combined with Observation 3, we have that g(u_{e_η}) − s_{i_{k+1}} ≤ lowest(γ̃_{i_{k+1}}) ≤ p_{e_η}. This implies that the point (p_{e_η}, g(u_{e_η})) ∈ F_{s_{i_{k+1}}}, establishing Invariance 1.

We continue the process described above until k = a. At each iteration, when we process s_{i_k}, we add a new neighbor for elements in S. In the end, after processing all of the a elements in S, we find a neighbors for S, and the total number of neighbors in Ĝ of elements in S can only be larger. Since this holds for any subset S of D′, the condition for Hall's theorem is satisfied for the bipartite graph Ĝ. This implies that there exists a perfect matching in Ĝ, completing the proof of Theorem 13.

Theorem 11 now follows from Lemma 12 and equation (1).
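To make the matching step concrete, here is a minimal sketch of ours (not code from the paper) of the Hall-type check behind Theorem 13: given the values s_1, ..., s_n of D′ and the points of D_2, it builds the bipartite graph Ĝ with an edge whenever z lies in the feasible region F_{s_i} of Section 4.1 and asks for a matching that saturates D′. The matching routine is networkx's Hopcroft-Karp implementation; the example inputs are made up for illustration.

import networkx as nx
from networkx.algorithms import bipartite

def in_feasible_region(z, s):
    # Feasible region of s = (0, s): 0 <= z1 <= z2 and s <= z2 <= z1 + s.
    z1, z2 = z
    return 0 <= z1 <= z2 and s <= z2 <= z1 + s

def saturating_matching(s_values, d2_points):
    # Left nodes: the s_i of D'; right nodes: the points of D_2.
    g = nx.Graph()
    left = [("s", i) for i in range(len(s_values))]
    right = [("z", j) for j in range(len(d2_points))]
    g.add_nodes_from(left, bipartite=0)
    g.add_nodes_from(right, bipartite=1)
    for i, s in enumerate(s_values):
        for j, z in enumerate(d2_points):
            if in_feasible_region(z, s):
                g.add_edge(("s", i), ("z", j))
    matching = bipartite.hopcroft_karp_matching(g, top_nodes=left)
    # The matching saturates D' iff every left node appears in it.
    return {u: matching[u] for u in left if u in matching}

# Hypothetical example: two s-values and a diagram with three points.
print(saturating_matching([1.0, 2.0], [(0.5, 1.2), (0.0, 2.0), (3.0, 3.5)]))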
Discussion and future work

In this paper, we compare the discriminative capabilities of the intrinsic Čech and persistence distortion distances, which are based on topological signatures of metric graphs. The intrinsic Čech signature arises from the intrinsic Čech filtration of a metric graph, and the persistence distortion signature is based on the set of persistence diagrams arising from sublevel set filtrations of geodesic distance functions from all base points in a given metric graph. A map from a metric graph to these topological signatures is not injective: two different metric graphs may map to the same signature. However, each signature captures structural information of a graph and serves as a type of topological summary. Understanding the relationship between the intrinsic Čech and persistence distortion distances enables one to better understand the discriminative powers of such summaries.

We conjecture that the intrinsic Čech distance is less discriminative than the persistence distortion distance for general metric graphs G_1 and G_2, so that there exists a constant c ≥ 1 with d_IC(G_1, G_2) ≤ c · d_PD(G_1, G_2). This statement is trivially true in the case when both graphs are trees, as the intrinsic Čech distance is 0 while the persistence distortion distance is not. We establish a sharper version of the conjectured inequality in the case when one of the graphs is a bouquet graph and the other is arbitrary, as well as in the case when both graphs are obtained via wedges of cycles and edges. The methods of proof in Theorem 11 and Proposition 17 rely on explicitly knowing the forms of the persistence diagrams for the geodesic distance function in the case of a bouquet graph or a tree of loops. Therefore, these methods do not readily carry over to the most general setting for arbitrary metric graphs. Nevertheless, we believe that the relationship between the intrinsic Čech and persistence distortion distances should hold for arbitrary finite metric graphs. Intuitively, the intrinsic Čech signature only captures the sizes of the shortest loops in a metric graph, whereas the persistence distortion signature takes into consideration the relative positions of such loops and their interactions with one another.

As one example application relating the intrinsic Čech and persistence distortion summaries (and hence, distances), the work of Pirashvili et al. [22] considers how the topological structure of chemical compounds relates to solubility in water, which is of fundamental importance in modern drug discovery. Analysis with the topological tool mapper [23] reveals that compounds with a smaller number of cycles are more soluble. The number of cycles, as well as cycle lengths, is naturally encoded in the intrinsic Čech summary. In addition, these authors also use a discrete persistence distortion summary (where only the graph nodes, i.e., the atoms, serve as base points) to show that nearby compounds have similar levels of solubility. Although we conjecture that the intrinsic Čech distance is less discriminative than the persistence distortion distance, it might be sufficient in this particular analysis, since solubility is highly correlated with the number of cycles of a chemical compound, that is, with the intrinsic Čech summary [16].

It would be interesting to investigate other applications of the intrinsic Čech and persistence distortion summaries in the context of data sets modeled by metric graphs. In addition, recall from the definition of the persistence distortion distance the map Φ : |G| → SpDg, Φ(v) = Dg(f_v). The map Φ is interesting in its own right. For instance, what can be said about the set Φ(|G|) in the space of persistence diagrams for a given G? Given only the set Φ(|G|) ⊂ SpDg, what information can one recover about the graph G? Oudot and Solomon [21] show that there is a dense subset of metric graphs (in the Gromov-Hausdorff topology, and indeed an open dense set in the so-called fibered topology) on which their barcode transform via the map Φ is globally injective up to isometry. They also prove its local injectivity on the space of metric graphs.
Another question of interest is how the map Φ induces a stratification in the space of persistence diagrams. Finally, it would also be worthwhile to compare the discriminative capacities of the persistence distortion and intrinsic Čech distances to other graph distances, such as the interleaving and functional distortion distances in the special case of Reeb graphs.
19,665
1812.05282
2904221347
Metric graphs are meaningful objects for modeling complex structures that arise in many real-world applications, such as road networks, river systems, earthquake faults, blood vessels, and filamentary structures in galaxies. To study metric graphs in the context of comparison, we are interested in determining the relative discriminative capabilities of two topology-based distances between a pair of arbitrary finite metric graphs: the persistence distortion distance and the intrinsic Cech distance. We explicitly show how to compute the intrinsic Cech distance between two metric graphs based solely on knowledge of the shortest systems of loops for the graphs. Our main theorem establishes an inequality between the intrinsic Cech and persistence distortion distances in the case when one of the graphs is a bouquet graph and the other is arbitrary. The relationship also holds when both graphs are constructed via wedge sums of cycles and edges.
In the context of comparing the relative discriminative capabilities of these distances, Bauer, Ge, and Wang @cite_5 show that the functional distortion distance between two Reeb graphs is bounded from below by the bottleneck distance between the persistence diagrams of the Reeb graphs. Bauer, Munch, and Wang @cite_0 establish a strong equivalence between the functional distortion distance and the interleaving distance on the space of all Reeb graphs, which implies the two distances are within a constant factor of one another. Carrière and Oudot @cite_9 consider the intrinsic versions of the aforementioned distances and prove that they are all globally equivalent. They also establish a lower bound for the bottleneck distance in terms of a constant multiple of the functional distortion distance. In @cite_6 , Dey, Shi, and Wang show that the persistence distortion distance is stable with respect to changes to input metric graphs as measured by the Gromov-Hausdorff distance. In other words, the persistence distortion distance is bounded above by a constant factor of the Gromov-Hausdorff distance. Furthermore, the intrinsic Čech distance is also bounded from above by the Gromov-Hausdorff distance for general metric spaces @cite_18 .
{ "abstract": [ "", "As graphical summaries for topological spaces and maps, Reeb graphs are common objects in the computer graphics or topological data analysis literature. Defining good metrics between these objects has become an important question for applications, where it matters to quantify the extent by which two given Reeb graphs differ. Recent contributions emphasize this aspect, proposing novel distances such as functional distortion or interleaving that are provably more discriminative than the so-called bottleneck distance, being true metrics whereas the latter is only a pseudo-metric. Their main drawback compared to the bottleneck distance is to be comparatively hard (if at all possible) to evaluate. Here we take the opposite view on the problem and show that the bottleneck distance is in fact good enough locally, in the sense that it is able to discriminate a Reeb graph from any other Reeb graph in a small enough neighborhood, as efficiently as the other metrics do. This suggests considering the intrinsic metrics induced by these distances, which turn out to be all globally equivalent. This novel viewpoint on the study of Reeb graphs has a potential impact on applications, where one may not only be interested in discriminating between data but also in interpolating between them.", "Metric graphs are ubiquitous in science and engineering. For example, many data are drawn from hidden spaces that are graph-like, such as the cosmic web. A metric graph offers one of the simplest yet still meaningful ways to represent the non-linear structure hidden behind the data. In this paper, we propose a new distance between two finite metric graphs, called the persistence-distortion distance, which draws upon a topological idea. This topological perspective along with the metric space viewpoint provide a new angle to the graph matching problem. Our persistence-distortion distance has two properties not shared by previous methods: First, it is stable against the perturbations of the input graph metrics. Second, it is a continuous distance measure, in the sense that it is defined on an alignment of the underlying spaces of input graphs, instead of merely their nodes. This makes our persistence-distortion distance robust against, for example, different discretizations of the same underlying graph. Despite considering the input graphs as continuous spaces, that is, taking all points into account, we show that we can compute the persistence-distortion distance in polynomial time. The time complexity for the discrete case where only graph nodes are considered is much faster.", "The Reeb graph is a construction that studies a topological space through the lens of a real valued function. It has been commonly used in applications, however its use on real data means that it is desirable and increasingly necessary to have methods for comparison of Reeb graphs. Recently, several metrics on the set of Reeb graphs have been proposed. In this paper, we focus on two: the functional distortion distance and the interleaving distance. The former is based on the Gromov-Hausdorff distance, while the latter utilizes the equivalence between Reeb graphs and a particular class of cosheaves. However, both are defined by constructing a near-isomorphism between the two graphs of study. In this paper, we show that the two metrics are strongly equivalent on the space of Reeb graphs. 
Our result also implies the bottleneck stability for persistence diagrams in terms of the Reeb graph interleaving distance.", "We propose a metric for Reeb graphs, called the functional distortion distance. Under this distance, the Reeb graph is stable against small changes of input functions. At the same time, it remains discriminative at differentiating input functions. In particular, the main result is that the functional distortion distance between two Reeb graphs is bounded from below by the bottleneck distance between both the ordinary and extended persistence diagrams for appropriate dimensions. As an application of our results, we analyze a natural simplification scheme for Reeb graphs, and show that persistent features in Reeb graph remains persistent under simplification. Understanding the stability of important features of the Reeb graph under simplification is an interesting problem on its own right, and critical to the practical usage of Reeb graphs." ], "cite_N": [ "@cite_18", "@cite_9", "@cite_6", "@cite_0", "@cite_5" ], "mid": [ "", "2963212950", "1511069251", "1756475526", "2962902468" ] }
The Relationship Between the Intrinsic Čech and Persistence Distortion Distances for Metric Graphs
When working with graph-like data equipped with a notion of distance, a very useful means of capturing existing geometric and topological relationships within the data is via a metric graph. Given an ordinary graph G = (V, E) and a length function on the edges, one may view G as a metric space with the shortest path metric in any geometric realization. Metric graphs are used to model a variety of real-world data sets, such as road networks, river systems, earthquake faults, blood vessels, and filamentary structures in galaxies [1,24,25]. Given these practical applications, it is natural to ask how to compare two metric graphs in a meaningful way. Such a comparison is important for understanding the stability of these structures in the noisy setting. One way to do this is to check whether there is a bijection between the two input graphs as part of a graph isomorphism problem [3]. Another way is to define, compute, and compare various distances on the space of graphs. In this paper, we are interested in determining the discriminative capabilities of two distances that arise from computational topology: the persistence distortion distance and the intrinsic Čech distance. If two distances d_1 and d_2 on the space of metric graphs satisfy an inequality d_1(G_1, G_2) ≤ c · d_2(G_1, G_2) (for some constant c > 0 and any pair of graphs G_1 and G_2), this means that d_2 has greater discriminative capacity for differentiating between two input graphs. For instance, if d_1(G_1, G_2) = 0 and d_2(G_1, G_2) > 0, then d_2 has a better discriminative power than d_1.

Related work

Well-known methods for comparing graphs using distance measures include combinatorial (e.g., graph edit distance [27]) and spectral (e.g., eigenvalue decomposition [26]) approaches. Graph edit distance minimizes the cost of transforming one graph to another via a set of elementary operators such as node/edge insertions/deletions, while spectral approaches optimize objective functions based on properties of the graph spectra. Recently, several distances for comparing metric graphs have been proposed based on ideas from computational topology. In the case of a special type of metric graph called a Reeb graph, these distances include: the functional distortion distance [4], the combinatorial edit distance [15], the interleaving distance [12], and its variant in the setting of merge trees [19]. In particular, the functional distortion distance can be considered as a variation of the Gromov-Hausdorff distance between two metric spaces [4]. The interleaving distance is defined via algebraic topology and utilizes the equivalence between Reeb graphs and cosheaves [12]. For metric graphs in general, both the persistence distortion distance [13] and the intrinsic Čech distance [10] take into consideration the structure of metric graphs, independent of their geometric embeddings, by treating them as continuous metric spaces. In [21], Oudot and Solomon point out that since compact geodesic spaces can be approximated by finite metric graphs in the Gromov-Hausdorff sense [6] (see also the recent work of Mémoli and Okutan [18]), one can study potentially complicated length spaces by studying the persistence distortion of a sequence of approximating graphs. In the context of comparing the relative discriminative capabilities of these distances, Bauer, Ge, and Wang [4] show that the functional distortion distance between two Reeb graphs is bounded from below by the bottleneck distance between the persistence diagrams of the Reeb graphs.
Bauer, Munch, and Wang [5] establish a strong equivalence between the functional distortion distance and the interleaving distance on the space of all Reeb graphs, which implies the two distances are within a constant factor of one another. Carrière and Oudot [9] consider the intrinsic versions of the aforementioned distances and prove that they are all globally equivalent. They also establish a lower bound for the bottleneck distance in terms of a constant multiple of the functional distortion distance. In [13], Dey, Shi, and Wang show that the persistence distortion distance is stable with respect to changes to input metric graphs as measured by the Gromov-Hausdorff distance. In other words, the persistence distortion distance is bounded above by a constant factor of the Gromov-Hausdorff distance. Furthermore, the intrinsic Čech distance is also bounded from above by the Gromov-Hausdorff distance for general metric spaces [10].

Our contribution

The main focus of this paper is relating two specific topological distances between general metric graphs G_1 and G_2: the intrinsic Čech distance and the persistence distortion distance. Both of these can be viewed as distances between topological signatures or summaries of G_1 and G_2. Indeed, in the case of the intrinsic Čech distance, a metric graph (G, d_G) is mapped to the persistence diagram Dg_1 IC_G induced by the so-called intrinsic Čech filtration IC_G, and we may think of Dg_1 IC_G as the signature of G. The intrinsic Čech distance d_IC(G_1, G_2) between two metric graphs G_1 and G_2 is the bottleneck distance between these signatures, denoted d_B(Dg_1 IC_{G_1}, Dg_1 IC_{G_2}). For the persistence distortion distance, each metric graph G is mapped to a set Φ(G) of persistence diagrams, which is the signature of the graph G in this case. The persistence distortion distance d_PD(G_1, G_2) between G_1 and G_2 is measured by the Hausdorff distance between these image sets or signatures. See Section 2 for the definition of Φ, along with more detailed definitions of these two distances. Our objective is to determine the relative discriminative capacities of such signatures. We conjecture that the persistence distortion distance is more discriminative than the intrinsic Čech distance.

Conjecture 1. d_IC ≤ c · d_PD for some constant c > 0.

It is known from [16] that Dg_1 IC_G depends only on the lengths of the shortest system of loops in G, and thus the persistence distortion distance appears to be more discriminative, intuitively. We show in Section 3 that the intrinsic Čech distance between two arbitrary finite metric graphs is determined solely by the difference in these shortest cycle lengths; see Theorem 5 for a precise statement. This further implies that the intrinsic Čech distance between two arbitrary metric trees is always 0. In contrast, the persistence distortion distance takes relative positions of loops as well as branches into account, and is nonzero in the case of two trees. In other words, the conjecture holds for metric trees. We make progress toward proving the conjecture in greater generality in this paper. Theorem 11 establishes an inequality between the intrinsic Čech and persistence distortion distances for two finite metric graphs in the case when one of the graphs is a bouquet graph and the other is arbitrary. In this case, the constant is c = 1/2, so the inequality is sharper than what is conjectured.
The theorem and proof appear in Section 4, and we conclude that section by proving that Conjecture 1 also holds when both graphs are constructed by taking wedge sums of cycles and edges. While this does not yet prove the conjecture for arbitrary metric graphs, our work provides the first non-trivial relationship between these two meaningful topological distances. Our proofs also provide insights on the map Φ from a metric graph into the space of persistence diagrams as utilized in the definition of the persistence distortion distance. This map Φ is of interest in itself; indeed, see the recent study of this map in [21]. In general, we believe that this direction of establishing qualitative understanding of topological signatures and their corresponding distances is interesting and valuable for use in applications. We leave the proof of the conjecture for arbitrary metric graphs as an open problem and give a brief discussion on some future directions in Section 5.

Persistent homology and metric graphs

We begin with a brief summary of persistent homology and how it can be utilized in the context of metric graphs. For background on homology and simplicial complexes, we refer the reader to [17,20], and for further details on persistent homology, see, e.g., [7,14]. In persistent homology, one studies the changing homology of an increasing sequence of subspaces of a topological space X. One (typical) way to obtain a filtration of X is to take a continuous function f : X → R and construct the sublevel set filtration, ∅ = X_{a_0} ⊆ X_{a_1} ⊆ ... ⊆ X_{a_m} = X, by writing X_{a_i} = f^{−1}((−∞, a_i]) for the sublevel set defined by the value a_i. The inclusions {X_{a_i} → X_{a_j}}_{0≤i<j≤m} induce the persistence module H_k(X_{a_0}) → H_k(X_{a_1}) → ... → H_k(X_{a_m}) in any homological dimension k by applying the homology functor with coefficients in some field. Another way to obtain a filtration is to build a sequence of simplicial complexes on a set of points using, for instance, the intrinsic Čech filtration [10] discussed in Section 2.2. Elements of each homology group may then be tracked through the filtration and recorded in a persistence diagram, with one diagram for each k. A persistence diagram is a multiset of points (a_i, a_j) in the extended plane (R ∪ {±∞})^2, where each point (a_i, a_j) corresponds to a homological element that appears for the first time (is "born") at H_k(X_{a_i}) and which disappears ("dies") at H_k(X_{a_j}). A persistence diagram also includes the infinitely many points along the diagonal line y = x. The usual mantra for persistence is that points close to the diagonal are likely to represent noise, while points further from the diagonal may encode more robust topological features.
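To make the sublevel-set construction concrete in the simplest case, here is a minimal sketch of ours (not code from the paper) computing the 0-dimensional sublevel-set persistence pairs of a piecewise-linear function on a path graph, using the standard union-find pairing with the elder rule; the function values and the path-graph setting are illustrative assumptions.

def sublevel_persistence_0d(values):
    # Process vertices of the path v_0 - ... - v_{n-1} in order of increasing
    # function value; a component is born at a local minimum and dies (elder
    # rule) when it merges into an older component.
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in order:
        parent[i], birth[i] = i, values[i]
        for j in (i - 1, i + 1):            # merge with activated neighbors
            if j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] <= birth[rj]:  # make ri the younger component
                    ri, rj = rj, ri
                if birth[ri] < values[i]:   # skip zero-persistence pairs
                    pairs.append((birth[ri], values[i]))
                parent[ri] = rj
    root = find(order[0])
    pairs.append((birth[root], float("inf")))  # essential component
    return pairs

print(sublevel_persistence_0d([0.0, 2.0, 1.0, 3.0]))  # [(1.0, 2.0), (0.0, inf)]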
In this paper, we are interested in summarizing the topological structure of a finite metric graph, specifically in homological dimension k = 1. Given a graph G = (V, E), where V and E denote the vertex and edge sets, respectively, as well as a length function length : E → R_{≥0} on the edges in E, a finite metric graph (|G|, d_G) is a metric space where |G| is a geometric realization of G and d_G is defined as in [13]. Namely, if e and |e| denote an edge and its image in the geometric realization, we define α : [0, length(e)] → |e| to be the arclength parametrization, so that d_G(u, v) = |α^{−1}(v) − α^{−1}(u)| for any u, v ∈ |e|. This definition may then be extended to any two points in |G| by restricting a given path from one point to another to edges in G, adding up these lengths, and then taking the distance to be the minimum length of any such path. In this way, all points along an edge are points in a metric graph, not just the original graph's vertices.

A system of loops of G refers to a set of cycles whose associated homology classes form a minimal generating set for the 1-dimensional (singular) homology group of G. The length-sequence of a system of loops is the sequence of lengths of elements in this set listed in non-decreasing order. Thus, a system of loops of G is shortest if its length-sequence is lexicographically smallest among all possible systems of loops of G. One particular class of metric graphs we will be working with are bouquet graphs. These are metric graphs containing a single vertex with a number of self-loops of various lengths attached to it.

Intrinsic Čech and persistence distortion distances

In this section, we recall the distances between metric graphs that are being explored in this work. We note that both are actually pseudo-distances, because it can be the case that d(G_1, G_2) = 0 when G_1 ≠ G_2. However, for ease of exposition, we will refer to them simply as distances in this paper. Both rely on the bottleneck distance on the space of persistence diagrams, a version of which we now state.

Definition 2. Let X and Y be persistence diagrams with µ : X → Y a bijection. The bottleneck distance between X and Y is d_B(X, Y) := inf_{µ:X→Y} sup_{x∈X} ||x − µ(x)||_1.

Although this definition differs from the standard version of the bottleneck distance, which uses ||x − µ(x)||_∞ rather than ||x − µ(x)||_1, the two are related via the inequalities ||x||_∞ ≤ ||x||_1 ≤ 2||x||_∞.

Next, let (G, d_G) be a metric graph with geometric realization |G|. Define the intrinsic ball B(x, a_i) = {y ∈ |G| : d_G(x, y) ≤ a_i} for any x ∈ |G|, as well as the uncountable open cover U_{a_i} = {B(x, a_i) : x ∈ |G|}. We use Čech(a_i) to denote the nerve of the cover U_{a_i}, referred to as the intrinsic Čech complex. See Figure 1 for an illustration. Then {Čech(a_i) → Čech(a_j)}_{0≤a_i<a_j} is the intrinsic Čech filtration, inducing the intrinsic Čech persistence module {H_k(Čech(a_i)) → H_k(Čech(a_j))}_{0≤a_i<a_j} in any dimension k, and the corresponding persistence diagram is denoted Dg_k IC_G. The following intrinsic Čech distance definition comes from [10]. Here, we work with dimension k = 1.

Figure 1: A finite subset of the infinite cover at a fixed radius (left) and its corresponding nerve (right).

Definition 3. Given two metric graphs (G_1, d_{G_1}) and (G_2, d_{G_2}), their intrinsic Čech distance is d_IC(G_1, G_2) := d_B(Dg_1 IC_{G_1}, Dg_1 IC_{G_2}).
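For a single loop, the characterization recalled in Section 3 (from [16]) makes Definitions 2 and 3 easy to evaluate by hand; the following small instance is our illustration, not an example from the paper. If G_1 and G_2 are circles of circumferences 2t and 2s, then

\[ \mathrm{Dg}_1\, \mathcal{IC}_{G_1} = \{(0, t/2)\}, \qquad \mathrm{Dg}_1\, \mathcal{IC}_{G_2} = \{(0, s/2)\}. \]

Matching the two points to each other costs |t − s|/2 in the 1-norm, while matching each to the diagonal costs t/2 and s/2, respectively (the 1-norm distance from (0, a) to the diagonal is a). Since |t − s|/2 ≤ max{t/2, s/2}, the direct matching is optimal, and

\[ d_{IC}(G_1, G_2) = \min\left\{ \tfrac{|t-s|}{2},\ \max\left\{\tfrac{t}{2}, \tfrac{s}{2}\right\} \right\} = \tfrac{|t-s|}{2}, \]

in agreement with Corollary 6 below.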
The persistence distortion distance was first introduced in [13]. Given a base point v ∈ |G|, define the geodesic distance function f_v : |G| → R, where f_v(x) = d_G(v, x). Then Dg(f_v) is the union of the 0- and 1-dimensional extended persistence diagrams for f_v (see [11] for the details of extended persistence). Equivalently, it is the 0-dimensional levelset zigzag persistence diagram induced by f_v [8]. Define Φ : |G| → SpDg, Φ(v) = Dg(f_v), where SpDg denotes the space of persistence diagrams for all points v ∈ |G|. The set Φ(|G|) ⊂ SpDg is the persistence distortion of the metric graph G.

Definition 4. Given two metric graphs (G_1, d_{G_1}) and (G_2, d_{G_2}), their persistence distortion distance is d_PD(G_1, G_2) := d_H(Φ(|G_1|), Φ(|G_2|)), where d_H denotes the Hausdorff distance. In other words,

d_PD(G_1, G_2) = max{ sup_{D_1∈Φ(|G_1|)} inf_{D_2∈Φ(|G_2|)} d_B(D_1, D_2), sup_{D_2∈Φ(|G_2|)} inf_{D_1∈Φ(|G_1|)} d_B(D_1, D_2) }.

Note that the diagram Dg(f_v) contains both 0- and 1-dimensional persistence points, but only points of the same dimension are matched under the bottleneck distance. In this paper, we will only focus on the points in the 1-dimensional extended persistence diagrams for the persistence distortion distance computation.

Calculating the intrinsic Čech distance

In this section, we show that the intrinsic Čech distance between two metric graphs may be easily computed from knowing the shortest systems of loops for the graphs. We begin with a theorem that characterizes the bottleneck distance between two sets of points in the extended plane.

Theorem 5. Let D_1 = {(0, a_1), ..., (0, a_n)} and D_2 = {(0, b_1), ..., (0, b_n)} be two persistence diagrams with 0 ≤ a_1 ≤ ... ≤ a_n and 0 ≤ b_1 ≤ ... ≤ b_n, respectively. Then

d_B(D_1, D_2) = max_{1≤i≤n} |a_i − b_i|.

Proof. To simplify notation, we use the convention that for all i = 1, ..., n, (0, a_i) = a_i, (0, b_i) = b_i, and (0, 0) = 0. Let µ be any matching of points in D_1 and D_2, where each point a_i in D_1 is either matched to a unique point b_j in D_2 or to the nearest neighbor in the diagonal (and similarly for D_2). Assume that C_µ is the cost of the matching µ, i.e., the maximum distance between two matched points. Now, let µ* be the matching such that µ*(a_i) = b_i for all 0 ≤ i ≤ n. By construction, the cost of this matching is C_{µ*} = max_{1≤i≤n} |a_i − b_i|. We claim that the matching cost of µ* is less than or equal to that of µ, i.e., C_{µ*} ≤ C_µ. If this is the case, then µ* is the optimal bottleneck matching and therefore d_B(D_1, D_2) = C_{µ*}.

To show this, we look at where the matchings µ and µ* differ. Note that since all of the off-diagonal points in D_1 and D_2 lie on the y-axis, any such point matched to the diagonal under µ may simply be matched to (0, 0), since this yields the same value in the 1-norm. Now, starting with b_1, let j be the first index where µ(a_j) ≠ b_j. Then, we have two cases: (1) µ(a_k) = b_j for some k > j (i.e., b_j is matched with some a_k ≠ a_j); or (2) µ(0) = b_j (i.e., b_j is matched with the diagonal, or equivalently, to 0). We show that in either case, matching b_j with a_j instead does not increase the cost of the matching.

In the first case, let us also assume that µ(a_j) = b_l for some l > j (the situation where µ(a_j) = 0 will be taken care of in the second case). Then, max{|a_j − b_j|, |a_k − b_l|} ≤ max{|a_j − b_l|, |a_k − b_j|}. That is, if we were to instead pair a_j with b_j and a_k with b_l, the cost of the matching would be lower. This can be seen by working through a case analysis on the relative order of a_j, a_k, b_j, and b_l along the y-axis. Intuitively, we can think of a_j, a_k, b_j, and b_l as the four corners of a trapezoid as in Figure 2. The diagonals of the trapezoid represent the distances under the matching µ, while the legs of the trapezoid represent the distances when we pair a_j with b_j and a_k with b_l. The maximum of the lengths of the legs will always be less than the maximum of the lengths of the diagonals. Adjusting the lengths of the top and bottom bases (which amounts to changing the order of a_j, a_k, b_j, and b_l along the y-axis) does not change this fact. Therefore, matching b_j with a_j instead of a_k does not increase the cost of the matching.

In the second case, if b_j is matched to 0, there must be some a_k with k ≥ j that is matched to 0 as well. If we were to instead match b_j to a_k, this does not increase the cost of the matching, since max{b_j, a_k} ≥ |a_k − b_j| (i.e., the original cost is greater than the new cost). After this rematching, b_j is no longer matched to 0 and this reverts to the first case. Similarly, if a_j is matched to 0, it may be rematched in a similar manner. By looking at all the pairings where µ and µ* differ (in increasing order of indices), pairing a_i with b_i instead of µ(a_i) (and similarly, pairing b_i with a_i rather than what it was paired with under µ) always results in the same or lower cost matching. Therefore, C_{µ*} ≤ C_µ for all matchings µ; hence, d_B(D_1, D_2) = C_{µ*} = max_{1≤i≤n} |a_i − b_i|.

To see how this applies to the computation of the intrinsic Čech distance between two metric graphs, let G_1 be a metric graph with a shortest system of m loops of lengths 0 < 2t_1 ≤ ... ≤ 2t_m, and let G_2 be a metric graph with a shortest system of n loops of lengths 0 < 2s_1 ≤ ... ≤ 2s_n. Without loss of generality, suppose n ≥ m. From [16], the 1-dimensional intrinsic Čech persistence diagrams of G_1 and G_2 are the multisets of points Dg_1 IC_{G_1} = {(0, t_1/2), ..., (0, t_m/2)} and Dg_1 IC_{G_2} = {(0, s_1/2), ..., (0, s_n/2)}. In order to apply Theorem 5, we add n − m copies of the point (0, 0) at the start of the list of points in Dg_1 IC_{G_1}, i.e., we let Dg_1 IC_{G_1} = {(0, t'_1/2), ..., (0, t'_n/2)}, where t'_1 = ... = t'_{n−m} = 0, t'_{n−m+1} = t_1, ..., and t'_n = t_m.

Corollary 6. Let G_1 and G_2 be as above. Then d_IC(G_1, G_2) = max_{1≤i≤n} |s_i − t'_i| / 2.
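The computation in Theorem 5 and Corollary 6 reduces to sorting, padding, and a running maximum. The following sketch is our illustration, not code from the paper; the example loop lengths at the end are made up.

def bottleneck_on_y_axis(a, b):
    # Theorem 5: for diagrams {(0, a_i)} and {(0, b_i)}, padded with copies of
    # (0, 0) to equal size, the sorted matching is optimal.
    n = max(len(a), len(b))
    a, b = sorted(a), sorted(b)
    a = [0.0] * (n - len(a)) + a
    b = [0.0] * (n - len(b)) + b
    return max(abs(x - y) for x, y in zip(a, b))

def intrinsic_cech_distance(loop_lengths_1, loop_lengths_2):
    # Corollary 6: a loop of length 2t contributes the point (0, t/2) to
    # Dg_1 IC_G (from [16]), so d_IC = max_i |s_i - t'_i| / 2 after padding.
    halves_1 = [l / 4.0 for l in loop_lengths_1]   # t/2, with l = 2t
    halves_2 = [l / 4.0 for l in loop_lengths_2]
    return bottleneck_on_y_axis(halves_1, halves_2)

# Loop-length profiles 2t_i = (4, 8) versus 2s_i = (2, 6, 10).
print(intrinsic_cech_distance([4.0, 8.0], [2.0, 6.0, 10.0]))  # -> 0.5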
Relating the intrinsic Čech and persistence distortion distances for a bouquet graph and an arbitrary graph

Feasible regions in persistence diagrams

Our eventual goal for our main theorem (Theorem 11) is to estimate a lower bound for the persistence distortion distance between metric graphs G_1 = (V_1, E_1) and G_2 = (V_2, E_2), so that we can compare it with the intrinsic Čech distance between them, given in Corollary 6. A fundamental part of this process relies on the notion of a feasible region for a point in a given persistence diagram lying on the y-axis.

Definition 7. The feasible region for a point s := (0, s) ∈ R^2 is defined as F_s = {z = (z_1, z_2) : 0 ≤ z_1 ≤ z_2, s ≤ z_2 ≤ z_1 + s}.
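As a quick illustration of Definition 7 (a sketch of ours, not from the paper), membership in F_s amounts to two comparisons, and a random test can spot-check the inequality that Lemma 8 establishes next; the sampling scheme is an illustrative assumption.

import random

def in_feasible_region(z, s):
    # Definition 7: F_s = {(z1, z2) : 0 <= z1 <= z2 and s <= z2 <= z1 + s}.
    z1, z2 = z
    return 0 <= z1 <= z2 and s <= z2 <= z1 + s

# Spot-check Lemma 8: for z in F_s and t = (0, t) on the y-axis,
# ||z - t||_1 = z1 + |z2 - t| is at least ||s - t||_1 = |s - t|.
random.seed(0)
for _ in range(10000):
    s, t = random.uniform(0, 5), random.uniform(0, 5)
    z1 = random.uniform(0, 5)
    z2 = random.uniform(s, z1 + s)           # forces s <= z2 <= z1 + s
    if not in_feasible_region((z1, z2), s):  # may still violate z1 <= z2
        continue
    assert z1 + abs(z2 - t) >= abs(s - t) - 1e-9  # tolerance for rounding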
Figure 3: An illustration of the feasible region F_s for a point s = (0, s), showing a point z = (x, y) and the configurations arising in Cases 1, 2.1, and 2.2 of Lemma 8 (labels: s, F_s, z, t, w).

An illustration of a feasible region is shown in Figure 3. The following lemma establishes an important property of feasible regions that will be used later in the proof of the main theorem.

Lemma 8. Let s = (0, s) be a point on the y-axis, and let z ∈ F_s. Then for any point t = (0, t) on the y-axis, ||z − t||_1 ≥ ||s − t||_1.

Proof. We proceed with a simple case analysis using the definition of F_s. Let z = (z_1, z_2).

Case 1: Assume s ≥ t, so that ||s − t||_1 = s − t. By the definition of F_s, we have z_2 ≥ s, and thus ||z − t||_1 = z_1 + z_2 − t ≥ z_1 + s − t ≥ s − t = ||s − t||_1.

Case 2.1: If s < t, then ||s − t||_1 = t − s. If t ≤ z_2, then since z_1 ≥ z_2 − s and z_2 ≥ s, ||z − t||_1 = z_1 + z_2 − t ≥ (z_2 − s) + z_2 − t ≥ t − s + t − t = t − s = ||s − t||_1.

Case 2.2: If s < t but t > z_2, then since z_2 ≤ z_1 + s, it follows that ||z − t||_1 = z_1 + t − z_2 ≥ z_1 + t − (z_1 + s) = t − s = ||s − t||_1.

The lemma now follows.

Properties of the geodesic distance function for an arbitrary metric graph

Let G = (V, E) be an arbitrary metric graph with shortest system of loops of lengths 2s_1, ..., 2s_n. Fix an arbitrary base point v ∈ |G| and consider Dg(f_v), as defined in Section 2.2. Let T_v denote the shortest path tree in G rooted at v. We consider the base point v ∈ |G| to be a graph node of G; that is, we add it to V if necessary. We further assume that the graph G is "generic" in the sense that there do not exist two or more shortest paths from the base point v to any graph node of G in V. For any input metric graph G, we can perturb it to be one that is generic within arbitrarily small Gromov-Hausdorff distance. For simplicity, when v is fixed, we shall omit v in our notation and speak of the persistence diagram D := Dg(f_v), the function f := f_v, and the shortest path tree T := T_v.
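The objects just introduced are easy to compute for a concrete graph, as in the following sketch of ours (not code from the paper) illustrating the observations stated next. The closed form (f(x) + f(y) + length(e)) / 2 for the value of the maximum inside a non-tree edge e = (x, y) is a standard fact about geodesic distance functions that we assume here; it is not a formula quoted from the paper.

import networkx as nx

def non_tree_maxima(G, v, weight="length"):
    # Shortest path tree T_v: the union of shortest paths from v.
    dist, paths = nx.single_source_dijkstra(G, v, weight=weight)
    tree = set()
    for path in paths.values():
        tree.update(zip(path, path[1:]))
    tree |= {(b, a) for (a, b) in tree}  # store both orientations
    # For each non-tree edge e = (x, y), the unique local maximum of
    # f = d_G(v, .) in the interior of e has value (f(x) + f(y) + len(e)) / 2.
    return {(x, y): (dist[x] + dist[y] + d[weight]) / 2.0
            for x, y, d in G.edges(data=True) if (x, y) not in tree}

# Two triangles glued at 'o': loops of lengths 3 and 6.
G = nx.Graph()
G.add_edge("o", "a", length=1); G.add_edge("a", "b", length=1); G.add_edge("b", "o", length=1)
G.add_edge("o", "c", length=2); G.add_edge("c", "d", length=2); G.add_edge("d", "o", length=2)
print(non_tree_maxima(G, "o"))  # {('a', 'b'): 1.5, ('c', 'd'): 3.0}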
Properties of the geodesic distance function for an arbitrary metric graph

Let G = (V, E) be an arbitrary metric graph with shortest system of loops of lengths 2s_1, …, 2s_n. Fix an arbitrary base point v ∈ |G| and consider Dg(f_v), as defined in Section 2.2. Let T_v denote the shortest path tree in G rooted at v. We consider the base point v ∈ |G| to be a graph node of G; that is, we add it to V if necessary. We further assume that the graph G is "generic" in the sense that there do not exist two or more shortest paths from the base point v to any graph node of G in V. For any input metric graph G, we can perturb it to be one that is generic within arbitrarily small Gromov-Hausdorff distance. For simplicity, when v is fixed, we shall omit v in our notation and speak of the persistence diagram D := Dg(f_v), the function f := f_v, and the shortest path tree T := T_v. We present three straightforward observations, the first of which follows immediately from the definition of the shortest path tree and the Extreme Value Theorem.

Observation 1. The shortest path tree T of G has |V| − 1 edges, and there are |E| − |V| + 1 non-tree edges. For each non-tree edge e ∈ E \ T, there exists a unique u ∈ e such that f(u) is a local maximum value of f.

Note that every feature in the persistence diagram D must be born at a point in the graph that is an up-fork, i.e., a point coupled with a pair of adjacent directions along which the function f is increasing. Since there are no local minimum points of f (except for v itself), these must be vertices in the graph of degree at least 3 (see, e.g., [21]). The final observation relates to points belonging to cycles in G that yield local maximum values of f (see [2]). To delve further into this, let {γ_1, …, γ_n} denote the elements of the shortest system of loops for G, listed in order of non-decreasing loop length.

Lemma 9. Let γ = γ_{i_1} + γ_{i_2} + ⋯ + γ_{i_m} with i_1 ≤ i_2 ≤ ⋯ ≤ i_m, and let u denote the point with the largest local maximum value of f among all edges of γ. Then f(u) ≥ s_{i_m}.

Proof. Since each γ_{i_k} (1 ≤ k ≤ m) is an element of the shortest system of loops for G and i_1 ≤ i_2 ≤ ⋯ ≤ i_m, this implies that s_{i_1} ≤ ⋯ ≤ s_{i_m}, where 2s_{i_k} is the length of cycle γ_{i_k} in the shortest system of loops of G.

Assume instead that f(u) < s_{i_m}. Now, γ in G must contain at least one non-tree edge, as it is a cycle. Let e_1, …, e_ℓ = e be all non-tree edges of G whose largest function value is at most f(u). Assume they contain maximum points u_1, …, u_ℓ = u, respectively, where the edges and maxima are sorted in order of increasing function value of f. For two points x, y ∈ |T|, let α(x, y) denote the unique tree path from x to y within the shortest path tree. For each j ∈ {1, …, ℓ}, let e_j = (e_j^0, e_j^1) and let c_j denote the cycle c_j = α(v, e_j^1) ∘ e_j ∘ α(e_j^0, v). By assumption, since u_ℓ = u is the point in γ with the largest local maximum value of f and f(u) < s_{i_m}, it follows that the length of every cycle c_j is less than s_{i_m}. However, the set of cycles {c_1, …, c_ℓ} forms a basis for the subgraph of G spanned by all edges containing only points of function value at most f(u). Therefore, we may represent γ as a linear combination of cycles from the set {c_1, …, c_ℓ}; i.e., γ may be decomposed into shorter cycles, each of length less than s_{i_m} = length(γ_{i_m})/2. This is a contradiction to the fact that γ_{i_1}, …, γ_{i_m} are elements of the shortest system of loops for G.
Hence, we conclude that f(u) ≥ s_{i_m}. An example that illustrates the proof of Lemma 9 is shown in Figure 5. Later we will use the following simpler version of Lemma 9, where γ is a single element of the shortest system of loops.
Corollary 10. Let γ be an element of the shortest system of loops for G with length 2s, and let u denote the point in the edges of γ with the largest maximum value of f. Then f(u) ≥ s.

The main theorem and its proof

We are now ready to establish a comparison of the intrinsic Čech and persistence distortion distances between a bouquet metric graph and an arbitrary metric graph.

Theorem 11. Let G_1 and G_2 be finite metric graphs such that G_1 is a bouquet graph and G_2 is arbitrary. Then d_IC(G_1, G_2) ≤ (1/2) d_PD(G_1, G_2).

Proof. Let G_1 be a bouquet graph consisting of m cycles of lengths 0 < 2t_1 ≤ ⋯ ≤ 2t_m, all sharing one common point o ∈ |G_1|. Let G_2 be an arbitrary metric graph with shortest system of loops consisting of n loops of lengths 2s_1, …, 2s_n, listed in non-decreasing order. In what follows, we suppose n ≥ m; the case when m ≥ n proceeds similarly. As before, we obtain a padded sequence of length n, 2t′_1 ≤ 2t′_2 ≤ ⋯ ≤ 2t′_n (where t′_1 = ⋯ = t′_{n−m} = 0, t′_{n−m+1} = t_1, …, and t′_n = t_m). Let f and g denote the geodesic distance functions on G_1 and G_2, respectively. First, as in Corollary 6, the intrinsic Čech distance between G_1 and G_2, denoted by δ, is

δ := d_IC(G_1, G_2) = max_{i=1,…,n} |s_i − t′_i| / 2.   (1)

Second, note that the persistence diagram D_1 := Dg(f_o) with respect to the base point o is D_1 = {(0, t′_1), …, (0, t′_n)} (of course, this may include some copies of (0, 0) if m < n). Next, fix an arbitrary base point v ∈ |G_2| and consider the persistence diagram D_2 := Dg(g_v). Consider the abstract persistence diagram D′ := {(0, s_1), …, (0, s_n)} = {s_1, …, s_n}
that consists only of points on the y-axis at the s_i values. Unless G_2 is also a bouquet graph, D′ is not necessarily in Φ(|G_2|). Nevertheless, we will use this persistence diagram as a point of comparison and relate points in D_2 to D′. Notice that a consequence of Theorem 5 is that

d_B(D_1, D′) = max_{i=1,…,n} |s_i − t′_i| = 2δ.   (2)

In order to accomplish our objective of relating points in D_2 with points in the ideal diagram D′, we need the following lemma relating to feasible regions, which were introduced in Section 4.1.

Lemma 12. Let D″ = {z_1, …, z_n} be an arbitrary persistence diagram such that z_i ∈ F_{s_i}. Then d_B(D_1, D′) ≤ d_B(D_1, D″).

Proof. Consider the optimal bottleneck matching between D_1 and D″. According to Lemma 8, if the point t_j = (0, t′_j) ∈ D_1 is matched to z_i ∈ D″ under this optimal matching, the matching of s_i = (0, s_i) ∈ D′ to t_j will yield a smaller distance. In other words, the induced bottleneck matching between D_1 and D′, which is equal to 2δ, can only be smaller than d_B(D_1, D″).

The outline of the remainder of the proof of Theorem 11 is as follows. Theorem 13 shows that one can assign points in D_2 to the points in D′ in such a way that the condition in Lemma 12 is satisfied. The fact that one can assign points in the fixed persistence diagram D_2 to the distinct feasible regions F_{s_i} relies on the series of structural observations and results in Section 4.2, along with an application of Hall's marriage theorem. Finally, the inequality in Lemma 12 and the definition of the persistence distortion distance imply that

2δ = d_B(D_1, D′) ≤ inf_{v ∈ |G_2|} d_B(D_1, D_2) ≤ d_PD(G_1, G_2),   (3)

which, together with (1), completes the proof of Theorem 11.

The following theorem establishes the existence of a one-to-one correspondence between points in D′ and points in D_2. The goal is to construct a bipartite graph Ĝ = (D′, D_2, Ê), where there is an edge ê ∈ Ê from s_i ∈ D′ to z ∈ D_2 if and only if z ∈ F_{s_i}. To prove the theorem, we invoke Hall's marriage theorem, which requires showing that for any subset S of points in D′, the number of neighbors of S in D_2 is at least |S|.

Theorem 13. The graph Ĝ contains a perfect matching.

Proof. For simplicity, let T = T_v and g = g_v. First, note that there is a one-to-one correspondence Ψ : E_2 \ T → D_2 between the set of non-tree edges in G_2 (each of which contains a unique maximum point of g) and the set of points in D_2. In particular, from Observations 1 and 2, the death-time of each point in D_2 uniquely corresponds to a local maximum u_e within a non-tree edge e of G_2. Fix an arbitrary subset S ⊆ D′ with |S| = a. In order to apply Hall's marriage theorem, we must show that there are at least a neighbors of S in Ĝ. We achieve this via an iterative procedure, which we now describe.

The procedure begins at step k = 0 and will end after a iterations. Elements in S = {s_{i_1}, …, s_{i_a}} are processed in non-decreasing order of their values, which also means that i_1 < i_2 < ⋯ < i_a. At the start of the k-th iteration, we will have processed the first k elements of S, denoted S_k = {s_{i_1}, …, s_{i_k}}, where for each s̄ := s_{i_h} ∈ S_k that we have processed (1 ≤ h ≤ k), we have maintained the following three invariances:

Invariance 1: s̄ is associated to a unique edge e_s̄ ∈ E_2 \ T containing a unique maximum u_{e_s̄} such that Ψ(e_s̄) ∈ D_2 is a neighbor of s̄. We say that e_s̄ and u_{e_s̄} are marked by s̄.
Invariance 2: s̄ is also associated to a cycle γ̃_h = γ_{i_h} + Σ_{ℓ ∈ J_h} γ_ℓ (where the sum ranges over all ℓ belonging to some index set J_h ⊂ {1, …, i_h − 1}), such that e_s̄ contains the point in γ̃_h with the largest value of g.

Invariance 3: height(γ̃_h) ≤ s_{i_h}, where height(γ) = max_{x ∈ γ} g(x) − min_{x ∈ γ} g(x) represents the height (i.e., the maximal difference in the g function values) of a given loop γ.

Set S̄_k = S \ S_k = {s_{i_{k+1}}, …, s_{i_a}}, denoting the remaining elements from S to be processed. Our goal is to identify a new neighbor in D_2 for the element s_{i_{k+1}} from S̄_k satisfying the three invariances. Once we have done so, we will then set S_{k+1} = S_k ∪ {s_{i_{k+1}}} and move on to the next iteration in the procedure.

Note that s_{i_{k+1}} corresponds to an element γ_{i_{k+1}} of the shortest system of loops for G_2. Let e be the edge in γ_{i_{k+1}} containing the maximum u_e of highest g function value among all edges in γ_{i_{k+1}}. There are now two possible cases to consider, and we will demonstrate how to obtain a new neighbor for s_{i_{k+1}} in either case.

In the first case, suppose u_e is not yet marked by a previous element in S. In this case, e_{s_{i_{k+1}}} = e and γ̃_{i_{k+1}} = γ_{i_{k+1}}. We claim that the point (p_e, g(u_e)) in the persistence diagram D_2 corresponding to the maximum u_e is contained in the feasible region F_{s_{i_{k+1}}}; in other words, s_{i_{k+1}} ≤ g(u_e) ≤ p_e + s_{i_{k+1}}. Indeed, by Lemma 9, s_{i_{k+1}} ≤ g(u_e), and by Observation 3, g(u_e) − s_{i_{k+1}} ≤ lowest(γ_{i_{k+1}}) ≤ p_e, where lowest(γ_{i_{k+1}}) := min_{x ∈ γ_{i_{k+1}}} g(x). Thus, (p_e, g(u_e)) ∈ D_2 is a new neighbor for s_{i_{k+1}} ∈ S, since it is contained in F_{s_{i_{k+1}}}. Consequently, we mark e and u_e by s_{i_{k+1}} and continue with the next iteration.

In the second case, the maximum point u_e has already been marked by a previous element s_{j_1} ∈ S_k and been associated to a cycle γ̃_{j_1}. Observe that s_{j_1} ≤ s_{i_{k+1}}, since our procedure processes elements of S in non-decreasing order of their values (and thus j_1 < i_{k+1}). We must now identify an edge other than e for s_{i_{k+1}} satisfying the three invariance properties. To this end, let γ^1 = γ_{i_{k+1}} + γ̃_{j_1}, and let e^1 be the edge containing the maximum in γ^1 with largest function value. If e^1 is unmarked, we set e_{s_{i_{k+1}}} = e^1. Otherwise, if e^1 is marked by some cycle γ̃_{j_2}, we construct the loop γ^2 = γ^1 + γ̃_{j_2} = γ_{i_{k+1}} + γ̃_{j_1} + γ̃_{j_2}. We continue this process until we find γ^η = γ_{i_{k+1}} + γ̃_{j_1} + γ̃_{j_2} + ⋯ + γ̃_{j_η} such that the edge e^η containing the point of maximum function value of γ^η is not marked. Once we arrive at this point, we set γ̃_{i_{k+1}} = γ^η and e_{s_{i_{k+1}}} = e^η, so that the edge e^η and corresponding maximum u_{e^η} are marked by s_{i_{k+1}}.

The reason that the procedure outlined above must indeed terminate is as follows. Each time a new γ̃_{j_ν} is added to a cycle γ^{ν−1} (for ν ∈ {1, …, η}), it is because the edge containing the maximum point of γ^{ν−1} with largest function value is marked by s_{j_ν}. Note that j_ν ≠ j_β for ν ≠ β (as, during the procedure, the edges e^ν containing the points of maximum function value in the cycles γ^ν are all distinct), each j_ν < i_{k+1}, and s_{j_ν} ∈ S_k. Furthermore, Invariance 2 guarantees that γ^η cannot be empty, as each cycle γ̃_{j_ν} can be written as a linear combination of elements in the shortest system of loops with indices at most j_ν. As j_ν < i_{k+1}, the cycle γ′ = γ̃_{j_1} + γ̃_{j_2} + ⋯ + γ̃_{j_η} can be represented as a linear combination of basis cycles with indices strictly smaller than i_{k+1}.
In other words, γ_{i_{k+1}} and γ′ must be linearly independent, and thus γ^η = γ_{i_{k+1}} + γ′ cannot be empty. Again, j_ν ≠ j_β for ν ≠ β and each j_ν < i_{k+1}, and thus it follows that after at most k iterations, we will obtain a cycle whose highest valued maximum and corresponding edge are not yet marked.

Now, we must show that the three invariances are satisfied as a result of the process described in this second case. To begin, we point out that Invariance 2 holds by construction. Next, the following lemma establishes Invariance 3.

Lemma 14. For γ̃_{i_{k+1}} = γ^η = γ_{i_{k+1}} + γ̃_{j_1} + γ̃_{j_2} + ⋯ + γ̃_{j_η} as above, height(γ̃_{i_{k+1}}) ≤ s_{i_{k+1}}.

Proof. Set γ^0 = γ_{i_{k+1}}, and for ν ∈ {1, …, η}, set γ^ν = γ_{i_{k+1}} + γ̃_{j_1} + ⋯ + γ̃_{j_ν}. Using induction, we will show that height(γ^ν) ≤ s_{i_{k+1}} for any ν ∈ {0, …, η}. The inequality obviously holds for ν = 0. Suppose it holds for all ν ≤ ρ < η, and consider ν = ρ + 1, where γ^{ρ+1} = γ^ρ + γ̃_{j_{ρ+1}}. The cycle γ̃_{j_{ρ+1}} is added because the edge e^ρ of γ^ρ containing the current maximum point of highest value of g has already been marked by s_{j_{ρ+1}}, with j_{ρ+1} < i_{k+1}. By Invariance 2, e^ρ must also be the edge in γ̃_{j_{ρ+1}} containing the point of maximum g function value, which we denote by g(e^ρ). Therefore, after the addition of γ^ρ and γ̃_{j_{ρ+1}},

(i) highest(γ^{ρ+1}) := max_{x ∈ γ^{ρ+1}} g(x) ≤ g(e^ρ), and
(ii) lowest(γ^{ρ+1}) := min_{x ∈ γ^{ρ+1}} g(x) ≥ min{ lowest(γ^ρ), lowest(γ̃_{j_{ρ+1}}) }.   (4)

By the induction hypothesis, height(γ^ρ) ≤ s_{i_{k+1}}, while by Invariance 3, height(γ̃_{j_{ρ+1}}) ≤ s_{j_{ρ+1}} ≤ s_{i_{k+1}}. By (ii) of equation (4), it then follows that lowest(γ^{ρ+1}) ≥ min{ g(e^ρ) − height(γ^ρ), g(e^ρ) − height(γ̃_{j_{ρ+1}}) } ≥ g(e^ρ) − s_{i_{k+1}}. Combining this with (i) of equation (4), we have that height(γ^{ρ+1}) ≤ s_{i_{k+1}}. The lemma then follows by induction.

Finally, we show that Invariance 1 also holds. Since γ̃_{i_{k+1}} = γ^η = γ_{i_{k+1}} + γ′, with γ′ defined as above, by Lemma 9 we have that g(u_{e^η}) ≥ s_{i_{k+1}}. Suppose u_{e^η} is paired with some graph node w, so that p_{e^η} = g(w). As the height of γ̃_{i_{k+1}} is at most s_{i_{k+1}} (Lemma 14), combined with Observation 3, we have that g(u_{e^η}) − s_{i_{k+1}} ≤ lowest(γ̃_{i_{k+1}}) ≤ p_{e^η}. This implies that the point (p_{e^η}, g(u_{e^η})) ∈ F_{s_{i_{k+1}}}, establishing Invariance 1.

We continue the process described above until k = a. At each iteration, when we process s_{i_k}, we add a new neighbor for the elements in S. In the end, after processing all of the a elements in S, we have found a neighbors for S, and the total number of neighbors in Ĝ of elements in S can only be larger. Since this holds for any subset S of D′, the condition for Hall's theorem is satisfied for the bipartite graph Ĝ. This implies that there exists a perfect matching in Ĝ, completing the proof of Theorem 13.

Theorem 11 now follows from Lemma 12 and equation (1).

Discussion and future work

In this paper, we compare the discriminative capabilities of the intrinsic Čech and persistence distortion distances, which are based on topological signatures of metric graphs. The intrinsic Čech signature arises from the intrinsic Čech filtration of a metric graph, and the persistence distortion signature is based on the set of persistence diagrams arising from sublevel set filtrations of geodesic distance functions from all base points in a given metric graph. A map from a metric graph to these topological signatures is not injective: two different metric graphs may map to the same signature.
However, each signature captures structural information of a graph and serves as a type of topological summary. Understanding the relationship between the intrinsic Čech and persistence distortion distances enables one to better understand the discriminative powers of such summaries.

We conjecture that the intrinsic Čech distance is less discriminative than the persistence distortion distance for general metric graphs G_1 and G_2, so that there exists a constant c ≥ 1 with d_IC(G_1, G_2) ≤ c · d_PD(G_1, G_2). This statement is trivially true in the case when both graphs are trees, as the intrinsic Čech distance is 0 while the persistence distortion distance is not. We establish a sharper version of the conjectured inequality in the case when one of the graphs is a bouquet graph and the other is arbitrary, as well as in the case when both graphs are obtained via wedges of cycles and edges. The methods of proof in Theorem 11 and Proposition 17 rely on explicitly knowing the forms of the persistence diagrams for the geodesic distance function in the case of a bouquet graph or a tree of loops. Therefore, these methods do not readily carry over to the most general setting of arbitrary metric graphs. Nevertheless, we believe that the relationship between the intrinsic Čech and persistence distortion distances should hold for arbitrary finite metric graphs. Intuitively, the intrinsic Čech signature only captures the sizes of the shortest loops in a metric graph, whereas the persistence distortion signature takes into consideration the relative positions of such loops and their interactions with one another.

As one example application relating the intrinsic Čech and persistence distortion summaries (and hence, distances), the work of Pirashvili et al. [22] considers how the topological structure of chemical compounds relates to solubility in water, which is of fundamental importance in modern drug discovery. Analysis with the topological tool mapper [23] reveals that compounds with a smaller number of cycles are more soluble. The number of cycles, as well as the cycle lengths, is naturally encoded in the intrinsic Čech summary. In addition, these authors also use a discrete persistence distortion summary, where only the graph nodes, i.e., the atoms, serve as base points, to show that nearby compounds have similar levels of solubility. Although we conjecture that the intrinsic Čech distance is less discriminative than the persistence distortion distance, it might be sufficient in this particular analysis, since solubility is highly correlated with the number of cycles of a chemical compound, that is, with the intrinsic Čech summary [16].

It would be interesting to investigate other applications of the intrinsic Čech and persistence distortion summaries in the context of data sets modeled by metric graphs. In addition, recall from the definition of the persistence distortion distance the map Φ : |G| → SpDg, Φ(v) = Dg(f_v). The map Φ is interesting in its own right. For instance, what can be said about the set Φ(|G|) in the space of persistence diagrams for a given G? Given only the set Φ(|G|) ⊂ SpDg, what information can one recover about the graph G? Oudot and Solomon [21] show that there is a dense subset of metric graphs (in the Gromov-Hausdorff topology, and indeed an open dense set in the so-called fibered topology) on which their barcode transform via the map Φ is globally injective up to isometry. They also prove its local injectivity on the space of metric graphs.
Another question of interest is: how does the map Φ induce a stratification in the space of persistence diagrams? Finally, it would also be worthwhile to compare the discriminative capacities of the persistence distortion and intrinsic Čech distances to other graph distances, such as the interleaving and functional distortion distances, in the special case of Reeb graphs.
19,665
1907.01657
2954142106
Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment. A good model can potentially enable planning algorithms to generate a large variety of behaviors and solve diverse tasks. However, learning an accurate model for complex dynamical systems is difficult, and even then, the model might not generalize well outside the distribution of states on which it was trained. In this work, we combine model-based learning with model-free learning of primitives that make model-based planning easy. To that end, we aim to answer the question: how can we discover skills whose outcomes are easy to predict? We propose an unsupervised learning algorithm, Dynamics-Aware Discovery of Skills (DADS), which simultaneously discovers predictable behaviors and learns their dynamics. Our method can leverage continuous skill spaces, theoretically allowing us to learn infinitely many behaviors even for high-dimensional state-spaces. We demonstrate that zero-shot planning in the learned latent space significantly outperforms standard MBRL and model-free goal-conditioned RL, can handle sparse-reward tasks, and substantially improves over prior hierarchical RL methods for unsupervised skill discovery.
Another line of work that is conceptually close to our method concerns intrinsic motivation, which is used to drive the agent's exploration. Examples of such works include empowerment @cite_10 @cite_2 , count-based exploration @cite_6 @cite_9 @cite_13 @cite_7 , information gain about the agent's dynamics @cite_1 , and forward-inverse dynamics models @cite_12 . While our method uses an information-theoretic objective similar to these approaches, we use it to learn a variety of skills that can be directly used for model-based planning, in contrast to learning a better exploration policy for a single skill. We provide a discussion on the connection between empowerment and DADS in Appendix .
{ "abstract": [ "", "The classical approach to using utility functions suffers from the drawback of having to design and tweak the functions on a case by case basis. Inspired by examples from the animal kingdom, social sciences and games we propose empowerment, a rather universal function, defined as the information-theoretic capacity of an agent's actuation channel. The concept applies to any sensorimotor apparatus. Empowerment as a measure reflects the properties of the apparatus as long as they are observable due to the coupling of sensors and actuators via the environment. Using two simple experiments we also demonstrate how empowerment influences sensor-actuator evolution", "Motivated by vision-based reinforcement learning (RL) problems, in particular Atari games from the recent benchmark Aracade Learning Environment (ALE), we consider spatio-temporal prediction problems where future image-frames depend on control variables or actions as well as previous frames. While not composed of natural scenes, frames in Atari games are high-dimensional in size, can involve tens of objects with one or more objects being controlled by the actions directly and many other objects being influenced indirectly, can involve entry and departure of objects, and can involve deep partial observability. We propose and evaluate two deep neural network architectures that consist of encoding, action-conditional transformation, and decoding layers based on convolutional neural networks and recurrent neural networks. Experimental results show that the proposed architectures are able to generate visually-realistic frames that are also useful for control over approximately 100-step action-conditional futures in some games. To the best of our knowledge, this paper is the first to make and evaluate long-term predictions on high-dimensional video conditioned by control inputs.", "Achieving efficient and scalable exploration in complex domains poses a major challenge in reinforcement learning. While Bayesian and PAC-MDP approaches to the exploration problem offer strong formal guarantees, they are often impractical in higher dimensions due to their reliance on enumerating the state-action space. Hence, exploration in complex domains is often performed with simple epsilon-greedy methods. In this paper, we consider the challenging Atari games domain, which requires processing raw pixel inputs and delayed rewards. We evaluate several more sophisticated exploration strategies, including Thompson sampling and Boltzman exploration, and propose a new exploration method based on assigning exploration bonuses from a concurrently learned model of the system dynamics. By parameterizing our learned model with a neural network, we are able to develop a scalable and efficient approach to exploration bonuses that can be applied to tasks with complex, high-dimensional state spaces. In the Atari domain, our method provides the most consistent improvement across a range of games that pose a major challenge for prior methods. In addition to raw game-scores, we also develop an AUC-100 metric for the Atari Learning domain to evaluate the impact of exploration on this benchmark.", "We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across states. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. 
Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into exploration bonuses and obtain significantly improved exploration in a number of hard games, including the infamously difficult MONTEZUMA'S REVENGE.", "The mutual information is a core statistical quantity that has applications in all areas of machine learning, whether this is in training of density models over multiple data modalities, in maximising the efficiency of noisy transmission channels, or when learning behaviour policies for exploration by artificial agents. Most learning algorithms that involve optimisation of the mutual information rely on the Blahut-Arimoto algorithm — an enumerative algorithm with exponential complexity that is not suitable for modern machine learning applications. This paper provides a new approach for scalable optimisation of the mutual information by merging techniques from variational inference and deep learning. We develop our approach by focusing on the problem of intrinsically-motivated learning, where the mutual information forms the definition of a well-known internal drive known as empowerment. Using a variational lower bound on the mutual information, combined with convolutional networks for handling visual input streams, we develop a stochastic optimisation algorithm that allows for scalable information maximisation and empowerment-based reasoning directly from pixels to actions.", "Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. @PARASPLIT In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. @PARASPLIT Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.", "In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. 
In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch. Demo video and code available at this https URL" ], "cite_N": [ "@cite_7", "@cite_10", "@cite_9", "@cite_1", "@cite_6", "@cite_2", "@cite_13", "@cite_12" ], "mid": [ "", "1591713425", "2962841471", "779494576", "2963276097", "2962730405", "2561776174", "2614839826" ] }
DYNAMICS-AWARE UNSUPERVISED DISCOVERY OF SKILLS
Deep reinforcement learning (RL) enables autonomous learning of diverse and complex tasks with rich sensory inputs, temporally extended goals, and challenging dynamics, such as discrete game-playing domains (Mnih et al., 2013), and continuous control domains including locomotion (Schulman et al., 2015; Heess et al., 2017) and manipulation (Rajeswaran et al., 2017; Kalashnikov et al., 2018; Gu et al., 2017). Most deep RL approaches learn a Q-function or a policy that is directly optimized for the training task, which limits their generalization to new scenarios. In contrast, MBRL methods (Li & Todorov, 2004; Deisenroth & Rasmussen, 2011; Watter et al., 2015) can acquire dynamics models that may be utilized to perform unseen tasks at test time. While this capability has been demonstrated in some recent works (Levine et al., 2016; Nagabandi et al., 2018; Chua et al., 2018b; Kurutach et al., 2018; Ha & Schmidhuber, 2018), learning an accurate global model that works for all state-action pairs can be exceedingly challenging, especially for high-dimensional systems with complex and discontinuous dynamics. The problem is further exacerbated because the learned global model has limited generalization outside of the state distribution it was trained on, and exploring the whole state space is generally infeasible. Can we retain the flexibility of model-based RL, while using model-free RL to acquire proficient low-level behaviors under complex dynamics?

While learning a global dynamics model that captures all the different behaviors for the entire state-space can be extremely challenging, learning a model for a specific behavior that acts only in a small part of the state-space can be much easier. For example, consider learning a model for the dynamics of all gaits of a quadruped versus a model which only works for a specific gait. If we can learn many such behaviors and their corresponding dynamics, we can leverage model-predictive control to plan in the behavior space, as opposed to planning in the action space. The question then becomes: how do we acquire such behaviors, considering that behaviors could be random and unpredictable? To this end, we propose Dynamics-Aware Discovery of Skills (DADS), an unsupervised RL framework for learning low-level skills using model-free RL with the explicit aim of making model-based control easy. Skills obtained using DADS are directly optimized for predictability, providing a better representation on top of which predictive models can be learned. Crucially, the skills do not require any supervision to learn, and are acquired entirely through autonomous exploration. This means that the repertoire of skills and their predictive model are learned before the agent has been tasked with any goal or reward function. When a task is provided at test time, the agent utilizes the previously learned skills and model to immediately perform the task without any further training.

The key contribution of our work is an unsupervised reinforcement learning algorithm, DADS, grounded in mutual-information-based exploration. We demonstrate that our objective can embed learned primitives in continuous spaces, which allows us to learn a large, diverse set of skills. Crucially, our algorithm also learns to model the dynamics of the skills, which enables the use of model-based planning algorithms for downstream tasks.
We adapt conventional model predictive control algorithms to plan in the space of primitives, and demonstrate that we can compose the learned primitives to solve downstream tasks without any additional training.

PRELIMINARIES

Mutual information can be used as an objective to encourage exploration in reinforcement learning (Houthooft et al., 2016; Mohamed & Rezende, 2015). According to its definition, I(X; Y) = H(X) − H(X | Y), maximizing the mutual information I with respect to Y amounts to maximizing the entropy H of X while minimizing the conditional entropy H(X | Y). In the context of RL, X is usually a function of the state and Y a function of the actions. Maximizing this objective encourages the state entropy to be high, making the underlying policy exploratory. Recently, multiple works (Eysenbach et al., 2018; Gregor et al., 2016; Achiam et al., 2018) apply this idea to learn diverse skills which maximally cover the state space.

To leverage planning-based control, MBRL estimates the true dynamics of the environment by learning a model p̂(s′ | s, a). This allows it to predict a trajectory of states τ̂_H = (s_t, ŝ_{t+1}, …, ŝ_{t+H}) resulting from a sequence of actions without any additional interaction with the environment. While model-based RL methods have been demonstrated to be sample-efficient compared to their model-free counterparts, learning an effective model for the whole state-space is challenging. An open problem in model-based RL is to incorporate temporal abstraction into model-based control, to enable high-level planning and move away from planning at the granular level of actions.

These seemingly unrelated ideas can be combined into a single optimization scheme, where we first discover skills (and their models) without any extrinsic reward and then compose these skills to optimize for the task defined at test time using model-based planning.

At train time, we assume a Markov Decision Process (MDP) M_1 ≡ (S, A, p). The state space S and action space A are assumed to be continuous, with A bounded. We assume the transition dynamics p to be stochastic, such that p : S × A × S → [0, ∞). We learn a skill-conditioned policy π(a | s, z), where the skill z belongs to the space Z, detailed in Section 3. We assume that the skills are sampled from a prior p(z) over Z. We simultaneously learn a skill-conditioned transition function q(s′ | s, z), coined skill-dynamics, which predicts the transition to the next state s′ from the current state s for the skill z under the given dynamics p. At test time, we assume an MDP M_2 ≡ (S, A, p, r), where S, A, p match those defined in M_1, and the reward function is r : S × A → (−∞, ∞). We plan in Z using q(s′ | s, z) to compose the learned skills z for optimizing r in M_2, which we detail in Section 4.

DYNAMICS-AWARE DISCOVERY OF SKILLS (DADS)

Algorithm 1: Dynamics-Aware Discovery of Skills (DADS)
  Initialize π, q_φ;
  while not converged do
    Sample a skill z ∼ p(z) every episode;
    Collect new M on-policy samples;
    Update q_φ using K_1 steps of gradient descent on the M transitions;
    Compute r_z(s, a, s′) for the M transitions;
    Update π using any RL algorithm;
  end

Figure 2: The agent π interacts with the environment to produce a transition s → s′. The intrinsic reward is computed from the transition probability under q for the current skill z, compared to random samples from the prior p(z). The agent maximizes the intrinsic reward computed for a batch of episodes, while q maximizes the log-probability of the actual transitions (s, z) → s′.
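As a concrete reference for Algorithm 1, the alternating loop can be sketched in a few lines of Python. This is a minimal sketch, not the paper's implementation: `env`, `policy`, `skill_dynamics`, and `sample_prior` are hypothetical interfaces standing in for the environment, π(a | s, z), q_φ(s′ | s, z), and p(z), and `intrinsic_reward` is the helper sketched after Eq. (6) below.

```python
def dads_iteration(env, policy, skill_dynamics, sample_prior,
                   episodes=10, horizon=200, K1=32, L=16):
    """One iteration of Algorithm 1 (DADS); all interfaces are illustrative."""
    transitions = []
    for _ in range(episodes):
        z = sample_prior()                        # sample a skill z ~ p(z) per episode
        s = env.reset()
        for _ in range(horizon):
            a = policy.sample(s, z)               # a ~ pi(a | s, z)
            s_next = env.step(a)
            transitions.append((s, z, a, s_next))
            s = s_next
    # (1) Tighten the variational bound: K1 maximum-likelihood steps on q_phi (Eq. 5).
    for _ in range(K1):
        skill_dynamics.maximize_log_prob(transitions)
    # (2) Label each transition with the intrinsic reward r_z (Eq. 6), then update pi.
    rewards = [intrinsic_reward(skill_dynamics.log_prob, s, z, s_next,
                                [sample_prior() for _ in range(L)])
               for (s, z, a, s_next) in transitions]
    policy.rl_update(transitions, rewards)        # any RL algorithm, e.g. SAC
```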
We use the information-theoretic paradigm of mutual information to obtain our unsupervised skill discovery algorithm. In particular, we propose to maximize the mutual information between the next state s′ and the current skill z, conditioned on the current state s:

I(s′; z | s) = H(z | s) − H(z | s′, s)   (1)
            = H(s′ | s) − H(s′ | s, z)   (2)

The mutual information in Equation 1 quantifies how much can be known about s′ given z and s, or symmetrically, about z given the transition from s → s′. From Equation 2, maximizing this objective corresponds to maximizing the diversity of transitions produced in the environment, denoted by the entropy H(s′ | s), while making z informative about the next state s′ by minimizing the entropy H(s′ | s, z). Intuitively, skills z can be interpreted as abstracted action sequences which are identifiable by the transitions generated in the environment (and not just by the current state). Thus, optimizing this mutual information can be understood as encoding a diverse set of skills in the latent space Z, while making the transitions for a given z ∈ Z predictable. We use the entropy decomposition in Equation 2 to connect this objective with model-based control.

We want to optimize our skill-conditioned controller π(a | s, z) such that the latent space z ∼ p(z) is maximally informative about the transitions s → s′. Using the definition of conditional mutual information, we can rewrite Equation 2 as:

I(s′; z | s) = ∫ p(z, s, s′) log [ p(s′ | s, z) / p(s′ | s) ] ds′ ds dz   (3)

We assume the following generative model: p(z, s, s′) = p(z) p(s | z) p(s′ | s, z), where p(z) is a user-specified prior over Z, p(s | z) denotes the stationary state distribution induced by π(a | s, z) for a skill z, and p(s′ | s, z) denotes the transition distribution under skill z. Note, p(s′ | s, z) = ∫ p(s′ | s, a) π(a | s, z) da is intractable to compute because the underlying dynamics are unknown. However, we can variationally lower bound the objective as follows:

I(s′; z | s) = E_{z,s,s′∼p} [ log ( p(s′ | s, z) / p(s′ | s) ) ]
            = E_{z,s,s′∼p} [ log ( q_φ(s′ | s, z) / p(s′ | s) ) ] + E_{s,z∼p} [ D_KL( p(s′ | s, z) || q_φ(s′ | s, z) ) ]
            ≥ E_{z,s,s′∼p} [ log ( q_φ(s′ | s, z) / p(s′ | s) ) ]   (4)

where we have used the non-negativity of the KL divergence, that is, D_KL ≥ 0. Note, skill-dynamics q_φ represents the variational approximation of the transition function p(s′ | s, z), which enables model-based control as described in Section 4. Equation 4 suggests an alternating optimization between q_φ and π, summarized in Algorithm 1. In every iteration:

(Tighten variational lower bound) We minimize D_KL( p(s′ | s, z) || q_φ(s′ | s, z) ) with respect to the parameters φ on z, s ∼ p to tighten the lower bound. For general function approximators like neural networks, we can write the gradient for φ as follows:

∇_φ E_{s,z} [ D_KL( p(s′ | s, z) || q_φ(s′ | s, z) ) ] = ∇_φ E_{z,s,s′} [ log ( p(s′ | s, z) / q_φ(s′ | s, z) ) ] = −E_{z,s,s′} [ ∇_φ log q_φ(s′ | s, z) ]   (5)

which corresponds to maximizing the likelihood of the samples from p under q_φ.

(Maximize approximate lower bound) After fitting q_φ, we can optimize π to maximize E_{z,s,s′} [ log q_φ(s′ | s, z) − log p(s′ | s) ]. Note, this is a reinforcement-learning-style optimization with a reward function log q_φ(s′ | s, z) − log p(s′ | s).
However, log p(s′ | s) is intractable to compute, so we approximate the reward function for π:

r_z(s, a, s′) = log [ q_φ(s′ | s, z) / Σ_{i=1}^L q_φ(s′ | s, z_i) ] + log L,   z_i ∼ p(z).   (6)

The approximation is motivated as follows: p(s′ | s) = ∫ p(s′ | s, z) p(z | s) dz ≈ ∫ q_φ(s′ | s, z) p(z) dz ≈ (1/L) Σ_{i=1}^L q_φ(s′ | s, z_i) for z_i ∼ p(z), where L denotes the number of samples from the prior. We are using the marginal of the variational approximation q_φ over the prior p(z) to approximate the marginal distribution of transitions. We discuss this approximation in Appendix C. Note, the final reward function r_z encourages the policy π to produce transitions that are (a) predictable under q_φ (predictability) and (b) different from the transitions produced under z_i ∼ p(z) (diversity); a log-space sketch of this computation is given below.

To generate samples from p(z, s, s′), we use rollouts of the current policy π for multiple samples z ∼ p(z) in an episodic setting for a fixed horizon T. We also introduce entropy regularization for π(a | s, z), which encourages the policy to cluster action sequences with similar state transitions under the same skill z, making the policy robust while encouraging exploration (Haarnoja et al., 2018a). The use of entropy regularization can be justified from an information bottleneck perspective, as discussed for the Information Maximization algorithm in (Mohamed & Rezende, 2015). This is discussed even more extensively from the graphical model perspective in Appendix B, which connects unsupervised skill discovery and the information bottleneck literature, while also revealing the temporal nature of the skills z. Details corresponding to implementation and hyperparameters are discussed in Appendix A.
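The intrinsic reward in Eq. (6) is a ratio of densities and is most conveniently evaluated in log-space. A minimal sketch, assuming only a callable `log_q(s_next, s, z)` that returns log q_φ(s′ | s, z) (an illustrative interface, not the paper's code):

```python
import numpy as np

def intrinsic_reward(log_q, s, z, s_next, prior_samples):
    """r_z(s, a, s') from Eq. (6), computed stably in log-space:
    log q(s'|s,z) - log sum_i q(s'|s,z_i) + log L, with z_i ~ p(z)."""
    L = len(prior_samples)
    log_probs = np.array([log_q(s_next, s, z_i) for z_i in prior_samples])
    return log_q(s_next, s, z) - np.logaddexp.reduce(log_probs) + np.log(L)
```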
PLANNING USING SKILL DYNAMICS

Given the learned skills π(a | s, z) and their respective skill-transition dynamics q_φ(s′ | s, z), we can perform model-based planning in the latent space Z to optimize for a reward r that is given to the agent at test time. Note that this essentially allows us to perform zero-shot planning, given the unsupervised pre-training procedure described in Section 3. In order to perform planning, we employ the model-predictive-control (MPC) paradigm (Garcia et al., 1989), which in a standard setting generates a set of action plans P_k = (a_{k,1}, …, a_{k,H}) ∼ P for a planning horizon H. The MPC plans can be generated because the planner is able to simulate the trajectory τ̂_k = (s_{k,1}, a_{k,1}, …, s_{k,H+1}), assuming access to the transition dynamics p̂(s′ | s, a). In addition, each plan computes the reward r(τ̂_k) for its trajectory according to the reward function r that is provided for the test-time task. Following the MPC principle, the planner selects the best plan according to the reward function r and executes its first action a_1. The planning algorithm repeats this procedure for the next state, iteratively, until it achieves its goal.

We use a similar strategy to design an MPC planner that exploits the previously learned skill-transition dynamics q_φ(s′ | s, z). Note that, unlike conventional model-based RL, we generate a plan P_k = (z_{k,1}, …, z_{k,H_P}) in the latent space Z, as opposed to the action space A that would be used by a standard planner. Since the primitives are temporally meaningful, it is beneficial to hold a primitive for a horizon H_Z > 1, unlike actions, which are usually held for a single step. Thus, effectively, the planning horizon for our latent-space planner is H = H_P × H_Z, enabling longer-horizon planning using fewer primitives. Similar to the standard MPC setting, the latent-space planner simulates the trajectory τ̂_k = (s_{k,1}, z_{k,1}, a_{k,1}, s_{k,2}, z_{k,2}, a_{k,2}, …, s_{k,H+1}) and computes the reward r(τ̂_k). After a small number of trajectory samples, the planner selects the first latent action z_1 of the best plan, executes it for H_Z steps in the environment, and then repeats the process until goal completion.

The latent planner P maintains a distribution of latent plans, each of length H_P. Each element in the sequence represents the distribution of the primitive to be executed at that time step. For continuous spaces, each element of the sequence can be modelled using a normal distribution: N(μ_1, Σ), …, N(μ_{H_P}, Σ). We refine the planning distributions for R steps, using K samples of latent plans P_k, and compute the reward r_k for each simulated trajectory τ̂_k. The update for the parameters follows that of the Model Predictive Path Integral (MPPI) controller (Williams et al., 2016):

μ_i = Σ_{k=1}^K [ exp(γ r_k) / Σ_{p=1}^K exp(γ r_p) ] z_{k,i},   ∀ i = 1, …, H_P.   (7)

While we keep the covariance matrix of the distributions fixed, it is possible to update that as well, as shown in Williams et al. (2016). We show an overview of the planning algorithm in Figure 3, and provide more implementation details in Appendix A.
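The MPPI update in Eq. (7) is simply a softmax-weighted average of the sampled latent plans. A minimal numpy sketch (array shapes and the value of γ are illustrative):

```python
import numpy as np

def mppi_update(plans, rewards, gamma=10.0):
    """Eq. (7): reward-weighted mean of K sampled latent plans.

    plans:   (K, H_P, d) sampled plans z_k over the skill space
    rewards: (K,) simulated returns r_k under skill-dynamics
    returns: (H_P, d) updated means mu_1, ..., mu_{H_P}
    """
    w = np.exp(gamma * (rewards - rewards.max()))  # subtracting the max leaves the
    w /= w.sum()                                   # softmax weights unchanged
    return np.einsum('k,khd->hd', w, plans)

# Illustrative usage: K = 64 plans, H_P = 4 planning steps, 2-D skill space.
plans = np.random.uniform(-1.0, 1.0, size=(64, 4, 2))
rewards = np.random.randn(64)
mu = mppi_update(plans, rewards)
```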
EXPERIMENTS

Through our experiments, we aim to demonstrate that: (a) DADS, as a general-purpose skill discovery algorithm, can scale to high-dimensional problems; (b) the discovered skills are amenable to hierarchical composition; and (c) not only is planning in the learned latent space feasible, it is competitive with strong baselines. In Section 6.1, we provide visualizations and a qualitative analysis of the skills learned using DADS. We demonstrate in Sections 6.2 and 6.4 that optimizing the primitives for predictability renders skills more amenable to the temporal composition needed for hierarchical RL. We benchmark against a state-of-the-art model-based RL baseline in Section 6.3, and against goal-conditioned RL in Section 6.5.

6.1 QUALITATIVE ANALYSIS

Figure 4: Skills learned on different MuJoCo environments in the OpenAI gym. DADS can discover diverse skills without any extrinsic rewards, even for problems with high-dimensional state and action spaces.

In this section, we provide a qualitative discussion of the unsupervised skills learned using DADS. We use the MuJoCo environments (Todorov et al., 2012) from the OpenAI gym as our test-bed (Brockman et al., 2016). We find that our proposed algorithm can learn diverse skills without any reward, even in problems with high-dimensional states and actuation, as illustrated in Figure 4. DADS can discover primitives for Half-Cheetah to run forward and backward with multiple different gaits, for Ant to navigate the environment using diverse locomotion primitives, and for Humanoid to walk using stable locomotion primitives with diverse gaits and directions. Videos of the discovered primitives are available at: https://sites.google.com/view/dads-skill

Qualitatively, we find the skills discovered by DADS to be predictable and stable, in line with the implicit constraints of the proposed objective. While the Half-Cheetah will learn to run in both the backward and forward directions, DADS will disincentivize skills which make the Half-Cheetah flip, owing to the reduced predictability on landing. Similarly, skills discovered for Ant rarely flip over, and tend to provide stable navigation primitives in the environment. This also incentivizes the Humanoid, which is characteristically prone to collapsing and extremely unstable by design, to discover gaits which are stable for sustainable locomotion.

One of the significant advantages of the proposed objective is that it is compatible with continuous skill spaces, which has not been shown in prior work on skill discovery (Eysenbach et al., 2018). Not only does this allow us to embed a large and diverse set of skills into a compact latent space, but the smoothness of the learned space also allows us to interpolate between behaviors generated in the environment. We demonstrate this on the Ant environment (Figure 5), where we learn a two-dimensional continuous skill space with a uniform prior over (−1, 1) in each dimension, and compare it to a discrete skill space with a uniform prior over 20 skills. Similar to Eysenbach et al. (2018), we restrict the observation space of the skill-dynamics q to the Cartesian coordinates (x, y). We hereby call this the x-y prior, and discuss its role in Section 6.2.

Figure 5: (Left, Centre) X-Y traces of Ant skills in the discrete and continuous skill spaces; (Right) heatmap of the orientation of the Ant trajectory over the learned continuous skill space. The traces demonstrate that the continuous space offers far greater diversity of skills, while the heatmap demonstrates that the learned space is smooth, as the orientation of the X-Y trace varies smoothly as a function of the skill.

In Figure 5, we project the trajectories of the learned Ant skills from both the discrete and continuous spaces onto the Cartesian plane. From the traces of the skills, it is clear that the continuous latent space can generate more diverse trajectories. We demonstrate in Section 6.3 that continuous primitives are more amenable to hierarchical composition and generally perform better on downstream tasks. More importantly, we observe that the learned skill space is semantically meaningful: the heatmap in Figure 5 shows the orientation of the trajectory (with respect to the x-axis) as a function of the skill z ∈ Z, which varies smoothly as z is varied, with explicit interpolations shown in Appendix D.

6.2 SKILL VARIANCE ANALYSIS

In an unsupervised skill learning setup, it is important to optimize the primitives to be diverse. However, we argue that diversity is not sufficient for the learned primitives to be useful for downstream tasks. Primitives must exhibit low-variance behavior, which enables long-horizon composition of the learned skills in a hierarchical setup. We analyze the variance of the x-y trajectories in the environment, where we also benchmark the variance of the primitives learned by DIAYN (Eysenbach et al., 2018). For DIAYN, we use the x-y prior for the skill-discriminator, which biases the discovered skills to diversify in the x-y space. This step was necessary for that baseline to obtain a competitive set of navigation skills.

Figure 6 (Top-Left) demonstrates that DADS, which optimizes the primitives for predictability and diversity, yields significantly lower-variance primitives than DIAYN, which only optimizes for diversity. This is starkly demonstrated in the plots of the X-Y traces of skills learned in the different setups. Skills learned by DADS show significant control over the trajectories generated in the environment, while skills from DIAYN exhibit high variance in the environment, which limits their utility for hierarchical control. This is further demonstrated quantitatively in Section 6.4.
While optimizing for predictability already significantly reduces the variance of the trajectories generated by a primitive, we find that using the x-y prior with DADS brings down the skill variance even further. For the quantitative benchmarks in the next sections, we assume that the Ant skills are learned using an x-y prior on the observation space, for both DADS and DIAYN.

6.3 MODEL-BASED REINFORCEMENT LEARNING

The key utility of learning a parametric model q_φ(s′ | s, z) is to take advantage of planning algorithms for downstream tasks, which can be extremely sample-efficient. In our setup, we can solve test-time tasks zero-shot, that is, without any learning on the downstream task. We compare with a state-of-the-art model-based RL method (Chua et al., 2018a), which learns a dynamics model parameterized as p̂(s′ | s, a), on the task of the Ant navigating to a specified goal with a dense reward. Given a goal g, the reward at any position u is given by r(u) = −||g − u||_2. We benchmark our method against the following variants:

• Random-MBRL (rMBRL): We train the model p̂(s′ | s, a) on randomly collected trajectories, and test the zero-shot generalization of the model on a distribution of goals.

• Weak-oracle MBRL (WO-MBRL): We train the model p̂(s′ | s, a) on trajectories generated by the planner to navigate to a goal, randomly sampled in every episode. The distribution of goals during training matches the distribution at test time.

• Strong-oracle MBRL (SO-MBRL): We train the model p̂(s′ | s, a) on trajectories generated by the planner to navigate to a specific goal, which is fixed for both training and test time.

Amongst these variants, only rMBRL matches our assumption of unsupervised, task-agnostic training. Both WO-MBRL and SO-MBRL benefit from goal-directed exploration during training, a significant advantage over DADS, which only uses mutual-information-based exploration.

We use Δ = (Σ_{t=1}^H −r(u_t)) / (H ||g||_2) as the metric, which represents the distance to the goal g averaged over the episode (with the same fixed horizon H for all models and experiments), normalized by the initial distance to the goal g. Therefore, a lower Δ indicates better performance, and 0 < Δ ≤ 1 (assuming the agent moves closer to the goal). The test set of goals is fixed for all the methods, sampled from [−15, 15]².

Figure 7 demonstrates that zero-shot planning on DADS-learned skills significantly outperforms all the model-based RL baselines, despite the advantage of the baselines being trained on the test goal(s). For the experiment depicted in Figure 7 (Right), DADS has an unsupervised pre-training phase, unlike SO-MBRL, which trains directly on the task. A comparison with Random-MBRL shows the significance of mutual-information-based exploration, especially with the right parameterization and priors. This experiment also demonstrates the advantage of learning a continuous space of primitives, which outperforms planning on discrete primitives.
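The metric Δ defined above is straightforward to compute from an episode's x-y trace. A short sketch (assuming, as the normalization by ||g||_2 implies, that the agent starts at the origin; all names are illustrative):

```python
import numpy as np

def navigation_delta(positions, goal):
    """Delta from Section 6.3: the distance to the goal averaged over the
    episode, normalized by the initial distance ||g||_2. `positions` is the
    (H, 2) x-y trace of the agent, and -r(u_t) = ||g - u_t||_2."""
    positions, goal = np.asarray(positions), np.asarray(goal)
    dists = np.linalg.norm(goal - positions, axis=1)  # -r(u_t) at each step
    return dists.mean() / np.linalg.norm(goal)
```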
HIERARCHICAL CONTROL WITH UNSUPERVISED PRIMITIVES

We benchmark hierarchical control of primitives learned without supervision against our proposed scheme of an MPPI-based planner on top of DADS-learned skills. We persist with the task of Ant-navigation as described in Section 6.3. We benchmark against Hierarchical DIAYN (Eysenbach et al., 2018), which learns the skills using the DIAYN objective, freezes the low-level policy, and learns a meta-controller that outputs the skill to be executed for the next H_Z steps. We provide the x-y prior to DIAYN's discriminator while learning the skills for the Ant agent. The performance of the meta-controller is constrained by the low-level policy; however, this hierarchical scheme is agnostic to the algorithm used to learn the low-level policy. To contrast the quality of the primitives learned by DADS and DIAYN, we also benchmark against Hierarchical DADS, which learns a meta-controller the same way as Hierarchical DIAYN, but learns the skills using DADS. From Figure 8 (Left), we find that the meta-controller is unable to compose the skills learned by DIAYN, while the same meta-controller can learn to compose the skills learned by DADS to navigate the Ant to different goals. This result seems to confirm our intuition described in Section 6.2 that the high variance of the DIAYN skills limits their temporal compositionality. Interestingly, learning an RL meta-controller reaches similar performance to the MPPI controller, taking an additional 200,000 samples per goal.

Figure 8: (Left) An RL-trained meta-controller is unable to compose primitives learned by DIAYN to navigate the Ant to a goal, while it succeeds in doing so using the primitives learned by DADS. (Right) Goal-Conditioned RL (GCRL-dense/sparse) does not generalize outside its training distribution, while the MPPI controller on learned skills (DADS-dense/sparse) experiences a significantly smaller degradation in performance.

GOAL-CONDITIONED RL

To demonstrate the benefits of our approach over model-free RL, we benchmark against goal-conditioned RL on two versions of Ant-navigation: (a) with a dense reward r(u) and (b) with a sparse reward $r(u) = 1$ if $\lVert u - g \rVert_2 \le \epsilon$, else $0$. We train the goal-conditioned RL agent using soft actor-critic, where the state variable of the agent is augmented with u − g, the position delta to the goal. The agent gets a randomly sampled goal from $[-10, 10]^2$ at the beginning of the episode. In Figure 8 (Right), we measure the average performance of all the methods as a function of the initial distance to the goal, ranging from 5 to 30 metres. For dense-reward navigation, we observe that while model-based planning on DADS-learned skills degrades smoothly as the initial distance to the goal increases, goal-conditioned RL experiences a sudden deterioration outside the goal distribution it was trained on. Even within the goal distribution observed during training of the goal-conditioned RL model, skill-space planning performs competitively with it. With sparse-reward navigation, goal-conditioned RL is unable to navigate, while MPPI demonstrates performance comparable to the dense-reward case up to about 20 metres. This highlights the utility of learning task-agnostic skills, which makes them more general, while showing that latent-space planning can be leveraged for tasks requiring long-horizon planning.

CONCLUSION

We have proposed a novel unsupervised skill learning algorithm that is amenable to model-based planning for hierarchical control on downstream tasks. We show that our skill learning method can scale to high-dimensional state-spaces, while discovering a diverse set of low-variance skills. In addition, we demonstrated that, without any training on the specified task, we can compose the learned skills to outperform competitive model-based baselines that were trained with the knowledge of the test tasks. We plan to extend the algorithm to work with off-policy data, potentially using relabelling tricks (Andrychowicz et al., 2017; Nachum et al., 2018), and to explore more nuanced planning algorithms.
We plan to apply the hereby-introduced method to different domains, such as manipulation, and to enable skill/model discovery directly from images.

ACKNOWLEDGEMENTS

We would like to thank Evan Liu, Ben Eysenbach, and Anusha Nagabandi for their help in reproducing the baselines for this work. We are thankful to Ben Eysenbach for their comments and discussion on the initial drafts. We would also like to acknowledge Ofir Nachum, Alex Alemi, Daniel Freeman, Yiding Jiang, Allan Zhou, and other colleagues at Google Brain for their helpful feedback and discussions at various stages of this work. We are also thankful to Michael Ahn and others in the Adept team for their support, especially with the infrastructure setup and scaling up the experiments. Our implementation is built on TensorFlow (Abadi et al., 2015).

A.1 SKILL SPACES

When using discrete spaces, we parameterize Z as one-hot vectors. These one-hot vectors are randomly sampled from the uniform prior p(z) = 1/D, where D is the number of skills. We experiment with D ≤ 128. For the discrete skills learnt for the MuJoCo Ant in Section 6.3, we use D = 20. For continuous spaces, we sample $z \sim \mathrm{Uniform}(-1, 1)^D$. We experiment with D = 2 for the Ant learnt with the x-y prior, D = 3 for the Ant learnt without the x-y prior (that is, on the full observation space), and D = 5 for the Humanoid on the full observation space. The skills are sampled once in the beginning of the episode and fixed for the rest of the episode. However, it is possible to resample the skill from the prior within the episode, which allows every skill to experience a distribution different from the initialization distribution. This also encourages the discovery of skills that can be composed temporally. However, it increases the hardness of the problem, especially if the skills are re-sampled from the prior frequently.

A.2 AGENT

We use SAC as the optimizer for our agent π(a | s, z), in particular EC-SAC (Haarnoja et al., 2018b). The s input to the policy generally excludes the global co-ordinates (x, y) of the centre-of-mass, available for many environments in OpenAI gym, which helps produce skills agnostic to the location of the agent. We restrict to two hidden layers for our policy and critic networks. However, to improve the expressivity of skills, it is beneficial to increase the capacity of the networks. The hidden layer sizes vary from (128, 128) for Half-Cheetah to (512, 512) for Ant and (1024, 1024) for Humanoid. The critic Q(s, a, z) is similarly parameterized. The target function for the critic Q is updated every iteration using soft updates with a coefficient of 0.005. We use the Adam optimizer (Kingma & Ba, 2014) with a fixed learning rate of 3e-4, and a fixed initial entropy coefficient β = 0.1. The policy is parameterized as a normal distribution N(µ(s, z), Σ(s, z)), where Σ is a diagonal covariance matrix; its output is passed through a tanh transformation to map it to the range (−1, 1) and constrain it to the action bounds.
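To illustrate the policy parameterization above, here is a minimal PyTorch sketch of a tanh-squashed diagonal Gaussian policy conditioned on a skill (our own sketch, not the authors' code; the layer size follows the Ant setting, and all class, dimension, and variable names are illustrative):

import torch
import torch.nn as nn

class SkillPolicy(nn.Module):
    # pi(a | s, z): diagonal Gaussian squashed by tanh to (-1, 1).
    def __init__(self, obs_dim, skill_dim, act_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + skill_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs, skill):
        h = self.net(torch.cat([obs, skill], dim=-1))
        mean, log_std = self.mean(h), self.log_std(h).clamp(-5, 2)
        dist = torch.distributions.Normal(mean, log_std.exp())
        pre_tanh = dist.rsample()          # reparameterized sample
        action = torch.tanh(pre_tanh)      # squash to the action bounds
        # log-prob with the tanh change-of-variables correction
        log_prob = dist.log_prob(pre_tanh).sum(-1)
        log_prob -= torch.log(1 - action.pow(2) + 1e-6).sum(-1)
        return action, log_prob

policy = SkillPolicy(obs_dim=27, skill_dim=2, act_dim=8)
a, lp = policy(torch.randn(4, 27), torch.rand(4, 2) * 2 - 1)
print(a.shape, lp.shape)  # torch.Size([4, 8]) torch.Size([4])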
A.3 SKILL-DYNAMICS

Skill-dynamics, denoted by q(s′ | s, z), is parameterized by a deep neural network. A common trick in model-based RL is to predict ∆s = s′ − s rather than the full state s′. Hence, the prediction network is q(∆s | s, z). Note that both parameterizations can represent the same set of functions; however, the latter is easier to learn since ∆s is centred around 0. We exclude the global coordinates from the state input to q. However, we can (and we still do) predict ∆x, ∆y, because reward functions for goal-based navigation generally rely on the position prediction from the model. This represents another benefit of predicting state-deltas, as we can still predict changes in position without explicitly knowing the global position. The output distribution is modelled as a Mixture-of-Experts (Jacobs et al., 1991). We fix the number of experts to 4 and model each expert as a Gaussian distribution. The input (s, z) goes through two hidden layers (of the same capacity as the policy and critic networks, for example (512, 512) for Ant). The output of the two hidden layers is linearly transformed to produce the parameters of the Gaussian distributions, as well as a discrete distribution over the experts using a softmax. In practice, we fix the covariance matrix of the Gaussian experts to be the identity matrix, so we only need to output the means of the experts. We use batch-normalization for both the input and the hidden layers. We normalize the output targets using their batch average and batch standard deviation, similar to batch-normalization.

A.4 OTHER HYPERPARAMETERS

The episode horizon is generally kept shorter for stable agents like Ant (200) and longer for unstable agents like Humanoid (1000). For Ant, longer episodes do not add value, but Humanoid can benefit from longer episodes, as they help it filter out skills which are unstable. The optimization scheme is on-policy, and we collect 2000 steps for Ant and 4000 steps for Humanoid in one iteration. The intuition is to experience trajectories generated by multiple skills (approximately 10) in a batch. Re-sampling skills can enable experiencing a larger number of skills. Once a batch of episodes is collected, the skill-dynamics is updated using the Adam optimizer with a fixed learning rate of 3e-4. The batch size is 128, and we carry out 32 steps of gradient descent. To compute the intrinsic reward, we need to sample from the prior to compute the denominator. For continuous spaces, we set L = 500. For discrete spaces, we can marginalize over all skills. After the intrinsic reward is computed, the policy and critic networks are updated for 128 steps with a batch size of 128. The intuition is to ensure that every sample in the batch is seen for policy and critic updates about 3-4 times in expectation.
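To make the intrinsic-reward computation concrete, here is a sketch of the prior-sampling estimate with L samples (ours; the log_q and prior_sample interfaces are assumptions about how the skill-dynamics model is wrapped):

import numpy as np

def dads_intrinsic_reward(log_q, s, z, s_next, prior_sample, L=500):
    # r(s, z, s') ~= log q(s'|s,z) - log (1/L) sum_i q(s'|s,z_i), z_i ~ p(z).
    # log_q(s, z, s_next) -> log-density under the skill-dynamics model.
    # prior_sample(n) -> n skills drawn from the prior p(z).
    z_samples = prior_sample(L)
    log_probs = np.array([log_q(s, zi, s_next) for zi in z_samples])
    # log of the Monte Carlo average, computed stably in log space
    log_denom = np.logaddexp.reduce(log_probs) - np.log(L)
    return log_q(s, z, s_next) - log_denom

# Toy check with a Gaussian "skill-dynamics" whose mean is the skill.
def log_q(s, z, s_next):
    return -0.5 * np.sum((s_next - s - z) ** 2) - np.log(2 * np.pi)

prior = lambda n: np.random.uniform(-1, 1, size=(n, 2))
print(dads_intrinsic_reward(log_q, np.zeros(2), np.array([0.5, 0.5]),
                            np.array([0.5, 0.5]), prior))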
A.5 PLANNING AND EVALUATION SETUPS

For evaluation, we fix the episode horizon to 200 for all models in all evaluation setups. Depending upon the size of the latent space and the planning horizon, the number of samples from the planning distribution P is varied between 10-200. For H_P = 1, H_Z = 10 and a 2D latent space, we use 50 samples from the planning distribution P. The coefficient γ for MPPI is fixed to 10. We use the setting H_P = 1, H_Z = 10 for dense-reward navigation, in which case we set the number of refine steps R = 10. However, for sparse-reward navigation it is important to plan over a longer horizon, in which case we set H_P = 4, H_Z = 25 with a larger number of samples from the planning distribution (200 from P). Also, when using longer planning horizons, we found that smoothing the sampled plans helps. Thus, if the sampled plan is $z_1, z_2, z_3, z_4, \ldots$, we smooth the plan by setting $z_2 = \beta z_1 + (1 - \beta) z_2$ and so on, with β = 0.9. For hierarchical controllers learnt on top of low-level unsupervised primitives, we use PPO for discrete-action skills and SAC for continuous skills. We keep the number of steps after which the meta-action is decided at 10 (that is, H_Z = 10). The hidden layer sizes of the meta-controller are (128, 128). We use a learning rate of 1e-4 for PPO and 3e-4 for SAC. For our model-based RL baseline PETS, we use an ensemble size of 3, with a fixed planning horizon of 20. For the model, we use a neural network with two hidden layers of size 400. In our experiments, we found that MPPI outperforms CEM, so we report the results using MPPI as our controller.

B GRAPHICAL MODELS, INFORMATION BOTTLENECK AND UNSUPERVISED SKILL LEARNING

We now present a novel perspective on unsupervised skill learning, motivated by the literature on information bottleneck. This section takes inspiration from (Alemi & Fischer, 2018), which helps us provide a rigorous justification for the objective proposed earlier. To obtain our unsupervised RL objective, we set up a graphical model P as shown in Figure 9, which represents the distribution of trajectories generated by a given policy π. The joint distribution is given by:

$p(s_1, a_1, \ldots, a_{T-1}, s_T, z) = p(z)\, p(s_1) \prod_{t=1}^{T-1} \pi(a_t \mid s_t, z)\, p(s_{t+1} \mid s_t, a_t).$

We set up another graphical model N, which represents the desired model of the world. In particular, we are interested in approximating p(s′ | s, z), which represents the transition function for a particular primitive. This abstraction helps us get away from knowing the exact actions, enabling model-based planning in behavior space (as discussed in the main paper). The joint distribution for N, shown in Figure 10, is given by:

$\eta(s_1, a_1, \ldots, s_T, a_T, z) = \eta(z)\, \eta(s_1) \prod_{t=1}^{T-1} \eta(a_t)\, \eta(s_{t+1} \mid s_t, z). \quad (9)$

The goal of our approach is to optimize the distribution π(a | s, z) in the graphical model P to minimize the distance between the two distributions when projecting onto the representation of the graphical model N. In particular, we are interested in minimizing the KL divergence between p and η, that is, $D_{KL}(p \| \eta)$. Note, if N had the same structure as P, the information lost in the projection would be 0 for any valid P. Interestingly, we can exploit the following result from Friedman et al. (2001) to set up the objective for π without explicitly knowing η:

$\min_\eta D_{KL}(p \| \eta) = I_P - I_N, \quad (10)$

where $I_P$ and $I_N$ represent the multi-information for the distribution P on the respective graphical models. Note, $\min_{\eta \in N} D_{KL}(p \| \eta)$ is the reverse information projection (Csiszár & Matus, 2003). The multi-information (Slonim et al., 2005) for a graphical model G with nodes $g_i$ is defined as:

$I_G = \sum_i I(g_i; \mathrm{Pa}(g_i)), \quad (11)$

where $\mathrm{Pa}(g_i)$ denotes the nodes upon which $g_i$ has a direct conditional dependence in G. Using this definition, we can compute the multi-information terms:

$I_P = \sum_{t=1}^{T-1} \big[ I(a_t; \{s_t, z\}) + I(s_{t+1}; \{s_t, a_t\}) \big], \qquad I_N = \sum_{t=1}^{T-1} I(s_{t+1}; \{s_t, z\}).$

Following the Optimal Frontier argument in (Alemi & Fischer, 2018), we introduce Lagrange multipliers $\beta_t \ge 0$, $\delta_t \ge 0$ for the information terms in $I_P$ to set up an objective R(π) to be maximized with respect to π:

$R(\pi) = \sum_{t=1}^{T-1} I(s_{t+1}; \{s_t, z\}) - \beta_t I(a_t; \{s_t, z\}) - \delta_t I(s_{t+1}; \{s_t, a_t\}) \quad (13)$

As the underlying dynamics are fixed and unknown, we simplify the optimization by setting $\delta_t = 0$, which intuitively corresponds to neglecting the unchangeable information of the underlying dynamics. This gives us

$R(\pi) = \sum_{t=1}^{T-1} I(s_{t+1}; \{s_t, z\}) - \beta_t I(a_t; \{s_t, z\}) \quad (15)$

$\ge \sum_{t=1}^{T-1} I(s_{t+1}; z \mid s_t) - \beta_t I(a_t; \{s_t, z\}) \quad (16)$

Here, we have used the chain rule of mutual information: $I(s_{t+1}; \{s_t, z\}) = I(s_{t+1}; s_t) + I(s_{t+1}; z \mid s_t) \ge I(s_{t+1}; z \mid s_t)$, resulting from the non-negativity of mutual information.
This yields an information-bottleneck-style objective where we maximize the mutual information motivated in Section 3, while minimizing $I(a_t; \{s_t, z\})$. We can show that the minimization of the latter mutual information corresponds to entropy regularization of $\pi(a_t \mid s_t, z)$, as follows:

$I(a_t; \{s_t, z\}) = \mathbb{E}_{a_t \sim \pi(a_t \mid s_t, z),\, s_t, z \sim p}\left[\log \frac{\pi(a_t \mid s_t, z)}{\pi(a_t)}\right] \quad (17)$

$= \mathbb{E}_{a_t \sim \pi(a_t \mid s_t, z),\, s_t, z \sim p}\left[\log \frac{\pi(a_t \mid s_t, z)}{p(a_t)}\right] - D_{KL}(\pi(a_t) \,\|\, p(a_t)) \quad (18)$

$\le \mathbb{E}_{a_t \sim \pi(a_t \mid s_t, z),\, s_t, z \sim p}\left[\log \frac{\pi(a_t \mid s_t, z)}{p(a_t)}\right] \quad (19)$

for some arbitrary distribution $p(a_t)$ (for example, uniform). Again, we have used the non-negativity of $D_{KL}$ to get the inequality. We use Equation 19 in Equation 16 to get:

$R(\pi) \ge \sum_{t=1}^{T-1} I(s_{t+1}; z \mid s_t) - \beta_t\, \mathbb{E}_{a_t \sim \pi(a_t \mid s_t, z),\, s_t, z \sim p}\left[\log \pi(a_t \mid s_t, z)\right] \quad (20)$

where we have ignored $p(a_t)$ as it is a constant with respect to the optimization of π. This motivates the use of entropy regularization. We can follow the arguments in Section 3 to obtain an approximate lower bound for $I(s_{t+1}; z \mid s_t)$. The above discussion shows how DADS can be motivated from a graphical-modelling perspective, while justifying the use of entropy regularization from an information-bottleneck perspective. This objective also explicates the temporally extended nature of z, and how it corresponds to a sequence of actions producing a predictable sequence of transitions in the environment. We can carry out the same exercise for the reward function in Eysenbach et al. (2018) (DIAYN) to provide a graphical-model interpretation of the objective used in that paper. To conform with the objective in the paper, we assume that state-action pairs are sampled from skill-conditioned stationary distributions in the world P, rather than trajectories. The objective to be maximized is given by:

$R(\pi) = -I_P + I_Q \quad (21)$

$= -I(a; \{s, z\}) + I(z; s) \quad (22)$

$= \mathbb{E}_\pi\left[\log \frac{p(z \mid s)}{p(z)} - \log \frac{\pi(a \mid s, z)}{\pi(a)}\right] \quad (23)$

$\ge \mathbb{E}_\pi\left[\log q_\phi(z \mid s) - \log p(z) - \log \pi(a \mid s, z)\right] = R(\pi, q_\phi) \quad (24)$

where we have used the variational inequalities to replace $p(z \mid s)$ with $q_\phi(z \mid s)$ and $\pi(a)$ with a uniform prior over bounded actions $p(a)$ (which is ignored as a constant).

C APPROXIMATING THE REWARD FUNCTION

We revisit Equation 4 and the resulting approximate reward function constructed in Equation 6. The maximization objective for the policy was:

$R(\pi \mid q_\phi) = \mathbb{E}_{z, s, s'}\left[\log q_\phi(s' \mid s, z) - \log p(s' \mid s)\right] \quad (25)$

The computational problem arises from the intractability of $p(s' \mid s) = \int p(s' \mid s, z)\, p(z \mid s)\, dz$, where both $p(s' \mid s, z)$ and $p(z \mid s) \propto p(s \mid z)\, p(z)$ are intractable. Unfortunately, any variational approximation results in an improper lower bound for the objective. To see that:

$R(\pi \mid q_\phi) = \mathbb{E}_{z, s, s'}\left[\log q_\phi(s' \mid s, z) - \log q(s' \mid s)\right] - D_{KL}(p(s' \mid s) \,\|\, q(s' \mid s)) \quad (26)$

$\le \mathbb{E}_{z, s, s'}\left[\log q_\phi(s' \mid s, z) - \log q(s' \mid s)\right] \quad (27)$

where the inequality goes the wrong way for any variational approximation $q(s' \mid s)$. Our approximation can be seen as a special instantiation of $q(s' \mid s) = \int q_\phi(s' \mid s, z)\, p(z)\, dz$. This approximation is simple to compute, as generating samples from the prior p(z) is inexpensive and effectively requires only a forward pass through $q_\phi$. Reusing $q_\phi$ to approximate $p(s' \mid s)$ makes intuitive sense because we want $q_\phi$ to reasonably approximate $p(s' \mid s, z)$ (which is why we collect large batches of data and take multiple steps of gradient descent when fitting $q_\phi$). While sampling from the prior p(z) is crude, sampling from $p(z \mid s)$ can be computationally prohibitive. For a certain class of problems, especially locomotion, sampling from p(z) is a reasonable approximation as well.
We want our primitives/skills to be usable from any state, which is especially the case with locomotion. Empirically, we have found that our current approximation provides satisfactory results. We also discuss some other potential solutions (and their limitations):

(a) One could potentially use another network $q_\beta(z \mid s)$ to approximate $p(z \mid s)$ by minimizing $\mathbb{E}_{s, z \sim p}\left[D_{KL}(p(z \mid s) \,\|\, q_\beta(z \mid s))\right]$. Note, the resulting approximation would still be an improper lower bound for $R(\pi \mid q_\phi)$. However, sampling from this $q_\beta$ might result in a better approximation than sampling from the prior p(z) for some problems.

(b) We can bypass the computational intractability of $p(s' \mid s)$ by exploiting the variational lower bounds from Agakov (2004). We use the following inequality, used in Hausman et al. (2018):

$H(x) \ge \int p(x, z) \log \frac{q(z \mid x)}{p(x, z)}\, dx\, dz \quad (28)$

where q is a variational approximation to the posterior $p(z \mid x)$. Then:

$I(s'; z \mid s) = -H(s' \mid s, z) + H(s' \mid s) \ge \mathbb{E}_{z, s, s' \sim p}\left[\log q_\phi(s' \mid s, z) + \log q_\alpha(z \mid s', s)\right] + H(s', z \mid s)$

where we have used the inequality for $H(s' \mid s)$ to introduce the variational posterior for skill inference, $q_\alpha(z \mid s', s)$, besides the conventional variational lower bound to introduce $q_\phi(s' \mid s, z)$. Further decomposing the leftover entropy: $H(s', z \mid s) = H(z \mid s) + H(s' \mid s, z)$. Reusing the variational lower bound for the marginal entropy from Agakov (2004), we get:

$H(s' \mid s, z) \ge \mathbb{E}_{s, z}\left[\int p(s', a \mid s, z) \log \frac{q(a \mid s', s, z)}{p(s', a \mid s, z)}\, ds'\, da\right] \quad (32)$

$= -\log c + H(s', a \mid s, z) \quad (33)$

$= -\log c + H(s' \mid s, a, z) + H(a \mid s, z)$

Since the choice of posterior is upon us, we can choose $q(a \mid s', s, z) = 1/c$ to induce a uniform distribution over the bounded action space. For $H(s' \mid s, a, z)$, notice that the underlying dynamics $p(s' \mid s, a)$ are independent of z, but the actions do depend upon z. Therefore, this corresponds to entropy-regularized RL when the dynamics of the system are deterministic. Even for stochastic dynamics, the analogy might be a good approximation, assuming the underlying dynamics are not very entropic. The final objective (making this low-entropy dynamics assumption) can be written as:

$I(s'; z \mid s) \ge \mathbb{E}_s\, \mathbb{E}_{p(s', z \mid s)}\left[\log q_\phi(s' \mid s, z) + \log q_\alpha(z \mid s', s) - \log p(z \mid s)\right] + H(a \mid s, z)$

While this does bypass the intractability of $p(s' \mid s)$, it runs into the intractable $p(z \mid s)$, despite deploying significant mathematical machinery and additional assumptions. Any variational approximation for $p(z \mid s)$ would again result in an improper lower bound for $I(s'; z \mid s)$.

(c) One way to make our approximation $q(s' \mid s)$ more closely resemble $p(s' \mid s)$ is to change our generative model $p(z, s, s')$. In particular, if we resample $z \sim p(z)$ at every timestep of the rollout from π, we can indeed write $p(z \mid s) = p(z)$. Note, $p(s' \mid s)$ is still intractable to compute, but marginalizing $q_\phi(s' \mid s, z)$ over p(z) becomes a better approximation of $p(s' \mid s)$. However, this severely dampens the interpretation of our latent space Z as temporally extended actions (or skills); it becomes better to interpret Z as a dimensionality reduction of the action space. Empirically, we found that this significantly throttles learning, not yielding useful or interpretable skills.
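For reference, the approximation that this appendix settles on can be stated compactly (our rendering; it combines Equation 25 with the prior-sampling instantiation of $q(s' \mid s)$ above and the sample count L from Appendix A.4):

$$\log p(s' \mid s) \;\approx\; \log q(s' \mid s) \;\approx\; \log \frac{1}{L} \sum_{i=1}^{L} q_\phi(s' \mid s, z_i), \qquad z_i \sim p(z),$$

so that the intrinsic reward used for policy optimization becomes

$$r(s, z, s') \;=\; \log q_\phi(s' \mid s, z) \;-\; \log \frac{1}{L} \sum_{i=1}^{L} q_\phi(s' \mid s, z_i).$$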
D INTERPOLATION IN CONTINUOUS LATENT SPACE

E MODEL PREDICTION

From Figure 14, we observe that skill-dynamics can provide robust state predictions over long planning horizons. When learning skill-dynamics with the x-y prior, we observe that the prediction error rises more slowly with the horizon than the norm of the actual position. This provides strong evidence of cooperation between the primitives and the skill-dynamics learned using DADS with the x-y prior. As the error growth for skill-dynamics learned on the full observation space is sub-exponential, a similar argument can be made for DADS without the x-y prior as well (albeit to a weaker extent).
We present a new algorithm for predicting the near-term trajectories of road-agents in dense traffic videos. Our approach is designed for heterogeneous traffic, where the road-agents may correspond to buses, cars, scooters, bicycles, or pedestrians. We model the interactions between different road-agents using a novel LSTM-CNN hybrid network for trajectory prediction. In particular, we take into account heterogeneous interactions that implicitly account for the varying shapes, dynamics, and behaviors of different road agents. In addition, we model horizon-based interactions which are used to implicitly model the driving behavior of each road-agent. We evaluate the performance of our prediction algorithm, TraPHic, on standard datasets and also introduce a new dense, heterogeneous traffic dataset corresponding to urban Asian videos and agent trajectories. We outperform state-of-the-art methods on dense traffic datasets by 30%.
Methods that do not model road-agent interactions are regarded as sub-optimal or as less accurate than methods that model the interactions between road-agents in the scene @cite_3 . Examples of methods that explicitly model road-agent interaction include techniques based on social forces @cite_28 @cite_18 , velocity obstacles @cite_11 , LTA @cite_32 , etc. Many of these models were designed to account for interactions between pedestrians in a crowd (i.e. homogeneous interactions) and to improve the prediction accuracy @cite_35 . Techniques based on velocity obstacles have been extended using kinematic constraints to model the interactions between heterogeneous road-agents @cite_29 . Our learning approach does not use any explicit pairwise motion model. Rather, we model the heterogeneous interactions between road-agents implicitly.
{ "abstract": [ "We present a novel real-time algorithm to predict the path of pedestrians in cluttered environments. Our approach makes no assumption about pedestrian motion or crowd density, and is useful for short-term as well as long-term prediction. We interactively learn the characteristics of pedestrian motion and movement patterns from 2D trajectories using Bayesian inference. These include local movement patterns corresponding to the current and preferred velocities and global characteristics such as entry points and movement features. Our approach involves no precomputation and we demonstrate the real-time performance of our prediction algorithm on sparse and noisy trajectory data extracted from dense indoor and outdoor crowd videos. The combination of local and global movement patterns can improve the accuracy of long-term prediction by 12–18 over prior methods in high-density videos.", "We propose an agent-based behavioral model of pedestrians to improve tracking performance in realistic scenarios. In this model, we view pedestrians as decision-making agents who consider a plethora of personal, social, and environmental factors to decide where to go next. We formulate prediction of pedestrian behavior as an energy minimization on this model. Two of our main contributions are simple, yet effective estimates of pedestrian destination and social relationships (groups). Our final contribution is to incorporate these hidden properties into an energy formulation that results in accurate behavioral prediction. We evaluate both our estimates of destination and grouping, as well as our accuracy at prediction and tracking against state of the art behavioral model and show improvements, especially in the challenging observational situation of infrequent appearance observations–something that might occur in thousands of webcams available on the Internet.", "It is suggested that the motion of pedestrians can be described as if they would be subject to social forces.'' These forces'' are not directly exerted by the pedestrians' personal environment, but they are a measure for the internal motivations of the individuals to perform certain actions (movements). The corresponding force concept is discussed in more detail and can also be applied to the description of other behaviors. In the presented model of pedestrian behavior several force terms are essential: first, a term describing the acceleration towards the desired velocity of motion; second, terms reflecting that a pedestrian keeps a certain distance from other pedestrians and borders; and third, a term modeling attractive effects. The resulting equations of motion of nonlinearly coupled Langevin equations. Computer simulations of crowds of interacting pedestrians show that the social force model is capable of describing the self-organization of several observed collective effects of pedestrian behavior very realistically.", "", "Object tracking typically relies on a dynamic model to predict the object's location from its past trajectory. In crowded scenarios a strong dynamic model is particularly important, because more accurate predictions allow for smaller search regions, which greatly simplifies data association. Traditional dynamic models predict the location for each target solely based on its own history, without taking into account the remaining scene objects. Collisions are resolved only when they happen. 
Such an approach ignores important aspects of human behavior: people are driven by their future destination, take into account their environment, anticipate collisions, and adjust their trajectories at an early stage in order to avoid them. In this work, we introduce a model of dynamic social behavior, inspired by models developed for crowd simulation. The model is trained with videos recorded from birds-eye view at busy locations, and applied as a motion model for multi-people tracking from a vehicle-mounted camera. Experiments on real sequences show that accounting for social interactions and scene knowledge improves tracking performance, especially during occlusions.", "In this paper, we study the safe navigation of a mobile robot through crowds of dynamic agents with uncertain trajectories. Existing algorithms suffer from the “freezing robot” problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing the predictive uncertainty for individual agents by employing more informed models or heuristically limiting the predictive covariance to prevent this overcautious behavior. In this work, we demonstrate that both the individual prediction and the predictive uncertainty have little to do with the frozen robot problem. Our key insight is that dynamic agents solve the frozen robot problem by engaging in “joint collision avoidance”: They cooperatively make room to create feasible trajectories. We develop IGP, a nonparametric statistical model based on dependent output Gaussian processes that can estimate crowd interaction from data. Our model naturally captures the non-Markov nature of agent trajectories, as well as their goal-driven navigation. We then show how planning in this model can be efficiently implemented using particle based inference. Lastly, we evaluate our model on a dataset of pedestrians entering and leaving a building, first comparing the model with actual pedestrians, and find that the algorithm either outperforms human pedestrians or performs very similarly to the pedestrians. We also present an experiment where a covariance reduction method results in highly overcautious behavior, while our model performs desirably.", "" ], "cite_N": [ "@cite_35", "@cite_18", "@cite_28", "@cite_29", "@cite_32", "@cite_3", "@cite_11" ], "mid": [ "2418081708", "2146183743", "2167052694", "", "2532516272", "2082585576", "" ] }
TraPHic: Trajectory Prediction in Dense and Heterogeneous Traffic Using Weighted Interactions
The increasing availability of cameras and computer vision techniques has made it possible to track traffic road agents in realtime. These road agents may correspond to vehicles such as cars, buses, or scooters, as well as pedestrians, bicycles, or animals. The trajectories of road agents extracted from a video can be used to model traffic patterns and driver behaviors, which are useful for autonomous driving. In addition to tracking, it is also important to predict the future trajectory of each road agent in realtime. The predicted trajectories are useful for performing safe autonomous navigation, traffic forecasting, vehicle routing, and congestion management [32,11].

Figure 1. Trajectory Prediction in dense heterogeneous traffic conditions: The scene consists of cars, scooters, motorcycles, three-wheelers, and bicycles in close proximity. Our algorithm (TraPHic) can predict the trajectory (red) of each road agent close to the ground truth (green) and is better than other prior algorithms (shown in other colors).

In this paper, we deal with dense traffic composed of heterogeneous road agents. The heterogeneity corresponds to the interactions between different types of road agents such as cars, buses, pedestrians, two-wheelers (scooters and motorcycles), three-wheelers (rickshaws), animals, etc. These agents have different shapes, dynamic constraints, and behaviors. The traffic density corresponds to the number of distinct road agents captured in a single frame of the video or the number of agents per unit length (e.g., a kilometer) of the roadway. High-density traffic is described as traffic with more than 100 road agents per km. Finally, an interaction corresponds to how two road agents in close proximity affect each other's movement or avoid collisions. There is considerable work on trajectory prediction for moving agents [2,18,36,12,26,30]. Most of these algorithms have been developed for scenarios with a single type of agent (a.k.a. homogeneous agents), which may correspond to human pedestrians in a crowd or cars driving on a highway. Furthermore, many prior methods have been evaluated on traffic videos corresponding to relatively sparse scenarios with only a few heterogeneous interactions, such as the NGSIM [1] and KITTI [15] datasets. In these cases, the interaction between agents can be modeled using well-known models based on social forces [19], velocity obstacles [35], or LTA [31]. Prior prediction algorithms do not work well on dense, heterogeneous traffic scenarios because they do not model the interactions accurately. For example, the dynamics of a bus-pedestrian interaction differ significantly from a pedestrian-pedestrian or a car-pedestrian interaction due to the differences in shape, size, maneuverability, and velocities. The differences in the dynamic characteristics of road agents affect their trajectories and how they navigate around each other in dense traffic situations [29]. Moreover, prior learning-based prediction algorithms typically model the interactions uniformly for all road agents in a neighborhood, and the resulting model assigns equal weight to each interaction. This works well for homogeneous traffic, but not for dense heterogeneous traffic, where methods are needed to assign different weights to different pairwise interactions.

Main Contributions: We present a novel traffic prediction algorithm, TraPHic, for predicting the trajectories of road agents in realtime.
The input to our algorithm is the trajectory history of each road agent as observed over a short time-span (2-4 seconds), and the output is the predicted trajectory over a short span (3-5 seconds). In order to develop a general approach to handle dense traffic scenarios, our approach models two kinds of weighted interactions, horizon-based and heterogeneous-based.

1. Heterogeneous-Based: We implicitly take into account the varying sizes, aspect ratios, driver behaviors, and dynamics of road agents. Our formulation accounts for several dynamic constraints such as average velocity, turning radius, spatial distance from neighbors, and local density. We embed these functions into our state-space formulation and use them as inputs to our network to perform learning.

2. Horizon-Based: We use a semi-elliptical region (horizon) based on a pre-defined radius in front of each road agent. We prioritize the interactions in which the road agents are within the horizon using a Horizon Map. Our approach learns a weighting mechanism using a non-linear formulation, and uses it to assign weights to each road agent in the horizon automatically.

We formulate these interactions within an LSTM-CNN hybrid network that learns locally useful relationships between the heterogeneous road agents. Our approach is end-to-end and does not require explicit knowledge of an agent's behavior. Furthermore, we present a new traffic dataset (TRAF) comprising dense and heterogeneous traffic. The dataset consists of the following road agents: cars, buses, trucks, rickshaws, pedestrians, scooters, motorcycles, carts, and animals, and was collected in dense Asian cities. We also compare our approach with prior methods and highlight the accuracy benefits. Overall, TraPHic offers the following benefits as a realtime prediction algorithm:

1. TraPHic outperforms prior methods on dense traffic datasets with 10-30 road agents by 0.78 meters on the root mean square error (RMSE) metric, which is a 30% improvement over prior methods.

2. Our algorithm offers accuracy similar to prior methods on sparse or homogeneous datasets such as the NGSIM dataset [1].

The rest of the paper is organized as follows. We give a brief overview of prior work in Section 2. Section 3 presents an overview of the weighted interactions. We present the overall learning algorithm in Section 4 and evaluate its performance on different datasets in Section 5.

Prediction Algorithms and Interactions

Trajectory prediction has been researched extensively. Approaches include the Bayesian formulation [27], the Monte Carlo simulation [10], Hidden Markov Models (HMMs) [14], and Kalman Filters [23]. Methods that do not model road-agent interactions are regarded as sub-optimal or as less accurate than methods that model the interactions between road agents in the scene [34]. Examples of methods that explicitly model road-agent interaction include techniques based on social forces [19,37], velocity obstacles [35], LTA [31], etc. Many of these models were designed to account for interactions between pedestrians in a crowd (i.e. homogeneous interactions) and improve the prediction accuracy [3]. Techniques based on velocity obstacles have been extended using kinematic constraints to model the interactions between heterogeneous road agents [29]. Our learning approach does not use any explicit pairwise motion model. Rather, we model the heterogeneous interactions between road agents implicitly.
Deep-Learning Based Methods

Approaches based on deep neural networks use variants of Recurrent Neural Networks (RNNs) for sequence modeling. These have been extended to hybrid networks by combining RNNs with other deep learning architectures for motion prediction.

RNN-Based Methods

RNNs are natural generalizations of feedforward neural networks to sequences [33]. The benefits of RNNs for sequence modeling make them a reasonable choice for traffic prediction. Since plain RNNs are poor at modeling long-term sequences, many traffic trajectory prediction methods use long short-term memory networks (LSTMs) to model road-agent interactions. These include algorithms to predict trajectories in traffic scenarios with few heterogeneous interactions [12,30]. These techniques have also been used for trajectory prediction for pedestrians in a crowd [2,36].

Hybrid Methods

Deep-learning-based hybrid methods consist of networks that integrate two or more deep learning architectures, such as CNNs, GANs, VAEs, and LSTMs. Each architecture has its own advantages and, for many tasks, the advantages of individual architectures can be combined. There is considerable work on the development of hybrid networks. Generative models have been successfully used for tasks such as super resolution [25], image-to-image translation [22], and image synthesis [17]. However, their application in trajectory prediction has been limited because back-propagation during training is non-trivial. In spite of this, generative models such as VAEs and GANs have been used for trajectory prediction of pedestrians in a crowd [18] and in sparse traffic [26]. Alternatively, Convolutional Neural Networks (CNNs or ConvNets) have been used successfully in many computer vision applications like object recognition [38]. Recently, they have also been used for traffic trajectory prediction [8,13]. In this paper, we present a new hybrid network that combines LSTMs with CNNs for traffic prediction.

Traffic Datasets

There are several datasets corresponding to traffic scenarios. ApolloScape [20] is a large-scale dataset of street views that contains scenes of high complexity, 2D/3D annotations and pose information, lane markings, and video frames. However, this dataset does not provide trajectory information. The NGSIM simulation dataset [1] consists of trajectory data for road agents corresponding to cars and trucks, but the traffic scenes are limited to highways with fixed-lane traffic. The KITTI dataset [15] has been used in different computer vision applications such as stereo, optical flow, 2D/3D object detection, and tracking. There are some pedestrian trajectory datasets like ETH [31] and UCY [28], but they are limited to pedestrians in a crowd. Our new dataset, TRAF, corresponds to dense and heterogeneous traffic captured in Asian cities and includes 2D/3D trajectory information.

TraPHic: Trajectory Prediction in Heterogeneous Traffic

In this section, we give an overview of our prediction algorithm that uses weighted interactions. Our approach is designed for dense and heterogeneous traffic scenarios and is based on two observations. The first observation is based on the idea that road agents in such dense traffic do not react to every road agent around them; rather, they selectively focus attention on key interactions in a semi-elliptical region in the field of view, which we call the "horizon".
For example, consider a motorcyclist who suddenly moves in front of a car whose neighborhood consists of other road agents such as three-wheelers and pedestrians (Figure 2). The car must prioritize the motorcyclist interaction over the other interactions to avoid a collision. The second observation stems from the heterogeneity of the different road agents, such as cars, buses, rickshaws, pedestrians, bicycles, animals, etc., in the neighborhood of a road agent (Figure 2). For instance, the dynamic constraints of a bus-pedestrian interaction differ significantly from a pedestrian-pedestrian or even a car-pedestrian interaction due to the differences in road agent shapes, sizes, and maneuverability. To capture these heterogeneous road agent dynamics, we embed these properties into the state-space representation of the road agents and feed them into our hybrid network. We also implicitly model the behaviors of the road agents. Behavior in our case refers to the different driving and walking styles of different drivers and pedestrians; some are more aggressive, while others are more conservative. We model these behaviors as they directly influence the outcome of various interactions [7], thereby affecting the road agents' navigation.

Problem Setup and Notation

Given a set of N road agents $A = \{a_i\}_{i=1 \ldots N}$, the trajectory history of each road agent $a_i$ over t frames, denoted $\Psi_{i,t} := [(x_{i,1}, y_{i,1}), \ldots, (x_{i,t}, y_{i,t})]$, and the road agent's size $l_i$, we predict the spatial coordinates of that road agent for the next τ frames. In addition, we introduce a feature called traffic concentration c, motivated by traffic flow theory [21]. Traffic concentration, c(x, y), at the location (x, y) is defined as the number of road agents between (x, y) and (x, y) + (δx, δy) for some predefined (δx, δy) > 0. This metric is similar to traffic density, but the key difference is that traffic density is a macroscopic property of a traffic video, whereas traffic concentration is a mesoscopic property that is locally defined at a particular location. So we achieve a representation of traffic on several scales. Finally, we define the state space of each road agent $a_i$ as

$\Omega_i := [\Psi_{i,t}\;\; \Delta\Psi_{i,t}\;\; c_i\;\; l_i] \quad (1)$

where ∆ is a derivative operator that is used to compute the velocity of the road agent, and $c_i := [c(x_{i,1}, y_{i,1}), \ldots, c(x_{i,t}, y_{i,t})]$.

2D Image Space to 3D World Coordinate Space: We compute camera parameters from the given videos using standard techniques [4,5], and use the parameters to estimate the camera homography matrices. The homography matrices are subsequently used to convert the locations of road agents in 2D pixels to 3D world coordinates w.r.t. a predetermined frame of reference, similar to the approaches in [18,2]. All state-space representations are subsequently converted to the 3D world space.

Horizon and Neighborhood Agents: Prior trajectory prediction methods have collected neighborhood information using lanes and rectangular grids [12]. Our approach is more general in that we pre-process the trajectory data assuming a lack of lane information. This assumption commonly holds in practice in dense and heterogeneous traffic conditions. We formulate a road agent $a_i$'s neighborhood, $N_i$, using an elliptical region and selecting a fixed number of the closest road agents using a nearest-neighbor search in that region. Similarly, we define the horizon of that agent, $H_i$, by selecting a smaller threshold in the nearest-neighbor search, in a semi-elliptical region in front of $a_i$ (see the sketch below).
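A minimal geometric sketch of the horizon test (ours, not the authors' code; the semi-axis values and the heading convention are assumptions, since the paper only specifies a semi-elliptical region in front of the agent and gives the axis lengths later, in Section 5.2):

import numpy as np

def in_horizon(ego_pos, ego_heading, other_pos, a=1.5, b=0.75):
    # True if other_pos falls in the semi-ellipse in front of the ego agent.
    # ego_heading: heading angle in radians; a, b: semi-major/minor axes of
    # the ellipse in meters (illustrative defaults). Only the half-plane in
    # front of the agent counts, hence "semi"-elliptical.
    d = np.asarray(other_pos, dtype=float) - np.asarray(ego_pos, dtype=float)
    # Rotate into the ego frame so the x-axis points along the heading.
    c, s = np.cos(-ego_heading), np.sin(-ego_heading)
    x, y = c * d[0] - s * d[1], s * d[0] + c * d[1]
    return x >= 0 and (x / a) ** 2 + (y / b) ** 2 <= 1.0

print(in_horizon((0, 0), 0.0, (1.0, 0.2)))   # True: ahead and close
print(in_horizon((0, 0), 0.0, (-0.5, 0.0)))  # False: behind the agent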
Hybrid Architecture for Traffic Prediction

In this section, we present our novel network architecture for performing trajectory prediction in dense and heterogeneous environments. In the context of heterogeneous traffic, the goal is to predict trajectories, i.e., temporal sequences of the spatial coordinates of a road agent. Temporal sequence prediction requires models that can capture temporal dependencies in data, such as LSTMs [16]. However, LSTMs cannot learn the dependencies or relationships of various heterogeneous road agents, because the parameters of each individual LSTM are independent of one another. In this regard, ConvNets have been used in computer vision applications with greater success because they can learn locally dependent features from images. Thus, in order to leverage the benefits of both, we combine ConvNets with LSTMs to learn locally useful relationships, both in space and in time, between the heterogeneous road agents.

We now describe our model to predict the trajectory for each road agent $a_i$. A visualization of the model is shown in Figure 3. We start by computing $H_i$ and $N_i$ for the agent $a_i$. Next, we identify all road agents $a_j \in N_i \cup H_i$. Each $a_j$ has an input state-space $\Omega_j$ that is used to create the embedding $e_j$, using

$e_j = \phi(W^l \Omega_j + b^l) \quad (2)$

where $W^l$ and $b^l$ are conventional symbols denoting the weight matrix and bias vector, respectively, of layer l in the network, and φ is the non-linear activation on each node. Our network consists of three layers. The horizon layer (top cyan layer in Figure 3) takes in the embedding of each road agent in $H_i$, and the neighbor layer (middle green layer in Figure 3) takes in the embedding of each road agent in $N_i$. The input embeddings in both these layers are passed through fully connected layers with ELU non-linearities [9], and then fed into single-layered LSTMs (yellow blocks in Figure 3). The outputs of the LSTMs in the two layers are hidden state vectors, $h_j(t)$, computed using

$h_j(t) = \mathrm{LSTM}(e_j, W^l, b^l, h_j^{t-1}) \quad (3)$

where $h_j^{t-1}$ refers to the corresponding road agent's hidden state vector from the previous time step t − 1. The hidden state vector of a road agent is a latent representation that contains temporally useful information. In the remainder of the text, we drop the parameter t for the sake of simplicity, i.e., $h_j$ is understood to mean $h_j(t)$ for any j. The hidden vectors in the horizon layer are passed through an additional fully connected layer with ELU non-linearities [9]. We denote the output of this fully connected layer as $h_{jw}$. All the $h_{jw}$'s in the horizon layer are then pooled together in a "horizon map". The hidden vectors in the neighbor layer are directly pooled together in a "neighbor map". These maps are further elaborated in Section 4.1. Both maps are then passed through separate ConvNets in the two layers. The ConvNets in both layers comprise two convolution operations followed by a max-pool operation.

Figure 3. TraPHic Network Architecture: The ego agent is marked by the red dot. The green elliptical region around it is its neighborhood and the cyan semi-elliptical region in front of it is its horizon. We generate input embeddings for all agents based on trajectory information and heterogeneous dynamic constraints such as agent shape, velocity, and the traffic concentration at the agent's spatial coordinates, among other parameters. These embeddings are passed through LSTMs and eventually used to construct the horizon map, the neighbor map, and the ego agent's own tensor map. The horizon and neighbor maps are passed through separate ConvNets and then concatenated together with the ego agent tensor to produce latent representations. Finally, these latent representations are passed through an LSTM to generate a trajectory prediction for the ego agent.
We denote the output feature vector from the ConvNet in the horizon layer as $f_{hz}$, and that from the ConvNet in the neighbor layer as $f_{nb}$. Finally, the bottom-most layer corresponds to the ego agent $a_i$. Its input embedding, $e_i$, passes sequentially through a fully connected layer with ELU non-linearities [9] and a single-layered LSTM to compute its hidden vector, $h_i$. The feature vectors from the horizon and neighbor layers, $f_{hz}$ and $f_{nb}$, are concatenated with $h_i$ to generate a final vector encoding

$z := \mathrm{concat}(h_i, f_{hz}, f_{nb}) \quad (4)$

Finally, the concatenated encoding z passes through an LSTM to compute the prediction for the next τ seconds.

Weighted Interactions

Our model is trained to learn weighted interactions in both the horizon and neighborhood layers. Specifically, it learns to assign appropriate weights to various pairwise interactions based on the shapes, dynamic constraints, and behaviors of the involved agents. The horizon-based weighted interaction takes into account the agents in the horizon of the ego agent, and learns the "horizon map" $\mathcal{H}_i$, given as

$\mathcal{H}_i = \{h_{jw} \mid a_j \in H_i\} \quad (5)$

Similarly, the neighbor or heterogeneous-based weighted interaction accounts for all the agents in the neighborhood of the ego agent, and learns the "neighbor map" $\mathcal{N}_i$, given as

$\mathcal{N}_i = \{h_j \mid a_j \in N_i\} \quad (6)$

During training, back-propagation optimizes the weights corresponding to these maps by minimizing the loss between the predicted output and the ground truth labels. Our formulation results in higher weights for prioritized interactions (larger tensors in the Horizon Map, or blue vehicles in Figure 2) and lower weights for less relevant interactions (smaller tensors in the Neighbor Map, or green vehicles in Figure 2).
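A condensed PyTorch sketch of the three-stream fusion described above (our own simplification, not the authors' code; the tensor shapes, the pooling of per-agent hidden vectors into fixed-size maps, the shared hidden size, and the decoder loop are assumptions):

import torch
import torch.nn as nn

class TraPHicSketch(nn.Module):
    # Ego / horizon / neighbor streams fused as in Equation 4.
    def __init__(self, state_dim=6, emb=32, hid=64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(state_dim, emb), nn.ELU())
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.horizon_fc = nn.Sequential(nn.Linear(hid, hid), nn.ELU())
        # Same ConvNet shape for both pooled maps: conv-conv-maxpool.
        self.conv = nn.Sequential(
            nn.Conv1d(hid, hid, 3, padding=1), nn.ELU(),
            nn.Conv1d(hid, hid, 3, padding=1), nn.ELU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.decoder = nn.LSTM(3 * hid, hid, batch_first=True)
        self.out = nn.Linear(hid, 5)  # mu_x, mu_y, sigma_x, sigma_y, rho

    def encode(self, states):  # states: (agents, t, state_dim)
        _, (h, _) = self.lstm(self.embed(states))
        return h[-1]  # (agents, hid)

    def forward(self, ego, horizon, neighbors, pred_len=25):
        h_i = self.encode(ego)                                    # (1, hid)
        f_hz = self.conv(self.horizon_fc(self.encode(horizon))
                         .T.unsqueeze(0)).squeeze(-1)             # (1, hid)
        f_nb = self.conv(self.encode(neighbors).T.unsqueeze(0)).squeeze(-1)
        z = torch.cat([h_i, f_hz, f_nb], dim=-1)                  # Eq. (4)
        dec_in = z.unsqueeze(1).repeat(1, pred_len, 1)
        out, _ = self.decoder(dec_in)
        return self.out(out)  # (1, pred_len, 5) Gaussian parameters

net = TraPHicSketch()
pred = net(torch.randn(1, 30, 6), torch.randn(4, 30, 6), torch.randn(8, 30, 6))
print(pred.shape)  # torch.Size([1, 25, 5])

The five outputs per predicted timestep correspond to the bivariate-Gaussian parameters used in the loss (Equation 7 below).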
Implicit Constraints

Turning Radius: In addition to constraints such as position, velocity, and shape, a constraint such as the turning radius of a road agent also affects its maneuverability, especially as it interacts with other road agents within some distance. For example, a car (a non-holonomic agent) cannot alter its orientation in a short time frame to avoid collisions, whereas a bicycle or a pedestrian can. However, the turning radius of a road agent can be determined by its dimensions, i.e., its length and width. Since we include these parameters in our state-space representation, we implicitly take each agent's turning radius constraints into consideration as well.

Driver Behavior: As stated in [7], velocity and acceleration (both relative and average) are clear indicators of driver aggressiveness. For instance, a road agent with a relative velocity (and/or acceleration) much higher than the average velocity (and/or acceleration) of all road agents in a given traffic scenario would be deemed aggressive. Moreover, given the traffic concentrations at two consecutive spatial coordinates, c(x, y) and c(x + δx, y + δy), where c(x, y) ≫ c(x + δx, y + δy), aggressive drivers move in a "greedy" fashion in an attempt to occupy the empty spots in the subsequent spatial locations. For each road agent, we compute its concentration with respect to its neighborhood and add this value to its input state-space. Finally, the relative distance of a road agent from its neighbors is another factor pertaining to how conservative or aggressive a driver is. More conservative drivers tend to maintain a healthy distance, while aggressive drivers tend to tailgate. Hence, we compute the spatial distance of each road agent in the neighborhood and encode this in its state-space representation.

Overall Trajectory Prediction

Our algorithm follows a well-known scheme for prediction [2]. We assume that the position of the road agent in the next frame follows a bivariate Gaussian distribution with parameters $(\mu_i^t, \sigma_i^t) = [(\mu_x, \mu_y)_i^t, (\sigma_x, \sigma_y)_i^t]$ and correlation coefficient $\rho_i^t$. The spatial coordinates $(x_i^t, y_i^t)$ are thus drawn from $\mathcal{N}(\mu_i^t, \sigma_i^t, \rho_i^t)$. We train the model by minimizing the negative log-likelihood loss function for the i-th road agent trajectory,

$L_i = -\sum_{t+1}^{\tau} \log\left(P\left((x_i^t, y_i^t) \mid \mu_i^t, \sigma_i^t, \rho_i^t\right)\right). \quad (7)$

We jointly back-propagate through all three layers of our network, optimizing the weights for the linear blocks, ConvNets, LSTMs, and the Horizon and Neighbor Maps. The optimized parameters learned for the Linear-ELU block in the horizon layer indicate the priority of the interactions in the horizon of a road agent $a_i$.
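The loss in Equation 7 expands to the standard bivariate-Gaussian negative log-likelihood; a minimal sketch (ours, with hypothetical array inputs):

import numpy as np

def bivariate_gaussian_nll(x, y, mu_x, mu_y, sig_x, sig_y, rho):
    # Negative log-likelihood of (x, y) under N(mu, sigma, rho).
    # All arguments are arrays over the predicted timesteps; summing the
    # result over t = t+1 ... tau gives L_i from Equation 7.
    zx, zy = (x - mu_x) / sig_x, (y - mu_y) / sig_y
    quad = (zx**2 - 2 * rho * zx * zy + zy**2) / (1 - rho**2)
    log_norm = np.log(2 * np.pi * sig_x * sig_y * np.sqrt(1 - rho**2))
    return np.sum(0.5 * quad + log_norm)

# One predicted step with the truth exactly at the mean:
print(bivariate_gaussian_nll(np.array([1.0]), np.array([2.0]),
                             np.array([1.0]), np.array([2.0]),
                             np.array([0.5]), np.array([0.5]),
                             np.array([0.1])))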
Experimental Evaluation

We describe our new dataset in Section 5.1. In Section 5.2, we list all the implementation details used in our training process. Next, we list the evaluation metrics and the methods that we compare with in Section 5.3. Finally, we present the evaluation results in Section 5.4.

TRAF Dataset: Dense & Heterogeneous Urban Traffic

We present a new dataset, currently comprising 50 videos of dense and heterogeneous traffic. The dataset consists of the following road agent categories: car, bus, truck, rickshaw, pedestrian, scooter, motorcycle, and other road agents such as carts and animals. Overall, the dataset contains approximately 13 motorized vehicles, 5 pedestrians, and 2 bicycles per frame. Annotations were performed following a strict protocol, and each annotated video file consists of spatial coordinates, an agent ID, and an agent type. The dataset is categorized according to camera viewpoint (front-facing/top-view), motion (moving/static), time of day (day/evening/night), and difficulty level (sparse/moderate/heavy/challenge). All the videos have a resolution of 1280 × 720. We present a comparison of our dataset with standard traffic datasets in Table 3. The dataset is available at https://gamma.umd.edu/traphic/dataset.

Implementation Details

We use single-layer LSTMs as our encoders and decoders, with hidden state dimensions of 64 and 128, respectively. Each ConvNet is implemented using two convolutional operations, each followed by an ELU non-linearity [9] and then max-pooling. We train the network for 16 epochs using the Adam optimizer [24] with a batch size of 128 and a learning rate of 0.001. We use a radius of 2 meters to define the neighborhood and a minor axis length of 1.5 meters to define the horizon. Our approach uses 3 seconds of history and predicts the spatial coordinates of the road agent for up to 5 seconds (4 seconds for the KITTI dataset). We do not down-sample on the NGSIM dataset due to its sparsity. However, we use a down-sampling factor of 2 on the Beijing and TRAF datasets due to their high density. Our network is implemented in PyTorch using a single Titan Xp GPU. Our network does not use batch norm or dropout, as they can decrease accuracy. We include the experimental details involving batch norm and dropout in the appendix due to space limitations.

Evaluation Metrics and Comparison Methods

We use the following commonly used metrics [2,18,12] to measure the performance of the algorithms used for predicting the trajectories of the road agents (a short code sketch of both metrics follows at the end of this subsection):

1. Average displacement error (ADE): The root mean square error (RMSE) of all the predicted positions and real positions during the prediction time.

2. Final displacement error (FDE): The RMSE distance between the final predicted position at the end of the predicted trajectory and the corresponding true location.

We compare our approach with the following methods:

• RNN-ED (Seq2Seq): An RNN encoder-decoder model, which is widely used in motion and trajectory prediction for vehicles [6].

• S-LSTM and S-GAN: Trajectory prediction methods designed for pedestrians in crowds [2,18] (see Section 5.4).

• CS-LSTM: A method that combines CNNs with LSTMs and extends [2] in order to predict trajectories in sparse highway traffic [12].

Table 2. Evaluation on our new, highly dense and heterogeneous TRAF dataset. The first number is the average RMSE error (ADE) and the second number is the final RMSE error (FDE) after 5 seconds (in meters). The "original" setting indicates that a method was tested with its default settings; the "learned" setting indicates that it was trained on our dataset for a fair comparison. We present variations of our approach with each weighted interaction and demonstrate the contribution of each. Lower is better and bold is the best result.

Table 3. Comparison of our new TRAF dataset with various traffic datasets in terms of the heterogeneity and density of traffic agents. Heterogeneity is described in terms of the number of different agents that appear in the overall dataset. Density is the total number of traffic agents per km in the dataset. The value for each agent type under "Agents" corresponds to the average number of instances of that agent per frame of the dataset; it is computed by taking all the instances of that agent and dividing by the total number of frames. Visibility is a ballpark estimate of the length of road in meters that is visible from the camera. NGSIM data were collected using tower-mounted cameras (bird's-eye view), whereas both the Beijing and TRAF data presented here were collected with car-mounted cameras (frontal view).

We also perform ablation studies with the following four versions of our approach:

• TraPHic-B: A base version of our approach without any weighted interactions.

• TraPHic-Ho: A version of our approach without Heterogeneous-Based Weighted interactions, i.e., we do not take into account driver behavior and information such as shape, relative velocity, and concentration.

• TraPHic-He: A version of our approach without Horizon-Based Weighted interactions. In this case, we do not explicitly model the horizon, but we account for heterogeneous interactions.

• TraPHic: Our main algorithm using both Heterogeneous-Based and Horizon-Based Weighted interactions. We explicitly model the horizon and implicitly account for dynamic constraints and driver behavior.
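The promised sketch of the two metrics (ours; pred and gt are hypothetical (τ, 2) arrays):

import numpy as np

def ade_fde(pred, gt):
    # pred, gt: (tau, 2) arrays of predicted and ground-truth positions.
    # ADE: RMSE over all predicted steps; FDE: error at the final step.
    err = np.linalg.norm(pred - gt, axis=1)
    return float(np.sqrt(np.mean(err**2))), float(err[-1])

pred = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.3]])
gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(ade_fde(pred, gt))  # (~0.18, 0.3)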
Results on Traffic Datasets In order to provide a comprehensive evaluation, we compare our method with state-of-the-art methods on several datasets. Table 1 shows the results on the standard NGSIM dataset and an additional dataset containing heterogeneous traffic of moderate density. We present results on our new TRAF dataset in Table 2. TraPHic outperforms all prior methods we compared with on our TRAF dataset. For a fairer comparison, we trained these methods on our dataset before testing them on it. However, the prior methods did not generalize well to dense and heterogeneous traffic videos. One possible explanation is that S-LSTM and S-GAN were designed to predict trajectories of humans in top-down crowd videos, whereas the TRAF dataset consists of front-view heterogeneous traffic videos with high density. CS-LSTM uses lane information in its model and weights all agent interactions equally. Since the traffic in our dataset does not include the concept of lane-driving, we used the version of CS-LSTM that does not include lane information for a fairer comparison. However, it still performed poorly, since CS-LSTM does not account for heterogeneous-based interactions. On the other hand, TraPHic considers both heterogeneous-based and horizon-based interactions, and thus produces superior performance on our dense and heterogeneous dataset. We visualize the performance of the various trajectory prediction methods on our TRAF dataset in Figure 5. Compared to the prior methods, TraPHic produces the least deviation from the ground truth trajectory in all the scenarios. On average, using TraPHic-He reduces RMSE by 15% relative to TraPHic-B, and using TraPHic-Ho reduces RMSE by 55% relative to TraPHic-B. TraPHic, the combination of TraPHic-He and TraPHic-Ho, reduces RMSE by 36% relative to TraPHic-Ho, 66% relative to TraPHic-He, and 71% relative to TraPHic-B. Relative to CS-LSTM, TraPHic reduces RMSE by 30%. Figure 5. Trajectory Prediction Results: We highlight the performance of various trajectory prediction methods on our TRAF dataset with different types of road agents. We showcase six scenarios with different density, heterogeneity, camera position (fixed or moving), time of day, and weather conditions. We highlight the predicted trajectories (over 5 seconds) of some of the road agents in each scenario to avoid clutter. The ground truth (GT) trajectory is drawn as a solid green line, and our (TraPHic) prediction results are shown using a solid red line. The prediction results of the other methods (RNN-ED, S-LSTM, S-GAN, CS-LSTM) are drawn with different dashed lines. TraPHic predictions are closest to GT in all the scenarios. We observe up to 30% improvement in accuracy over prior methods on this dense, heterogeneous traffic. Due to the significantly high density and heterogeneity in these videos, coupled with the unpredictable nature of the involved agents, all the predictions deviate from the ground truth in the long term (after 5 seconds). We demonstrate that our approach is comparable to prior methods on sparse datasets such as the NGSIM dataset. We do not outperform the current state-of-the-art on such datasets, since our algorithm accounts for heterogeneous agents and weighted interactions even when interactions are sparse and mostly homogeneous. Nevertheless, we are on par with the state-of-the-art performance.
Lastly, we note that our RMSE value on the NGSIM dataset is quite high, which we attribute to the fact that we used a much higher (2×) sampling rate for averaging than prior methods. Finally, we perform an ablation study to highlight the contribution of our weighted interaction formulation. We compare the four versions of TraPHic described in Section 5.3. We find that the Horizon-based formulation contributes more significantly to higher accuracy: TraPHic-He reduces ADE by 15% and FDE by 20% over TraPHic-B, whereas TraPHic-Ho reduces ADE by 55% and FDE by 58% over TraPHic-B. Incorporating both formulations results in the highest accuracy, reducing ADE by 71% and FDE by 66% over TraPHic-B. Conclusion, Limitations, and Future Work We presented a novel algorithm for predicting the trajectories of road agents in dense and heterogeneous traffic. Our approach is end-to-end, dealing with traffic videos without assuming lane-based driving. Furthermore, we are able to model the interactions between heterogeneous road agents corresponding to cars, buses, pedestrians, two-wheelers, three-wheelers, and animals. We use an LSTM-CNN hybrid network to model two kinds of weighted interactions between road agents: horizon-based and heterogeneous-based. We demonstrate the benefits of our model over state-of-the-art trajectory prediction methods on standard datasets and on a novel dense traffic dataset, observing up to 30% improvement in prediction accuracy. Our work has some limitations. Our model design is motivated by some of the characteristics observed in dense heterogeneous traffic. As a result, we do not outperform prior methods on sparse or homogeneous traffic videos, although our prediction results are comparable to those of prior methods. In addition, modeling heterogeneous constraints requires knowledge of the shapes and sizes of different road agents, which can be tedious to collect. In the future, we plan to design a system that eliminates the need for ground-truth trajectory data and can directly predict trajectories from an input video. We also intend to use TraPHic for autonomous navigation in dense traffic.
5,242
1812.04359
2904148778
Efficient reinforcement learning usually takes advantage of demonstrations or a good exploration strategy. By applying posterior sampling in model-free RL under a Gaussian process (GP) hypothesis, we propose the Gaussian Process Posterior Sampling Reinforcement Learning (GPPSTD) algorithm for continuous state spaces, giving theoretical justifications and empirical results. We also provide theoretical and empirical evidence that demonstrations of varying quality can lower expected uncertainty and benefit posterior-sampling exploration. In this way, we combine the demonstration and exploration processes to achieve more efficient reinforcement learning.
Two typical methods of learning from demonstration are inverse reinforcement learning (IRL) and imitation learning (IL). Inverse reinforcement learning was introduced in ng2000algorithms. Its goal is to infer the underlying reward function given optimal demonstrated behavior. Later IRL algorithms include Bayesian IRL @cite_13 @cite_21, Maximum Entropy IRL @cite_1 @cite_12, and Repeated IRL @cite_25. However, IRL can become intractable when the problem scale is large. Early imitation learning took the form of behavior cloning, which can fail when the agent encounters untrained states. Later representative IL algorithms include Data Aggregation (DAgger) @cite_28 and Generative Adversarial Imitation Learning (GAIL). However, these works focus on imitating optimal demonstrations, regarding mediocre and failed demonstrations as unusable, and they do not consider the exploration problem after imitation.
{ "abstract": [ "Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.", "", "Recent research has shown the benefit of framing problems of imitation learning as solutions to Markov Decision Problems. This approach reduces learning to the problem of recovering a utility function that makes the behavior induced by a near-optimal policy closely mimic demonstrated behavior. In this work, we develop a probabilistic approach based on the principle of maximum entropy. Our approach provides a well-defined, globally normalized distribution over decision sequences, while providing the same performance guarantees as existing methods. We develop our technique in the context of modeling real-world navigation and driving behaviors where collected data is inherently noisy and imperfect. Our probabilistic approach enables modeling of route preferences as well as a powerful new approach to inferring destinations and routes based on partial trajectories.", "Inverse Reinforcement Learning (IRL) is the problem of learning the reward function underlying a Markov Decision Process given the dynamics of the system and the behaviour of an expert. IRL is motivated by situations where knowledge of the rewards is a goal by itself (as in preference elicitation) and by the task of apprenticeship learning (learning policies from an expert). In this paper we show how to combine prior knowledge and evidence from the expert's actions to derive a probability distribution over the space of reward functions. We present efficient algorithms that find solutions for the reward learning and apprenticeship learning tasks that generalize well over these distributions. Experimental results show strong improvement for our methods over previous heuristic-based approaches.", "We introduce a novel repeated Inverse Reinforcement Learning problem: the agent has to act on behalf of a human in a sequence of tasks and wishes to minimize the number of tasks that it surprises the human by acting suboptimally with respect to how the human would have acted. Each time the human is surprised, the agent is provided a demonstration of the desired behavior by the human. We formalize this problem, including how the sequence of tasks is chosen, in a few different ways and provide some foundational results.", "" ], "cite_N": [ "@cite_28", "@cite_21", "@cite_1", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "1931877416", "", "2098774185", "1591675293", "2621205314", "" ] }
Efficient Model-Free Reinforcement Learning Using Gaussian Process
0
1812.04180
2905421523
We investigate learning to probabilistically bypass computations in a network architecture. Our approach is motivated by AIG, where layers are conditionally executed depending on their inputs, and the network is trained against a target bypass rate using a per-layer loss. We propose a per-batch loss function, and describe strategies for handling probabilistic bypass during inference as well as training. The per-batch loss allows the network additional flexibility. In particular, a form of mode collapse becomes plausible, where some layers are nearly always bypassed and some almost never; such a configuration is strongly discouraged by AIG's per-layer loss. We explore several inference-time strategies, including the natural MAP approach. With data-dependent bypass, we demonstrate improved performance over AIG. With data-independent bypass, as in stochastic depth, we observe mode collapse and effectively prune layers. We demonstrate our techniques on ResNet-50 and ResNet-101 for ImageNet, where they produce improved accuracy (0.15–0.41 in precision@1) with substantially less computation (bypassing 25–40% of the layers).
Conditional computation has been well studied in computer vision. Cascaded classifiers @cite_26 shorten computation by identifying easy negatives and have recently been adapted to deep learning @cite_32 @cite_12. More directly, @cite_34 and @cite_30 both propose cascading architectures which compute features at multiple scales and allow for dynamic evaluation, where at inference time the user can trade off speed for accuracy. Similarly, @cite_6 adds intermediate classifiers and returns a label once the network reaches a specified confidence. @cite_21 and @cite_23 both use the state of the network to adaptively decrease the number of computational steps during inference: @cite_23 uses an intermediate state sequence and a halting unit to limit the number of blocks that can be executed in an RNN, while @cite_21 learns an image-dependent stopping condition for each ResNet block that conditionally bypasses the rest of the layers in the block. @cite_8 trains a large number of small networks, called experts, and then uses gates to select a sparse combination of the experts for a given input.
{ "abstract": [ "We propose and systematically evaluate three strategies for training dynamically-routed artificial neural networks: graphs of learned transformations through which different input signals may take different paths. Though some approaches have advantages over others, the resulting networks are often qualitatively similar. We find that, in dynamically-routed networks trained to classify images, layers and branches become specialized to process distinct categories of images. Additionally, given a fixed computational budget, dynamically-routed networks tend to perform better than comparable statically-routed networks.", "This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.", "The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.", "This paper proposes a deep learning architecture based on Residual Network that dynamically adjusts the number of executed layers for the regions of the image. This architecture is end-to-end trainable, deterministic and problem-agnostic. It is therefore applicable without any modifications to a wide range of computer vision problems such as image classification, object detection and image segmentation. 
We present experimental results showing that this model improves the computational efficiency of Residual Networks on the challenging ImageNet classification and COCO object detection datasets. Additionally, we evaluate the computation time maps on the visual saliency dataset cat2000 and find that they correlate surprisingly well with human eye fixation positions.", "In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.", "Deep neural networks are state of the art methods for many learning tasks due to their ability to extract increasingly better features at each network layer. However, the improved performance of additional layers in a deep network comes at the cost of added latency and energy usage in feedforward inference. As networks continue to get deeper and larger, these costs become more prohibitive for real-time and energy-sensitive applications. To address this issue, we present BranchyNet, a novel deep network architecture that is augmented with additional side branch classifiers. The architecture allows prediction results for a large portion of test samples to exit the network early via these branches when samples can already be inferred with high confidence. BranchyNet exploits the observation that features learned at an early layer of a network may often be sufficient for the classification of many data points. For more difficult samples, which are expected less frequently, BranchyNet will use further or all network layers to provide the best likelihood of correct prediction. We study the BranchyNet architecture using several well-known networks (LeNet, AlexNet, ResNet) and datasets (MNIST, CIFAR10) and show that it can both improve accuracy and significantly reduce the inference time of the network.", "This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. 
Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.", "", "In this paper, we investigate two new strategies to detect objects accurately and efficiently using deep convolutional neural network: 1) scale-dependent pooling and 2) layerwise cascaded rejection classifiers. The scale-dependent pooling (SDP) improves detection accuracy by exploiting appropriate convolutional features depending on the scale of candidate object proposals. The cascaded rejection classifiers (CRC) effectively utilize convolutional features and eliminate negative object proposals in a cascaded manner, which greatly speeds up the detection while maintaining high accuracy. In combination of the two, our method achieves significantly better accuracy compared to other state-of-the-arts in three challenging datasets, PASCAL object detection challenge, KITTI object detection benchmark and newly collected Inner-city dataset, while being more efficient." ], "cite_N": [ "@cite_30", "@cite_26", "@cite_8", "@cite_21", "@cite_32", "@cite_6", "@cite_23", "@cite_34", "@cite_12" ], "mid": [ "2949427550", "2137401668", "2581624817", "2952922798", "1934410531", "2610140147", "2325237720", "2598097916", "2474389331" ] }
Deep networks with probabilistic gates
Despite the enormous success of convolutional networks [11,23,37], they remain poorly understood and difficult to optimize. A natural line of investigation, which [1] called conditional computation, is to conditionally bypass parts of the network. While inference-time efficiency could obviously benefit [1], bypassing computations can improve training time or test performance [4,18,44], and can provide insight into network behavior [18,44]. In this paper we investigate several new probabilistic bypass techniques. We focus on ResNet [11] architectures since these are the mainstay of current deep learning techniques for image classification. The general architecture of a ResNet with probabilistic bypass gates is shown in figure 1. The main idea is that a residual layer, such as $f_1$, can potentially be bypassed depending on the results of the gating computation $g_1$, which controls the gate. The gating computation $g_i$ can be data-independent, or it can depend on its input. If each $g_i$ always executes its layer, this describes a conventional ResNet. A more interesting data-independent architecture is stochastic depth [18], where at training time $g_i$ executes a layer with a probability defined by a hyperparameter. At inference time the stochastic depth network is deterministic, though the frequency with which a layer was bypassed during training is used to scale down its weight. With the introduction of Gumbel-Softmax (GS) [2,7,20,30] it became possible to train a network to bypass computations, given some target bypass rate that trades off against training set accuracy. In this sense, probabilistic bypass serves as an additional regularizer for the network. This is the approach taken by AIG [44], where the bypass decision is data-dependent and the loss is per-layer. We propose a per-batch loss function, which allows the network to more flexibly distribute bypass among different layers, compared to AIG's per-layer loss. This in turn leads to more advantageous tradeoffs between accuracy and inference speed. When our per-batch loss is applied with data-independent bypass, we observe a form of mode collapse where individual layers are either nearly always bypassed or nearly never bypassed. This effectively prunes layers, and again results in advantageous tradeoffs between accuracy and inference speed. Whether a network uses data-dependent or data-independent probabilistic bypass, there remains a question of how to perform inference. We explore several alternative inference strategies, and provide evidence that the natural MAP approach gives good performance. This paper is organized as follows. We begin by introducing notation and briefly reviewing related work. Section 3 introduces our per-batch loss function and our inference strategies. Experimental results on ImageNet and CIFAR are presented in section 4, followed by a discussion of some natural extensions of our work. Additional experiments and more details are included in the supplemental material. Notation We first introduce some notation that allows us to more precisely discuss probabilistic bypass. Following [44], we write the output of layer $l \in \{0, 1, \ldots, L\}$ as $x_l$, with the input image being $x_0$. We can express the effect of a residual layer $F_l$ in a feed-forward neural network as $x_l = x_{l-1} + F_l(x_{l-1})$. Then a probabilistic bypass gate $z_l \in \{0, 1\}$ for this layer modifies the forward pass to either run or skip the layer: $x_l = x_{l-1} + z_l F_l(x_{l-1})$.
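As an illustration of this gated forward pass, the following PyTorch sketch (ours, not code from the paper) wraps a residual branch $F_l$ with a gate producing $z_l$; how the gate computes its hard 0/1 decision is deliberately left abstract, since it may ignore its input (data-independent) or be a small subnetwork of it (data-dependent):

```python
# A minimal sketch of the gated residual update x_l = x_{l-1} + z_l * F_l(x_{l-1}).
import torch
import torch.nn as nn

class GatedResidual(nn.Module):
    def __init__(self, layer: nn.Module, gate: nn.Module):
        super().__init__()
        self.layer = layer  # the residual branch F_l
        self.gate = gate    # returns z_l in {0, 1}, broadcastable over x

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.gate(x)    # e.g. shape (batch, 1, 1, 1)
        return x + z * self.layer(x)
```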
There are many different gating computations to determine the value of $z_l$, which can be set at training time, at inference time, or both. The degenerate case $z_l = 1$ corresponds to the original ResNet architecture. Stochastic depth (SD) [18] can be formalized as setting $\Pr(z_l = 1) = 1 - \frac{l}{L}(1 - p_L)$ during training, where $p_L$ is a hyperparameter set to 0.5 in the SD experiments. During inference, SD sets $z_l = 1$ but uses information gleaned during training to adjust the weights of each layer, where layers that were frequently bypassed are down-weighted. We are particularly interested in AIG [44], which uses probabilistic bypass during both training and inference, along with a gating computation that depends on the input data (i.e., $z_l$ is a function of $x_{l-1}$). Let $G$ be the set of gates in the network and $B$ be the set of instances in some mini-batch. AIG uses a target-rate loss during training, computed on a per-gate basis. Given a target rate $t \in [0, 1]$, this is $$L_G = \frac{1}{|G|} \sum_{g \in G} \left( t - \frac{1}{|B|} \sum_{i \in B} z_{g,i} \right)^2.$$ This loss function encourages each layer to be executed at the target rate. Note that this penalty is symmetric, so bypassing more layers is as expensive as bypassing fewer. The overall loss is the sum of the target loss (denoted $L_{\text{target}}$) and the standard multi-class logistic loss $L_{\text{MC}}$: $L = L_{\text{target}} + L_{\text{MC}}$. For AIG, the target loss is $L_G$. AIG uses the straight-through trick: the $z$'s are categorical during the forward pass but treated as a Gumbel softmax during the backward pass. At inference time AIG is stochastic, since a given layer $l$ might be bypassed depending on its input $x_{l-1}$ and the learned bypass probability for that layer. Related work Conditional computation has been well studied in computer vision. Cascaded classifiers [45] shorten computation by identifying easy negatives and have recently been adapted to deep learning [26,46]. More directly, [14] and [31] both propose cascading architectures which compute features at multiple scales and allow for dynamic evaluation, where at inference time the user can trade off speed for accuracy. Similarly, [43] adds intermediate classifiers and returns a label once the network reaches a specified confidence. [4,6] both use the state of the network to adaptively decrease the number of computational steps during inference: [6] uses an intermediate state sequence and a halting unit to limit the number of blocks that can be executed in an RNN, while [4] learns an image-dependent stopping condition for each ResNet block that conditionally bypasses the rest of the layers in the block. [36] trains a large number of small networks, called experts, and then uses gates to select a sparse combination of the experts for a given input. Another approach to decreasing computation time is network pruning. The earliest works attempted to determine the importance of specific weights [10,24] or hidden units [34] and remove those which are unimportant or redundant. Weight-based pruning of CNNs follows the same fundamental approach: [9] prunes weights with small magnitude, and [8] incorporates this into a pipeline which also includes quantization and Huffman coding. Numerous techniques prune at the channel level, whether through heuristics [13,25] or approximations to importance [12,33,42]. [29] prunes at the filter level using statistics from the following layer. [48] applies binary mask variables to a layer's weight tensors, sorts the weights during training, and then sends the lowest to zero.
[19] is the most related to our data-independent bypass. They add a sparsity regularization and then modify stochastic Accelerated Proximal Gradient to prune the network in an end-to-end fashion. Our work differs from [19] by using GS to integrate the sparsity constraint into an additive loss which can be trained with any optimization technique; we use unmodified stochastic gradient descent with momentum (SGD), the typical technique for training classification networks. Recently, [27] suggested that the main benefits of pruning come primarily from the identified architecture. Our work is also related to regularization techniques such as Dropout [41] and Stochastic Depth [18]. Both techniques try to induce redundancy by stochastically removing parts of the network during training: Dropout ignores individual units, and Stochastic Depth (as described above) skips entire layers. Both provide evidence that the increased redundancy helps to prevent overfitting. These techniques can be seen as applying stochastic gates to units or layers, respectively, where the gate probabilities are hyperparameters. In the Bayesian machine learning community, data-independent gating is used as a form of regularization. This line of work is cast as generalizing hyperparameter-per-weight dropout by learning individual dropout weights. [39] performs pruning by learning multipliers for weights, which are incentivized to be 0 or 1 by a sparsity-encouraging loss $w(1 - w)$. [5] proposes per-weight regularization, using the straight-through Gumbel-Softmax trick. [38] uses a form of trainable dropout, learning a per-neuron gating probability; these probabilities are regularized by their likelihood against a beta distribution, and training is done with the straight-through trick. [40] learns sparsity at the weight level using a binary mask. They adopt a complexity loss which is $L_0$ on the weights, plus a sparsification loss similar to [39]; this is similar to a per-batch loss. [28] extends the straight-through trick with a hard sigmoid to obtain less biased estimates of the gradient. They use a loss equal to the sum of Bernoulli weights, which is similar to a per-batch loss. [32] extends the variational dropout of [21] to allow dropout probabilities greater than one half. Training with the straight-through trick and placing a log-scale uniform prior on the dropout probabilities, they find substantial sparsification with minimal change in accuracy, including on some vision problems. Using probabilistic bypass in deep networks In this section we investigate ways to use probabilistic bypass in deep networks. We propose a per-batch loss function, which we use during training with Gumbel softmax, following AIG. This frequently leads to mode collapse (not dissimilar to that encouraged by the sparsity-encouraging loss in [40]), which effectively prunes network layers. At inference time we take a deterministic approach, and observe that a simple MAP approach gives strong experimental results. Batch loss during training We note that $z_{g,i}$ can be viewed as a random variable depending on the instance, whose expectation is $E_B[z_g] = \frac{1}{|B|} \sum_{i \in B} z_{g,i}$. Seen this way, AIG's per-gate loss is $L_G = E_G\big[(t - E_B[z])^2\big]$, that is, a squared $L_2$ loss on $(t - z)$. The intuition behind the per-gate target loss is that each gate should, in expectation, be open around a fraction $t$ of the time. When the gates are data-independent, this loss encourages each layer to execute with probability $t$.
When the gates are data-dependent, this loss encourages each layer to learn to execute on a fraction $t$ of the training instances. This per-gate target loss was intended to cause the layers of the network to specialize [44]. However, it is not a priori obvious that specialization is the best architecture from a performance perspective. Instead, the most natural approach is to allow the optimizer to select the network configuration which performs best given a target activation rate. With this intuition, we propose the per-batch target loss, which can be trained against using Gumbel softmax, following AIG. For a target rate $t \in [0, 1]$, $$L_B = \left( t - \frac{1}{|G||B|} \sum_{g \in G} \sum_{i \in B} z_{g,i} \right)^2.$$ This can be interpreted as $(t - E_{G,B}[z])^2$, that is, a squared $L_1$ loss on $(t - z)$. This loss only requires the network as a whole to have an activation of $t$: each batch is given capacity $t$ and distributes that capacity among the instances and gates however it chooses. For example, if there were a per-gate configuration with zero training loss that is easily found by optimization, per-batch training would converge to it. Mode collapse With AIG's per-gate loss, each gate independently tries to hit its target rate, which means that the bypass rates will in general be fairly similar among gates. Our per-batch loss, however, allows different layers to have very different bypass rates, a network configuration that would be heavily penalized by AIG. In our experiments we frequently observe a form of mode collapse, where layers are nearly always bypassed or nearly never bypassed. In this situation, our loss function encourages a form of network pruning, where we start with an overcapacitated network and then determine which layers to remove during training. Surprisingly, our experiments demonstrate that we end up with improved accuracy. Inference strategies Once training has produced a deep network with stochastic gates, it is necessary to decide how to perform inference. The simplest approach is to leave the gates in the network and allow them to remain stochastic at inference time; this is the technique that AIG uses. Experimentally, we observe a small variance, so this may be sufficient for most use cases. In addition, one way to take advantage of the stochasticity is to create an ensemble from multiple runs of the same network; any ensemble technique can then be used to combine the different runs: voting, weighting, boosting, etc. In practice, we observe a small bump in accuracy from this ensemble technique, though there is obviously a computational penalty. However, stochasticity has the awkward consequence that multiple classification runs on the same image will often return different results. There are several techniques to remove the stochasticity. The gates can be removed, setting $z_l = 1$ at test time; this is natural when viewing the gates as a regularization technique, and is the technique used by Stochastic Depth. Alternately, inference can be made deterministic by using a threshold $\tau$ instead of sampling: a layer is executed if its learned probability is greater than $\tau$. This also allows the user some small degree of dynamic control over the computational cost of inference; if the user passes in a very high $\tau$, then fewer layers will activate and inference will be faster. In our experiments, we set $\tau = \frac{1}{2}$. Note that we observe mode collapse in a large number of our per-batch experiments (particularly with data-independent gates).
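Both target losses and the thresholding rule are compact enough to state in code. The following sketch is our own illustration, not the paper's code, and assumes the straight-through gate samples for a mini-batch are collected into a (|G|, |B|) tensor:

```python
import torch

def per_gate_loss(z: torch.Tensor, t: float) -> torch.Tensor:
    # AIG's L_G: squared deviation of each gate's batch-mean activation
    # from the target rate t, averaged over the gates.
    return ((t - z.mean(dim=1)) ** 2).mean()

def per_batch_loss(z: torch.Tensor, t: float) -> torch.Tensor:
    # Our L_B: a single squared penalty on the batch-wide mean activation.
    return (t - z.mean()) ** 2

def gate_open(p: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    # Thresholded (deterministic) inference: run a layer only if its
    # learned execution probability exceeds tau.
    return (p > tau).float()
```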
When mode collapse occurs, thresholding can, for a wide range of $\tau$, be interpreted as a pruning technique, where layers whose probability falls below $\tau$ are pruned. Experiments Our primary experiments centered around adding probabilistic bypass to ResNet [11] and running the resulting network on ImageNet [3]. Our main finding is that our techniques improve both accuracy and inference speed. We also perform an empirical investigation of our networks in order to better understand their performance. Additional experiments, including CIFAR [22] as well as ImageNet, and more details are included in the supplemental material. Improving speed and accuracy on ImageNet We have implemented several probabilistic bypass techniques on ResNet-50 and ResNet-101, and explored their performance on ImageNet. Since ResNet-101 is so computationally demanding, we have done more experiments on ResNet-50. Our techniques demonstrate improvements in accuracy and inference time on both ResNet-50 and ResNet-101. Figure 3: CIFAR-10 results. Dependent Per-Gate is our implementation of [44]. Note that Dependent Per-Batch has both higher accuracy and lower activation than any of the other combinations. Architecture and training details We use the baseline architectures of ResNet-50 and ResNet-101 [11] and place gates at the start of each residual layer. We adopt the AIG [44] gate architecture. During training we explore combinations of data-dependent and data-independent gates. We used our per-batch loss, as well as AIG's per-gate loss, with target rates $t \in \{0.4, 0.5, 0.6\}$. We kept the same training schedule as AIG, and followed the standard ResNet training procedure: mini-batch size of 256, momentum of 0.9, and weight decay of $10^{-4}$. We train for 100 epochs from a pretrained model of the appropriate architecture, with a step-wise learning rate starting at 0.1 and decayed by $10^{-1}$ after every 30 epochs. We use standard training data augmentation, and rescale the images to 256×256 followed by a 224×224 center crop. We observe that configurations with low gate activations cause the batch norm estimates of mean and variance to be slightly unstable. Therefore, before final evaluation, we run training with a learning rate of zero and a large batch size for 200 batches in order to improve the stability and performance of the BatchNorm layers. This general technique was also utilized by [44]. Experimental results Our results are shown in figures 2 and 4. The most interesting experimental results are obtained with data-dependent gates and our per-batch loss function $L_B$, along with thresholding at inference time. This combination gives a 0.41 improvement in top-1 error over ResNet-101 while using 30% less computation. It also gives the same improvement in top-1 error over AIG. On ResNet-50, this technique saves significant computation compared to AIG or the baseline ResNet, albeit with a small loss of accuracy. On the ResNet-50 architecture, we also investigated data-independent gates with the per-batch loss, with thresholding at inference time. This produces an improvement in accuracy over our data-dependent architecture, as well as over AIG and vanilla ResNet-50. It saves significant computation (33% fewer gFLOPs) over ResNet-50, and is slightly faster than AIG but slower than the data-dependent architecture. Figure 4 also shows the impact of the thresholding inference strategy, which is used for the 3 columns at right.
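For reference, the training recipe from the architecture details above could be configured as in this sketch; the plain torchvision ResNet-50 is a stand-in for the gated model, which is not reproduced here:

```python
import torch
import torchvision

model = torchvision.models.resnet50(pretrained=True)   # start from a pretrained model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
# Step-wise schedule: decay the learning rate by 10x every 30 of the 100 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
```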
We found that thresholding at inference time often gives the best performance, leading to roughly 0.1–0.2 percentage points improvement in top-1 accuracy. We report the results on CIFAR10 with ResNet-101 in table 3. We note that, for a target rate of 0.5, dependent per-batch has the best performance in both accuracy and average activation. Figures for the remaining target rates can be found in the supplemental materials. Empirical investigations We performed a number of experiments to try to better understand the performance of our architecture. In particular, examining the learned bypass probabilities provides some interesting insights into how the network behaves. Pruning With the per-batch loss, we often observe mode collapse, where some layers are nearly always on and some nearly always off. In the case of data-dependent bypass, we can measure the observed activation of a gate during training. For example, on a per-batch run on ResNet-50 (16 gates) on ImageNet, nearly all of the 16 gates mode collapse, as shown in figure 5: four gates collapsed to a mode of zero or one exactly, and more than half were at their mode more than 99.9% of the time. Interestingly, we observe different activation behavior on different datasets. ImageNet leads to frequent and aggressive mode collapse; all networks exhibited some degree of mode collapse. CIFAR10 can induce mode collapse but does so much less frequently, in roughly 40% of our runs or fewer. Mode collapse can effectively perform end-to-end network pruning. At inference time, layers with near-zero activation can be permanently skipped and even removed from the network entirely, decreasing the number of parameters in the network. In the data-independent per-batch case, the threshold inference technique will permanently skip all layers with probability lower than the threshold value $\tau$, essentially pruning them from the network. Thus, we propose this combination as a pruning technique and report an experimental comparison¹ with other modern pruning techniques, shown in figure 6. ¹We note a discrepancy in the GFlops reported for the baseline ResNet-50 between [44] and [16]. We calculate our numbers the same way as [44]; to compare fairly to [16], we do the most conservative thing and add back the discrepancy. Figure 5: Demonstration of mode collapse on (left) data-dependent per-batch ResNet-50 on ImageNet with a target rate of 0.5, and (right) data-independent per-batch with a target rate of 0.4. Nearly all of the 16 gates collapse. Note that full mode collapse is discouraged by the quadratic loss whenever the target rate $t$ is not equal to an integer over the number of gates $g$. Even if the layers try to mode collapse, the network will either be penalized by $gt \bmod 1$ or learn activations that utilize the extra amount of target rate. Understanding networks The learned activation rates for the various gates can be used to explore the relative importance of the gated layers. If the average activation for a layer in the dependent or independent case is low, this suggests the network has learned that the layer is not very important. Counter-intuitively, our experiments show that early layers are not particularly important in both ResNet-50 and -101. As seen in figure 7 and figure 8, for low-level features, the network keeps only one layer out of the three available. This suggests that fewer low-level features are needed for classification than generally thought. For example, on the ResNet-101 architecture, AIG constrains the three coarsest layers to have a target rate of 1, which indicates that these layers are essential for the rest of the network and must be on.
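The observed per-gate activation rates used throughout this analysis could be measured as in the following sketch; gate_decisions is an assumed hook for collecting the 0/1 gate samples, not an API from the paper:

```python
import torch

@torch.no_grad()
def gate_activation_rates(model, loader, device="cuda"):
    totals, count = None, 0
    for images, _ in loader:
        # Assumed hook: (num_gates, batch) tensor of 0/1 gate decisions.
        z = model.gate_decisions(images.to(device))
        totals = z.sum(dim=1) if totals is None else totals + z.sum(dim=1)
        count += z.shape[1]
    rates = totals / count
    collapsed = (rates < 0.01) | (rates > 0.99)   # nearly always off / on
    return rates, collapsed
```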
We can also experimentally investigate the extent to which layers specialize. AIG [44] uses their per-gate loss to encourage specialization, hoping to reduce overall inference time by letting the network restrict certain layers to be used on specific classes. Although we find that per-batch generally outperforms per-gate in terms of overall activation, we note that in layers which have not mode collapsed, we do observe this kind of specialization even with a per-batch loss. An interesting example of specialization is shown in figure 8. The figure shows activation rates for dependent per-batch ResNet-101 with a target rate of 0.5 using thresholding at inference time. The network has mostly mode collapsed: most layers' activations are either 1 or 0. However, the layers that did not mode collapse show an interesting specialization, similar to what AIG reported. Extensions There are a number of natural extensions to our work that we have started to explore. We have focused on the use of probabilistic bypass gates to provide an early exit, when the network is sufficiently certain of the answer. We are motivated by MSDNet [14], which investigated early exit for both ResNet [11] and DenseNet [17]. We tested the usage of probabilistic bypass gates for early exit on both ResNet and DenseNet. Consistent with [14], we found that ResNet tended to degrade with intermediate classifiers while DenseNet did not. An immediate challenge is that in DenseNet, unlike ResNet, there is no natural interpretation of skipping a layer. Instead, we simply use the gate as a masking term: when the layer computation is skipped, the layer's output is set to zero and then, as per the architecture's design, is passed to later layers. For early exit in DenseNet, we follow [43] and place gates and intermediate classifiers at the end of each dense block. At each gate, the network makes a discrete decision as to whether the instance can be successfully classified at that stage. If the gate returns true, then the instance is run through the classifier and the answer is returned; if the gate returns false, then the instance continues through the network. The advantage of using GS here is that the early exit can be trained in an end-to-end fashion, unlike [43], which uses reinforcement learning. In our experiment, we implemented both early exit and probabilistic bypass on a per-layer basis. We set a per-gate target of 0.9 for layers and a per-gate target of 0.3 for both early exit points, using a piecewise loss that is quadratic before the target rate and constant after it. This function matches the intuition that we should not penalize the network if it can increase the number of early exits without affecting accuracy. We observe that these early exit gates can make good decisions regarding which instances to classify early; more specifically, the first classifier has a much higher accuracy on the instances chosen by the gate than on the entire test set. The network had an overall error of 5.61 while utilizing on average only 68.4% of the layers; our implementation of the original DenseNet architecture achieves an error of 4.76 ([17] reports an error of 4.51). The results for each block classifier are seen in Figure 9. More than a third of the examples exited early, while overall error was still low. This demonstrates the potential of early exit with probabilistic bypass for DenseNet.
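The early-exit control flow described above might be organized as in this sketch; blocks, gates, and classifiers are assumed module interfaces (ours, not the paper's), and single-instance inference is a simplification:

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, blocks, gates, classifiers):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        self.gates = nn.ModuleList(gates)            # each returns a 0/1 tensor
        self.classifiers = nn.ModuleList(classifiers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block, gate, clf in zip(self.blocks, self.gates, self.classifiers):
            x = block(x)
            if gate(x).item() == 1:        # confident: classify and exit early
                return clf(x)
        return self.classifiers[-1](x)     # fall back to the final classifier
```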
Conclusion and future work One intriguing direction to explore is to remove the notion of a target activation rate completely, since it is not obvious what a good target would be for a particular use case. In general a user would prefer an accurate network with fast inference. The exact tradeoff between speed and accuracy will in general vary between applications, but there is no natural way for a user to express such a preference in terms of a target activation rate. Figure 8: Specialization on ResNet-101 with data-dependent per-batch at a target rate of 0.5. The left heatmap uses the stochastic inference strategy and the right heatmap uses the threshold inference strategy. Each vertical stripe is one layer; each row is an ImageNet class. While most layers have mode collapsed, the ones that have not show similar specializations to those seen in [44]. For example, layer 24 runs mostly on fish and lizards, while layer 28 runs specifically on cats and dogs. These layers are highlighted in green. It might be possible to automatically choose a target rate that optimizes a particular combination of inference speed and accuracy. Another promising avenue is the idea of annealing the target rate down from 1. This makes the adjustment to the target loss more gradual and may encourage more possible configurations. Intuitively, this could give the network a greater chance to 'change its mind' regarding a layer and alter the layer's representation instead of always skipping it. We have demonstrated that probabilistic bypass is a powerful tool for optimizing and understanding neural networks. The per-batch loss function that we have proposed, together with thresholding at inference time, has produced strong experimental results both in terms of speed and accuracy. Appendix: Additional extensions and experimental results S1 Ongoing Work Currently we are pursuing several different avenues for extending this project. S1.1 Taskonomy We are working to apply this work to Taskonomy [47], which tries to identify the most important features for a given task. One of the problems that paper faces is the combinatorial explosion in the number of higher-order transfers; specifically, to find the best $k$ features for a given task, they need to exhaustively try $\binom{|S|}{k}$ subsets. In the paper, they rely on a beam search using the performance of a single feature by itself as a proxy for how well the feature will do in a subset. However, this seems highly suboptimal, since it is plausible that some feature will perform poorly on its own but perform well when matched with a complementary feature. Instead, we propose to use all features as input and place probabilistic gates on the input. If the behavior of our data-independent gates remains the same (namely, we observe mode collapse), then we can use our per-batch training schedule to figure out the best subset of features. More specifically, we would restrict the number of features used through the target rate. For example, Taskonomy uses 26 features, so we expect that a target rate of $k/26$ will give the $k$ best features for the task. S1.2 Multi-task Learning More generally, we plan to observe how probabilistic gates affect the common architecture for multi-task learning. We plan to apply data-dependent gates to the different feature representations and allow the network to either use or ignore each representation depending on the input value.
The main motivation for this line of research is that for some inputs a given feature representation may not be useful, and in fact using it may lead to worse results. Therefore the network should be allowed to ignore it depending on the input. S1.3 MobileNet We are actively exploring applying these techniques to MobileNet [35] and have some initial results. Applying the technique as-is gives results that work roughly as well as changing the expansion factor; more specifically, our results are approximately on the line for MobileNetV2 224×224 in Figure 5. We are now working on improving on the line given by the expansion factor. Specifically, we are exploring two directions: (1) increasing the granularity with which we apply the gates, and (2) creating an over-capacitated version of MobileNet and then using our techniques to prune it to the correct size. S2 Additional techniques for training S2.1 Annealing Additionally, in the training stage, we propose annealing the target rate. In particular, we use a step-wise annealing schedule which decreases the target rate by an amount $a$ after every $k$ epochs. Typical values are $a = 0.05$ and $k = 5$. The intuition behind annealing is that, with the per-batch activation loss, annealing prevents the network from greedily killing off the worst starting layers. Instead, layers which perform worse in the beginning have a chance to change their representation. In practice, we have observed that over time layer activations are not always monotonic, and some layers will initially start to be less active but will recover. We observe this behavior more with an annealing schedule than with a fixed target rate. S2.2 Variable target rate As far as we are aware, previous work has used the $L_2$ term exclusively. This has the effect of forcing activations towards a specific target rate and letting them vary only if doing so leads to improvements in accuracy. However, this also prevents a situation where the network can decrease activation while retaining the same accuracy, a scenario which is clearly desirable. As a result, we propose a piecewise activation loss composed of a constant piece and then a quadratic piece, which expresses that there is no penalty for decreasing activation (a code sketch appears below). Let $B_i = \sum_{i \in B} z_{g,i}$ and let $t$ be the target activation rate. For the per-batch setup, this loss is $$L_{CQ,B} = \begin{cases} 0, & \text{for } B_i \le t \\ \left( t - \frac{1}{|G||B|} \sum_{g \in G} B_i \right)^2, & \text{for } B_i > t. \end{cases}$$ Additionally, there are many cases where a target activation is not clear and the user simply wants an accurate network with reduced train time. For this training schedule, we propose variable target rates, which treat the target rate as a moving average of the network's utilization. For each epoch, the target rate starts at a specific hyperparameter (to prevent collapse to 1) and then is allowed to change according to the batch's activation rate. The two simplest possibilities for the update step are (1) a moving average and (2) an exponential moving average. S2.3 Granularity For more granularity, we consider gating blocks of filters separately. In this case, we assume that $F$ has $N$ filters, i.e., $F(x_{l-1})$ has dimension $(W \times H \times N)$. Then we write $$x_l = x_{l-1} + \begin{bmatrix} z_{l,1} F_{l,1}(x_{l-1}) \\ z_{l,2} F_{l,2}(x_{l-1}) \\ \vdots \\ z_{l,n} F_{l,n}(x_{l-1}) \end{bmatrix}$$ where each $F_{l,i}$ has dimension $(W \times H \times (N/n))$. We note that it is not essential that the layers $F$ be residual; we also consider dropping the added $x_{l-1}$ on the right-hand side of these equations. S2.4 Additional Early Exit Details For our early exit classifiers, we use the same classifiers as [14].
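The constant-then-quadratic loss of S2.2 reduces to a one-line clamp; in this sketch (ours), z is again an assumed (|G|, |B|) tensor of straight-through gate samples:

```python
import torch

def cq_per_batch_loss(z: torch.Tensor, t: float) -> torch.Tensor:
    # Zero penalty when the batch-wide activation is at or below the target
    # rate t, quadratic penalty above it.
    return torch.clamp(z.mean() - t, min=0.0) ** 2
```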
For the gate structure, we use a stronger version of the gate described by [44]. The gates are composed of the following: a 3×3 convolutional layer with stride 1 and padding 1, which takes the current state of the model and outputs 128 channels; a BatchNorm; another 3×3 convolutional layer with stride 1 and padding 1 which outputs 128 channels; a BatchNorm; a 4×4 average pool; a linear layer; and finally a Gumbel-Softmax. S3 Observations Observation S3.1 A layer with activation $p$ can only affect the final accuracy by $p$. Observation S3.2 Given a set of layers with activations $p$, we can apply the inclusion-exclusion principle to get an upper bound on the amount these layers can affect the final accuracy of the network. So, for example, if Layer 1 and Layer 2 both run with probability $p$ but always co-occur (activate at the same time), then the set of Layer 1 and Layer 2 can only affect the final accuracy by $p$. We use these observations to motivate an explanation for mode collapse in the data-independent per-batch case. Consider a network with only two layers and the restriction that, in expectation, only one layer should be on. Let $p$ be the probability that layer 1 is on. Intuitively, if $p \notin \{0, 1\}$, then we are in a high-entropy state where the network must deal with a large amount of uncertainty regarding which layers will be active. Furthermore, some of the work of training each individual layer will be wasted, since at inference time that layer will be skipped with non-zero probability. More precisely: Observation S3.3 Consider a network with two layers with data-independent probabilities $p_1$ and $p_2$ of being on, and give the network a hard capacity of 1 (i.e., one layer can be on, or each layer can be on half the time). Let $a_1$ be the expected accuracy of a one-layer network and $a_2$ be the expected accuracy of a two-layer network. Then, in order for $p_1 \notin \{0, 1\}$ to be preferable, we need $\frac{a_2}{a_1} \ge 2$. Since the network is given a hard capacity of 1, we are only learning a single parameter $p_1$, since $p_2 = 1 - p_1$. Let $p = p_1$. Then $p(1-p)$ is the probability that both layers will be on, and also the probability that both layers will be off. Note that the network has a strict upper bound on accuracy of $1 - p(1-p)$, since with probability $p(1-p)$ neither layer will activate and no output will be given. The expected accuracy of the network for any probability $p \in [0, 1]$ is $(1 - 2p + 2p^2)a_1 + p(1-p)a_2$; note that for $p = 0, 1$ the accuracy is simply $a_1$. For a value $p \in (0, 1)$ to be better than $p \in \{0, 1\}$, we need $$a_1 < (1 - 2p + 2p^2)a_1 + p(1-p)a_2 \;\Longleftrightarrow\; \frac{2p - 2p^2}{p(1-p)} < \frac{a_2}{a_1} \;\Longleftrightarrow\; 2 < \frac{a_2}{a_1}.$$ S4 ImageNet Results We report all the data collected on ImageNet using the different gate strategies (independent, dependent), target loss strategies (per-batch, per-gate), and inference-time strategies (threshold, always-on, stochastic, ensemble). Note that we include AIG for reference whenever it is a fair comparison. Also of note, for the ensemble technique we include data from Snapshot Ensembles: Train 1, Get M For Free [15]. Note that using the stochastic networks, we outperform their ensemble technique. Also note that their technique is orthogonal to ours, so both could be utilized to identify an even better ensemble.
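The stronger gate of S2.4 could be realized roughly as follows; nn.LazyLinear and the two-logit gumbel_softmax head are implementation conveniences we assume, not the paper's exact code, with hard=True giving the straight-through behavior used during training:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Gate(nn.Module):
    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(128),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(128),
            nn.AvgPool2d(kernel_size=4),
        )
        self.linear = nn.LazyLinear(2)       # logits for [skip, run]

    def forward(self, x: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        logits = self.linear(self.features(x).flatten(1))
        z = F.gumbel_softmax(logits, tau=tau, hard=True)
        return z[:, 1]                        # hard per-sample run/skip decision
```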
In general, we observe that, unsurprisingly, the ensemble has the highest performance in terms of error; however, it requires multiple forward passes through the network, so the performance gain is somewhat offset by the inference time required. We also observe that threshold generally outperforms stochastic. This roughly makes sense if one considers stochastic inference as drawing a sample from an inference distribution; in this interpretation, thresholding at 0.5 basically acts as an argmax. In addition to the improvement in performance, for the per-batch cases, thresholding also tends to increase the number of activations. For all ImageNet results, we used the pretrained models provided by TorchVision. S5.1 CIFAR10 Performance We report all the data collected on CIFAR10 using the different gate strategies (independent, dependent) and target loss strategies (per-batch, per-gate). We report only the numbers for the stochastic inference-time technique. We used CIFAR10 as a faster way to explore the space of parameters and combinations, and as such we have a denser sweep of the combinations and parameters. Note that for CIFAR10 we did not use a pretrained model; the entire model is trained from scratch. In general, we found that for a wide set of parameters per-batch outperforms per-gate. This includes independent per-batch outperforming dependent per-gate. The only exceptions are very high and very low target rates; however, we note that at very high target rates, the accuracy of per-batch can be recovered through annealing. We attribute this to the fact that for CIFAR10 we train from scratch. Since the model is completely blank for the first several epochs, the per-batch loss can lower activations for any layers while still improving accuracy. In other words, at the beginning, the model is so inaccurate that training on any subset of the model will result in a gain of accuracy; so when training from scratch, the per-batch loss will choose the layers to decrease activations for greedily and sub-optimally. One surprising result is that independent per-gate works at all for a wide range of target rates. This suggests that the redundancy effect described in [18] is so strong that the gates can be kept at inference time. It also suggests that, at least for CIFAR10, most of the gains described in [44] come from regularization and not from specialization. We also report some variable target rate results. We note that these tend to outperform the quadratic loss with a constant target rate. We believe that this is because the variable target rate allows the optimizer to take the easiest and farthest path down the loss manifold. We note that some of the variable target rates that worked on CIFAR10 did not work on ImageNet; namely, variable target rates which updated the target to the previous mean quickly (within 5 epochs) led to mode collapse to 1 for all gates. We attribute this to the much larger amount of training data in ImageNet and the increased complexity of the task. However, both annealing and variable target rates merit further experimentation and research to truly understand how they perform on different datasets and with different training setups (from scratch vs. from pretrained).
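The stochastic-run ensemble evaluated in S4 admits a very simple combination rule; this sketch (ours) averages softmax outputs over several stochastic passes, one choice among the voting/weighting/boosting options mentioned above:

```python
import torch

@torch.no_grad()
def ensemble_predict(model, images: torch.Tensor, runs: int = 5) -> torch.Tensor:
    # Each forward pass samples the gates anew, so the runs differ.
    probs = torch.stack([model(images).softmax(dim=1) for _ in range(runs)])
    return probs.mean(dim=0).argmax(dim=1)   # averaged-probability vote
```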
6,344
1812.04180
2905421523
We investigate learning to probabilistically bypass computations in a network architecture. Our approach is motivated by AIG, where layers are conditionally executed depending on their inputs, and the network is trained against a target bypass rate using a per-layer loss. We propose a per-batch loss function, and describe strategies for handling probabilistic bypass during inference as well as training. The per-batch loss allows the network additional flexibility. In particular, a form of mode collapse becomes plausible, where some layers are nearly always bypassed and some almost never; such a configuration is strongly discouraged by AIG's per-layer loss. We explore several inference-time strategies, including the natural MAP approach. With data-dependent bypass, we demonstrate improved performance over AIG. With data-independent bypass, as in stochastic depth, we observe mode collapse and effectively prune layers. We demonstrate our techniques on ResNet-50 and ResNet-101 for ImageNet, where our techniques produce improved accuracy (.15--.41 in precision@1) with substantially less computation (bypassing 25--40% of the layers).
During training, the activation rate, a hyperparameter set by the user, determines how frequently each individual gate should be open, and a target loss is added to the classification loss, where the activation loss is the squared L2 difference between the target rate and the current activation rate. Our work differs from @cite_31 in several ways. We reformulate the loss as a per-batch loss and consider both data-dependent and data-independent layer bypass. The data-independent per-batch loss results in the network trying to remove enough layers to reach the target rate while retaining accuracy (similar to pruning techniques), while the data-dependent per-batch loss has more flexibility in how it utilizes layers.
{ "abstract": [ "Do convolutional networks really need a fixed feed-forward structure? What if, after identifying the high-level concept of an image, a network could move directly to a layer that can distinguish fine-grained differences? Currently, a network would first need to execute sometimes hundreds of intermediate layers that specialize in unrelated aspects. Ideally, the more a network already knows about an image, the better it should be at deciding which layer to compute next. In this work, we propose convolutional networks with adaptive inference graphs (ConvNet-AIG) that adaptively define their network topology conditioned on the input image. Following a high-level structure similar to residual networks (ResNets), ConvNet-AIG decides for each input image on the fly which layers are needed. In experiments on ImageNet we show that ConvNet-AIG learns distinct inference graphs for different categories. Both ConvNet-AIG with 50 and 101 layers outperform their ResNet counterpart, while using 20% and 33% less computations respectively. By grouping parameters into layers for related classes and only executing relevant layers, ConvNet-AIG improves both efficiency and overall classification quality. Lastly, we also study the effect of adaptive inference graphs on the susceptibility towards adversarial examples. We observe that ConvNet-AIG shows a higher robustness than ResNets, complementing other known defense mechanisms." ], "cite_N": [ "@cite_31" ], "mid": [ "2884751099" ] }
Deep networks with probabilistic gates
Despite the enormous success of convolutional networks [11,23,37], they remain poorly understood and difficult to optimize. A natural line of investigation, which [1] called conditional computation, is to conditionally bypass parts of the network. While inference-time efficiency is an obvious benefit [1], bypassing computations can also improve training time or test performance [4,18,44], and can provide insight into network behavior [18,44]. In this paper we investigate several new probabilistic bypass techniques. We focus on ResNet [11] architectures since these are the mainstay of current deep learning techniques for image classification. The general architecture of a ResNet with probabilistic bypass gates is shown in figure 1. The main idea is that a residual layer, such as f_1, can potentially be bypassed depending on the result of the gating computation g_1, which controls the gate. The gating computation g_i can be data-independent, or it can depend on its input. If every g_i always executes its layer, this describes a conventional ResNet. A more interesting data-independent architecture is stochastic depth [18], where at training time g_i executes a layer with a probability defined by a hyperparameter. At inference time the stochastic depth network is deterministic, though the frequency with which a layer was bypassed during training is used to scale down its weight. With the introduction of Gumbel-Softmax (GS) [2,7,20,30] it became possible to train a network to bypass computations, given some target bypass rate that trades off against training set accuracy. In this sense, probabilistic bypass serves as an additional regularizer for the network. This is the approach taken by AIG [44], where the bypass decision is data-dependent and the loss is per-layer. We propose a per-batch loss function, which allows the network to more flexibly distribute bypass among different layers compared to AIG's per-layer loss. This in turn leads to more advantageous tradeoffs between accuracy and inference speed. When our per-batch loss is applied with data-independent bypass, we observe a form of mode collapse where individual layers are either nearly always bypassed or nearly never bypassed. This effectively prunes layers, and again results in advantageous tradeoffs between accuracy and inference speed. Whether a network uses data-dependent or data-independent probabilistic bypass, there remains the question of how to perform inference. We explore several alternative inference strategies, and provide evidence that the natural MAP approach gives good performance. This paper is organized as follows. We begin by introducing notation and briefly reviewing related work. Section 3 introduces our per-batch loss function and our inference strategies. Experimental results on ImageNet and CIFAR are presented in section 4, followed by a discussion of some natural extensions of our work. Additional experiments and more details are included in the supplemental material.

Notation

We first introduce some notation that allows us to discuss probabilistic bypass more precisely. Following [44], we write the output of layer l ∈ {0, 1, . . . , L} as x_l, with the input image being x_0. We can express the effect of a residual layer F_l in a feed-forward neural network as x_l = x_{l−1} + F_l(x_{l−1}). A probabilistic bypass gate z_l ∈ {0, 1} for this layer then modifies the forward pass to either run or skip the layer: x_l = x_{l−1} + z_l F_l(x_{l−1}).
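The gated forward pass x_l = x_{l−1} + z_l F_l(x_{l−1}) can be sketched in PyTorch as follows; the class and argument names are hypothetical, and a real implementation would skip the branch computation when z = 0 rather than multiply its output by zero.

```python
import torch
import torch.nn as nn

class GatedResidualLayer(nn.Module):
    """Sketch of x_l = x_{l-1} + z_l * F_l(x_{l-1}) with a binary gate z_l."""
    def __init__(self, residual_branch: nn.Module, gate: nn.Module):
        super().__init__()
        self.F = residual_branch       # any residual branch F_l
        self.gate = gate               # returns z in {0, 1}, broadcastable to x

    def forward(self, x):
        z = self.gate(x)               # data-independent gates may ignore x
        return x + z * self.F(x)       # zeroes the branch when z == 0
```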
There are many different gating computations for determining the value of z_l, which can be set at training time, at inference time, or both. The degenerate case z_l = 1 corresponds to the original ResNet architecture. Stochastic depth (SD) [18] can be formalized as setting P(z_l = 1) = 1 − (l/L)(1 − p_L) during training, where p_L is a hyperparameter set to 0.5 in the SD experiments. During inference, SD sets z_l = 1 but uses information gleaned during training to adjust the weights of each layer, down-weighting layers that were frequently bypassed. We are particularly interested in AIG [44], which uses probabilistic bypass during both training and inference, along with a gating computation that depends on the input data (i.e., z_l is a function of x_{l−1}). Let G be the set of gates in the network and B be the set of instances in some mini-batch. AIG uses a target rate loss during training, computed on a per-gate basis. Given a target rate t ∈ [0, 1] this is

L_G = (1/|G|) Σ_{g∈G} ( t − (1/|B|) Σ_{i∈B} z_{g,i} )²

This loss function encourages each layer to be bypassed at the target rate. Note that the penalty is symmetric, so bypassing more layers is as expensive as bypassing fewer. The overall loss is the sum of the target loss (denoted L_target) and the standard multi-class logistic loss L_MC: L = L_target + L_MC. For AIG, the target loss is L_G. AIG uses the straight-through trick: the z's are categorical during the forward pass but treated as a Gumbel softmax during the backward pass. At inference time AIG is stochastic, since a given layer l might be bypassed depending on its input x_{l−1} and the learned bypass probability for that layer.

Related work

Conditional computation has been well studied in computer vision. Cascaded classifiers [45] shorten computation by identifying easy negatives and have recently been adapted to deep learning [26,46]. More directly, [14] and [31] both propose a cascading architecture which computes features at multiple scales and allows for dynamic evaluation, where at inference time the user can trade off speed for accuracy. Similarly, [43] adds intermediate classifiers and returns a label once the network reaches a specified confidence. [4,6] both use the state of the network to adaptively decrease the number of computational steps during inference: [6] uses an intermediate state sequence and a halting unit to limit the number of blocks that can be executed in an RNN, while [4] learns an image-dependent stopping condition for each ResNet block that conditionally bypasses the rest of the layers in the block. [36] trains a large number of small networks, called Experts, and then uses gates to select a sparse combination of the experts for a given input. Another approach to decreasing computation time is network pruning. The earliest works attempted to determine the importance of specific weights [10,24] or hidden units [34] and remove those which are unimportant or redundant. Weight-based pruning on CNNs follows the same fundamental approach: [9] prunes weights with small magnitude, and [8] incorporates this into a pipeline which also includes quantization and Huffman coding. Numerous techniques prune at the channel level, whether through heuristics [13,25] or approximations to importance [12,33,42]. [29] prunes at the filter level using statistics from the following layer. [48] applies binary mask variables to a layer's weight tensors, sorts the weights during training, and sends the lowest to zero.
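Returning to the straight-through trick that AIG uses for its gates (categorical in the forward pass, Gumbel-softmax gradients in the backward pass), this behavior is available directly in PyTorch; sample_gate is a hypothetical helper, not AIG's released code.

```python
import torch
import torch.nn.functional as F

def sample_gate(logits, tau=1.0):
    """Straight-through Gumbel-Softmax over {off, on}: hard=True returns a
    one-hot sample in the forward pass while gradients flow through the
    soft relaxation in the backward pass."""
    z = F.gumbel_softmax(logits, tau=tau, hard=True)   # shape (..., 2)
    return z[..., 1]                                   # 1 means "execute the layer"

logits = torch.zeros(4, 2, requires_grad=True)         # 4 untrained gates
print(sample_gate(logits))                             # e.g. tensor([1., 0., 1., 1.])
```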
[19] is the most related to our data-independent bypass: they add a sparsity regularization and then modify stochastic Accelerated Proximal Gradient to prune the network in an end-to-end fashion. Our work differs from [19] by using GS to integrate the sparsity constraint into an additive loss which can be trained by any optimization technique; we use unmodified stochastic gradient descent with momentum (SGD), the typical technique for training classification. Recently, [27] suggested that the main benefits of pruning come primarily from the identified architecture. Our work is also related to regularization techniques such as Dropout [41] and Stochastic Depth [18]. Both techniques try to induce redundancy by stochastically removing parts of the network during training: Dropout ignores individual units and Stochastic Depth (as described above) skips entire layers. Both provide evidence that the increased redundancy helps to prevent overfitting. These techniques can be seen as applying stochastic gates to units or layers, respectively, where the gate probabilities are hyperparameters. In the Bayesian machine learning community, data-independent gating is used as a form of regularization. This line of work can be cast as generalizing hyperparameter-per-weight dropout by learning individual dropout weights. [39] performs pruning by learning multipliers for weights, which are incentivized to be 0 or 1 by a sparsity-encouraging loss w(1 − w). [5] proposes per-weight regularization, using the straight-through Gumbel-Softmax trick. [38] uses a form of trainable dropout, learning a per-neuron gating probability; these are regularized by their likelihood against a beta distribution, and training is done with the straight-through trick. [40] learns sparsity at the weight level using a binary mask; they adopt a complexity loss which is L_0 on weights, plus a sparsification loss similar to [39]. This is similar to a per-batch loss. [28] extends the straight-through trick with a hard sigmoid to obtain less biased estimates of the gradient; they use a loss equal to the sum of Bernoulli weights, which is also similar to a per-batch loss. [32] extends the variational dropout of [21] to allow dropout probabilities greater than one half. Training with the straight-through trick and placing a log-scale uniform prior on the dropout probabilities, they find substantial sparsification with minimal change in accuracy, including on some vision problems.

Using probabilistic bypass in deep networks

In this section we investigate ways to use probabilistic bypass in deep networks. We propose a per-batch loss function, which we use during training with Gumbel softmax, following AIG. This frequently leads to mode collapse (not dissimilar to that encouraged by the sparsity-encouraging loss in [40]), which effectively prunes network layers. At inference time we take a deterministic approach, and observe that a simple MAP approach gives strong experimental results.

Batch loss during training

We note that z_{g,i} can be viewed as a random variable depending on the instance, whose expectation is E_B[z_g] = (1/|B|) Σ_{i∈B} z_{g,i}. Seen this way, AIG's per-gate loss is L_G = E_G[(t − E_B[z])²], that is, a squared L2 loss on (t − z). The intuition behind the per-gate target loss is that each gate should, in expectation, be open a fraction t of the time. When the gates are data-independent, this loss encourages each layer to execute with probability t.
When the gates are data-dependent, this loss encourages each layer to learn to execute on a fraction t of the training instances. This per-gate target loss was intended to cause the layers of the network to specialize [44]. However, it is not a priori obvious that specialization is the best architecture from a performance perspective. Instead, the most natural approach is to allow the optimizer to select the network configuration which performs best given a target activation rate. With this intuition, we propose the per-batch target loss, which can be trained against using Gumbel softmax, following AIG. For a target rate t ∈ [0, 1],

L_B = ( t − (1/(|G||B|)) Σ_{g∈G} Σ_{i∈B} z_{g,i} )²

This can be interpreted as (t − E_{G,B}[z])², that is, a squared L1 loss on (t − z). This loss only induces the network as a whole to have an activation of t. The intuition is that each batch is given capacity t and distributes that capacity among the instances and gates however it chooses. For example, if there existed a per-gate configuration with zero training loss that was easily found by optimization techniques, then per-batch would converge to it.

Mode collapse

With AIG's per-gate loss, each gate independently tries to hit its target rate, which means that the bypass rates will in general be fairly similar across gates. Our per-batch loss, however, allows different layers to have very different bypass rates, a network configuration that would be heavily penalized by AIG. In our experiments we frequently observe a form of mode collapse, where layers are nearly always bypassed or nearly never bypassed. In this situation, our loss function encourages a form of network pruning, where we start with an overcapacitated network and then determine which layers to remove during training. Surprisingly, our experiments demonstrate that we end up with improved accuracy.

Inference strategies

Once training has produced a deep network with stochastic gates, it is necessary to decide how to perform inference. The simplest approach is to leave the gates in the network and allow them to be stochastic at inference time; this is the technique that AIG uses. Experimentally, we observe a small variance, so this may be sufficient for most use cases. One way to take advantage of the stochasticity is to create an ensemble composed of multiple runs of the same network; any kind of ensemble technique can then be used to combine the different runs: voting, weighing, boosting, etc. In practice, we observe a small bump in accuracy from this ensemble technique, though there is obviously a computational penalty. However, stochasticity has the awkward consequence that multiple classification runs on the same image will often return different results. There are several techniques to remove stochasticity. The gates can be removed, setting z_l = 1 at test time; this is natural when viewing the gates as a regularization technique, and is the technique used by Stochastic Depth. Alternately, inference can be made deterministic by using a threshold τ instead of sampling. Thresholding with value τ means that a layer will be executed if the learned probability is greater than τ. This also allows the user some small degree of dynamic control over the computational cost of inference: if the user passes in a very high τ, then fewer layers will activate and inference will be faster. In our experiments, we set τ = 1/2. Note that we observe mode collapse for a large number of our per-batch experiments (particularly with data-independent gates).
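The contrast between L_G and L_B defined above can be made concrete in a few lines; note how a fully collapsed configuration (half the gates always on, half always off) is free under the per-batch loss but heavily penalized under the per-gate loss.

```python
import torch

def per_gate_loss(z, t):
    """AIG-style L_G for a (num_gates, batch) tensor of 0/1 gate decisions:
    every gate's batch-mean activation is pulled toward t."""
    return ((t - z.mean(dim=1)) ** 2).mean()

def per_batch_loss(z, t):
    """Our L_B: only the overall mean activation is pulled toward t,
    so individual gates are free to collapse to 0 or 1."""
    return (t - z.mean()) ** 2

z = torch.cat([torch.ones(8, 256), torch.zeros(8, 256)])  # collapsed gates
print(per_gate_loss(z, 0.5))    # tensor(0.2500): heavily penalized
print(per_batch_loss(z, 0.5))   # tensor(0.): costless under L_B
```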
In this situation, for a wide range of τ, thresholding can be interpreted as a pruning technique, where layers with probability below τ are pruned.

Experiments

Our primary experiments centered on adding probabilistic bypass to ResNet [11] and running the resulting networks on ImageNet [3]. Our main finding is that our techniques improve both accuracy and inference speed. We also perform an empirical investigation of our networks in order to better understand their performance. Additional experiments, on CIFAR [22] as well as ImageNet, along with more details, are included in the supplemental material.

Improving speed and accuracy on ImageNet

We have implemented several probabilistic bypass techniques on ResNet-50 and ResNet-101, and explored their performance on ImageNet. Since ResNet-101 is so computationally demanding, we have done more experiments on ResNet-50. Our techniques demonstrate improvements in accuracy and inference time on both ResNet-50 and ResNet-101.

Figure 3: CIFAR-10 results. Dependent Per-Gate is our implementation of [44]. Note that Dependent Per-Batch has both higher accuracy and lower activation than any of the other combinations.

Architecture and training details

We use the baseline architectures of ResNet-50 and ResNet-101 [11] and place gates at the start of each residual layer. We adopt the AIG [44] gate architecture. During training we explore different combinations of data-dependent and data-independent gates. We used our per-batch loss, as well as AIG's per-gate loss, with target rates t = {0.4, 0.5, 0.6}. We kept the same training schedule as AIG and followed the standard ResNet training procedure: mini-batch size of 256, momentum of 0.9, and weight decay of 10^-4. We train for 100 epochs from a pretrained model of the appropriate architecture with a step-wise learning rate starting at 0.1 and decaying by a factor of 10 every 30 epochs. We use standard training data augmentation, rescaling the images to 256×256 followed by a 224×224 center crop. We observe that configurations with low gate activations cause the batch norm estimates of mean and variance to be slightly unstable. Therefore, before final evaluation, we run training with a learning rate of zero and a large batch size for 200 batches in order to improve the stability and performance of the BatchNorm layers. This general technique was also utilized by [44].

Experimental results

Our results are shown in figures 2 and 4. The most interesting experimental results are obtained with data-dependent gates and our per-batch loss function L_B, along with thresholding at inference time. This combination gives a 0.41 improvement in top-1 error over ResNet-101 while using 30% less computation. It also gives the same improvement in top-1 error over AIG. On ResNet-50, this technique saves significant computation compared to AIG or the baseline ResNet, albeit with a small loss of accuracy. On the ResNet-50 architecture, we also investigated data-independent gates with per-batch loss and thresholding at inference time. This produces an improvement in accuracy over our data-dependent architecture, as well as over AIG and vanilla ResNet-50. It saves significant computation (33% fewer GFLOPs) over ResNet-50, and is slightly faster than AIG but slower than the data-dependent architecture. Figure 4 also shows the impact of the thresholding inference strategy, which is used for the three columns at right.
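The BatchNorm stabilization pass described above can be sketched as follows; running forward passes in train mode with gradients disabled refreshes the BN running statistics much as zero-learning-rate training does, though this helper is our sketch of the idea, not the authors' exact script.

```python
import torch

def recalibrate_batchnorm(model, loader, num_batches=200, device="cuda"):
    """Refresh BatchNorm running statistics after gating has shifted the
    distribution of activations (cf. the lr = 0 pass described above)."""
    model.train()                      # BN updates running mean/var in train mode
    with torch.no_grad():              # no parameter updates, mimicking lr = 0
        for i, (images, _) in enumerate(loader):
            if i >= num_batches:
                break
            model(images.to(device))
    model.eval()
```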
We found that thresholding at inference time often gives the best performance, leading to roughly 0.1-0.2 percentage points of improvement in top-1 accuracy. We report the results on CIFAR10 and ResNet-101 in table 3. We note that, for a target rate of 0.5, dependent per-batch has the best performance in both accuracy and average activation. Figures for the remaining target rates can be found in the supplemental materials.

Empirical investigations

We performed a number of experiments to try to better understand the performance of our architecture. In particular, examining the learned bypass probabilities provides some interesting insights into how the network behaves.

Pruning

With the per-batch loss, we often observe mode collapse, where some layers are nearly always on and some nearly always off. In the case of data-dependent bypass, we can measure the observed activation of a gate during training. For example, on a per-batch run on ResNet-50 (16 gates) on ImageNet, nearly all of the 16 gates mode collapse, as shown in figure 5: four gates collapsed to a mode of zero or one exactly, and more than half were at their mode more than 99.9% of the time. Interestingly, we observe different activation behavior on different datasets. ImageNet leads to frequent and aggressive mode collapse: all networks exhibited some degree of mode collapse. CIFAR10 can induce mode collapse but does so much less frequently, in under 40% of our runs. Mode collapse can effectively perform end-to-end network pruning. At inference time, layers with near-zero activation can be permanently skipped and even removed from the network entirely, decreasing the number of parameters in the network. In the data-independent per-batch case, the threshold inference technique will permanently skip all layers with probability lower than the threshold value τ, essentially pruning them from the network. Thus, we propose this combination as a pruning technique and report an experimental comparison with other modern pruning techniques, shown in figure 6. (We note a discrepancy in the GFLOPs reported for the baseline ResNet-50 between [44] and [16]; we calculate our numbers the same way as [44], and to compare fairly with [16] we take the most conservative approach and add back the discrepancy.)

Figure 5: Demonstration of mode collapse on (left) data-dependent per-batch ResNet-50 on ImageNet with a target rate of .5, and (right) data-independent per-batch with a target rate of .4. Nearly all of the 16 gates collapse. Note that full mode collapse is discouraged by the quadratic loss whenever the target rate t is not equal to an integer over the number of gates g: even if the layers try to mode collapse, the network will either be penalized by gt mod 1 or learn activations that utilize the extra amount of target rate.

Understanding networks

The learned activation rates for the various gates can be used to explore the relative importance of the gated layers. If the average activation for a layer in the dependent or independent case is low, this suggests the network has learned that the layer is not very important. Counter-intuitively, our experiments show that early layers are not particularly important in both ResNet-50 and -101. As seen in figures 7 and 8, for low-level features the network keeps only one layer out of the three available. This suggests that fewer low-level features are needed for classification than generally thought. For example, on the ResNet-101 architecture, AIG constrains the three coarsest layers to have a target rate of 1, which presumes that these layers are essential for the rest of the network and must be on.
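A sketch of the pruning interpretation discussed above: with data-independent gates, thresholding at τ permanently removes every layer whose learned on-probability falls below τ. The layer names and probabilities here are hypothetical.

```python
def prune_collapsed_gates(gate_probs, tau=0.5):
    """Split layers into kept and pruned sets by their learned on-probability;
    pruned layers can be deleted from the network entirely."""
    kept = [name for name, p in gate_probs.items() if p >= tau]
    pruned = [name for name, p in gate_probs.items() if p < tau]
    return kept, pruned

kept, pruned = prune_collapsed_gates(
    {"layer1.2": 0.99, "layer2.0": 0.01, "layer3.1": 0.97, "layer4.0": 0.03})
print(pruned)   # ['layer2.0', 'layer4.0']
```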
We can also experimentally investigate the extent to which layers specialize. AIG [44] uses its per-gate loss to encourage specialization, hoping to reduce overall inference time by letting the network restrict certain layers to specific classes. Although we find that per-batch generally outperforms per-gate in terms of overall activation, we note that in layers which have not mode collapsed, we do observe this kind of specialization even with a per-batch loss. An interesting example of specialization is shown in figure 8. The figure shows activation rates for dependent per-batch ResNet-101 with a target rate of 0.5 using thresholding at inference time. The network has mostly mode collapsed: most layers' activations are either 1 or 0. However, the layers that did not mode collapse show an interesting specialization, similar to what AIG reported.

Extensions

There are a number of natural extensions to our work that we have started to explore. We have focused on the use of probabilistic bypass gates to provide an early exit, when the network is sufficiently certain of the answer. We are motivated by MSDNet [14], which investigated early exit for both ResNet [11] and DenseNet [17]. We tested the use of probabilistic bypass gates for early exit on both ResNet and DenseNet. Consistent with [14], we found that ResNet tended to degrade with intermediate classifiers while DenseNet did not. An immediate challenge is that in DenseNet, unlike ResNet, there is no natural interpretation of skipping a layer. Instead, we simply use the gate as a masking term: when the layer computation is skipped, the layer's output is set to zero and then, as per the architecture's design, passed to later layers. For early exit in DenseNet, we follow [43] and place gates and intermediate classifiers at the end of each dense block. At the gate, the network makes a discrete decision as to whether the instance can be successfully classified at that stage. If the gate returns true, then the instance is run through the classifier and the answer is returned; if the gate returns false, then the instance continues through the network. The advantage of using GS here is that the early exit can be trained in an end-to-end fashion, unlike [43], which uses reinforcement learning. In our experiment, we implemented both early exit and probabilistic bypass on a per-layer basis. We set a per-gate target of .9 for the layers and a per-gate target of .3 for both early exit points, using a piece-wise loss that is quadratic before the target rate and constant after. This function matches the intuition that we should not penalize the network if it can increase the number of early exits without affecting accuracy. We observe that these early exit gates can make good decisions regarding which instances to classify early; more specifically, the first classifier has a much higher accuracy on the instances chosen by the gate than on the entire test set. The network had an overall error of 5.61 while utilizing on average only 68.4% of the layers; our implementation of the original DenseNet architecture achieves an error of 4.76 ([17] reports an error of 4.51). The results for each block classifier are shown in figure 9. More than a third of examples exited early, while overall error was still low. This demonstrates the potential of early exit with probabilistic bypass for DenseNet.
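A single-instance sketch of the early-exit control flow described above; the module lists, the hard 0.5 cutoff on the gate output, and the function name are illustrative assumptions rather than our exact implementation.

```python
def forward_with_early_exit(blocks, gates, classifiers, x):
    """After each dense block, a gate decides whether this instance can be
    classified now; if no gate fires, the final classifier is used."""
    for block, gate, classifier in zip(blocks, gates, classifiers):
        x = block(x)
        if gate(x).item() > 0.5:       # discrete exit decision (trained via GS)
            return classifier(x)       # classify early, skipping later blocks
    return classifiers[-1](x)
```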
Conclusion and future work

One intriguing direction to explore is to remove the notion of a target activation rate completely, since it is not obvious what a good target would be for a particular use case. In general a user would prefer an accurate network with fast inference. The exact tradeoff between speed and accuracy will in general vary between applications, but there is no natural way for a user to express such a preference in terms of a target activation rate. It might be possible to automatically choose a target rate that optimizes a particular combination of inference speed and accuracy. Another promising avenue is the idea of annealing the target rate down from 1. This makes the adjustment to the target loss more gradual and may encourage more possible configurations. Intuitively, this could give the network a greater chance to 'change its mind' regarding a layer and alter the layer's representation instead of always skipping it. We have demonstrated that probabilistic bypass is a powerful tool for optimizing and understanding neural networks. The per-batch loss function that we have proposed, together with thresholding at inference time, has produced strong experimental results both in terms of speed and accuracy.

Figure 8: Specialization on ResNet-101 with data-dependent per-batch at a target rate of 0.5. The left heatmap uses the stochastic inference strategy and the right heatmap uses the threshold inference strategy. Each vertical stripe is one layer; each row is an ImageNet class. While most layers have mode collapsed, the ones that have not show similar specializations to those seen in [44]. For example, layer 24 runs mostly on fish and lizards, while layer 28 runs specifically on cats and dogs. These layers are highlighted in green.

Appendix: Additional extensions and experimental results

S1 Ongoing Work

Currently we are pursuing several different avenues for extending this project.

S1.1 Taskonomy

We are working to apply this work to Taskonomy [47], which tries to identify the most important features for a given task. One of the problems that paper faces is the combinatorial explosion in the number of higher-order transfers; specifically, to find the best k features for a given task, they need to exhaustively try all |S|-choose-k subsets. In the paper, they rely on a beam search using the performance of a single feature by itself as a proxy for how well the feature will do in a subset. However, this seems highly suboptimal, since it is plausible that some feature will perform poorly on its own but perform well when matched with a complementary feature. Instead, we propose to use all features as input and place probabilistic gates on the input. If the behavior of our data-independent gates remains the same (namely, we observe mode collapse), then we can use our per-batch training schedule to figure out the best subset of features. More specifically, we would restrict the number of features used through the target rate. For example, Taskonomy uses 26 features, so we expect that a target rate of k/26 will give the k best features for the task.

S1.2 Multi-task Learning

More generally, we plan to observe how probabilistic gates affect the common architecture for multi-task learning. We plan to apply data-dependent gates to the different feature representations and allow the network to either use or ignore a representation depending on the input value; a sketch of this gating-over-features idea follows.
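Both S1.1 and S1.2 amount to placing gates over input feature representations; here is a hypothetical sketch using data-independent gates trained with the per-batch loss at a target rate of k/26 (all class, tensor, and dimension names are ours, not from the Taskonomy codebase).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureGates(nn.Module):
    """Data-independent straight-through gates over input feature sets."""
    def __init__(self, num_features=26):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_features, 2))

    def forward(self, feats):              # feats: (batch, num_features, dim)
        z = F.gumbel_softmax(self.logits, tau=1.0, hard=True)[:, 1]
        return feats * z.view(1, -1, 1), z

gates = FeatureGates()
gated, z = gates(torch.randn(8, 26, 512))
k = 5
target_loss = (k / 26 - z.mean()) ** 2     # mode collapse should keep ~k features
```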
The main motivation for this line of research is that for some inputs a given feature representation may not be useful, and in fact using it may lead to worse results. Therefore the network should be allowed to ignore a representation depending on the input.

S1.3 MobileNet

We are actively exploring applying these techniques to MobileNet [35] and have some initial results. Applying the technique as-is gives results that work roughly as well as changing the expansion factor; more specifically, our results lie approximately on the line for MobileNetV2 224 × 224 in Figure 5. We are now working to improve on the line given by the expansion factor. Specifically, we are exploring two directions: (1) increasing the granularity with which we apply the gates, and (2) creating an over-capacitated version of MobileNet and then using our techniques to prune it to the correct size.

S2 Additional techniques for training

S2.1 Annealing

In the training stage, we additionally propose annealing the target rate. In particular, we use a step-wise annealing schedule which decreases the target rate by a after every k epochs. Typical values are a = .05 and k = 5. The intuition behind annealing is that, with the per-batch activation loss, annealing prevents the network from greedily killing off the worst starting layers. Instead, layers which perform worse in the beginning have a chance to change their representation. In practice, we have observed that layer activations are not always monotonic over time, and some layers which initially start to become less active will recover. We observe this behavior more with an annealing schedule than with a fixed target rate.

S2.2 Variable target rate

As far as we are aware, previous work has used the L2 term exclusively. This has the effect of forcing activations towards a specific target rate and letting them vary only if doing so leads to improvements in accuracy. However, it also prevents a situation where the network decreases activation while retaining the same accuracy, a scenario which is clearly desirable. As a result, we propose a piece-wise activation loss, composed of a constant part and then a quadratic part, which imposes no penalty for decreasing activation. Let B_g = Σ_{i∈B} z_{g,i}, let z̄ = (1/(|G||B|)) Σ_{g∈G} B_g denote the mean activation, and let t be the target activation rate. For the per-batch setup, this loss is

L_CQ,B = 0 for z̄ ≤ t, and L_CQ,B = (t − z̄)² for z̄ > t.

Additionally, there are many cases where a target activation is not clear and the user simply wants an accurate network with reduced train time. For this training schedule, we propose variable target rates, which treat the target rate as a moving average of the network utilization. For each epoch, the target rate starts at a specific hyperparameter (to prevent collapse to 1) and is then allowed to change according to the batch's activation rate. The two simplest possibilities for the update step are (1) a moving average and (2) an exponential moving average.

S2.3 Granularity

For more granularity, we consider gating blocks of filters separately. In this case, we assume that F has N filters, i.e., F(x_{l−1}) has dimension (W × H × N). Then we write

x_l = x_{l−1} + [ z_{l,1} F_{l,1}(x_{l−1}) ; z_{l,2} F_{l,2}(x_{l−1}) ; . . . ; z_{l,n} F_{l,n}(x_{l−1}) ]

where the n blocks are concatenated along the channel dimension and each F_{l,i} has dimension (W × H × (N/n)). We note that it is not essential that the layers F be residual; we also consider dropping the added x_{l−1} on the right-hand side of these equations.

S2.4 Additional Early Exit Details

For our early exit classifiers, we use the same classifiers as [14].
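A sketch of the constant-then-quadratic loss from S2.2 together with an exponential-moving-average variable target rate; the class and hyperparameter names are ours, not from the paper.

```python
import torch

def constant_quadratic_loss(z, t):
    """L_CQ,B: zero penalty at or below the target activation, quadratic above.
    z holds the 0/1 gate decisions for all gates and instances."""
    mean_act = z.mean()
    return torch.clamp(mean_act - t, min=0) ** 2

class EMATargetRate:
    """Variable target rate tracked as an exponential moving average of the
    network's observed utilization; reset to `start` at each epoch."""
    def __init__(self, start=0.9, momentum=0.99):
        self.t, self.momentum = start, momentum

    def update(self, batch_activation):
        self.t = self.momentum * self.t + (1 - self.momentum) * batch_activation
        return self.t
```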
6,344
1812.04180
2905421523
Another approach to decreasing computation time is network pruning. The earliest works attempted to determine the importance of specific weights @cite_7 @cite_44 or hidden units @cite_0 and remove those which are unimportant or redundant. Weight-based pruning on CNNs follows the same fundamental approach: @cite_9 prunes weights with small magnitude, and @cite_48 incorporates this into a pipeline which also includes quantization and Huffman coding. Numerous techniques prune at the channel level, whether through heuristics @cite_42 @cite_19 or approximations to importance @cite_14 @cite_45 @cite_47. @cite_49 prunes at the filter level using statistics from the following layer. @cite_4 applies binary mask variables to a layer's weight tensors, sorts the weights during training, and sends the lowest to zero.
{ "abstract": [ "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhance the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5x speed-up along with only 0.3% increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4% and 1.0% accuracy loss under 2x speed-up respectively, which is significant. Code has been made publicly available.", "Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model. Recent reports (, 2015; , 2017) prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size. This hints at the possibility that the baseline models in these experiments are perhaps severely over-parameterized at the outset and a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model's dense connection structure, exposing a similar trade-off in model size and accuracy. We investigate these two distinct paths for model compression within the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models/datasets with minimal tuning and can be seamlessly incorporated within the training process. We compare the accuracy of large, but pruned models (large-sparse) and their smaller, but dense (small-dense) counterparts with identical memory footprint. Across a broad range of neural network architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find large-sparse models to consistently outperform small-dense models and achieve up to 10x reduction in number of non-zero parameters with minimal loss in accuracy.", "We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and in some cases enable rule extraction. Our method, Optimal Brain Surgeon (OBS), is significantly better than magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Solla, 1990], which often remove the wrong weights. OBS permits the pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H^-1 from training data and structural information of the net. OBS permits a 90%, a 76%, and a 62% reduction in weights over backpropagation with weight decay on three benchmark MONK's problems [, 1991]. Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case.
Finally, whereas Sejnowski and Rosenberg [1987] used 18,000 weights in their NETtalk network, we used OBS to prune a network to just 1560 weights, yielding better generalization.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. 
After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.", "We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and/or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.", "This paper proposes a means of using the knowledge in a network to determine the functionality or relevance of individual units, both for the purpose of understanding the network's behavior and improving its performance. The basic idea is to iteratively train the network to a certain performance criterion, compute a measure of relevance that identifies which input or hidden units are most critical to performance, and automatically trim the least relevant units. This skeletonization technique can be used to simplify networks by eliminating units that convey redundant information; to improve learning performance by first learning with spare hidden units and then trimming the unnecessary ones away, thereby constraining generalization; and to understand the behavior of networks in terms of minimal \"rules.\"", "The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.", "", "We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. 
We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31× FLOPs reduction and 16.63× compression on VGG-16, with only 0.52% top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1% top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.", "" ], "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_48", "@cite_9", "@cite_42", "@cite_44", "@cite_0", "@cite_19", "@cite_45", "@cite_49", "@cite_47" ], "mid": [ "2737121650", "2952344559", "2125389748", "2119144962", "2963674932", "2495425901", "2114766824", "2134273960", "2515385951", "", "2964233199", "" ] }
Deep networks with probabilistic gates
Despite the enormous success of convolutional networks [11,23,37], they remain poorly understood and difficult to optimize. A natural line of investigation, which [1] called conditional computation, is to conditionally bypass parts of the network. While inference-time efficiency could obviously benefit [1], bypassing computations can improve training time or test performance [4,18,44], and can provide insight into network behavior [18,44]. In this paper we investigate several new probabilistic bypass techniques. We focus on ResNet [11] architectures since these are the mainstay of current deep learning techniques for image classification. The general architecture of a ResNet with probabilistic bypass gates is shown in figure 1. The main idea is that a residual layer, such as f 1 , can potentially be bypassed depending on the results of the gating computation g 1 , which controls the gate. The gating computation g i can be data-independent, or it can depend on its input. If a g i always executes its layer, this describes a conventional ResNet. A more interesting data-independent architecture is stochastic depth [18], where at training time g i executes a layer with probability defined by a hyperparameter. At inference time the stochastic depth network is deterministic, though the frequency with which a layer was bypassed during training is used to scale down its weight. With the introduction of Gumbel-Softmax (GS) [2,7,20,30] it became possible to train a network to bypass computations, given some target bypass rate that trades off against training set accuracy. In this sense, probabilistic bypass serves as an additional regularizer for the network. This is the approach taken by AIG [44], where the bypass decision is data-dependent and the loss is per-layer. We propose a per-batch loss function, which allows the network to more flexibly distribute bypass among different layers, compared to AIG's per-layer loss. This in turn leads to more advantageous tradeoffs between accuracy and inference speed. When our per-batch loss is applied with data-independent bypass, we observe a form of mode collapse where individual layers are either nearly always bypassed or nearly never bypassed. This effectively prunes layers, and again results in advantageous tradeoffs between accuracy and inference speed. Whether a network uses data-dependent or data-independent probabilistic bypass, there remains a question of how to perform inference. We explore several alternative inference strategies, and provide evidence that the natural MAP approach gives good performance. This paper is organized as follows. We begin by introducing notation and briefly reviewing related work. Section 3 introduces our per-batch loss function and our inference strategies. Experimental results on ImageNet and CIFAR are presented in section 4, followed by a discussion of some natural extensions of our work. Additional experiments and more details are included in the supplemental material. Notation We first introduce some notation that allows us to more precisely discuss probabilistic bypass. Following [44], we write the output of layer l ∈ {0, 1, . . . , L} as x l , with the input image being x 0 . We can express the effect of a residual layer F l in a feed-forward neural network as x l = x l−1 + F l (x l−1 ). Then a probabilistic bypass gate z l ∈ {0, 1} for this layer modifies the forward pass to either run or skip the layer: x l = x l−1 + z l F l (x l−1 ). 
There are many different gating computations to determine the value of $z_l$, which can be set at training time, at inference time, or both. The degenerate case $z_l = 1$ corresponds to the original ResNet architecture. Stochastic depth (SD) [18] can be formalized as setting $z_l = 1$ with probability $p_l = 1 - \frac{l}{L}(1 - p_L)$ during training, where $p_L$ is a hyperparameter set to 0.5 in the SD experiments. During inference, SD sets $z_l = 1$ but uses information gleaned during training to adjust the weights of each layer, down-weighting layers that were frequently bypassed. We are particularly interested in AIG [44], which uses probabilistic bypass during both training and inference, along with a gating computation that depends on the input data (i.e., $z_l$ is a function of $x_{l-1}$). Let $G$ be the set of gates in the network and $B$ be the set of instances in some mini-batch. AIG uses a target-rate loss during training, computed on a per-gate basis. Given a target rate $t \in [0, 1]$, this is $$L_G = \frac{1}{|G|} \sum_{g \in G} \Big( t - \frac{1}{|B|} \sum_{i \in B} z_{g,i} \Big)^2.$$ This loss function encourages each layer to be bypassed at the target rate. Note that this penalty is symmetric, so bypassing more layers is as expensive as bypassing fewer. The overall loss is the sum of the target loss (denoted $L_{target}$) and the standard multi-class logistic loss $L_{MC}$: $L = L_{target} + L_{MC}$. For AIG, the target loss is $L_G$. AIG uses the straight-through trick: the $z$'s are categorical during the forward pass but treated as a Gumbel softmax during the backward pass. At inference time AIG is stochastic, since a given layer $l$ might be bypassed depending on its input $x_{l-1}$ and the learned bypass probability for that layer. Related work Conditional computation has been well studied in computer vision. Cascaded classifiers [45] shorten computation by identifying easy negatives and have recently been adapted to deep learning [26,46]. More directly, [14] and [31] both propose a cascading architecture which computes features at multiple scales and allows for dynamic evaluation, where at inference time the user can trade off speed for accuracy. Similarly, [43] adds intermediate classifiers and returns a label once the network reaches a specified confidence. [4,6] both use the state of the network to adaptively decrease the number of computational steps during inference. [6] uses an intermediate state sequence and a halting unit to limit the number of blocks that can be executed in an RNN; [4] learns an image-dependent stopping condition for each ResNet block that conditionally bypasses the rest of the layers in the block. [36] trains a large number of small networks, called Experts, and then uses gates to select a sparse combination of the experts for a given input. Another approach to decreasing computation time is network pruning. The earliest works attempted to determine the importance of specific weights [10,24] or hidden units [34] and remove those which are unimportant or redundant. Weight-based pruning on CNNs follows the same fundamental approach; [9] prunes weights with small magnitude and [8] incorporates these into a pipeline which also includes quantization and Huffman coding. Numerous techniques prune at the channel level, whether through heuristics [13,25] or approximations to importance [12,33,42]. [29] prunes at the filter level using statistics from the following layer. [48] applies binary mask variables to a layer's weight tensors, sorts the weights during training, and then sends the lowest to zero.
[19] is the most related to our data-independent bypass. They add a sparsity regularization term and modify stochastic Accelerated Proximal Gradient to prune the network in an end-to-end fashion. Our work differs from [19] by using GS to integrate the sparsity constraint into an additive loss which can be trained by any optimization technique; we use unmodified stochastic gradient descent with momentum (SGD), the typical technique for training classification networks. Recently, [27] suggested that the main benefits of pruning come primarily from the identified architecture. Our work is also related to regularization techniques such as Dropout [41] and Stochastic Depth [18]. Both techniques try to induce redundancy by stochastically removing parts of the network during training. Dropout ignores individual units and Stochastic Depth (as described above) skips entire layers. Both provide evidence that the increased redundancy helps to prevent overfitting. These techniques can be seen as applying stochastic gates to units or layers, respectively, where the gate probabilities are hyperparameters. In the Bayesian machine learning community, data-independent gating is used as a form of regularization. This line of work is cast as generalizing hyperparameter-per-weight dropout by learning individual dropout weights. [39] performs pruning by learning multipliers for weights, which are incentivized to be binary by a sparsity-encouraging loss $w(1-w)$. [5] proposes per-weight regularization, using the straight-through Gumbel-Softmax trick. [38] uses a form of trainable dropout, learning a per-neuron gating probability. These are regularized by their likelihood against a beta distribution, and training is done with the straight-through trick. [40] learns sparsity at the weight level using a binary mask. They adopt a complexity loss which is $L_0$ on weights, plus a sparsification loss similar to [39]. This is similar to a per-batch loss. [28] extends the straight-through trick with a hard sigmoid to obtain less biased estimates of the gradient. They use a loss equal to the sum of Bernoulli weights, which is similar to a per-batch loss. [32] extends the variational dropout in [21] to allow dropout probabilities greater than a half. Training with the straight-through trick and placing a log-scale uniform prior on the dropout probabilities, they find substantial sparsification with minimal change in accuracy, including on some vision problems. Using probabilistic bypass in deep networks In this section we investigate ways to use probabilistic bypass in deep networks. We propose a per-batch loss function, which we use during training with Gumbel softmax following AIG. This frequently leads to mode collapse (not dissimilar to that encouraged by the sparsity-encouraging loss in [40]), which effectively prunes network layers. At inference time we take a deterministic approach, and observe that a simple MAP approach gives strong experimental results. Batch loss during training We note that $z_{g,i}$ can be viewed as a random variable depending on the instance, whose expectation is $E_B[z_g] = \frac{1}{|B|} \sum_{i \in B} z_{g,i}$. Seen this way, AIG's per-gate loss is $L_G = E_G\big[(t - E_B[z])^2\big]$, that is, a squared $L_2$ loss on $(t - z)$. The intuition for the per-gate target loss is that each gate should, in expectation, be open a fraction $t$ of the time. When the gates are data-independent, this loss encourages each layer to execute with probability $t$.
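As a concrete illustration of the per-gate target loss $L_G$, here is a minimal sketch (our own illustration, not the authors' code), assuming the straight-through gate decisions for one mini-batch have been stacked into a tensor of shape (num_gates, batch_size):

```python
# Sketch of AIG's per-gate target loss:
# L_G = (1/|G|) * sum_g ( t - (1/|B|) * sum_i z_{g,i} )^2
import torch

def per_gate_target_loss(z: torch.Tensor, t: float) -> torch.Tensor:
    per_gate_rate = z.mean(dim=1)             # execution rate of each gate
    return ((t - per_gate_rate) ** 2).mean()  # mean squared deviation

# As in the text, the overall loss adds the classification term:
#   L = F.cross_entropy(class_logits, labels) + per_gate_target_loss(z, t)
```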
When the gates are data-dependent, this loss encourages each layer to learn to execute on a fraction $t$ of the training instances. This per-gate target loss was intended to cause the layers of the network to specialize [44]. However, it is not a priori obvious that specialization is the best architecture from a performance perspective. Instead, the most natural approach is to allow the optimizer to select the network configuration which performs best given a target activation rate. With this intuition, we propose a per-batch target loss, which can be trained against using Gumbel softmax following AIG. For a target rate $t \in [0, 1]$, $$L_B = \Big( t - \frac{1}{|G||B|} \sum_{g \in G} \sum_{i \in B} z_{g,i} \Big)^2.$$ This can be interpreted as $(t - E_{G,B}[z])^2$, that is, a squared $L_1$ loss on $(t - z)$. This loss only induces the network to have an average activation of $t$. The intuition is that each batch is given capacity $t$ and distributes that capacity among the instances and gates however it chooses. For example, if there were a per-gate configuration with zero training loss that is easily found by optimization techniques, then per-batch training would converge to it. Mode collapse With AIG's per-gate loss, each gate independently tries to hit its target rate, which means that the bypass rates will in general be fairly similar among gates. Our per-batch loss, however, allows different layers to have very different bypass rates, a network configuration that would be heavily penalized by AIG. In our experiments we frequently observe a form of mode collapse, where layers are nearly always bypassed or nearly never bypassed. In this situation, our loss function encourages a form of network pruning, where we start with an overcapacitated network and then determine which layers to remove during training. Surprisingly, our experiments demonstrate that we end up with improved accuracy. Inference strategies Once training has produced a deep network with stochastic gates, it is necessary to decide how to perform inference. The simplest approach is to leave the gates in the network and allow them to be stochastic at inference time. This is the technique that AIG uses. Experimentally, we observe a small variance, so this may be sufficient for most use cases. In addition, one way to take advantage of the stochasticity is to create an ensemble composed of multiple runs of the same network. Any kind of ensemble technique can then be used to combine the different runs: voting, weighting, boosting, etc. In practice, we observe a small bump in accuracy from this ensemble technique, though there is obviously a computational penalty. However, stochasticity has the awkward consequence that multiple classification runs on the same image will often return different results. There are several techniques to remove stochasticity. The gates can be removed, setting $z_l = 1$ at test time. This is natural when viewing these gates as a regularization technique, and is the technique used by Stochastic Depth. Alternately, inference can be made deterministic by using a threshold $\tau$ instead of sampling. Thresholding with value $\tau$ means that a layer will be executed if the learned probability is greater than $\tau$. This also allows the user some small degree of dynamic control over the computational cost of inference. If the user passes in a very high $\tau$, then fewer layers will activate and inference will be faster. In our experiments, we set $\tau = \frac{1}{2}$. Note that we observe mode collapse for a large number of our per-batch experiments (particularly with data-independent gates).
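The following sketch (again illustrative, with assumed names) contrasts the proposed per-batch loss $L_B$ with the per-gate version above, and shows the thresholded inference rule with $\tau = \frac{1}{2}$:

```python
# Sketch of the per-batch target loss and thresholded inference.
import torch

def per_batch_target_loss(z: torch.Tensor, t: float) -> torch.Tensor:
    # z: (num_gates, batch_size) straight-through 0/1 gate decisions.
    # L_B = ( t - (1/(|G||B|)) * sum_g sum_i z_{g,i} )^2 -- one global
    # squared deviation, so capacity can be distributed across gates.
    return (t - z.mean()) ** 2

def thresholded_gate(p_execute: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    # Deterministic inference: execute a layer iff its learned execution
    # probability exceeds tau; raising tau trades accuracy for speed.
    return (p_execute > tau).float()
```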
In this situation, for a wide range of $\tau$, thresholding can be interpreted as a pruning technique, where layers below a certain probability $\tau$ are pruned. Experiments Our primary experiments centered on adding probabilistic bypass to ResNet [11] and running the resulting networks on ImageNet [3]. Our main finding is that our techniques improve both accuracy and inference speed. We also perform an empirical investigation into our networks in order to better understand their performance. Additional experiments, on CIFAR [22] as well as ImageNet, and more details are included in the supplemental material. Improving speed and accuracy on ImageNet We have implemented several probabilistic bypass techniques on ResNet-50 and ResNet-101, and explored their performance on ImageNet. Since ResNet-101 is so computationally demanding, we have done more experiments on ResNet-50. Our techniques demonstrate improvements in accuracy and inference time on both ResNet-50 and ResNet-101. Figure 3: CIFAR-10 results. Dependent Per-Gate is our implementation of [44]. Note that Dependent Per-Batch has both higher accuracy and lower activation than any of the other combinations. Architecture and training details We use the baseline architecture of ResNet-50 and ResNet-101 [11] and place gates at the start of each residual layer. We adopt the AIG [44] gate architecture. During training we explore different combinations of data-dependent and data-independent gates. We used our per-batch loss, as well as AIG's per-gate loss, with target rates $t \in \{0.4, 0.5, 0.6\}$. We kept the same training schedule as AIG, and followed the standard ResNet training procedure: mini-batch size of 256, momentum of 0.9, and weight decay of $10^{-4}$. We train for 100 epochs from a pretrained model of the appropriate architecture, with a step-wise learning rate starting at 0.1 and decayed by a factor of 10 after every 30 epochs. We use standard training data augmentation, and rescale the images to 256×256 followed by a 224×224 center crop. We observe that configurations with low gate activations cause the batch norm estimates of mean and variance to be slightly unstable. Therefore, before final evaluation, we run training with a learning rate of zero and a large batch size for 200 batches in order to improve the stability and performance of the BatchNorm layers. This general technique was also utilized by [44]. Experimental results Our results are shown in figures 2 and 4. The most interesting experimental results are obtained with data-dependent gates and our per-batch loss function $L_B$, along with thresholding at inference time. This combination gives a 0.41 percentage-point improvement in top-1 error over ResNet-101 while using 30% less computation. It also gives the same improvement in top-1 error over AIG. On ResNet-50, this technique saves significant computation compared to AIG or the baseline ResNet, albeit with a small loss of accuracy. On the ResNet-50 architecture, we also investigated data-independent gates with per-batch loss, with thresholding at inference time. This produces an improvement in accuracy over our data-dependent architecture, as well as over AIG and vanilla ResNet-50. It saves significant computation (33% fewer GFLOPs) over ResNet-50, and is slightly faster than AIG but slower than the data-dependent architecture. Figure 4 also shows the impact of the thresholding inference strategy, which is used for the 3 columns at right.
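Below is a sketch of the BatchNorm recalibration pass mentioned in the training details: forward passes in training mode (so BN running statistics are updated) with no optimizer steps, i.e. an effective learning rate of zero. This is our reading of the procedure, not released code; `model` and `loader` are assumed to exist.

```python
# Sketch: refresh BatchNorm running statistics without updating weights.
import torch

@torch.no_grad()
def recalibrate_batchnorm(model, loader, num_batches: int = 200):
    model.train()  # BN layers update running mean/var only in train mode
    for i, (images, _) in enumerate(loader):
        if i >= num_batches:
            break
        model(images)  # forward only: no backward pass, no optimizer step
    model.eval()
```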
We found that thresholding at inference time often gives the best performance, leading to roughly .1-.2 percentage points of improvement in top-1 accuracy. We report results on CIFAR10 and ResNet-101 in table 3. We note that, for a target rate of 0.5, dependent per-batch has the best performance in both accuracy and average activation. Figures for the remaining target rates can be found in the supplemental materials. Empirical investigations We performed a number of experiments to try to better understand the performance of our architecture. In particular, examining the learned bypass probabilities provides some interesting insights into how the network behaves. Pruning With the per-batch loss, we often observe mode collapse, where some layers are nearly always on and some nearly always off. In the case of data-dependent bypass, we can measure the observed activation of a gate during training. For example, on a per-batch run on ResNet-50 (16 gates) on ImageNet, nearly all of the 16 gates mode collapse, as shown in figure 5: four gates collapsed to a mode of zero or one exactly; more than half were at their mode more than 99.9% of the time. Interestingly, we observe different activation behavior on different datasets. ImageNet leads to frequent and aggressive mode collapse: all networks exhibited some degree of it. CIFAR10 can induce mode collapse but does so much less frequently, in fewer than approximately 40% of our runs. Mode collapse can effectively perform end-to-end network pruning. At inference time, layers with near-zero activation can be permanently skipped and even removed from the network entirely, decreasing the number of parameters in the network. In the data-independent per-batch case, the threshold inference technique will permanently skip all layers with probability lower than the threshold value $\tau$, essentially pruning them from the network. Thus, we propose this combination as a pruning technique and report an experimental comparison with other modern pruning techniques, shown in figure 6. (Footnote 1: We note a discrepancy in the GFLOPs reported for the baseline ResNet-50 between [44] and [16]. We calculate our numbers the same way as [44]; to compare fairly to [16], we do the most conservative thing and add back the discrepancy.) Figure 5: Demonstration of mode collapse on (left) data-dependent per-batch ResNet-50 on ImageNet with a target rate of .5, and (right) data-independent per-batch with a target rate of .4. Nearly all of the 16 gates collapse. Note that full mode collapse is discouraged by the quadratic loss whenever the target rate $t$ is not equal to an integer over the number of gates $g$. Even if the layers try to mode collapse, the network will either be penalized by $gt \bmod 1$ or learn activations that utilize the extra amount of target rate. Understanding networks The learned activation rates for the various gates can be used to explore the relative importance of the gated layers. If the average activation for a layer, in either the dependent or independent case, is low, this suggests the network has learned that the layer is not very important. Counter-intuitively, our experiments show that early layers are not particularly important in both ResNet-50 and ResNet-101. As seen in figures 7 and 8, for low-level features, the network only keeps one layer out of the three available. This suggests that fewer low-level features are needed for classification than generally thought. By contrast, on the ResNet-101 architecture, AIG constrains the three coarsest layers to have a target rate of 1, on the assumption that these layers are essential for the rest of the network and must be on.
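As an illustration of the thresholding-as-pruning rule described above, the sketch below (hypothetical names, building on the `GatedResidualBlock` sketch earlier) drops data-independent gated blocks whose learned execution probability falls below $\tau$; the residual connection makes each dropped block an identity.

```python
# Sketch: prune mode-collapsed layers after training (data-independent gates).
import torch
import torch.nn as nn

def prune_collapsed_layers(blocks: nn.ModuleList,
                           tau: float = 0.5) -> nn.ModuleList:
    kept = []
    for block in blocks:
        # Learned probability of executing this layer (softmax over 2 logits).
        p_execute = torch.softmax(block.gate_logits, dim=0)[1].item()
        if p_execute > tau:
            kept.append(block)  # survives thresholding
        # else: omit the block entirely; its parameters are removed
    return nn.ModuleList(kept)
```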
We can also experimentally investigate the extent to which layers specialize. AIG [44] uses their per-gate loss to encourage specialization, hoping to reduce overall inference time by letting the network restrict certain layers to be used on specific classes. Although we find that per-batch generally outperforms per-gate in terms of overall activation, we note that in layers which are not mode collapsed, we do observe this kind of specialization even with a per-batch loss. An interesting example of specialization is shown in figure 8. The figure shows activation rates for dependent per-batch ResNet-101 with a target rate of 0.5, using thresholding at inference time. The network has mostly mode collapsed: most layers' activations are either 1 or 0. However, the layers that did not mode collapse show an interesting specialization, similar to what AIG reported. Extensions There are a number of natural extensions to our work that we have started to explore. We have focused on the use of probabilistic bypass gates to provide an early exit when the network is sufficiently certain of the answer. We are motivated by MSDNet [14], which investigated early exit for both ResNet [11] and DenseNet [17]. We tested probabilistic bypass gates for early exit on both ResNet and DenseNet. Consistent with [14], we found that ResNet tended to degrade with intermediate classifiers while DenseNet did not. An immediate challenge is that in DenseNet, unlike ResNet, there is no natural interpretation of skipping a layer. Instead, we simply use the gate as a masking term. When the layer computation is skipped, the layer's output is set to zero and then, as per the architecture's design, is passed to later layers. For early exit in DenseNet, we follow [43] and place gates and intermediate classifiers at the end of each dense block. At the gate, the network makes a discrete decision as to whether the instance can be successfully classified at that stage. If the gate returns true, then the instance is run through the classifier and the answer is returned; if the gate returns false, then the instance continues through the network. The advantage of using GS here is that the early exit can be trained in an end-to-end fashion, unlike [43], which uses reinforcement learning. In our experiment, we implemented both early exit and probabilistic bypass on a per-layer basis. We set a per-gate target of .9 for layers and a per-gate target of .3 for both early exit points, using a piecewise function with a quadratic before the target rate and a constant after. This function matches the intuition that we should not penalize the network if it can increase the number of early exits without affecting accuracy. We observe that these early exit gates can make good decisions regarding which instances to classify early; more specifically, the first classifier has a much higher accuracy on the instances chosen by the gate than on the entire test set. The network had an overall error of 5.61% while utilizing on average only 68.4% of the layers; our implementation of the original DenseNet architecture achieves an error of 4.76% ([17] reports an error of 4.51%). The results for each block classifier are seen in Figure 9. More than a third of examples exited early, while overall error was still low. This demonstrates the potential of early exit with probabilistic bypass for DenseNet.
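A minimal sketch of the early-exit gating just described (our illustration; module shapes and names are assumptions, and the real intermediate classifiers follow [14]):

```python
# Sketch: early-exit gate and intermediate classifier after a dense block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitGate(nn.Module):
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        # Gate head: discrete exit/continue decision via Gumbel-Softmax.
        self.gate_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 2))
        # Intermediate classifier used when the gate chooses to exit.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, num_classes))

    def forward(self, x):
        # exit_decision[i] = 1 means instance i is classified here;
        # trainable end-to-end via the straight-through trick.
        exit_decision = F.gumbel_softmax(
            self.gate_head(x), tau=1.0, hard=True)[:, 1]
        return exit_decision, self.classifier(x)
```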
Conclusion and future work One intriguing direction to explore is to remove the notion of a target activation rate completely, since it is not obvious what a good target would be for a particular use case. In general a user would prefer an accurate network with fast inference. The exact tradeoff between speed and accuracy will in general vary between applications, but there is no natural way for a user to express such a preference in terms of a target activation rate. (Figure 8: Specialization on ResNet-101 with data-dependent per-batch at a target rate of 0.5. The left heatmap uses the stochastic inference strategy and the right heatmap uses the threshold inference strategy. Each vertical stripe is one layer; each row is an ImageNet class. While most layers have mode collapsed, the ones that have not show similar specializations to those seen in [44]. For example, layer 24 runs mostly on fish and lizards, while layer 28 runs specifically on cats and dogs. These layers are highlighted in green.) It might be possible to automatically choose a target rate that optimizes a particular combination of inference speed and accuracy. Another promising avenue is the idea of annealing the target rate down from 1. This makes the adjustment to the target loss more gradual and may encourage more possible configurations. Intuitively, this could give the network a greater chance to 'change its mind' regarding a layer and alter the layer's representation instead of always skipping it. We have demonstrated that probabilistic bypass is a powerful tool for optimizing and understanding neural networks. The per-batch loss function that we have proposed, together with thresholding at inference time, has produced strong experimental results both in terms of speed and accuracy. Appendix: Additional extensions and experimental results S1 Ongoing Work Currently we are pursuing several different avenues of extending this project. S1.1 Taskonomy We are working to apply this work to Taskonomy [47], which tries to identify the most important features for a given task. One of the problems that paper faces is the combinatorial explosion in the number of higher-order transfers; specifically, to find the best $k$ features for a given task, they need to exhaustively try $\binom{|S|}{k}$ subsets. In the paper, they rely on a beam search using the performance of a single feature by itself as a proxy for how well the feature will do in a subset. However, this seems highly suboptimal, since it is plausible that some feature will perform poorly on its own but perform well when matched with a complementary feature. Instead, we propose to use all features as input and place probabilistic gates on the input. If the behavior of our data-independent gates remains the same (namely, we observe mode collapse), then we can use our per-batch training schedule to figure out the best subset of features. More specifically, we would restrict the number of features used through the target rate. For example, Taskonomy uses 26 features, so we expect that a target rate of $k/26$ will give the $k$ best features for the task. S1.2 Multi-task Learning More generally, we plan to observe how probabilistic gates affect the common architecture for multi-task learning. We plan to apply data-dependent gates to the different feature representations and allow the network to either use or ignore each representation depending on the input value.
The main motivation for this line of research is that for some inputs a given feature representation may not be useful, and in fact using it may lead to worse results. Therefore the network should be allowed to ignore it depending on the input. S1.3 MobileNet We are actively exploring applying these techniques to MobileNet [35] and have some initial results. Applying the technique as-is gives results that work roughly as well as changing the expansion factor; more specifically, our results are approximately on the line for MobileNetV2 224×224 in Figure 5. We are now working on improving on the line given by the expansion factor. Specifically, we are exploring two directions: (1) increasing the granularity with which we apply the gates, and (2) creating an over-capacitated version of MobileNet and then using our techniques to prune it to the correct size. S2 Additional techniques for training S2.1 Annealing Additionally, in the training stage, we propose annealing the target rate. In particular, we use a step-wise annealing schedule which decreases the target rate by an amount $a$ every $k$ epochs. Typical values are $a = .05$ and $k = 5$. The intuition behind annealing is that, with the per-batch activation loss, annealing prevents the network from greedily killing off the worst starting layers. Instead, layers which perform worse in the beginning have a chance to change their representation. In practice, we have observed that over time layer activations are not always monotonic, and some layers will initially become less active but will recover. We observe this behavior more with an annealing schedule than with a fixed target rate. S2.2 Variable target rate As far as we are aware, previous work has used the $L_2$ term exclusively. This has the effect of forcing activations towards a specific target rate and letting them vary only if it leads to improvements in accuracy. However, this also prevents a situation where the network can decrease activation while retaining the same accuracy, a scenario which is clearly desirable. As a result, we propose a piecewise activation loss, constant below the target and quadratic above it, which imposes no penalty for decreasing activation. Let $B_g = \sum_{i \in B} z_{g,i}$, let $t$ be the target activation rate, and write $\bar{z} = \frac{1}{|G||B|} \sum_{g \in G} B_g$ for the mean activation. For the per-batch setup, this loss is $$L_{CQ,B} = \begin{cases} 0 & \text{for } \bar{z} \le t, \\ (t - \bar{z})^2 & \text{for } \bar{z} > t. \end{cases}$$ Additionally, there are many cases where a target activation is not clear and the user simply wants an accurate network with reduced training time. For this training schedule, we propose variable target rates, which treat the target rate as a moving average of the network's utilization. For each epoch, the target rate starts at a specific hyperparameter (to prevent collapse to 1) and then is allowed to change according to the batch's activation rate. The two simplest possibilities for the update step are (1) a moving average and (2) an exponential moving average. S2.3 Granularity For more granularity, we consider gating blocks of filters separately. In this case, we assume that $F$ has $N$ filters, i.e., $F(x_{l-1})$ has dimension $(W \times H \times N)$. Then we write $$x_l = x_{l-1} + \begin{bmatrix} z_{l,1} F_{l,1}(x_{l-1}) \\ z_{l,2} F_{l,2}(x_{l-1}) \\ \vdots \\ z_{l,n} F_{l,n}(x_{l-1}) \end{bmatrix},$$ where each $F_{l,i}$ has dimension $(W \times H \times (N/n))$. We note that it is not essential that the layers $F$ be residual; we also consider dropping the added $x_{l-1}$ on the right-hand side of these equations. S2.4 Additional Early Exit Details For our early exit classifiers, we use the same classifiers as [14].
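The training-schedule ideas in S2.1 and S2.2 can be summarized in a short sketch (illustrative; the constants and update rules are the ones named above, everything else is an assumption):

```python
# Sketch: step-wise annealing, an EMA variable target rate, and the
# constant-then-quadratic per-batch loss L_{CQ,B}.
import torch

def annealed_target(t0: float, epoch: int, a: float = 0.05, k: int = 5) -> float:
    # Decrease the target rate by a every k epochs, clamped at zero.
    return max(0.0, t0 - a * (epoch // k))

def ema_target(prev_t: float, batch_activation: float,
               momentum: float = 0.9) -> float:
    # Variable target rate: track the network's own utilization.
    return momentum * prev_t + (1.0 - momentum) * batch_activation

def constant_quadratic_loss(z: torch.Tensor, t: float) -> torch.Tensor:
    # No penalty below the target rate; quadratic penalty above it.
    mean_act = z.mean()
    return torch.where(mean_act <= t,
                       torch.zeros_like(mean_act),
                       (t - mean_act) ** 2)
```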
For the gate structure, we use a stronger version of the gate described by [44]. The gates comprise the following: a 3×3 convolutional layer with stride 1 and padding 1 which takes the current state of the model and outputs 128 channels, a BatchNorm, another 3×3 convolutional layer with stride 1 and padding 1 which outputs 128 channels, a BatchNorm, a 4×4 average pool, a linear layer, and finally a Gumbel-Softmax. S3 Observations Observation S3.1 A layer with activation $p$ can only affect the final accuracy by $p$. Observation S3.2 Given a set of layers with activations $\mathbf{p}$, we can apply the inclusion-exclusion principle to get an upper bound for the amount these layers can affect the final accuracy of the network. So, for example, if Layer 1 and Layer 2 both run with probability $p$ but always co-occur (activate at the same time), then the set of Layer 1 and Layer 2 can only affect final accuracy by $p$. We use these observations to motivate an explanation for mode collapse in the data-independent per-batch case. Consider a network with only two layers and the restriction that, in expectation, only one layer should be on. Then let $p$ be the probability that layer 1 is on. Intuitively, if $p \notin \{0, 1\}$, then we are in a high-entropy state where the network must deal with a large amount of uncertainty regarding which layers will be active. Furthermore, some of the work spent training each individual layer will be wasted at inference time, since that layer will be skipped with non-zero probability. More precisely: Observation S3.3 Consider a network with two layers with data-independent probabilities $p_1$ and $p_2$ of being on, and give the network a hard capacity of 1 (i.e., one layer can be on, or each layer can be on half the time). Let $a_1$ be the expected accuracy of a one-layer network and $a_2$ the expected accuracy of a two-layer network. Then for a value $p_1 \notin \{0, 1\}$ to be preferable, we need $\frac{a_2}{a_1} \ge 2$. Since the network is given a hard capacity of 1, we are only learning a single parameter $p_1$, since $p_2 = 1 - p_1$. Let $p = p_1$. Then $p(1-p)$ is the probability that both layers will be on, and also the probability that both layers will be off. Note that the network has a strict upper bound on accuracy of $1 - p(1-p)$, since with probability $p(1-p)$ none of the layers will activate and no output will be given. The expected accuracy of the network for any probability $p \in [0, 1]$ is $(1 - 2p + 2p^2)a_1 + p(1-p)a_2$; note that for $p \in \{0, 1\}$ the accuracy is simply $a_1$. For a value $p \in (0, 1)$ to be better than $p \in \{0, 1\}$, we need $$a_1 < (1 - 2p + 2p^2)a_1 + p(1-p)a_2 \iff \frac{2p - 2p^2}{p(1-p)} < \frac{a_2}{a_1} \iff 2 < \frac{a_2}{a_1}.$$ S4 ImageNet Results We report all the data collected on ImageNet using the different gate strategies (independent, dependent), target loss strategies (per-batch, per-gate), and inference-time strategies (threshold, always-on, stochastic, ensemble). Note that we include AIG for reference whenever it is a fair comparison. Also of note, for the ensemble technique we include data from Snapshot Ensembles: Train 1, Get M For Free [15]. Note that using the stochastic networks, we outperform their ensemble technique. Also note that their technique is orthogonal to ours, so both could be utilized to identify an even better ensemble.
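Returning to Observation S3.3, a quick numeric check (our illustration, not from the paper) of the expected-accuracy expression $(1 - 2p + 2p^2)a_1 + p(1-p)a_2$ confirms that an interior $p$ beats $p \in \{0, 1\}$ exactly when $a_2/a_1 > 2$:

```python
# Numeric check of Observation S3.3 with illustrative accuracy values.
def expected_accuracy(p: float, a1: float, a2: float) -> float:
    return (1 - 2 * p + 2 * p * p) * a1 + p * (1 - p) * a2

a1 = 0.40
for a2 in (0.70, 0.90):  # ratios a2/a1 of 1.75 (< 2) and 2.25 (> 2)
    interior = expected_accuracy(0.5, a1, a2)   # p = 1/2
    boundary = expected_accuracy(0.0, a1, a2)   # equals a1
    print(a2 / a1, interior > boundary)
# -> 1.75 False, then 2.25 True: interior p wins only when a2/a1 > 2.
```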
In general, we observe that, unsurprisingly, the ensemble has the highest performance in terms of error; however, this requires multiple forward passes through the network, so the performance gain is somewhat offset by the inference time required. We also observe that threshold generally outperforms stochastic. This roughly makes sense if you consider stochastic inference as drawing a sample from an inference distribution; in this interpretation, thresholding at .5 basically acts as an argmax. In addition to the improvement in performance, for the per-batch cases, thresholding also tends to increase the number of activations. For all ImageNet results, we used the pretrained models provided by TorchVision. S5 CIFAR10 Performance We report all the data collected on CIFAR10 using the different gate strategies (independent, dependent) and target loss strategies (per-batch, per-gate). We report only the numbers for the stochastic inference-time technique. We used CIFAR10 as a faster way to explore the space of parameters and combinations, and as such have a denser sweep of combinations and parameters. Note that for CIFAR10 we did not use a pretrained model; the entire model is trained from scratch. In general, we found that for a wide set of parameters per-batch outperforms per-gate. This includes independent per-batch outperforming dependent per-gate. The only exceptions are very high and very low target rates. However, we note that at very high target rates the accuracy of per-batch can be recovered through annealing. We attribute this to the fact that for CIFAR10 we train from scratch. Since the model is completely blank for the first several epochs, the per-batch loss can lower activations for any layers while still improving accuracy. In other words, at the beginning the model is so inaccurate that training any subset of the model will result in a gain in accuracy; so when training from scratch, the per-batch loss will choose which layers to decrease activations for greedily and sub-optimally. One surprising result is that independent per-gate works at all for a wide range of target rates. This suggests that the redundancy effect described in [18] is so strong that the gates can be kept at inference time. This also suggests that, at least for CIFAR10, most of the gains described in [44] were from regularization and not from specialization. We also report some variable target rate runs. We note that these tend to outperform the quadratic loss with a constant target rate. We believe that this is because the variable target rate allows the optimizer to take the easiest and farthest path down the manifold. We note that some of the variable target rates that worked on CIFAR10 did not work on ImageNet; namely, variable target rates which updated the target to the previous mean quickly (within 5 epochs) led to mode collapse to 1 for all gates. We attribute this to the much larger amount of training data for ImageNet and the increased complexity of the task. However, both annealing and variable target rates merit further experimentation and research to truly understand how they perform on different datasets and with different training setups (from scratch vs. from pretrained).
6,344
1812.04180
2905421523
We investigate learning to probabilistically bypass computations in a network architecture. Our approach is motivated by AIG, where layers are conditionally executed depending on their inputs, and the network is trained against a target bypass rate using a per-layer loss. We propose a per-batch loss function, and describe strategies for handling probabilistic bypass during inference as well as training. Per-batch loss allows the network additional flexibility. In particular, a form of mode collapse becomes plausible, where some layers are nearly always bypassed and some almost never; such a configuration is strongly discouraged by AIG's per-layer loss. We explore several inference-time strategies, including the natural MAP approach. With data-dependent bypass, we demonstrate improved performance over AIG. With data-independent bypass, as in stochastic depth, we observe mode collapse and effectively prune layers. We demonstrate our techniques on ResNet-50 and ResNet-101 for ImageNet, where our techniques produce improved accuracy (.15--.41 in precision@1) with substantially less computation (bypassing 25--40% of the layers).
@cite_25 is the most related to our data-independent bypass. They add a sparsity regularization term and modify stochastic Accelerated Proximal Gradient to prune the network in an end-to-end fashion. Our work differs from @cite_25 by using GS to integrate the sparsity constraint into an additive loss which can be trained by any optimization technique; we use unmodified stochastic gradient descent with momentum (SGD), the typical technique for training classification networks. Recently, @cite_29 suggested that the main benefits of pruning come primarily from the identified architecture.
{ "abstract": [ "Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned \"important\" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited \"important\" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the \"Lottery Ticket Hypothesis\" (Frankle & Carbin 2019), and find that with optimal learning rate, the \"winning ticket\" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.", "Deep convolutional neural networks have liberated its extraordinary power on various tasks. However, it is still very challenging to deploy state-of-the-art models into real-world applications due to their high computational complexity. How can we design a compact and effective network without massive experiments and expert knowledge? In this paper, we propose a simple and effective framework to learn and prune deep models in an end-to-end manner. In our framework, a new type of parameter -- scaling factor is first introduced to scale the outputs of specific structures, such as neurons, groups or residual blocks. Then we add sparsity regularizations on these factors, and solve this optimization problem by a modified stochastic Accelerated Proximal Gradient (APG) method. By forcing some of the factors to zero, we can safely remove the corresponding structures, thus prune the unimportant parts of a CNN. Comparing with other structure selection methods that may need thousands of trials or iterative fine-tuning, our method is trained fully end-to-end in one training pass without bells and whistles. We evaluate our method, Sparse Structure Selection with several state-of-the-art CNNs, and demonstrate very promising results with adaptive depth and width selection." ], "cite_N": [ "@cite_29", "@cite_25" ], "mid": [ "2951569836", "2734271713" ] }
Deep networks with probabilistic gates
Conclusion and future work One intriguing direction to explore is to remove the notion of a target activation rate completely, since it is not obvious what a good target would be for a particular use case. In general a user would prefer an accurate network with fast inference. The exact tradeoff between speed and accuracy will in general vary between applications, but there is no natural way for a user to express such a preference in terms of a target activation Figure 8: Specialization on ResNet-101 with data-dependent per-batch at target rate of 0.5. The left heatmap uses the stochastic strategy technique and the right heatmap uses the threshold inference strategy. Each vertical stripe is one layer; each row is an ImageNet class. While most layers have mode collapsed, the ones that have not mode collapsed show similar specializations as seen in [44]. For example, layer 24 runs mostly on fish and lizards, while layer 28 runs specifically on cats and dogs. These layers are highlighted in green. rate. It might be possible to automatically chose a target rate that optimizes a particular combination of inference speed and accuracy. Another promising avenue is the idea of annealing the target rate down from 1. This makes the adjustment to the target loss more gradual and may encourage more possible configurations. Intuitively, this could gives the network a greater chance to 'change its mind' regarding a layer and alter the layer's representation instead of always skipping it. We have demonstrated that probabilistic bypass is a powerful tool for optimizing and understanding neural networks. The per-batch loss function that we have proposed, together with thresholding at inference time, has produced strong experimental results both in terms of speed and accuracy. Appendix: Additional extensions and experimental results S1 Ongoing Work Currently we are pursuing several different avenues of extending this project. S1.1 Taskonomy We are working to apply this work to Taskonomy [47], which tries to identify the most important features for a given task. One of the problems this paper faces is the combinatoral explosion in the number of higherorder transfers; specifically, to find the best k features for a given task, they need to exhaustively try |S| k . In the paper, they rely on a beam search using the performance of a single feature by itself as a proxy for how well the feature will do in a subset. However, this seems highly suboptimal since it is plausible that some feature will perform poorly on its own but perform well when matched with a complement feature. Instead, we propose to use all features as input and place probabilistic gates on the input. If the behavior of our data-independent gates remains the same (namely we observe mode collapse), then we can use our PerBatch training schedule to figure out the best subset of features. More specifically, we would restrict the number of features used through the target rate. For example, Taskonomy uses 26 features, so we expect that a target rate of k 26 will give the k best features for the task. S1.2 Multi-task Learning More generally, we plan to observe how probabilistic gates affects the common architecture for multitask learning. We plan to apply data-dependent gates to the different feature representations and allow the network to either use or ignore the representation depending on the input value. 
The main motivation for this line of research is that for some inputs a given feature representation may not be useful and in fact using it may lead to worse results. Therefore the network should be allowed to ignore it depending on the input. S1.3 MobileNet We are actively exploring applying these techniques to MobileNet [35] and have some initial results. Applying this technique as is gives results that roughly work as well as changing the expansion factor; more specifically, our results are approximately on the line for MobileNetV2 224 × 224 in Figure 5. We are now working on improving on the line given by the expansion factor. Specifically, we are exploring two directions: (1) increasing the granularity with which we apply the gates and (2) creating an over-capacitated version of MobileNet and then using our techniques to prune it to be the correct size. S2 Additional techniques for training S2.1 Annealing Additionally in the training stage, we propose annealing the target rate. In particular, we use a step-wise annealing target rate which decreases a amount after k epochs. Typical values are a = .05 and k = 5. The intuition behind annealing is that with per-batch activation loss, annealing prevents the network from greedily killing off the worst starting layers. Instead, layers which perform worse in the beginning have a chance to change their representation. In practice, we have observed that over time layer activations are not always monotonic and some layers will initially start to be less active but will recover. We observe this behavior more with an annealing schedule than for a fixed target rate. S2.2 Variable target rate As far as we are aware, previous work has used the L 2 term exclusively. This has the affect of forcing activations towards a specific target rate and letting them vary only if it leads to improvements in accuracy. However, this also prevents a situation where the network can decrease activation while retaining the same accuracy -a scenario which is clearly desirable. As a result, we propose a piece-wise activation loss composed of a constant and then quadratic which indicates that there is no penalty for decreasing activation. Let B i = i∈B z g,i and t be the target activation rate. For the per-batch setup, this loss is as follows. L CQ,B =    0, for B i ≤ t t − 1 |G||B| g∈G B i 2 , for B i > t Additionally, there are many cases where a target activation is not clear and the user simply wants an accurate network with reduced train time. For this training schedule, we propose Variable Target Rates, which treat the target rate as a moving average of the network utilization. For each epoch, the target rate starts at a specific hyperparameter (to prevent collapse to 1) and then is allowed to change according to the batch's activation rate. The two simplest possibilities for the update step are: 1) moving average, and 2) exponential moving average. S2.3 Granularity For more granularity, we consider gating blocks of filters separately. In this case, we assume that F has N filters, i.e., F(x l−1 ) has dimension (W × H × N ). Then we write x l = x l−1 +      z l,1 F l,1 (x l−1 ) z l,2 F l,2 (x l−1 ) . . . z l,n F l,n (x l−1 )      where F l,i has dimension (W × H × (N/n)). We note that it is not essential that the layers F are residual; we consider also dropping the added x l−1 on the right-hand side of these equations. S2.4 Additional Early Exit Details For our early exit classifiers, we use the same classifiers as [14]. 
For the gate structure, we use a stronger version of the gate described by [44]. The gates are comprised of the following: a 3 × 3 convolutional layer with stride of 1 and padding of 1 which takes the current state of the model and outputs 128 channels, a BatchNorm, another 3 × 3 convolutional layer with stride of 1 and padding of 1 which outputs 128 channels, a BatchNorm, a 4 × 4 average pool, a linear layer, and then finally a GumbleSoftmax. S3 Observations Observation S3.1 A layer with activation of p can only affect the final accuracy by p. Observation S3.2 Given a set of layers with activation p p p, we can apply the Inclusion-Exclusion principle to get an upper bound for the amount these layers can affect the final accuracy of the network. So for example, if Layer 1 and Layer 2 both run with probability p but always co-occur (activate at the same time), then the set of Layer 1 and Layer 2 can only affect final accuracy by p. We use these observations to motivate an explanation for mode collapse in the data-independent perbatch case. Consider a network with only two layers and the restriction that, on expectation, only one layer should be on. Then let p be the probability that layer 1 is on. Intuitively, if p ∈ {0, 1}, then we are in a high entropy state where the network must deal with a large amount of uncertainty regarding which layers will be active. Furthermore, some of the work training each individual layer will be wasted during inference some percentage of the time since that layer will be skipped with non-zero probability. More precisely: Observation S3.3 Consider a network with two layers with data-independent probability p 1 and p 2 of being on. The network is then given a hard capacity of 1 (ie. one layer can be on, or each layer can be on half the time). Let a 1 be the expected accuracy of a one-layer network, a 2 be the expected accuracy of a two-layer network. Then if the network is given a hard capacity of 1, in order for p 1 ∈ {0, 1}, we need that a 2 a 1 ≥ 2. Since the network is given a hard capacity of 1, then we are only learning a single parameter p 1 since p 2 = 1 − p 1 . Let p = p 1 . Then p(1 − p) is the probability that both layers will be on and also the probability that both layers will be off. Note that the network has a strict upper bound on accuracy of 1 − p(1 − p) since with probability p(1 − p) none of the layers will activate and no output will be given. Then the expected accuracy of the network for any probability p ∈ [0, 1] is (1−2p+2p 2 )a 1 +p(1−p)a 2 , note that for p = 0, 1 the accuracy is simply a 1 . For a value p ∈ (0, 1) to be better than p ∈ {0, 1}, we need a 1 < (1 − 2p + 2p 2 )a 1 + p(1 − p)a 2 −2p 2 + 2p p(1 − p) < a 2 a 1 2 < a 2 a 1 S4 ImageNet Results We report all the data collected on ImageNet using the different gate strategies (independent, dependent), target loss strategies (per-batch, per-gate), and inference time strategies (threshold, always-on, stochastic, ensemble). Note that we try to include AIG for reference whenever it's a fair comparison. Also of note, for the ensemble technique we also include data from Snapshot Ensembles: Train 1, Get M For Free [15]. Note that using the stochastic networks, we outperform their ensemble technique. Also note that their technique is orthogonal to ours, so both could be utilized to identify an even better ensemble. 
In general, we observe that unsurprisingly, ensemble has the highest performance in terms of error; however, this requires multiple forward passes through the network, so the performance gain is somewhat offset by the inference time required. We also observe that threshold generally outperforms stochastic. This roughly makes sense if you consider stochastic inference as drawing a sample from a inference distribution -in this interpretation, threshold at .5 basically acts as an argmax. In addition to the improvement in performance, for the per-batch cases, threshold also tends to increase the number of activations. For all ImageNet results, we used the pretrained models provided by TorchVision 1 . S5.1 CIFAR10 Performance We report all the data collected on CIFAR10 using the different gate strategies (independent, dependent), target loss strategies (per-batch, per-gate). We report only the numbers for the stochastic inference time technique. We used CIFAR10 as a faster way to explore the space of parameters and combinations and as such have a more dense sweep of the combination and parameters. Note that for CIFAR10, we did not use a pretrained model; the entire model is trained from scratch. In general, we found that for a wide set of parameters per-batch outperforms per-gate. This includes independent per-batch outperforming dependent per-gate. The only exception to this is very high and very low target rates. However we note that at very high target rates, the accuracy of per-batch can be recovered through annealing. We attribute this to the fact that for CIFAR10 we train from scratch. Since the model is completely blank for the first several epochs, the per-batch loss can lower activations for any layers while still improving accuracy. In other words, at the beginning, the model is so inaccurate that any training on any subset of the model will result in a gain of accuracy; so when training from scratch, the per-batch loss will choose the layers to decrease activations for greedily and sub-optimally. One surprising result is that independent per-gate works at all for a wide range of target rates. This suggests that the redundancy effect described in [18] is so strong that the gates can be kept during inference time. This also suggests that at least for CIFAR10, most of the gains described in [44] were from regularization and not from specialization. We also report some variable rate target rates. We note that these tend to outperform the quadratic loss on a constant target rate. We believe that this is because variable target rate allows the optimizer to take the easiest and farther path down the manifold. We note that some of the variable target rates that worked on CIFAR10 did not work on ImageNet; namely, variable target rates which updated the target to previous mean quickly (within 5 epochs) lead to mode collapse to 1 for all gates. We attribute this to the much larger amount of training data for ImageNet and increased complexity of the task. However, both annealing and variable target rates merit further experimentation and research to truly understand how they perform on different datasets and with different training setups (from scratch vs from pretrained).
6,344
1812.04180
2905421523
We investigate learning to probabilistically bypass computations in a network architecture. Our approach is motivated by AIG, where layers are conditionally executed depending on their inputs, and the network is trained against a target bypass rate using a per-layer loss. We propose a per-batch loss function, and describe strategies for handling probabilistic bypass during inference as well as training. The per-batch loss allows the network additional flexibility. In particular, a form of mode collapse becomes plausible, where some layers are nearly always bypassed and some almost never; such a configuration is strongly discouraged by AIG's per-layer loss. We explore several inference-time strategies, including the natural MAP approach. With data-dependent bypass, we demonstrate improved performance over AIG. With data-independent bypass, as in stochastic depth, we observe mode collapse and effectively prune layers. We demonstrate our techniques on ResNet-50 and ResNet-101 for ImageNet, where they produce improved accuracy (.15--.41 in precision@1) with substantially less computation (bypassing 25--40% of the layers).
Our work is also related to regularization techniques such as Dropout @cite_33 and Stochastic Depth @cite_17. Both techniques try to induce redundancy by stochastically removing parts of the network during training: Dropout ignores individual units, and Stochastic Depth (as described above) skips entire layers. Both provide evidence that the increased redundancy helps to prevent overfitting. These techniques can be seen as applying stochastic gates to units or layers, respectively, where the gate probabilities are hyperparameters.
{ "abstract": [ "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91 on CIFAR-10)." ], "cite_N": [ "@cite_33", "@cite_17" ], "mid": [ "2095705004", "2949892913" ] }
Deep networks with probabilistic gates
Despite the enormous success of convolutional networks [11,23,37], they remain poorly understood and difficult to optimize. A natural line of investigation, which [1] called conditional computation, is to conditionally bypass parts of the network. While inference-time efficiency could obviously benefit [1], bypassing computations can also improve training time or test performance [4,18,44], and can provide insight into network behavior [18,44]. In this paper we investigate several new probabilistic bypass techniques. We focus on ResNet [11] architectures, since these are the mainstay of current deep learning techniques for image classification. The general architecture of a ResNet with probabilistic bypass gates is shown in figure 1. The main idea is that a residual layer, such as $f_1$, can potentially be bypassed depending on the result of the gating computation $g_1$, which controls the gate. The gating computation $g_i$ can be data-independent, or it can depend on its input. If every $g_i$ always executes its layer, this describes a conventional ResNet. A more interesting data-independent architecture is stochastic depth [18], where at training time $g_i$ executes a layer with a probability defined by a hyperparameter. At inference time the stochastic depth network is deterministic, though the frequency with which a layer was bypassed during training is used to scale down its weight.

With the introduction of Gumbel-Softmax (GS) [2,7,20,30] it became possible to train a network to bypass computations, given some target bypass rate that trades off against training-set accuracy. In this sense, probabilistic bypass serves as an additional regularizer for the network. This is the approach taken by AIG [44], where the bypass decision is data-dependent and the loss is per-layer. We propose a per-batch loss function, which allows the network to distribute bypass among different layers more flexibly than AIG's per-layer loss. This in turn leads to more advantageous tradeoffs between accuracy and inference speed. When our per-batch loss is applied with data-independent bypass, we observe a form of mode collapse where individual layers are either nearly always bypassed or nearly never bypassed. This effectively prunes layers, and again results in advantageous tradeoffs between accuracy and inference speed. Whether a network uses data-dependent or data-independent probabilistic bypass, there remains the question of how to perform inference. We explore several alternative inference strategies, and provide evidence that the natural MAP approach gives good performance.

This paper is organized as follows. We begin by introducing notation and briefly reviewing related work. Section 3 introduces our per-batch loss function and our inference strategies. Experimental results on ImageNet and CIFAR are presented in section 4, followed by a discussion of some natural extensions of our work. Additional experiments and more details are included in the supplemental material.

Notation
We first introduce some notation that allows us to discuss probabilistic bypass more precisely. Following [44], we write the output of layer $l \in \{0, 1, \dots, L\}$ as $x_l$, with the input image being $x_0$. We can express the effect of a residual layer $F_l$ in a feed-forward neural network as $x_l = x_{l-1} + F_l(x_{l-1})$. A probabilistic bypass gate $z_l \in \{0, 1\}$ for this layer then modifies the forward pass to either run or skip the layer: $x_l = x_{l-1} + z_l F_l(x_{l-1})$.
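To make the notation concrete, here is a minimal PyTorch-style sketch of a gated residual block. The class and argument names (`GatedResidualBlock`, `layer_fn`, `gate_fn`) are ours, for illustration only.

```python
import torch.nn as nn

class GatedResidualBlock(nn.Module):
    """Sketch of a residual block with a probabilistic bypass gate.

    layer_fn plays the role of F_l; gate_fn produces z_l in {0, 1}
    (and may ignore its input in the data-independent case).
    """
    def __init__(self, layer_fn: nn.Module, gate_fn: nn.Module):
        super().__init__()
        self.layer_fn = layer_fn
        self.gate_fn = gate_fn

    def forward(self, x):
        z = self.gate_fn(x)               # e.g. shape (batch, 1, 1, 1)
        return x + z * self.layer_fn(x)   # x_l = x_{l-1} + z_l F_l(x_{l-1})
```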
There are many different gating computations to determine the value of $z_l$, which can be set at training time, at inference time, or both. The degenerate case $z_l = 1$ corresponds to the original ResNet architecture. Stochastic depth (SD) [18] can be formalized as setting $\Pr(z_l = 1) = 1 - \frac{l}{L}(1 - p_L)$ during training, where $p_L$ is a hyperparameter set to 0.5 in the SD experiments. During inference, SD sets $z_l = 1$ but uses information gleaned during training to adjust the weights of each layer, where layers that were frequently bypassed are down-weighted. We are particularly interested in AIG [44], which uses probabilistic bypass during both training and inference, along with a gating computation that depends on the input data (i.e., $z_l$ is a function of $x_{l-1}$). Let $G$ be the set of gates in the network and $B$ be the set of instances in some mini-batch. AIG uses a target rate loss during training, computed on a per-gate basis. Given a target rate $t \in [0, 1]$, this is

$$L_G = \frac{1}{|G|} \sum_{g \in G} \left( t - \frac{1}{|B|} \sum_{i \in B} z_{g,i} \right)^2$$

This loss function encourages each layer to be bypassed at the target rate. Note that this penalty is symmetric, so bypassing more layers is as expensive as bypassing fewer. The overall loss is the sum of the target loss (denoted $L_{\text{target}}$) and the standard multi-class logistic loss $L_{MC}$: $L = L_{\text{target}} + L_{MC}$. For AIG, the target loss is $L_G$. AIG uses the straight-through trick: the $z$'s are categorical during the forward pass but treated as a Gumbel softmax during the backward pass. At inference time AIG is stochastic, since a given layer $l$ might be bypassed depending on its input $x_{l-1}$ and the learned bypass probability for that layer.

Related work
Conditional computation has been well studied in computer vision. Cascaded classifiers [45] shorten computation by identifying easy negatives, and have recently been adapted to deep learning [26,46]. More directly, [14] and [31] both propose a cascading architecture which computes features at multiple scales and allows for dynamic evaluation, where at inference time the user can trade off speed for accuracy. Similarly, [43] adds intermediate classifiers and returns a label once the network reaches a specified confidence. [4,6] both use the state of the network to adaptively decrease the number of computational steps during inference: [6] uses an intermediate state sequence and a halting unit to limit the number of blocks that can be executed in an RNN, while [4] learns an image-dependent stopping condition for each ResNet block that conditionally bypasses the rest of the layers in the block. [36] trains a large number of small networks, called experts, and then uses gates to select a sparse combination of the experts for a given input. Another approach to decreasing the computation time is network pruning. The earliest works attempted to determine the importance of specific weights [10,24] or hidden units [34] and remove those which are unimportant or redundant. Weight-based pruning of CNNs follows the same fundamental approach: [9] prunes weights with small magnitude, and [8] incorporates this into a pipeline which also includes quantization and Huffman coding. Numerous techniques prune at the channel level, whether through heuristics [13,25] or approximations to importance [12,33,42]. [29] prunes at the filter level using statistics from the following layer. [48] applies binary mask variables to a layer's weight tensors, sorts the weights during training, and sends the lowest to zero.
[19] is the most related to our data-independent bypass: they add a sparsity regularization and then modify stochastic Accelerated Proximal Gradient to prune the network in an end-to-end fashion. Our work differs from [19] by using GS to integrate the sparsity constraint into an additive loss which can be trained with any optimization technique; we use unmodified stochastic gradient descent with momentum (SGD), the typical technique for training classification networks. Recently, [27] suggested that the main benefits of pruning come primarily from the identified architecture.

Our work is also related to regularization techniques such as Dropout [41] and Stochastic Depth [18]. Both techniques try to induce redundancy by stochastically removing parts of the network during training: Dropout ignores individual units, and Stochastic Depth (as described above) skips entire layers. Both provide evidence that the increased redundancy helps to prevent overfitting. These techniques can be seen as applying stochastic gates to units or layers, respectively, where the gate probabilities are hyperparameters.

In the Bayesian machine learning community, data-independent gating is used as a form of regularization. This line of work can be cast as generalizing hyperparameter-per-weight dropout by learning individual dropout weights. [39] performs pruning by learning multipliers for weights, which are incentivized toward 0 or 1 by a sparsity-encouraging loss $w(1 - w)$. [5] proposes per-weight regularization using the straight-through Gumbel-Softmax trick. [38] uses a form of trainable dropout, learning a per-neuron gating probability; these are regularized by their likelihood against a beta distribution, and training is done with the straight-through trick. [40] learns sparsity at the weight level using a binary mask; they adopt a complexity loss which is $L_0$ on the weights, plus a sparsification loss similar to [39]. This is similar to a per-batch loss. [28] extends the straight-through trick with a hard sigmoid to obtain less biased estimates of the gradient; they use a loss equal to the sum of Bernoulli weights, which is also similar to a per-batch loss. [32] extends the variational dropout of [21] to allow dropout probabilities greater than one half. Training with the straight-through trick and placing a log-scale uniform prior on the dropout probabilities, they find substantial sparsification with minimal change in accuracy, including on some vision problems.

Using probabilistic bypass in deep networks
In this section we investigate ways to use probabilistic bypass in deep networks. We propose a per-batch loss function, which we use during training with Gumbel softmax following AIG. This frequently leads to mode collapse (not dissimilar to that encouraged by the sparsity-encouraging loss in [40]), which effectively prunes network layers. At inference time we take a deterministic approach, and observe that a simple MAP approach gives strong experimental results.

Batch loss during training
We note that $z_{g,i}$ can be viewed as a random variable depending on the instance, whose expectation is $\mathbb{E}_B[z_g] = \frac{1}{|B|} \sum_{i \in B} z_{g,i}$. Seen this way, AIG's per-gate loss is $L_G = \mathbb{E}_G\left[ (t - \mathbb{E}_B[z])^2 \right]$, that is, a squared $L_2$ loss on $(t - z)$. The intuition behind the per-gate target loss is that each gate should, in expectation, be open about a fraction $t$ of the time. When the gates are data-independent, this loss encourages each layer to execute with probability $t$.
When the gates are data-dependent, this loss encourages each layer to learn to execute on a fraction $t$ of the training instances. This per-gate target loss was intended to cause the layers of the network to specialize [44]. However, it is not a priori obvious that specialization is the best architecture from a performance perspective. Instead, the most natural approach is to allow the optimizer to select the network configuration which performs best given a target activation rate. With this intuition, we propose the per-batch target loss, which can be trained against using Gumbel softmax following AIG. For a target rate $t \in [0, 1]$,

$$L_B = \left( t - \frac{1}{|G||B|} \sum_{g \in G} \sum_{i \in B} z_{g,i} \right)^2.$$

This can be interpreted as $(t - \mathbb{E}_{G,B}[z])^2$, that is, a squared $L_1$ loss on $(t - z)$. This loss only induces the network as a whole to have an activation of $t$. The intuition is that each batch is given $t$ capacity and distributes that capacity among the instances and gates however it chooses. For example, if there were a per-gate configuration with zero training loss that was easily found by optimization techniques, then per-batch training would converge to it.

Mode collapse
With AIG's per-gate loss, each gate independently tries to hit its target rate, which means that the bypass rates will in general be fairly similar among gates. Our per-batch loss, however, allows different layers to have very different bypass rates, a network configuration that would be heavily penalized by AIG. In our experiments we frequently observe a form of mode collapse, where layers are nearly always bypassed or nearly never bypassed. In this situation, our loss function encourages a form of network pruning, where we start with an overcapacitated network and then determine which layers to remove during training. Surprisingly, our experiments demonstrate that we end up with improved accuracy.

Inference strategies
Once training has produced a deep network with stochastic gates, it is necessary to decide how to perform inference. The simplest approach is to leave the gates in the network and allow them to be stochastic at inference time; this is the technique that AIG uses. Experimentally, we observe a small variance, so this may be sufficient for most use cases. In addition, one way to take advantage of the stochasticity is to create an ensemble composed of multiple runs of the same network. Any kind of ensemble technique can then be used to combine the different runs: voting, weighing, boosting, etc. In practice, we observe a small bump in accuracy from this ensemble technique, though there is obviously a computational penalty. However, stochasticity has the awkward consequence that multiple classification runs on the same image will often return different results. There are several techniques to remove stochasticity. The gates can be removed, setting $z_l = 1$ at test time; this is natural when viewing the gates as a regularization technique, and is the approach used by Stochastic Depth. Alternately, inference can be made deterministic by using a threshold $\tau$ instead of sampling: a layer is executed if its learned probability is greater than $\tau$. This also gives the user some small degree of dynamic control over the computational cost of inference; if the user passes in a very high $\tau$, then fewer layers will activate and inference will be faster. In our experiments, we set $\tau = \frac{1}{2}$. Note that we observe mode collapse for a large number of our per-batch experiments (particularly with data-independent gates).
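For concreteness, here are minimal sketches of the two target losses and the inference strategies just described, assuming the gate decisions for a mini-batch are collected in a tensor `z` of shape `(num_gates, batch_size)`; the function names are ours, not the paper's.

```python
import torch

def per_gate_target_loss(z: torch.Tensor, t: float) -> torch.Tensor:
    """AIG-style L_G: each gate's mean activation over the batch is
    independently pushed toward the target rate t."""
    per_gate_rate = z.mean(dim=1)            # E_B[z_g] for each gate g
    return ((t - per_gate_rate) ** 2).mean()

def per_batch_target_loss(z: torch.Tensor, t: float) -> torch.Tensor:
    """Our L_B: only the overall activation is pushed toward t, so a
    batch's capacity can be distributed freely over gates and instances."""
    return (t - z.mean()) ** 2

@torch.no_grad()
def gate_at_inference(p_exec: torch.Tensor, strategy: str = "threshold",
                      tau: float = 0.5) -> torch.Tensor:
    """Inference-time decisions for learned execution probabilities p_exec:
    "stochastic" samples as during training, "always_on" removes the gates,
    and "threshold" is the deterministic MAP-style rule."""
    if strategy == "stochastic":
        return torch.bernoulli(p_exec)
    if strategy == "always_on":
        return torch.ones_like(p_exec)
    return (p_exec > tau).float()
```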
In this mode-collapsed situation, thresholding can, for a wide range of $\tau$, be interpreted as a pruning technique, where layers whose probability falls below $\tau$ are pruned.

Experiments
Our primary experiments center on adding probabilistic bypass to ResNet [11] and running the resulting network on ImageNet [3]. Our main finding is that our techniques improve both accuracy and inference speed. We also perform an empirical investigation into our networks in order to better understand their performance. Additional experiments, on CIFAR [22] as well as ImageNet, along with further details, are included in the supplemental material.

Improving speed and accuracy on ImageNet
We have implemented several probabilistic bypass techniques on ResNet-50 and ResNet-101, and explored their performance on ImageNet. Since ResNet-101 is so computationally demanding, we have done more experiments on ResNet-50. Our techniques demonstrate improvements in accuracy and inference time on both ResNet-50 and ResNet-101.

Figure 3: CIFAR-10 results. Dependent Per-Gate is our implementation of [44]. Note that Dependent Per-Batch has both higher accuracy and lower activation than any of the other combinations.

Architecture and training details
We use the baseline architectures of ResNet-50 and ResNet-101 [11] and place gates at the start of each residual layer. We adopt the AIG [44] gate architecture. During training we explore different combinations of data-dependent and data-independent gates. We used our per-batch loss, as well as AIG's per-gate loss, with target rates $t \in \{0.4, 0.5, 0.6\}$. We kept the same training schedule as AIG, and followed the standard ResNet training procedure: mini-batch size of 256, momentum of 0.9, and weight decay of $10^{-4}$. We train for 100 epochs from a pretrained model of the appropriate architecture, with a step-wise learning rate starting at 0.1 and decayed by a factor of 10 every 30 epochs. We use standard training data augmentation, and rescale the images to 256×256 followed by a 224×224 center crop. We observe that configurations with low gate activations cause the batch norm estimates of mean and variance to be slightly unstable. Therefore, before the final evaluation, we run training with a learning rate of zero and a large batch size for 200 batches in order to improve the stability and performance of the BatchNorm layers. This general technique was also utilized by [44].
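The BatchNorm stabilization step above can be sketched as follows: running forward passes in train mode with no optimizer steps is equivalent to training with a learning rate of zero, so only the running statistics update. The helper name, loader, and device argument are placeholders.

```python
import torch

@torch.no_grad()
def recalibrate_batchnorm(model, loader, num_batches=200, device="cuda"):
    """Refresh BatchNorm running mean/variance after gate rates shift."""
    model.train()                      # train mode: BN updates running stats
    for i, (images, _) in enumerate(loader):
        if i >= num_batches:
            break
        model(images.to(device))      # forward only; no optimizer step
    model.eval()
```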
Experimental results
Our results are shown in figures 2 and 4. The most interesting experimental results are obtained with data-dependent gates and our per-batch loss function $L_B$, along with thresholding at inference time. This combination gives a 0.41 improvement in top-1 error over ResNet-101 while using 30% less computation. It also gives the same improvement in top-1 error over AIG. On ResNet-50, this technique saves significant computation compared to AIG or the baseline ResNet, albeit with a small loss of accuracy. On the ResNet-50 architecture, we also investigated data-independent gates with the per-batch loss and thresholding at inference time. This produces an improvement in accuracy over our data-dependent architecture, as well as over AIG and vanilla ResNet-50. It saves significant computation (33% fewer gFLOPs) over ResNet-50, and is slightly faster than AIG but slower than the data-dependent architecture. Figure 4 also shows the impact of the thresholding inference strategy, which is used for the three rightmost columns. We found that thresholding at inference time often gives the best performance, leading to roughly 0.1--0.2 percentage points of improvement in top-1 accuracy. We report the results on CIFAR10 with ResNet-101 in table 3. We note that, for a target rate of 0.5, dependent per-batch has the best performance in both accuracy and average activation. Figures for the remaining target rates can be found in the supplemental materials.

Empirical investigations
We performed a number of experiments to try to better understand the performance of our architecture. In particular, examining the learned bypass probabilities provides some interesting insights into how the network behaves.

Pruning
With the per-batch loss, we often observe mode collapse, where some layers are nearly always on and some nearly always off. In the case of data-dependent bypass, we can measure the observed activation of a gate during training. For example, on a per-batch run of ResNet-50 (16 gates) on ImageNet, nearly all of the 16 gates mode collapse, as shown in figure 5: four gates collapsed to a mode of zero or one exactly, and more than half were at their mode more than 99.9% of the time. Interestingly, we observe different activation behavior on different datasets. ImageNet leads to frequent and aggressive mode collapse: all networks exhibited some degree of it. CIFAR10 can induce mode collapse but does so much less frequently, in fewer than 40% of our runs. Mode collapse can effectively perform end-to-end network pruning. At inference time, layers with near-zero activation can be permanently skipped and even removed from the network entirely, decreasing the number of parameters in the network. In the data-independent per-batch case, the threshold inference technique will permanently skip all layers with probability lower than the threshold value $\tau$, essentially pruning them from the network. Thus, we propose this combination as a pruning technique and report an experimental comparison with other modern pruning techniques, shown in figure 6. (We note a discrepancy between the GFLOPs reported for the baseline ResNet-50 by [44] and [16]; we calculate our numbers the same way as [44], and to compare fairly to [16] we take the most conservative approach and add back the discrepancy.)

Figure 5: Demonstration of mode collapse on (left) data-dependent per-batch ResNet-50 on ImageNet with a target rate of .5, and (right) data-independent per-batch with a target rate of .4. Nearly all of the 16 gates collapse. Note that full mode collapse is discouraged by the quadratic loss whenever the target rate $t$ is not equal to an integer over the number of gates $g$; even if the layers try to mode collapse, the network will either be penalized by $gt \bmod 1$ or learn activations that utilize the extra amount of target rate.
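A minimal sketch of the threshold-pruning combination proposed above, assuming data-independent gates whose learned probabilities are available alongside the gated blocks (both names are illustrative): blocks below the threshold are removed outright, and the survivors run unconditionally.

```python
import torch.nn as nn

def prune_collapsed_layers(gated_blocks, gate_probs, tau=0.5):
    """Keep only blocks whose learned execution probability is at least
    tau; the surviving blocks then execute with z_l = 1."""
    kept = [block for block, p in zip(gated_blocks, gate_probs) if p >= tau]
    return nn.Sequential(*kept)
```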
Understanding networks
The learned activation rates of the various gates can be used to explore the relative importance of the gated layers. If the average activation for a layer, in either the dependent or independent case, is low, this suggests the network has learned that the layer is not very important. Counter-intuitively, our experiments show that early layers are not particularly important in both ResNet-50 and -101. As seen in figures 7 and 8, for low-level features the network keeps only one layer out of the three available. This suggests that fewer low-level features are needed for classification than generally thought. By contrast, on the ResNet-101 architecture, AIG constrains the three coarsest layers to have a target rate of 1, which presumes that these layers are essential for the rest of the network and must be on.

We can also experimentally investigate the extent to which layers specialize. AIG [44] uses its per-gate loss to encourage specialization, hoping to reduce overall inference time by letting the network restrict certain layers to specific classes. Although we find that per-batch generally outperforms per-gate in terms of overall activation, we note that in layers which have not mode collapsed, we do observe this kind of specialization even with a per-batch loss. An interesting example of specialization is shown in figure 8. The figure shows activation rates for dependent per-batch ResNet-101 with a target rate of 0.5 using thresholding at inference time. The network has mostly mode collapsed: most layers' activations are either 1 or 0. However, the layers that did not mode collapse show an interesting specialization, similar to what AIG reported.

Extensions
There are a number of natural extensions to our work that we have started to explore. We have focused on the use of probabilistic bypass gates to provide an early exit when the network is sufficiently certain of the answer. We are motivated by MSDNet [14], which investigated early exit for both ResNet [11] and DenseNet [17]. We tested the use of probabilistic bypass gates for early exit on both ResNet and DenseNet. Consistent with [14], we found that ResNet tended to degrade with intermediate classifiers while DenseNet did not. An immediate challenge is that in DenseNet, unlike ResNet, there is no natural interpretation of skipping a layer. Instead, we simply use the gate as a masking term: when the layer computation is skipped, the layer's output is set to zero and then, as per the architecture's design, passed on to later layers. For early exit in DenseNet, we follow [43] and place gates and intermediate classifiers at the end of each dense block. At the gate, the network makes a discrete decision as to whether the instance can be successfully classified at that stage. If the gate returns true, the instance is run through the classifier and the answer is returned; if the gate returns false, the instance continues through the network. The advantage of using GS here is that the early exit can be trained in an end-to-end fashion, unlike [43], which uses reinforcement learning. In our experiment, we implemented both early exit and probabilistic bypass on a per-layer basis. We set a per-gate target of .9 for the layers and a per-gate target of .3 for both early exit points, using a piece-wise function that is quadratic before the target rate and constant after it. This function matches the intuition that we should not penalize the network if it can increase the number of early exits without affecting accuracy. We observe that these early exit gates can make good decisions regarding which instances to classify early; more specifically, the first classifier has a much higher accuracy on the instances chosen by the gate than on the entire test set. The network had an overall error of 5.61% while utilizing on average only 68.4% of the layers; our implementation of the original DenseNet architecture achieves an error of 4.76% ([17] reports an error of 4.51%). The results for each block classifier are shown in Figure 9. More than a third of the examples exited early, while overall error remained low. This demonstrates the potential of early exit with probabilistic bypass for DenseNet.
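The early-exit control flow described above can be sketched as follows, written for a single instance to keep the logic simple; `blocks`, `exit_gates`, `exit_classifiers`, and `final_classifier` are assumed module collections, not names from the paper.

```python
def early_exit_forward(blocks, exit_gates, exit_classifiers,
                       final_classifier, x):
    """After each dense block, a discrete gate decides whether this
    instance can be classified now; otherwise it continues onward."""
    for block, gate, classifier in zip(blocks, exit_gates, exit_classifiers):
        x = block(x)
        if gate(x).item() > 0.5:       # gate emits a {0, 1} decision
            return classifier(x)       # exit early with this prediction
    return final_classifier(x)         # no gate fired: use the final head
```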
Conclusion and future work
One intriguing direction to explore is to remove the notion of a target activation rate completely, since it is not obvious what a good target would be for a particular use case. In general a user would prefer an accurate network with fast inference. The exact tradeoff between speed and accuracy will in general vary between applications, but there is no natural way for a user to express such a preference in terms of a target activation rate. It might be possible to automatically choose a target rate that optimizes a particular combination of inference speed and accuracy. Another promising avenue is the idea of annealing the target rate down from 1. This makes the adjustment to the target loss more gradual and may encourage more possible configurations. Intuitively, this could give the network a greater chance to 'change its mind' regarding a layer and alter the layer's representation instead of always skipping it. We have demonstrated that probabilistic bypass is a powerful tool for optimizing and understanding neural networks. The per-batch loss function that we have proposed, together with thresholding at inference time, has produced strong experimental results both in terms of speed and accuracy.

Figure 8: Specialization on ResNet-101 with data-dependent per-batch at a target rate of 0.5. The left heatmap uses the stochastic inference strategy and the right heatmap uses the threshold inference strategy. Each vertical stripe is one layer; each row is an ImageNet class. While most layers have mode collapsed, the ones that have not show specializations similar to those seen in [44]. For example, layer 24 runs mostly on fish and lizards, while layer 28 runs specifically on cats and dogs. These layers are highlighted in green.

Appendix: Additional extensions and experimental results

S1 Ongoing Work
Currently we are pursuing several different avenues of extending this project.

S1.1 Taskonomy
We are working to apply this work to Taskonomy [47], which tries to identify the most important features for a given task. One of the problems that paper faces is the combinatorial explosion in the number of higher-order transfers; specifically, to find the best $k$ features for a given task, they need to exhaustively try $\binom{|S|}{k}$ combinations. In the paper, they rely on a beam search using the performance of a single feature by itself as a proxy for how well the feature will do within a subset. However, this seems highly suboptimal, since it is plausible that some feature will perform poorly on its own but perform well when matched with a complementary feature. Instead, we propose to use all features as input and place probabilistic gates on the input. If the behavior of our data-independent gates remains the same (namely, we observe mode collapse), then we can use our per-batch training schedule to find the best subset of features. More specifically, we would restrict the number of features used through the target rate. For example, Taskonomy uses 26 features, so we expect that a target rate of $k/26$ will give the $k$ best features for the task.

S1.2 Multi-task Learning
More generally, we plan to observe how probabilistic gates affect the common architecture for multi-task learning. We plan to apply data-dependent gates to the different feature representations and allow the network to either use or ignore each representation depending on the input value.
The main motivation for this line of research is that for some inputs a given feature representation may not be useful, and in fact using it may lead to worse results. Therefore the network should be allowed to ignore it depending on the input.

S1.3 MobileNet
We are actively exploring applying these techniques to MobileNet [35] and have some initial results. Applying the technique as-is gives results that work roughly as well as changing the expansion factor; more specifically, our results fall approximately on the line for MobileNetV2 224×224 in Figure 5. We are now working to improve on the line given by the expansion factor. Specifically, we are exploring two directions: (1) increasing the granularity with which we apply the gates, and (2) creating an over-capacitated version of MobileNet and then using our techniques to prune it to the correct size.

S2 Additional techniques for training

S2.1 Annealing
Additionally, in the training stage we propose annealing the target rate. In particular, we use a step-wise annealing schedule which decreases the target rate by an amount $a$ after every $k$ epochs. Typical values are $a = .05$ and $k = 5$. The intuition behind annealing is that, with the per-batch activation loss, it prevents the network from greedily killing off the layers that start out worst. Instead, layers which perform worse in the beginning have a chance to change their representation. In practice, we have observed that layer activations are not always monotonic over time: some layers will initially become less active but later recover. We observe this behavior more with an annealing schedule than with a fixed target rate.

S2.2 Variable target rate
As far as we are aware, previous work has used the $L_2$ term exclusively. This has the effect of forcing activations toward a specific target rate and letting them vary only if doing so leads to improvements in accuracy. However, this also prevents a situation where the network decreases activation while retaining the same accuracy, a scenario which is clearly desirable. As a result, we propose a piece-wise activation loss, constant and then quadratic, so that there is no penalty for decreasing activation. Let $\bar{z} = \frac{1}{|G||B|} \sum_{g \in G} \sum_{i \in B} z_{g,i}$ be the mean activation over a batch and let $t$ be the target activation rate. For the per-batch setup, this loss is

$$L_{CQ,B} = \begin{cases} 0 & \text{for } \bar{z} \le t, \\ \left( t - \bar{z} \right)^2 & \text{for } \bar{z} > t. \end{cases}$$

Additionally, there are many cases where a target activation is not clear and the user simply wants an accurate network with reduced training time. For this setting we propose variable target rates, which treat the target rate as a moving average of the network's utilization. For each epoch, the target rate starts at a specific hyperparameter (to prevent collapse to 1) and is then allowed to change according to the batch's activation rate. The two simplest possibilities for the update step are (1) a moving average and (2) an exponential moving average.

S2.3 Granularity
For more granularity, we consider gating blocks of filters separately. In this case, we assume that $F$ has $N$ filters, i.e., $F(x_{l-1})$ has dimension $(W \times H \times N)$. Then we write

$$x_l = x_{l-1} + \begin{bmatrix} z_{l,1} F_{l,1}(x_{l-1}) \\ z_{l,2} F_{l,2}(x_{l-1}) \\ \vdots \\ z_{l,n} F_{l,n}(x_{l-1}) \end{bmatrix}$$

where each $F_{l,i}$ has dimension $(W \times H \times (N/n))$. We note that it is not essential that the layers $F$ be residual; we also consider dropping the added $x_{l-1}$ on the right-hand side of these equations.
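Minimal sketches of the annealing schedule (S2.1) and the constant-then-quadratic loss (S2.2) follow; `t_min` is an assumed floor for the annealed rate, not a value from the text.

```python
import torch

def annealed_target_rate(epoch, t_start=1.0, a=0.05, k=5, t_min=0.0):
    """Step-wise annealing: decrease the target rate by a every k epochs.
    (t_min is an assumed safeguard.)"""
    return max(t_start - a * (epoch // k), t_min)

def one_sided_per_batch_loss(z: torch.Tensor, t: float) -> torch.Tensor:
    """L_CQ,B: zero penalty while the mean activation is at or below t,
    quadratic penalty above it (z holds all gate decisions in a batch)."""
    return torch.clamp(z.mean() - t, min=0.0) ** 2
```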
S2.4 Additional Early Exit Details
For our early exit classifiers, we use the same classifiers as [14]. For the gate structure, we use a stronger version of the gate described by [44]. The gates are comprised of the following: a 3×3 convolutional layer with stride 1 and padding 1, which takes the current state of the model and outputs 128 channels; a BatchNorm; another 3×3 convolutional layer with stride 1 and padding 1, which outputs 128 channels; a BatchNorm; a 4×4 average pool; a linear layer; and finally a Gumbel-Softmax.

S3 Observations

Observation S3.1 A layer with activation $p$ can only affect the final accuracy by $p$.

Observation S3.2 Given a set of layers with activations $\mathbf{p}$, we can apply the inclusion-exclusion principle to get an upper bound on how much these layers can affect the final accuracy of the network.

So, for example, if Layer 1 and Layer 2 both run with probability $p$ but always co-occur (activate at the same time), then the set of Layer 1 and Layer 2 can only affect the final accuracy by $p$. We use these observations to motivate an explanation for mode collapse in the data-independent per-batch case. Consider a network with only two layers and the restriction that, in expectation, only one layer should be on. Let $p$ be the probability that layer 1 is on. Intuitively, if $p \notin \{0, 1\}$, then we are in a high-entropy state where the network must deal with a large amount of uncertainty regarding which layers will be active. Furthermore, some of the work of training each individual layer will be wasted, since at inference time that layer is skipped with non-zero probability. More precisely:

Observation S3.3 Consider a network with two layers with data-independent probabilities $p_1$ and $p_2$ of being on, and give the network a hard capacity of 1 (i.e., one layer can always be on, or each layer can be on half the time). Let $a_1$ be the expected accuracy of a one-layer network and $a_2$ be the expected accuracy of a two-layer network. Then in order for $p_1 \notin \{0, 1\}$ to be preferable, we need $a_2 / a_1 \ge 2$.

Since the network is given a hard capacity of 1, we are only learning a single parameter $p_1$, since $p_2 = 1 - p_1$. Let $p = p_1$. Then $p(1 - p)$ is the probability that both layers will be on, and also the probability that both layers will be off. Note that the network has a strict upper bound on accuracy of $1 - p(1 - p)$, since with probability $p(1 - p)$ neither layer activates and no output is given. The expected accuracy of the network for any probability $p \in [0, 1]$ is $(1 - 2p + 2p^2) a_1 + p(1 - p) a_2$; note that for $p = 0, 1$ the accuracy is simply $a_1$. For a value $p \in (0, 1)$ to be better than $p \in \{0, 1\}$, we need

$$a_1 < (1 - 2p + 2p^2) a_1 + p(1 - p) a_2$$
$$\frac{-2p^2 + 2p}{p(1 - p)} < \frac{a_2}{a_1}$$
$$2 < \frac{a_2}{a_1}$$

S4 ImageNet Results
We report all the data collected on ImageNet using the different gate strategies (independent, dependent), target loss strategies (per-batch, per-gate), and inference-time strategies (threshold, always-on, stochastic, ensemble). Note that we include AIG for reference whenever it is a fair comparison. Also of note, for the ensemble technique we include data from Snapshot Ensembles: Train 1, Get M for Free [15]. Note that using the stochastic networks, we outperform their ensemble technique. Also note that their technique is orthogonal to ours, so both could be utilized to identify an even better ensemble.
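As a sketch of the ensemble inference referenced here, one can average class probabilities over several stochastic forward passes of the same network; this assumes the gates remain stochastic during evaluation, and the helper name is ours.

```python
import torch

@torch.no_grad()
def stochastic_ensemble_predict(model, x, num_runs=5):
    """Each pass samples a different sub-network; averaging the softmax
    outputs acts as a simple self-ensemble of a single trained model."""
    probs = torch.stack([model(x).softmax(dim=1) for _ in range(num_runs)])
    return probs.mean(dim=0).argmax(dim=1)
```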
S4 ImageNet Results

We report all the data collected on ImageNet using the different gate strategies (independent, dependent), target loss strategies (per-batch, per-gate), and inference-time strategies (threshold, always-on, stochastic, ensemble). Note that we try to include AIG for reference whenever it is a fair comparison. Also of note, for the ensemble technique we include data from Snapshot Ensembles: Train 1, Get M for Free [15]. Using the stochastic networks, we outperform their ensemble technique; moreover, their technique is orthogonal to ours, so both could be combined to obtain an even better ensemble.

In general, we observe that, unsurprisingly, the ensemble has the highest performance in terms of error; however, it requires multiple forward passes through the network, so the performance gain is somewhat offset by the inference time required. We also observe that threshold generally outperforms stochastic. This makes sense if one considers stochastic inference as drawing a sample from an inference distribution; in this interpretation, thresholding at 0.5 essentially acts as an argmax. In addition to the improvement in performance, for the per-batch cases, threshold also tends to increase the number of activations. For all ImageNet results, we used the pretrained models provided by TorchVision.

S5.1 CIFAR10 Performance

We report all the data collected on CIFAR10 using the different gate strategies (independent, dependent) and target loss strategies (per-batch, per-gate). We report only the numbers for the stochastic inference-time technique. We used CIFAR10 as a faster way to explore the space of parameters and combinations and as such have a denser sweep of the combinations and parameters. Note that for CIFAR10 we did not use a pretrained model; the entire model is trained from scratch. In general, we found that for a wide set of parameters per-batch outperforms per-gate; this includes independent per-batch outperforming dependent per-gate. The only exceptions are very high and very low target rates. However, we note that at very high target rates the accuracy of per-batch can be recovered through annealing. We attribute this to the fact that for CIFAR10 we train from scratch. Since the model is completely blank for the first several epochs, the per-batch loss can lower activations for any layer while still improving accuracy. In other words, at the beginning the model is so inaccurate that training any subset of the model yields a gain in accuracy; so when training from scratch, the per-batch loss chooses the layers whose activations it decreases greedily and sub-optimally. One surprising result is that independent per-gate works at all for a wide range of target rates. This suggests that the redundancy effect described in [18] is so strong that the gates can be kept at inference time. It also suggests that, at least for CIFAR10, most of the gains described in [44] came from regularization and not from specialization. We also report some variable target rates. These tend to outperform the quadratic loss with a constant target rate. We believe this is because the variable target rate allows the optimizer to take the easiest and farthest path down the manifold. We note that some of the variable target rates that worked on CIFAR10 did not work on ImageNet; namely, variable target rates which updated the target to the previous mean quickly (within 5 epochs) led to mode collapse to 1 for all gates. We attribute this to the much larger amount of training data for ImageNet and the increased complexity of the task. However, both annealing and variable target rates merit further experimentation and research to truly understand how they perform on different datasets and with different training setups (from scratch vs. from pretrained).
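The inference-time strategies compared above reduce to a few lines; this sketch is our illustration, with gate_probs standing in for the learned per-layer gate probabilities.

```python
import torch

def apply_gates(gate_probs: torch.Tensor, mode: str) -> torch.Tensor:
    """Turn per-layer gate probabilities into execution decisions (one forward pass)."""
    if mode == "always_on":
        return torch.ones_like(gate_probs)        # execute every layer
    if mode == "threshold":
        return (gate_probs > 0.5).float()         # argmax of the on/off distribution
    if mode == "stochastic":
        return torch.bernoulli(gate_probs)        # sample one subnetwork
    raise ValueError(f"unknown mode: {mode}")

# An ensemble averages the logits of several stochastically sampled
# subnetworks, i.e., several forward passes with mode="stochastic".
```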
6,344
1812.04109
2953257876
We propose Top-N-Rank, a novel family of list-wise Learning-to-Rank models for reliably recommending the N top-ranked items. The proposed models optimize a variant of the widely used discounted cumulative gain (DCG) objective function which differs from DCG in two important aspects: (i) it limits the evaluation of DCG only to the top N items in the ranked lists, thereby eliminating the impact of low-ranked items on the learned ranking function; and (ii) it incorporates weights that allow the model to leverage multiple types of implicit feedback with differing levels of reliability or trustworthiness. Because the resulting objective function is non-smooth and hence challenging to optimize, we consider two smooth approximations of the objective function, using the traditional sigmoid function and the rectified linear unit (ReLU). We propose a family of learning-to-rank algorithms (Top-N-Rank) that work with any smooth objective function. Then, a more efficient variant, Top-N-Rank.ReLU, is introduced, which effectively exploits the properties of the ReLU function to reduce the computational complexity of Top-N-Rank from quadratic to linear in the average number of items rated by users. The results of our experiments using two widely used benchmarks, namely the MovieLens data set and the Amazon video games data set, demonstrate that: (i) the 'top-N truncation' of the objective function substantially improves the ranking quality of the top N recommendations; (ii) using the ReLU for smoothing the objective function yields significant improvements in both ranking quality and runtime as compared to using the sigmoid; and (iii) Top-N-Rank.ReLU substantially outperforms the well-performing list-wise ranking methods in terms of ranking quality.
Existing LTR approaches suffer from several limitations. Although in practical applications only the top (say N) items in the ranked list are of interest, and the lower-ranked ratings in the list are less reliable, most existing LTR methods are optimized on the ranks of the entire lists, which can reduce the ranking quality of the top-ranked items. Furthermore, the computational complexity of straightforward approaches to optimizing ranking measures (e.g., DCG @cite_5 , MRR @cite_20 , AUC @cite_11 , or MAP @cite_6 ) scales quadratically with @math (the average number of observed items across all users), which renders such methods impractical in large-scale real-world settings.
{ "abstract": [ "", "In this paper, we tackle the problem of top-N context-aware recommendation for implicit feedback scenarios. We frame this challenge as a ranking problem in collaborative filtering (CF). Much of the past work on CF has not focused on evaluation metrics that lead to good top-N recommendation lists in designing recommendation models. In addition, previous work on context-aware recommendation has mainly focused on explicit feedback data, i.e., ratings. We propose TFMAP, a model that directly maximizes Mean Average Precision with the aim of creating an optimally ranked list of items for individual users under a given context. TFMAP uses tensor factorization to model implicit feedback data (e.g., purchases, clicks) with contextual information. The optimization of MAP in a large data collection is computationally too complex to be tractable in practice. To address this computational bottleneck, we present a fast learning algorithm that exploits several intrinsic properties of average precision to improve the learning efficiency of TFMAP, and to ensure its scalability. We experimentally verify the effectiveness of the proposed fast learning algorithm, and demonstrate that TFMAP significantly outperforms state-of-the-art recommendation approaches.", "In this paper we tackle the problem of recommendation in the scenarios with binary relevance data, when only a few (k) items are recommended to individual users. Past work on Collaborative Filtering (CF) has either not addressed the ranking problem for binary relevance datasets, or not specifically focused on improving top-k recommendations. To solve the problem we propose a new CF approach, Collaborative Less-is-More Filtering (CLiMF). In CLiMF the model parameters are learned by directly maximizing the Mean Reciprocal Rank (MRR), which is a well-known information retrieval metric for measuring the performance of top-k recommendations. We achieve linear computational complexity by introducing a lower bound of the smoothed reciprocal rank metric. Experiments on two social network datasets demonstrate the effectiveness and the scalability of CLiMF, and show that CLiMF significantly outperforms a naive baseline and two state-of-the-art CF methods.", "The ranking quality at the top of the list is crucial in many real-world applications of recommender systems. In this paper, we present a novel framework that allows for pointwise as well as listwise training with respect to various ranking metrics. This is based on a training objective function where we assume that, for given a user, the recommender system predicts scores for all items that follow approximately a Gaussian distribution. We motivate this assumption from the properties of implicit feedback data. As a model, we use matrix factorization and extend it by non-linear activation functions, as customary in the literature of artificial neural networks. In particular, we use non-linear activation functions derived from our Gaussian assumption. Our preliminary experimental results show that this approach is competitive with state-of-the-art methods with respect to optimizing the Area under the ROC curve, while it is particularly effective in optimizing the head of the ranked list." ], "cite_N": [ "@cite_5", "@cite_6", "@cite_20", "@cite_11" ], "mid": [ "", "1999956270", "2006822005", "1974318090" ] }
Top-N-Rank: A Scalable List-wise Ranking Method for Recommender Systems
Collaborative filtering (CF) is one of the most widely used methods in recommender systems. CF systems recommend items to users who share similar traits or tastes [1]. Learning-to-Rank (LTR) methods, which directly learn to rank items based on a user's ratings, rankings, or implicit feedback over a set of items, are widely used to learn rankings for top-N recommendation scenarios [2], [3].

B. Overview and Contributions

To address the limitations of existing LTR systems, we propose Top-N-Rank, a novel latent-factor-based list-wise ranking model for the top-N recommendation problem which directly optimizes a novel weighted, "top-heavy" truncated variant of the DCG ranking measure, namely wDCG@N. Since in many situations users only consider the top-ranked items in the list, the higher positions should have more impact on the ranking score than the lower ones. Our proposed measure, wDCG@N, differs from conventional DCG in two important aspects: (i) it considers only the top N items in the ranked lists, thereby eliminating the impact of low-ranked items; and (ii) it incorporates weights that allow the model to learn from multiple kinds of implicit feedback. Because wDCG@N is non-smooth, we introduce the rectified linear unit (ReLU) as a smoothing function, which is better suited to top-N ranking problems than the traditional sigmoid function. ReLU not only eliminates the contribution of the low-ranked items to our loss function, but also allows us to obtain a significantly faster variant of the wDCG@N-based LTR approach (Top-N-Rank.ReLU), yielding a substantial reduction in computational complexity from $O(kn'\bar{m}^2)$ to $O(kn'\bar{m})$, where $k$ denotes the dimension of the latent factors, $n'$ denotes the batch size of users for the stochastic gradient descent algorithm, and $\bar{m}$ is the average number of (observed) items; this makes Top-N-Rank.ReLU scalable to large-scale real-world settings. The main contributions of this paper can be summarized as follows: (i) wDCG@N, a weighted, top-N truncated variant of DCG that can leverage multiple types of implicit feedback; (ii) Top-N-Rank, a family of list-wise LTR algorithms that work with any smooth approximation of wDCG@N; and (iii) Top-N-Rank.ReLU, an efficient variant whose complexity is linear in the average number of rated items. We compared the performance of Top-N-Rank and Top-N-Rank.ReLU with several state-of-the-art list-wise LTR methods [3], [10], [12], [13], [16] using the MovieLens (20M) data set [17] and the Amazon video games data set [18]. All experiments were performed on an Apache Spark cluster [19], with the raw data stored on the Hadoop Distributed File System (HDFS).

II. PRELIMINARIES

Let $U = \{u_1, u_2, \ldots, u_n\}$ be the set of $n$ users, $I = \{i_1, i_2, \ldots, i_m\}$ the set of $m$ items, and $P = \{p_1, p_2, \ldots, p_t\}$ the set of $t$ types of implicit feedback. The interactions of users with items and the associated implicit feedback are represented by $X = U \times I \times P$, where the entry $(u, i, p_{ui}) \in X$ denotes the interaction of user $u$ with item $i$ and the associated implicit feedback $p_{ui}$. We further denote by $I^+_u$ the subset of items actually observed by or presented to $u$. For each $i \in I^+_u$, we denote the rating of $i$ by $f_{ui}$ and the position of $i$ in $u$'s rank ordering of the items by $R^+_{ui}$. We reserve the indexing letters $u$ and $i$ for an arbitrary user in $U$ and an arbitrary item in $I$, respectively.

A. Latent Factor Model

Latent factor models (LFMs) are state of the art in terms of both the quality of recommendations and scalability [20]. LFMs represent both users and items using low-dimensional vectors of latent factors. Let $\theta$ be the set of latent factors, where $\theta^{user}$ is an $n \times k$ matrix whose $u$-th row $\theta^{user}_u$ holds the latent factors of $u$, and $\theta^{item}$ is an $m \times k$ matrix whose $i$-th row $\theta^{item}_i$ holds the latent factors of $i$. The rank $k$ of the latent factor matrices is much smaller than $n$ or $m$. The rating of item $i$ by user $u$ is predicted by the dot product of $\theta^{user}_u$ and $\theta^{item}_i$.
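As a minimal illustration of the latent factor model just described, the sketch below predicts a rating as the dot product of two k-dimensional factor rows; all sizes and names are our own example values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 1000, 500, 10                # rank k is much smaller than n or m
theta_user = rng.uniform(0.0, 1.0, (n_users, k))   # theta^user: n x k
theta_item = rng.uniform(0.0, 1.0, (n_items, k))   # theta^item: m x k

def predict(u: int, i: int) -> float:
    """Predicted rating f_ui = <theta^user_u, theta^item_i>."""
    return float(theta_user[u] @ theta_item[i])

# all predicted scores for user u = 0 at once:
scores_u = theta_user[0] @ theta_item.T
```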
B. Discounted Cumulative Gain

The Discounted Cumulative Gain (DCG) [21] is a widely used measure of the quality of recommendations, which measures the degree to which higher-ranked items are placed ahead of lower-ranked ones, with the contribution of lower-ranked items discounted by a logarithmic factor. Let $y_{ui}$ be a binary indicator of whether item $i$ is relevant to user $u$; then the DCG of $u$ is computed by:

$$DCG_u = \sum_{i \in I} \frac{2^{y_{ui}} - 1}{\log(R_{ui} + 2)} \qquad (1)$$

Note that the ranked position (starting from zero) of item $i$ can be computed by:

$$R_{ui} = \sum_{j \in I} \mathbb{1}(f_{ui} < f_{uj}) \qquad (2)$$

where $\mathbb{1}(x)$ is an indicator function with $\mathbb{1}(x) = 1$ if $x$ is true and $\mathbb{1}(x) = 0$ otherwise. Given our emphasis on getting the top-rated items ranked correctly in the list of recommended items, DCG appears to be a good criterion to optimize. However, as evident from (1), DCG suffers from two important limitations: (i) although DCG de-emphasizes the contribution of the lower-ranked items, it does not eliminate the collective effect of a large number of lower-ranked items, even though the rankings of such items are less reliable; if the goal is to optimize the ranking of the N top-rated items, it makes sense to tailor the objective function to focus explicitly on those N items and ignore the rest. (ii) Because DCG assigns equal weights to all implicit user feedback, it fails to account for differences in their trustworthiness.

III. TOP-N-RANK

We proceed to introduce wDCG@N, a variant of DCG that overcomes these drawbacks. We then describe two smoothing functions, the sigmoid and the rectified linear unit (ReLU), that convert wDCG@N into a smoothed function amenable to standard optimization techniques. Finally, we show how to use the ReLU approximation of wDCG@N to obtain a scalable LTR algorithm.

A. Top-N-Rank Training Objective

To address the limitations of DCG, we introduce wDCG@N, defined as follows:

$$wDCG_u@N = \sum_{i \in I} \mathbb{1}(R_{ui} < N) \cdot \frac{w_{p_{ui}}\,(2^{y_{ui}} - 1)}{\log(R_{ui} + 2)} \qquad (3)$$

The first term in (3), $\mathbb{1}(R_{ui} < N)$, is an indicator function that selects only the N top-rated items and ignores the rest. The coefficient $w_{p_{ui}}$ in the second term denotes the weight of the implicit feedback $p_{ui}$, which can model the reliability or importance of the feedback. The choice of $w_{p_{ui}}$ is application- and data-dependent. For example, one can set $w_{p_{ui}}$ to the number of items rated by (or presented to) the user [22], or to the conversion rate (the proportion of buyers among the users who produced the implicit feedback). The resulting ranking objective can be formulated as:

$$L(\theta) = \max_{\theta} \sum_{u \in U} wDCG_u@N - \lambda\,\|\theta\|_2^2 \qquad (4)$$

where $\|\cdot\|_2^2$ denotes the squared $L_2$-norm and $\lambda$ is the regularization coefficient that controls over-fitting.
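A small sketch of equations (1)-(3): ranks via pairwise comparison, then the weighted, truncated gain. The function and variable names are ours.

```python
import numpy as np

def wdcg_at_n(f_u, y_u, w_u, N):
    """wDCG@N for one user (eq. 3).

    f_u: predicted scores, y_u: binary relevance, w_u: feedback weights.
    """
    f_u, y_u, w_u = map(np.asarray, (f_u, y_u, w_u))
    # R_ui = number of items scored higher than i (eq. 2); ranks start at zero
    R = (f_u[None, :] > f_u[:, None]).sum(axis=1)
    top_n = R < N                                  # keep only the top-N positions
    gains = w_u * (2.0 ** y_u - 1.0) / np.log(R + 2)
    return float(gains[top_n].sum())

print(wdcg_at_n([0.9, 0.1, 0.5], [1, 0, 1], [1.0, 1.0, 1.0], N=2))
```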
B. Smooth Approximations of the Top-N-Rank Training Objective

A non-smooth training objective such as the one in (4) is challenging to optimize. Hence, we replace the non-smooth training objective in (4) by a smooth approximation. Specifically, we approximate the indicator function in (2) by a smooth function $h$ such that $\mathbb{1}(f_{ui} < f_{uj}) \approx h(\Delta_{uji})$ with $\Delta_{uji} = f_{uj} - f_{ui}$. In what follows, we consider two different smooth functions that accomplish this goal.

Sigmoid function. The sigmoid function is widely used in existing list-wise LTR-based recommendation models (e.g., [3], [9]) for its appealing performance in practice. Instead of adopting the sigmoid function directly, we introduce a scaling constant $C$ ($C \ge 1$) to provide a more accurate estimation, such that the indicator function is approximated by $g(C\Delta_{uji})$, where $g(x) = 1/(1 + \exp(-x))$.

Rectifier function. The rectified linear unit (ReLU) [23], $\mathrm{relu}(x) = \max\{0, x\}$, is a nonlinear function with several properties that make it attractive in our setting. First, its one-sided nature eliminates the contribution of the lower-rated items to the objective function. Second, ReLU is computationally simpler: only comparison and addition operations are required. Third, the form of ReLU permits an efficient algorithm (see Algorithm 2) with computational complexity that is linear in the average number of (observed) items across all users (see Section III-C2). When ReLU is used, we have $\mathbb{1}(f_{ui} < f_{uj}) \approx \mathrm{relu}(\Delta_{uji})$.

1) Parameterization of the Smooth Functions: Recall that the "top-N term", $\mathbb{1}(R_{ui} < N)$, was introduced to indicate whether item $i$ ranks among the top N items. However, a poor choice of the hyper-parameters of the smooth function could lead to gross under-estimation or over-estimation of $R_{ui}$ and thus negate the utility of the top-N term. Here we examine how to choose the parameters of the sigmoid and ReLU functions so that they behave as intended. In the case of the sigmoid function, the choice of $C$ matters, with proper values of $C$ (e.g., $C = 7$) yielding the desired behavior. In the case of ReLU, we can ensure the desired behavior by controlling the initial distribution of the latent factors $\theta$. Suppose that $\theta \sim U(0, b)$, where $b$ is the width of the uniform distribution. Then, by the Central Limit Theorem, for arbitrary $u, i$, the score $f_{ui}$ approximately follows a Gaussian distribution, i.e., $f_{ui} \sim N(\mu, \sigma^2)$ with

$$\mu = E\Big[\sum_t \theta^{user}_{ut}\,\theta^{item}_{it}\Big] = \frac{kb^2}{4} \quad\text{and}\quad \sigma^2 = E\Big[\Big(\sum_t \theta^{user}_{ut}\,\theta^{item}_{it}\Big)^2\Big] - \mu^2 = \frac{7kb^4}{144}.$$

In order to ensure that $|f_{ui} - \mu| \le 1$, and making use of the fact that $P(|f_{ui} - \mu| \le 3\sigma) \approx 1$, we set $3\sigma = 1$ and hence $b = 2/\sqrt[4]{7k}$, which provides the basic setting for all of the Top-N models using ReLU as the smoothing function.
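The two indicator approximations and the ReLU-specific initialization can be written compactly; this is our illustration of Section III-B, not the authors' code.

```python
import numpy as np

def sigmoid_indicator(delta, C=7.0):
    """1(f_ui < f_uj) ~ g(C * delta) with delta = f_uj - f_ui (scaled sigmoid)."""
    return 1.0 / (1.0 + np.exp(-C * np.asarray(delta)))

def relu_indicator(delta):
    """1(f_ui < f_uj) ~ relu(delta); exactly zero for lower-rated items."""
    return np.maximum(0.0, np.asarray(delta))

def init_latent_factors(n, m, k, rng=np.random.default_rng(0)):
    """Draw theta ~ U(0, b) with b = 2 / (7k)^(1/4), so |f_ui - mu| <= 1 (approx.)."""
    b = 2.0 / (7.0 * k) ** 0.25
    return rng.uniform(0.0, b, (n, k)), rng.uniform(0.0, b, (m, k))
```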
C. Fast LTR Algorithms

1) Fast LTR Algorithm for a Generic Smooth Function: To optimize the objective function in (4), we need to compute the predicted score of each item and then perform pairwise comparisons to determine the items' positions in the rank-ordered list. Because in most cases the number of items $m$ far exceeds the dimension of the latent factors $k$, the complexity of a single pass is $O(knm^2)$. One common practice is to exploit the sparsity of $X$ by considering only the predicted scores of the observed items, yielding a smooth objective function such as:

$$L^+(\theta) = \min_{\theta}\; -\sum_{u \in U} \sum_{i \in I^+_u} h(N - R^+_{ui}) \cdot \frac{w_{p_{ui}}\,(2^{y_{ui}} - 1)}{\log(R^+_{ui} + 2)} + \lambda\,\|\theta\|_2^2 \qquad (5)$$

The gradient of $L^+(\theta)$ w.r.t. $\theta$ is given by (6):

$$\frac{\partial L^+}{\partial \theta} = \sum_{u \in U} \sum_{i \in I^+_u} wDCG^+_u \cdot \Big(\sum_{j \in I^+_u} h'(\Delta_{uji}) \frac{\partial \Delta_{uji}}{\partial \theta}\Big) \cdot \Big(h'(N - R^+_{ui}) + \frac{h(N - R^+_{ui})}{(R^+_{ui} + 2)\log(R^+_{ui} + 2)}\Big) + 2\lambda\theta \qquad (6)$$

Algorithm 2: Top-N-Rank.ReLU
Input: user-item feedback $X \subseteq U \times I \times P$, truncation coefficient $N$, smooth function $h$, dimension of latent factors $k$, learning rate $\alpha$, regularization coefficient $\lambda$, batch size $n'$
Output: the learned latent factors $\theta$
1: initialize $\theta^{(0)}$ randomly from $U(0, 2/\sqrt[4]{7k})$ and set $t = 0$
2: while not converged do
3:   $U^{(t)}$ = draw $n'$ users randomly from $U$
4:   for $u \in U^{(t)}$ do
5:     $\pi$ = the descending order of the items indicated by the predicted scores $f^+_u$
6:     for $i = 2, \ldots, |\pi|$ do
7:       $R^+_{u\pi_i} = \sum_{j<i} (f_{u\pi_j} - f_{u\pi_i})$
8:       $\sum_{j \in I^+_u} h'(\Delta_{uj\pi_i}) \frac{\partial \Delta_{uj\pi_i}}{\partial \theta^{user}_u} = \sum_{j<i} (\theta^{item}_{\pi_j} - \theta^{item}_{\pi_i})$
9:       compute and add the current gradient contribution based on (6)
14:      $R^+_{u\pi_i} = \sum_{j<i} (f_{u\pi_j} - f_{u\pi_i})$
15:      $\sum_{j \in I^+_u} h'(\Delta_{uj\pi_i}) \frac{\partial \Delta_{uj\pi_i}}{\partial \theta^{item}_{\pi_i}} = (1 - i)\,\theta^{user}_u$
16:      update $\theta^{item\,(t+1)}_{\pi_i} = \theta^{item\,(t)}_{\pi_i} - \alpha\,\frac{\partial L^+}{\partial \theta^{item\,(t)}_{\pi_i}}$ based on (6)

For a single user, steps 5 and 12 are computed in $O(k\bar{m} + \bar{m}\log\bar{m})$. Note that $R^+_{u\pi_i}$ (steps 7 and 14) and $\sum_{j \in I^+_u} h'(\Delta_{uj\pi_i}) \frac{\partial \Delta_{uj\pi_i}}{\partial \theta^{user}_u}$ (steps 8 and 15) can be calculated in $O(1)$ and $O(k)$, respectively, through step-by-step accumulation, so the complexity of steps 6-11 and steps 13-17 is $O(k\bar{m})$. Therefore, the overall computational complexity of Top-N-Rank.ReLU for one iteration is $O(n'\bar{m}(k + \log\bar{m}))$. In practice, $\log\bar{m}$ is usually very small (less than 20) even in large-scale systems, so the cost of one iteration is effectively $O(kn'\bar{m})$.
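The O(1)/O(k) accumulations of steps 7-8 and 14-15 are easiest to see in code: after sorting, running prefix sums over scores and item factors yield R+ and the gradient terms incrementally. This is our sketch of the idea, not the paper's implementation.

```python
import numpy as np

def relu_ranks_and_grads(f_u: np.ndarray, theta_item: np.ndarray):
    """For one user: ReLU-smoothed ranks R+ and user-gradient terms in O(m log m).

    f_u: predicted scores of the user's observed items; theta_item: their factors.
    """
    order = np.argsort(-f_u)                     # pi: items in descending score order
    f_sorted = f_u[order]
    th_sorted = theta_item[order]
    ranks = np.empty_like(f_sorted)
    grads = np.zeros_like(th_sorted)
    score_prefix = 0.0
    factor_prefix = np.zeros(th_sorted.shape[1])
    for i in range(len(f_sorted)):
        # R+_{u,pi_i} = sum_{j<i} (f_{u,pi_j} - f_{u,pi_i}), updated in O(1)
        ranks[i] = score_prefix - i * f_sorted[i]
        # sum_{j<i} (theta_{pi_j} - theta_{pi_i}), updated in O(k)
        grads[i] = factor_prefix - i * th_sorted[i]
        score_prefix += f_sorted[i]
        factor_prefix += th_sorted[i]
    return order, ranks, grads
```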
IV. EXPERIMENTS AND RESULTS

We report the results of two sets of experiments. The first set compares the performance of Top-N-Rank models using either the sigmoid or the ReLU function for smoothing, with or without the "top-N truncation". Our results show that Top-N-Rank.ReLU (using the top-N truncation and the ReLU function, i.e., Algorithm 2) outperforms the other variants on both benchmark data sets. The second set compares the performance of Top-N-Rank.ReLU with several state-of-the-art list-wise LTR CF approaches. Our results show that the Top-N-Rank algorithms outperform these methods on both benchmark data sets. All of our experiments were performed on an Apache Spark cluster [19] with four compute nodes (Intel Xeon 2.1 GHz CPU with 20 GB RAM per node), with the raw data stored on the Hadoop Distributed File System (HDFS). The model parameters were tuned to optimize performance on the training data. We describe the details of the experiments and the results below.

A. Experimental Setup

1) Data Sets: We used two benchmark data sets in our experiments: (i) the Amazon video games data set [18], which contains a subset of video game product reviews (ratings, text, etc.) from Amazon; it comprises 7,077 users, 25,744 items, and more than 1 million ratings. (ii) The MovieLens (20M) data set [17], which contains 138,493 users, 27,278 items, and more than 20 million ratings. The ratings in both data sets range from 1 to 5 stars, with more stars corresponding to higher ratings. We use only the user rating data in our experiments.

2) Evaluation Procedure: We first remove users who rated fewer than 10 items. For the remaining users, we convert the ratings to implicit feedback based on the item ratings provided by each user: for each $u$, we assign $w_{p_{ui}} = 1$ when $f_{ui} \ge 4$ and $w_{p_{ui}} = -1$ otherwise [10]. We randomly select half of the ratings provided by each user for training and use the rest for evaluation. On each test run, we average the performance over all users. We repeat this process 5 times and report the performance averaged across the 5 independent experiments. We measure the performance based only on the rated items, as in [10]. Because we focus on the placement of the top-rated items in the rank-ordered list, it is natural to use the Normalized Discounted Cumulative Gain (NDCG) [24] as the performance measure. In this paper, we report the average of NDCG@1 through NDCG@N across all users. The NDCG at the top-N positions for a user $u$ is defined by:

$$NDCG_u@N = \frac{DCG_u@N}{IDCG_u@N} \qquad (7)$$

where $DCG_u@N$ is the DCG value of the top-N ranked items as described in (1), and $IDCG_u@N$ is the ideal ranking score, obtained when the ranked list is created by sorting the items in descending order of their implicit feedback values (ratings).

B. Comparison of Variants of Top-N-Rank

We compare the performance of LTR models trained with the smoothed and regularized wDCG@N objective, using either the sigmoid or the ReLU function for smoothing and with or without the "top-N truncation": (i) Top-N-Rank.ReLU: our proposed model trained to optimize wDCG@N smoothed using the ReLU function (Algorithm 2); (ii) non-Top-N.ReLU: the LTR model trained to optimize wDCG smoothed using the ReLU; (iii) Top-N-Rank.sgm: our proposed model trained to optimize wDCG@N smoothed using the sigmoid function (Algorithm 1); and (iv) non-Top-N.sgm: the LTR model trained to optimize wDCG smoothed using the sigmoid function. In these experiments, we set the number of latent factors $k = 10$ and the number of items ranked $N = 20$. For the sigmoid function, $C = 7$; for the ReLU function, $b = 2/\sqrt[4]{7k}$ (see Section III-B1). The regularization coefficient $\lambda$ is set to 0.1, and the batch size $n'$ is set to 10% of the users in the training data. All methods run until either the maximum number of iterations $maxR = 30$ is reached or the sum-of-squares distance between the parameters of two consecutive iterations falls below the threshold $\epsilon = 0.1$. The results of our experiments are summarized in Table I. They clearly show that the Top-N-Rank models with the "top-N truncation" term in the objective function consistently and statistically significantly (based on a paired Student's t-test) outperform their non-top-N counterparts. This confirms our intuition that the Top-N-Rank models focus on correctly ordering the top-rated items, and hence are resistant to the cumulative effect of the (often unreliable) lower-rated items. The results in Table I also show that Top-N-Rank.ReLU substantially outperforms Top-N-Rank.sgm. Moreover, the performance of Top-N-Rank.sgm is comparable to that of non-Top-N.ReLU. We conclude that the ReLU function, with an appropriate choice of $b$, is able to rank the top-rated items more accurately. The runtime of Top-N-Rank.ReLU is also significantly lower than that of Top-N-Rank.sgm (results not shown), confirming the appealing efficiency of Top-N-Rank.ReLU.
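A sketch of the NDCG@N evaluation measure of equation (7), used in the comparisons above; implementation details such as tie handling are our own choices.

```python
import numpy as np

def ndcg_at_n(scores, relevance, N):
    """NDCG@N for one user (eq. 7): DCG of the predicted order over the ideal DCG."""
    scores = np.asarray(scores)
    relevance = np.asarray(relevance, dtype=float)

    def dcg(order):
        top = order[:N]
        ranks = np.arange(len(top))                 # positions start at zero
        return float(((2.0 ** relevance[top] - 1.0) / np.log(ranks + 2)).sum())

    ideal = dcg(np.argsort(-relevance))             # list sorted by true feedback
    return dcg(np.argsort(-scores)) / ideal if ideal > 0 else 0.0

print(ndcg_at_n([0.9, 0.2, 0.7], [1, 0, 1], N=2))   # 1.0: both relevant items on top
```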
C. Top-N-Rank.ReLU Compared with State-of-the-Art List-wise LTR Models

We compare Top-N-Rank.ReLU with several state-of-the-art list-wise LTR CF approaches: (i) MF-ADG: an algorithm that optimizes the Averaged Discounted Gain (ADG), obtained by averaging the DCG across all users [10]. Similar to our work, MF-ADG is designed to work with implicit feedback data sets; the sampling parameter $\gamma$ is fixed at 100. (ii) CLiMF: an MF model designed to work with binarized implicit feedback data sets, which optimizes the mean reciprocal rank (MRR) [3]. Instead of directly optimizing MRR, CLiMF learns the latent factors by maximizing a smoothed lower bound of MRR. (iii) xCLiMF: an extension of CLiMF that optimizes the expected reciprocal rank (ERR) and is designed to work with graded user ratings [16]. (iv) ListRank: an MF model that optimizes the cross-entropy between the distributions of the observed and predicted ratings using the top-one probability, obtained with the softmax function [13]. (v) ListPMF-PL: a list-wise probabilistic matrix factorization method that maximizes the log posterior of the predicted rank order given the observed preference order, using the Plackett-Luce permutation probability [12].

The results of our experiments are summarized in Table II. Top-N-Rank.ReLU consistently outperforms the baseline models on both the Amazon video games and MovieLens data sets, regardless of the length of the recommended item lists. Student's t-tests further demonstrate the significance of our results (not shown). Although Top-N-Rank.ReLU maximizes wDCG on the top-20 items, the results show that the model offers better quality of recommendations across the top 1-20 items relative to the baselines. This may be explained in part by the following limitations of the individual methods: CLiMF and xCLiMF are designed to optimize the smoothed reciprocal rank (RR), which does not fully exploit the user ratings because of its emphasis on optimizing only a few of the relevant items for each user; MF-ADG maximizes an approximation of ADG on a small set of sampled data, which may limit the quality of the estimates; and ListRank and ListPMF-PL are designed for rating data but assign the same weight to all items with the same rating. Perhaps more importantly, all of the methods except Top-N-Rank.ReLU attempt to optimize the ranking over the entire set of user-rated items, as opposed to only the N top-ranked items, which makes them susceptible to noise in the ratings of low-ranked items.

V. SUMMARY AND DISCUSSION

In this paper, we proposed Top-N-Rank, a novel family of list-wise Learning-to-Rank models for reliably recommending the N top-ranked items. The proposed models optimize wDCG@N, a variant of the widely used discounted cumulative gain (DCG) objective function which differs from DCG in two important aspects: (1) it limits the evaluation of DCG to the top N items in the ranked lists, thereby eliminating the impact of low-ranked items on the learned ranking function; and (2) it incorporates weights that allow the model to learn from multiple kinds of implicit user feedback with differing levels of reliability or trustworthiness. Because wDCG@N is non-smooth, we considered two smooth approximations of wDCG@N, using the traditional sigmoid function and the rectified linear unit (ReLU). We proposed a family of learning-to-rank algorithms (Top-N-Rank) that work with any smooth objective function (e.g., smooth approximations of wDCG@N). We designed Top-N-Rank.ReLU, a more efficient version of Top-N-Rank that exploits the properties of the ReLU function to reduce the computational complexity of Top-N-Rank from quadratic to linear in the average number of items rated by users.
The results of our experiments using two widely used benchmarks, namely the Amazon video games data set and the MovieLens data set, demonstrate that: (i) the "top-N truncation" of the objective function substantially improves the ranking quality; (ii) using the ReLU for smoothing the wDCG@N objective function yields significant improvements in both ranking quality and runtime as compared to using the sigmoid function; and (iii) Top-N-Rank.ReLU substantially outperforms the state-of-the-art list-wise ranking CF methods (MF-ADG, CLiMF, xCLiMF, ListRank, and ListPMF-PL) in terms of ranking quality. Some promising directions for further research include: (i) fusing the proposed top-N truncation component and ReLU smoothing function with different list-wise LTR objectives (e.g., MAP, AUC, or MRR); (ii) investigating complex interaction structures of user-item pairs with the help of deep neural nets; and (iii) extending the proposed model to tensor factorization or factorization machines in order to take in multiple types of features.
3,784
1907.01291
2954153097
A significant amount of connection establishments on the web require a prior domain name resolution by the client. Especially on high-latency access networks, these DNS lookups cause a significant delay in the client's connection establishment with a server. To reduce the overhead of QUIC's connection establishment with a prior DNS lookup on these networks, we propose a novel QuicSocks proxy. Basically, the client delegates the domain name resolution to the QuicSocks proxy. Our results indicate that colocating our proxy with real-world ISP-provided DNS resolvers provides great performance gains. For example, 10% of our 474 sample nodes distributed across ISPs in Germany would save at least 30 ms per QUIC connection establishment. The design of our proposal aims to be readily deployable on the Internet by avoiding IP address spoofing, anticipating Network Address Translators, and using the standard DNS and QUIC protocols. In summary, our proposal fosters a faster establishment of QUIC connections for clients on high-latency access networks.
Furthermore, Miniproxy @cite_2 can be used to accelerate TCP's connection establishment. This approach places a proxy between the client and the web server, which doubles the number of required TCP handshakes. Miniproxy can provide a faster TCP connection establishment in the case of a favorable network topology and significant RTTs between client and web server. The QUIC protocol includes computationally expensive cryptographic handshakes, causing a significant delay compared to TCP's handshake @cite_20 . Therefore, this approach seems less feasible when QUIC is used.
{ "abstract": [ "Small TCP flows make up the majority of web flows. For them, the TCP three-way handshake represents a significant delay overhead. The TCP Fast Open (TFO) protocol provides zero round-trip time (0-RTT) handshakes for subsequent TCP connections to the same host. In this paper, we present real-world privacy and performance limitations of TFO. We investigated its deployment on popular websites and browsers. We found that a client revisiting a web site for the first time fails to use an abbreviated TFO handshake about 40 of the time due to web server load-balancing. Our analysis further reveals significant privacy problems in the protocol design and implementation. Network-based attackers and online trackers can exploit these shortcomings to track the online activities of users. As a countermeasure, we introduce a novel protocol called TCP Fast Open Privacy (FOP). It overcomes the performance and privacy limitations of TLS over TFO by utilizing a custom TLS extension. TCP FOP prevents tracking by network attackers and impedes third-party tracking, while still allowing for 0-RTT handshakes as in TFO. As a proof-of-concept, we have implemented the proposed protocol. Our measurements indicate that TCP FOP outperforms TLS over TFO when websites are served from multiple IP addresses.", "TCP proxies are basic building blocks for many advanced middleboxes. In this paper we present Miniproxy, a TCP proxy built on top of a specialized minimalistic cloud operating system. Miniproxy's connection handling performance is comparable to that of full-fledged GNU Linux TCP proxy implementations, but its minimalistic footprint enables new use cases. Specifically, Miniproxy requires as little as 6 MB to run and boots in tens of milliseconds, enabling massive consolidation, on-the-fly instantiation and edge cloud computing scenarios. We demonstrate the benefits of Miniproxy by implementing and evaluating a TCP acceleration use case." ], "cite_N": [ "@cite_20", "@cite_2" ], "mid": [ "2944754179", "2403321908" ] }
Accelerating QUIC's Connection Establishment on High-Latency Access Networks
Abstract-A significant amount of connection establishments on the web require a prior domain name resolution by the client. Especially on high-latency access networks, these DNS lookups cause a significant delay in the client's connection establishment with a server. To reduce the overhead of QUIC's connection establishment with a prior DNS lookup on these networks, we propose a novel QuicSocks proxy. Basically, the client delegates the domain name resolution to the QuicSocks proxy. Our results indicate that colocating our proxy with real-world ISP-provided DNS resolvers provides great performance gains. For example, 10% of our 474 sample nodes distributed across ISPs in Germany would save at least 30 ms per QUIC connection establishment. The design of our proposal aims to be readily deployable on the Internet by avoiding IP address spoofing, anticipating Network Address Translators, and using the standard DNS and QUIC protocols. In summary, our proposal fosters a faster establishment of QUIC connections for clients on high-latency access networks.

Index Terms-QUIC Transport Protocol, SOCKS Proxy, DNS, QuicSocks Proxy

I. INTRODUCTION

For U.S. households, latency is the main web performance bottleneck on broadband access networks exceeding a throughput of 16 Mbit/sec [1]. Depending on the user's location and the deployed access technology, such as cable, fiber, Digital Subscriber Line (DSL), Long-Term Evolution (LTE), or satellite, users may experience significant network latencies. High-latency links reduce the user's quality of experience during web browsing [2] and negatively impact the per-user revenue of online service providers [3]. Thus, optimizing web performance on such existing high-latency network links is an important task. In this paper, we focus on improving the time to first byte, which contributes up to 21% of the page load time for popular websites [1]. In detail, we improve the delay of QUIC's connection establishment with a prior DNS lookup on high-latency links. QUIC replaces the TLS over TCP protocol stack within the upcoming HTTP/3 version [4]. As the web is built upon the Hypertext Transfer Protocol (HTTP) and the standardization of QUIC receives widespread support, the QUIC protocol is expected to be widely deployed on the Internet in the forthcoming years. Our proposal assumes that Internet Service Providers (ISPs) aim to improve their clients' quality of experience during web browsing. This assumption is substantiated by ISPs providing recursive DNS resolvers to accelerate their clients' DNS lookups. In this work, we propose ISP-provided proxies to reduce the delay of their clients' QUIC connection establishments. Instead of conducting a DNS lookup and waiting for the response, this design allows the client to directly send its initial QUIC messages to the novel QuicSocks proxy. Upon receiving these messages, the proxy resolves the domain name and forwards the messages to the respective QUIC server. After the end-to-end encrypted connection between the client and the server is established, the connection is seamlessly migrated to the direct path between these peers. These novel QuicSocks proxies can accelerate the client's connection establishment if they perform faster DNS lookups and/or have a lower network latency to the QUIC server compared to the client. A favorable network topology would place a QuicSocks proxy colocated with the ISP-provided DNS resolver in an on-path position between the client and the server.
Our proposal can be applied to many high-latency links. Within the next years, enterprises like SpaceX, OneWeb, and Telesat plan to launch thousands of satellites for global broadband connectivity, aiming to provide Internet access to millions of people [5]. This presents a well-suited application area for our proposal because of the significant latencies of about 30 ms between the client and the ISP's ground station [6]. In summary, this paper makes the following contributions:

• We propose the novel QuicSocks design, which allows clients to send initial handshake messages without a prior resolution of the domain name. The name resolution is conducted by a QuicSocks proxy from a more favorable position in the ISP's network to accelerate the connection establishment.
• We evaluate our proposal by assuming a colocation of the ISP-provided DNS resolver with the QuicSocks proxy. Results based on our analytical model indicate accelerations of the QUIC connection establishment between 33% and 40% on a U.S. mobile LTE network. Furthermore, our measurements of real-world network topologies indicate the feasibility of significant performance gains for clients on high-latency access networks. For example, 10% of the investigated clients save at least 30 ms to complete their QUIC handshake.
• We implemented a prototype of our proposal to demonstrate its real-world feasibility. Our results indicate that the computations of the QuicSocks proxy itself are lightweight and contribute less than 1.2 ms to a QUIC connection establishment.

The remainder of this paper is structured as follows: Section II introduces the QUIC and the SOCKS protocols and describes the performance problem that we aim to solve. Section III summarizes the proposed QuicSocks design, evaluation results are presented in Section IV, related work is reviewed in Section V, and Section VI concludes the paper.

A. QUIC transport protocol

In this paper, we refer to QUIC's draft version 20 of the Internet Engineering Task Force (IETF) as the QUIC protocol [7]. The QUIC transport protocol will replace TLS over TCP in the upcoming HTTP/3 network protocol [4], which is closely tied to the World Wide Web. Thus, the deployment of HTTP/3 on the web will significantly contribute to QUIC's adoption on the Internet in the forthcoming years. Compared to TLS over TCP, the UDP-based QUIC protocol allows for faster connection establishments [8], mitigates head-of-line blocking [9], and can be extended more easily because of lower interference through middleboxes [10]. In the following, we provide details on two mechanisms of the QUIC protocol that our proposed QuicSocks approach makes use of. These are a challenge-response mechanism in QUIC's connection establishment known as stateless retry, and QUIC's connection migration, which allows transferring an established connection to a new endpoint address.

a) Stateless retry: The stateless retry mechanism can optionally be used by QUIC servers to validate the source address claimed by a client before proceeding with the cryptographic connection establishment. As shown in Figure 1, the server responds to the client's initial connection request with a retry message that contains a source address token. This token is opaque to the client and contains information about the client's source address. Subsequently, the client returns this token together with its previously sent ClientHello message. Upon receiving this message from the client, the server first validates the presented token.
A token is valid for a connection request if the client's claimed source address matches the address encoded in the token. In this case, the server assumes that the client can receive packets at the claimed address and proceeds with the cryptographic connection establishment. A stateless retry presents a performance limitation, as it adds a round-trip time to the connection establishment. However, it helps QUIC servers to protect against denial-of-service attacks. Therefore, QUIC servers are likely to use these optional stateless retries when experiencing many connection requests from source addresses with unresponsive clients.

Fig. 1. Message sequence of a stateless retry: the client sends its ClientHello, the server responds with a retry message containing a token, the client resends its ClientHello together with the token, and the peers proceed with the connection establishment.

b) Connection migration: QUIC connections are associated with connection IDs that allow the identification of a connection independently of the used source address and port number. Connection IDs allow connections to survive an endpoint's change of IP address and/or port number, which might occur because of NAT timeouts and rebinding [11] or clients changing their network connectivity. Only QUIC clients can initiate connection migrations to a different endpoint IP address and/or port number. However, the client must wait until the handshake is completed and forward-secure keys are established before initiating the connection migration. The endpoints might use multiple network paths simultaneously during the connection migration. The peers can optionally probe a new path for peer reachability before migrating a connection to it. However, when a connection is migrated to a new path, the server must ensure that the client is reachable via this path before sending large amounts of data. Furthermore, the peers need to adapt their sending rate to the new path by resetting their congestion controller and round-trip time estimator.

B. SOCKS protocol

RFC 1928 describes the current version of the SOCKS protocol [12]. Usages of SOCKS proxies include the traversal of network firewalls [12], the translation between the IPv6 and IPv4 address spaces [13], and privacy-enhancing technologies such as Tor onion routing [14]. Figure 2 provides a schematic of a connection between client and server through a SOCKS proxy. To begin with, the client establishes a TCP connection to the proxy's port 1080. This connection is used by the client and the SOCKS proxy to exchange control messages. For example, the client can use this control channel for authentication or to request a new connection to a server. The SOCKS protocol supports the exchange of UDP datagrams between the client and server; as a result, QUIC connections can be established via default SOCKS proxies. For this purpose, the client sends a UDP ASSOCIATE request to the SOCKS proxy. If the client's request is approved, the SOCKS proxy responds with a source address and port number to which the client can send the UDP datagrams to be relayed to the server. Subsequently, the client attaches a SOCKS request header to its UDP datagrams and sends them to the indicated IP address and port number. Upon receiving these UDP datagrams, the proxy removes the request header and sends them from its own source address to the server. The server sends its response to the proxy, which then relays it to the client. Note that the SOCKS protocol allows clients to delegate the task of DNS name resolution: the client simply includes the domain name within its request header, and the SOCKS proxy resolves this domain name and relays the packets to the corresponding destination.
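To illustrate the UDP encapsulation just described, the following sketch builds an RFC 1928 UDP request header with a domain-name destination, which delegates the DNS lookup to the SOCKS proxy. The target name, payload, and relay address are placeholders, and a real client would take the relay endpoint from the UDP ASSOCIATE reply.

```python
import socket
import struct

def socks5_udp_datagram(domain: str, port: int, payload: bytes) -> bytes:
    """RFC 1928 UDP encapsulation: RSV(2) | FRAG(1) | ATYP(1) | DST.ADDR | DST.PORT | DATA.

    ATYP 0x03 means DST.ADDR is a length-prefixed domain name, so the
    proxy (not the client) performs the name resolution.
    """
    name = domain.encode("ascii")
    header = struct.pack("!HBB", 0, 0, 0x03) + bytes([len(name)]) + name + struct.pack("!H", port)
    return header + payload

# send an example datagram via the proxy's UDP relay endpoint
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
datagram = socks5_udp_datagram("www.example.com", 443, b"\x00" * 16)  # placeholder payload
sock.sendto(datagram, ("192.0.2.1", 40000))  # relay address/port from the UDP ASSOCIATE reply
```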
C. Delays caused by high latencies to recursive DNS resolvers

The Domain Name System (DNS) is responsible for resolving domain names into IP addresses. Many operating systems and web browsers have a local DNS cache. However, between 12.9% and 20.4% of a user's established TCP connections directly follow a DNS query [15]. A popular website requires connections to about 20 different hostnames [16]. Hence, the user conducts on average between 2.6 and 4.1 fresh DNS queries per website retrieval. Each of these DNS queries delays the subsequent connection establishment to the server serving the queried hostname. Furthermore, websites usually have a nested hierarchy of requests to different hostnames [16]. If such nested requests to different hostnames each require a DNS query by the client, then the website loading is delayed by the time required for these sequential DNS queries. In this paper, we assume that the client resolves a domain name either from its local cache or by recursively querying its DNS resolver, as shown in Figure 3. If the recursive resolver has a cache miss for the queried domain name, it starts an iterative query. Arrows two to seven in Figure 3 indicate such a complete iterative query involving a DNS root server, a Top-Level Domain (TLD) server, and finally the authoritative nameserver of the respective domain name. DNS recursive resolvers make extensive use of caching, with reported hit rates larger than 80% [15]. Thus, the round-trip time (RTT) between the client and the recursive resolver can present a significant source of delay for a DNS query. Studies of home routers in the U.S. indicate typical RTTs between 5 ms and 15 ms to their ISP-provided DNS resolver [1]. However, a fraction of about 5% of the users experience an RTT longer than 20 ms [17]. Studies with the popular third-party resolver Google Public DNS indicate a median RTT of 23 ms, with 25% of the measurement nodes experiencing RTTs longer than 50 ms [17]. For users with a downstream throughput of more than 16 Mbit/sec, the page load time depends strongly on their network latency and DNS query time compared to their available throughput [1]. As a result, especially clients with a high network latency to their resolver require technological improvements to reduce their DNS query times and achieve faster website retrievals.

III. QUICSOCKS

In this section, we introduce the QuicSocks design. This novel approach improves the latency of QUIC connection establishments that directly follow a DNS lookup. First, we summarize our design goals; then we present QuicSocks. Finally, we describe the implementation of our QuicSocks prototype.

A. Design goals

We aim to develop a solution that meets the following goals:
1) Deployable on today's Internet, which excludes approaches requiring changes to middleboxes, the kernels of client machines, the DNS protocol, or the QUIC protocol.
2) Reduces the latency of QUIC connection establishments that require a prior DNS lookup.
3) Does not make use of IP address spoofing, as this practice conflicts with RFC 2827 [18].
4) Supports clients behind Network Address Translators (NAT).
5) Guarantees confidentiality by ensuring end-to-end encryption between the client and the web server.
6) Limits the consumption of the proxy's bandwidth.
7) Provides privacy assurances similar to those of using a recursive DNS resolver.
B. Design

In this section, we present the protocol flow of a connection establishment using a QuicSocks proxy. First, the client needs to establish a control channel with the QuicSocks proxy. Via this control channel, the client learns which port of the QuicSocks proxy can be used for its connection establishments. A single control channel can be used to establish several QUIC connections via the proxy. Furthermore, the control channel is used by the proxy to validate the client's claimed source address. Subsequently, the establishment of a single QUIC connection follows the protocol flow shown in Figure 4. Note that each UDP datagram exchanged between the client and server is encapsulated and carries a request header, as is common in the SOCKS protocol [12]. To begin the connection establishment, the client sends its QUIC ClientHello message to the QuicSocks proxy, indicating the domain name of the destination server in the SOCKS request header. Upon receiving this message, the proxy authenticates the client based on the datagram's encapsulation and caches the message. Subsequently, the proxy does a DNS lookup for the presented domain name and forwards the ClientHello message to the destination server's IP address. Next, the proxy also forwards the obtained DNS response to the client. Note that the QuicSocks proxy sends all forwarded datagrams from its own source address. Upon receiving the DNS response from the proxy, the client starts probing the direct path to the respective web server to prepare a seamless connection migration to this new path. Upon receiving the forwarded ClientHello, the server can optionally conduct a stateless retry, as shown in Figure 4. In this case, the server returns a retry message and an address validation token to the proxy. Receiving such a request for a stateless retry, the proxy resends the cached ClientHello message along with the received address validation token. This challenge-response mechanism allows the QUIC server to validate the claimed source address before proceeding with the cryptographic connection establishment. Following the default QUIC handshake, the server proceeds by sending messages including the ServerHello and the FIN, which signals that the server has established forward-secure encryption keys. Receiving these messages of the cryptographic connection establishment, the proxy forwards them to the client. Based on these messages, the client validates the server's identity and computes its forward-secure encryption keys. To complete the handshake, the client sends its FIN message via the proxy to the server. Subsequently, the client migrates the established connection to the direct path between client and server. The connection migration reduces the system utilization of the QuicSocks proxy and possibly leads to shorter round-trip times between client and server.

C. Implementation

The implementation of our proposal aims to demonstrate its real-world feasibility. Our prototype is capable of establishing a connection via the default SOCKS protocol and subsequently migrating the connection to the direct path between QUIC server and client. Our modified client is written in about 350 lines of Rust code and makes use of the rust-socks (v0.3.2) and quiche (v0.1.0-alpha3) libraries. Rust-socks provides an abstraction of a SOCKS connection with an interface that is similar to the operating system's UDP sockets and allows the client to use a SOCKS connection transparently.
Fig. 4. Protocol flow of a QUIC handshake via the proposed QuicSocks proxy, which resolves the server's IP address. The handshake includes an optional stateless retry initiated by the server. After the server and client have established forward-secure keys and exchanged FIN messages, the client initiates the seamless connection migration towards the direct path.

Quiche is an experimental QUIC implementation that separates protocol messages from socket operations, which accommodates our use case of switching between SOCKS sockets and the operating system's UDP sockets within the same QUIC connection. In detail, we modified quiche's example client implementation to perform a QUIC handshake through a rust-socks socket. Once the connection establishment is completed, we switch to a new operating system UDP socket to communicate with the QUIC server over the direct path. Note that the server's IP address required to conduct this switch is provided by the datagram header of the default SOCKS protocol. Furthermore, we adapted our client implementation to measure the time required for a connection establishment. The time is measured from the request to establish a connection until the QUIC handshake is completed. In total, we implemented these time measurements for three different connection situations. The first situation additionally includes the overhead required to establish the connection with the SOCKS proxy. The second situation assumes an established SOCKS connection and measures only the time required to complete a QUIC handshake employing a SOCKS proxy. Finally, the last situation measures a plain QUIC connection establishment without using a SOCKS proxy. Note that our prototype does not provide a complete QuicSocks implementation, because we did not apply changes to the used SOCKS proxy. As a result, the used proxy does not support the stateless retry mechanism as proposed. Furthermore, our proxy does not provide the client with the resolved QUIC server address directly after the DNS lookup. Instead, within our test setup, the client retrieves the QUIC server address from the SOCKS encapsulation of the forwarded server response. Hence, our client implementation does not start validating the direct path between client and server before migrating the connection.

IV. EVALUATION

In this section, we evaluate the proposed connection establishment via QuicSocks proxies. To begin with, we investigate feasible performance improvements of our proposal compared to the status quo via an analytical model. Then, we conduct latency measurements between clients, servers, and DNS resolvers to approximate real-world delays for QuicSocks proxies that are colocated with the respective DNS resolver. Finally, we present performance measurements using our QuicSocks prototype.

A. Analytical evaluation

The performance benefit of employing a QuicSocks proxy for the connection establishment depends on the network topology. For reasons of clarity, we assume in our analytical model a colocation of the DNS resolver and the QuicSocks proxy (see Figure 5). Furthermore, our model is reduced to the network latency between the involved peers. As shown in Figure 5, we denote the round-trip time between client and DNS resolver/QuicSocks proxy as RTT_DNS. RTT_direct and RTT_Server denote the round-trip times between server and client, and between server and QuicSocks proxy, respectively.
Table I presents the evaluation results for our analytical model. Using the status quo, a connection establishment without a stateless retry requires RTT_DNS to resolve the domain name and subsequently RTT_direct to establish the connection between client and QUIC server. Establishing the same connection via a QuicSocks proxy requires the sum of RTT_DNS and RTT_Server. Note that we define a connection as established when the client and the server have computed their forward-secure encryption keys and are ready to send application data; thus, we may count a connection as established before the client's FIN message has been processed by the server. With respect to stateless retries, we observe that the delay of the connection establishment increases by one RTT_direct for the status quo and by one RTT_Server for our proposal. In total, our analytical model indicates that our proposal outperforms the current status quo if RTT_Server is smaller than RTT_direct. In this case, we find that the reduced delay of a connection establishment without a stateless retry is equal to the difference between RTT_Server and RTT_direct. Moreover, the benefit of our proposal is doubled if the connection establishment requires a stateless retry. Our proposal achieves its worst performance when the client is colocated with the server, while the best performance can be realized when the DNS resolver, the QuicSocks proxy, and the server are colocated.

Fig. 5. Network topology of the analytical model: RTT_DNS denotes the round-trip time between the client and the colocated QuicSocks proxy/DNS resolver, RTT_Server the round-trip time between the proxy and the server, and RTT_direct the round-trip time between the client and the server.

In the following, we assume an ISP provides a DNS resolver/QuicSocks proxy half-way, on-path between client and server. Note that the client's latency to the first IP hop (last-mile latency) contributes between 40% and 80% of a typical RTT_direct [19]. A typical RTT_direct in the U.S. LTE mobile network is 60 ms to reach popular online services [20]. For this example, we assume RTT_DNS and RTT_Server to be 30 ms each, while RTT_direct is 60 ms. Table II provides the results for this example. We find that our proposal accelerates the connection establishment by 30 ms without and by 60 ms with a stateless retry. In summary, the total delay overhead of the connection establishment is reduced by up to 40% and at least 33.3%. Note that the absolute benefit of our proposal is even higher for 3G networks, where RTT_direct in the U.S. is on average between 86 ms and 137 ms, depending on the mobile network provider [20].
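The delay formulas behind Tables I and II can be captured in a few lines; this sketch encodes our reading of the analytical model, with the 30/60 ms example values taken from the text.

```python
def handshake_delay(rtt_dns: float, rtt_direct: float, rtt_server: float,
                    via_proxy: bool, stateless_retry: bool) -> float:
    """Connection-establishment delay in the analytical model.

    Status quo: RTT_DNS + RTT_direct (+ RTT_direct for a stateless retry).
    QuicSocks:  RTT_DNS + RTT_Server (+ RTT_Server for a stateless retry).
    """
    hop = rtt_server if via_proxy else rtt_direct
    return rtt_dns + hop * (2 if stateless_retry else 1)

# example from the text: RTT_DNS = RTT_Server = 30 ms, RTT_direct = 60 ms
for retry in (False, True):
    base = handshake_delay(30, 60, 30, via_proxy=False, stateless_retry=retry)
    prox = handshake_delay(30, 60, 30, via_proxy=True, stateless_retry=retry)
    print(f"retry={retry}: status quo {base} ms, QuicSocks {prox} ms, saved {base - prox} ms")
```

Running the loop reproduces the savings claimed above: 30 ms (90 ms vs. 60 ms, a 33.3% reduction) without a stateless retry and 60 ms (150 ms vs. 90 ms, a 40% reduction) with one.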
Furthermore, our test server is located in a data center in Germany operated by Hetzner Online GmbH. The aim of this test setup is to be representative of typical Internet connections in countries with an infrastructure similar to Germany's. To measure the RTTs between the involved peers, we require the IP address of each peer to conduct the corresponding ping measurements. While we have access to the IP addresses of our clients and the test server, we cannot simply look up the address of a client's locally configured DNS resolver. Furthermore, a DNS resolver might use an anycast service for its IP address [22], which may return different physical endpoints when pinged from the client and from the server, respectively.

We therefore used message 6 in Figure 3, where the recursive resolver sends a request to the authoritative nameserver, to learn the IP address of the recursive DNS resolver. In detail, we announced a DNS authority section at our test server for a subdomain such as dnstest.example.com. Then, we conducted a DNS query from the client to a random subdomain in our authority section, such as foobar.dnstest.example.com; the random label guarantees a cache miss and forces the resolver to contact our authoritative nameserver. At the same time, we captured the network traffic on the server and found a DNS query for this subdomain foobar.dnstest.example.com. We reasoned that the sender address of this DNS query belongs to the resolver handling the client's DNS query. Depending on the DNS setup, the IP address of the locally configured DNS resolver might differ from the address sending the query to the authoritative nameserver. For these cases, we assume that both DNS resolvers are colocated, yielding about the same RTT_DNS and RTT_Server with respect to our measurements.

In total, we used 800 RIPE Atlas nodes in Germany to conduct our data collection on the 13th of June 2019. A successful measurement includes RTT_DNS, RTT_Server, and RTT_direct for a node, where each RTT is the average over five ping measurements. In our data collection, we obtained successful results for 650 nodes. Failures can mainly be attributed to DNS resolvers that did not respond to ping measurements; however, a small fraction of measurements also failed during the DNS queries. To focus our data collection on ISP-provided DNS resolvers, we investigated the autonomous system numbers of the observed IP addresses. We assume that an ISP-provided DNS resolver uses an IP address from the same autonomous system as the node. This approach allows us to sort out configured public DNS resolvers such as Google DNS, which usually operate from an IP address assigned to a different autonomous system than the node. In total, our data collection successfully obtained measurements from 474 nodes in Germany, each using an ISP-provided DNS resolver.
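The filtering step can be sketched as follows. The ASN table and the field names are stand-ins for illustration; a real analysis would look up autonomous systems in an IP-to-AS dataset derived, for example, from BGP routing tables:

```rust
use std::collections::HashMap;
use std::net::IpAddr;

// One RIPE Atlas measurement: the node's address and its observed resolver.
struct Measurement {
    node: IpAddr,
    resolver: IpAddr,
}

// Keep a node only if node and resolver fall into the same autonomous system,
// i.e. the resolver is likely ISP-provided rather than a public resolver.
fn isp_provided(m: &Measurement, asn_of: &HashMap<IpAddr, u32>) -> bool {
    match (asn_of.get(&m.node), asn_of.get(&m.resolver)) {
        (Some(a), Some(b)) => a == b, // same AS: likely ISP-provided resolver
        _ => false,                   // unknown AS: discard the node
    }
}

fn main() {
    let asn_of: HashMap<IpAddr, u32> = HashMap::new(); // fill from an IP-to-AS dataset
    let measurements: Vec<Measurement> = Vec::new();   // fill from RIPE Atlas results
    let kept = measurements.iter().filter(|m| isp_provided(m, &asn_of)).count();
    println!("{} of {} nodes use an ISP-provided resolver", kept, measurements.len());
}
```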
b) Results: To accelerate a connection establishment via our proposal, we require RTT_Server to be smaller than RTT_direct. Our results indicate that for almost all clients, RTT_Server is significantly smaller than RTT_direct. For 51% of the considered RIPE Atlas nodes, RTT_Server is at least 5 ms smaller than RTT_direct. Furthermore, 36.7% of the nodes experience an RTT_Server that is at least 10 ms smaller than RTT_direct. As can be observed in Figure 6, almost no node experiences an RTT_Server longer than 40 ms, while a tail of 10% of the respective RIPE Atlas nodes observes a longer RTT_direct. In this long tail, we find 7.2% and 3.8% of the nodes to have an RTT_Server that outperforms RTT_direct by at least 40 ms and 50 ms, respectively.

Furthermore, Figure 6 provides a plot of RTT_DNS. We find that 60% of the nodes have an RTT of less than 10 ms to their ISP-provided DNS resolver. Moreover, RTT_DNS is almost always smaller than RTT_direct for a given node. This can be explained by the RIPE Atlas nodes being located towards the periphery of the Internet, while their ISP-provided DNS resolvers hold positions closer to the core of the Internet.

To evaluate our proposal against the status quo, we combine the equations provided in Table I with the measured RTTs. Figure 7 plots these results as a cumulative distribution of the RIPE Atlas nodes in Germany using an ISP-provided DNS resolver over the network latency required to complete the QUIC connection establishment. In total, Figure 7 contains four plots. In the scenario of a QUIC connection establishment using a stateless retry, the solid and dashed lines represent the status quo and our proposed solution, respectively. In the scenario of a QUIC handshake without stateless retry, the status quo and our proposal are marked as dash-dotted and dotted lines, respectively. Overall, our results indicate that our proposal accelerates the connection establishment for the great majority of the investigated RIPE Atlas nodes. Furthermore, we observe the trend that performance improvements are larger for nodes that require a longer network latency to complete the handshake. For example, we find that approximately 10% of the nodes save at least 30 ms establishing the connection without stateless retry and 60 ms with a stateless retry. Moreover, 24.3% of the investigated nodes save at least 15 ms without and 30 ms with a stateless retry during the connection establishment. Note that approximately a third of the nodes complete a connection establishment with stateless retry faster via our proposal than a status quo handshake without stateless retry.
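The per-node comparison behind Figure 7 can be sketched as follows; the struct and helper names are ours, and the values in main are merely illustrative:

```rust
// Sketch: combine the Table I equations with per-node RTT measurements to
// obtain the savings distribution behind Figure 7.
struct NodeRtts { dns: f64, server: f64, direct: f64 }

fn saving_ms(n: &NodeRtts, stateless_retry: bool) -> f64 {
    let rounds = if stateless_retry { 2.0 } else { 1.0 };
    let status_quo = n.dns + rounds * n.direct; // DNS lookup, then handshake with the server
    let quicsocks = n.dns + rounds * n.server;  // proxy leg replaces the direct handshake path
    status_quo - quicsocks                      // = (1 + retry) * (direct - server)
}

/// Fraction of nodes saving at least `threshold_ms` (one point of the CDF).
fn fraction_saving_at_least(nodes: &[NodeRtts], threshold_ms: f64, retry: bool) -> f64 {
    let hits = nodes.iter().filter(|n| saving_ms(n, retry) >= threshold_ms).count();
    hits as f64 / nodes.len() as f64
}

fn main() {
    // Illustrative values, not our measurement data.
    let nodes = vec![
        NodeRtts { dns: 8.0, server: 15.0, direct: 50.0 },
        NodeRtts { dns: 25.0, server: 30.0, direct: 38.0 },
    ];
    println!("{:.1}% of nodes save >= 30 ms with a stateless retry",
             100.0 * fraction_saving_at_least(&nodes, 30.0, true));
}
```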
C. Prototype-based Measurements

In this section, we compare the delay of a default QUIC connection establishment with handshakes using our proposal. Since the performance of our proposal depends significantly on the network topology, this measurement deliberately neglects topology and investigates the delay caused by the computational overhead of introducing a QuicSocks proxy on a network link.

a) Data collection: For our test setup, we use a publicly accessible QUIC server, a Dante SOCKS proxy (v1.4.2), and our implemented prototype representing the client. Our prototype and the Dante SOCKS proxy run on the same virtual machine, which is equipped with 1 vCPU and 0.6 GB RAM and runs Debian 9.9 (Stretch). The colocation of our client implementation with the proxy ensures that measurements using the proxy traverse the same network path as measurements conducted without the proxy. In detail, we conduct three different types of measurements on the 25th of June 2019, each of which we repeat 1,000 times. The default measurements do not employ our proxy and investigate the time required to establish a QUIC connection with the server. The cold start measurements include the time required to establish the SOCKS connection and the subsequent QUIC handshake via the proxy. Note that a single SOCKS connection can be used to establish several QUIC connections. The warm start measurements include the time to establish a QUIC connection via our proxy but exclude the delay incurred by establishing the SOCKS connection.

b) Results: Our data collection provided us with 1,000 values for each of the three measurement types. To evaluate the collected data, we retrieve the minimum and the median value of each measurement type. The default measurement has a minimum of 49.145 ms and a median of 51.309 ms. The warm start measurement has a minimum of 49.708 ms and a median of 52.471 ms. These values are between 1.1% and 2.3% higher than the default measurement, which can be explained by the additional overhead caused by the interaction with the proxy. Furthermore, these values indicate an absolute overhead of using a SOCKS proxy of less than 1.2 ms on the median, provided the SOCKS connection is already established. The cold start measurement yields a minimum of 52.073 ms and a median of 54.772 ms. Comparing both measurements using the SOCKS proxy, we attribute an additional overhead of about 2.3 ms in our test setup to establishing the SOCKS connection. As a result, we recommend that clients establish their SOCKS connection early and use the warm start approach to reduce the delays of their QUIC connection establishments.
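The summary statistics above can be reproduced with a few lines; this is a generic sketch, not our measurement code, and the sample values are illustrative:

```rust
// Minimum and median of the handshake-time samples collected per measurement type.
fn min_and_median(samples: &mut [f64]) -> (f64, f64) {
    samples.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = samples.len();
    let median = if n % 2 == 0 {
        (samples[n / 2 - 1] + samples[n / 2]) / 2.0 // even count: midpoint average
    } else {
        samples[n / 2]
    };
    (samples[0], median)
}

fn main() {
    let mut default_ms = vec![51.3, 49.1, 50.2, 52.0]; // illustrative samples
    let (min, median) = min_and_median(&mut default_ms);
    println!("default: min {} ms, median {} ms", min, median);
}
```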
V. RELATED WORK

There is much previous work on accelerating connection establishments on the web. For example, Google launched its Chrome Lite Pages feature in 2019 [23]. Lite Pages runs a proxy server that prefetches a website and forwards a compressed version of it to the client. This approach leads to significant performance improvements for clients experiencing high network latencies, as they only need to establish a single connection to the proxy server to retrieve the website. However, compared to our proposal, it places a significant load on the proxy server and breaks the principle of end-to-end transport encryption between the client and the web server. Furthermore, Miniproxy [24] can be used to accelerate TCP's connection establishment. This approach places a proxy between the client and the web server, which doubles the number of required TCP handshakes. Miniproxy can provide a faster TCP connection establishment in the case of a favorable network topology and significant RTTs between client and web server. However, the QUIC protocol includes computationally expensive cryptographic handshakes causing a significant delay compared to TCP's handshake [25]; therefore, this approach seems less feasible when QUIC is used. The ASAP [26] protocol piggybacks the first transport packet within the client's DNS query, and the DNS server forwards it to the web server after resolving the IP address. However, this approach requires the DNS server to spoof the client's IP address, which violates the Best Current Practice RFC 2827 [18]. Furthermore, a deployment of ASAP requires significant infrastructural changes to the Internet because it uses a custom transport protocol. Further performance improvements can be achieved by sending replicated DNS queries to several DNS resolvers and occasionally receiving a faster response [27]. Another DNS-based mechanism aiming to reduce latency uses Server Push [28], where the resolver provides speculative DNS responses prior to the client's query. In total, these approaches trade off a higher system utilization against a possibly reduced latency.

VI. CONCLUSION

We expect high-latency access networks to remain a web performance bottleneck for a significant number of users throughout the forthcoming years. The QUIC protocol aims to reduce the delay of connection establishments on the web. However, our measurements across a wide variety of access networks in Germany indicate that a tail of users is affected by significant delays beyond 100 ms to complete a DNS lookup with a subsequent QUIC connection establishment. Our proposal exploits the fact that ISP-provided DNS resolvers are typically located further into the core Internet than clients. We find that colocating a proxy with the ISP-provided DNS resolver provides significant performance gains for clients on high-latency access networks. For example, a client can delegate the task of DNS lookups to the proxy in a more favorable network position. Furthermore, the QUIC protocol provides features such as connection migration and the concept of stateless retries that allow further performance optimizations when employing a proxy. We hope that our work leads to increased awareness of the performance problems experienced by a significant tail of users on high-latency access networks and spurs further research to reduce this web performance bottleneck.
5,679
1907.01291
2954153097
A significant number of connection establishments on the web require a prior domain name resolution by the client. Especially on high-latency access networks, these DNS lookups cause a significant delay on the client's connection establishment with a server. To reduce the overhead of QUIC's connection establishment with prior DNS lookup on these networks, we propose a novel QuicSocks proxy. Basically, the client delegates the domain name resolution to the QuicSocks proxy. Our results indicate that colocating our proxy with real-world ISP-provided DNS resolvers provides great performance gains. For example, 10% of our 474 sample nodes distributed across ISPs in Germany would save at least 30 ms per QUIC connection establishment. The design of our proposal aims to be readily deployable on the Internet by avoiding IP address spoofing, anticipating Network Address Translators, and using the standard DNS and QUIC protocols. In summary, our proposal fosters a faster establishment of QUIC connections for clients on high-latency access networks.
The ASAP @cite_16 protocol piggybacks the first transport packet within the client's DNS query, and the DNS server forwards it to the web server after resolving the IP address. However, this approach requires the DNS server to spoof the client's IP address, which violates the Best Current Practice RFC 2827 @cite_7 . Furthermore, a deployment of ASAP requires significant infrastructural changes to the Internet because it uses a custom transport protocol.
{ "abstract": [ "For interactive networked applications like web browsing, every round-trip time (RTT) matters. We introduce ASAP, a new naming and transport protocol that reduces latency by shortcutting DNS requests and eliminating TCP's three-way handshake, while ensuring the key security property of verifiable provenance of client requests. ASAP eliminates between one and two RTTs, cutting the delay of small requests by up to two-thirds.", "Recent occurrences of various Denial of Service (DoS) attacks which have employed forged source addresses have proven to be a troublesome issue for Internet Service Providers and the Internet community overall. This paper discusses a simple, effective, and straightforward method for using ingress traffic filtering to prohibit DoS attacks which use forged IP addresses to be propagated from 'behind' an Internet Service Provider's (ISP) aggregation point." ], "cite_N": [ "@cite_16", "@cite_7" ], "mid": [ "2005703481", "1867219652" ] }
Accelerating QUIC's Connection Establishment on High-Latency Access Networks
Abstract-A significant number of connection establishments on the web require a prior domain name resolution by the client. Especially on high-latency access networks, these DNS lookups cause a significant delay on the client's connection establishment with a server. To reduce the overhead of QUIC's connection establishment with prior DNS lookup on these networks, we propose a novel QuicSocks proxy. Basically, the client delegates the domain name resolution to the QuicSocks proxy. Our results indicate that colocating our proxy with real-world ISP-provided DNS resolvers provides great performance gains. For example, 10% of our 474 sample nodes distributed across ISPs in Germany would save at least 30 ms per QUIC connection establishment. The design of our proposal aims to be readily deployable on the Internet by avoiding IP address spoofing, anticipating Network Address Translators, and using the standard DNS and QUIC protocols. In summary, our proposal fosters a faster establishment of QUIC connections for clients on high-latency access networks.

Index Terms-QUIC Transport Protocol, SOCKS Proxy, DNS, QuicSocks Proxy

I. INTRODUCTION

For U.S. households, latency is the main web performance bottleneck for broadband access networks exceeding a throughput of 16 Mbit/s [1]. Depending on the user's location and the deployed access technology, such as cable, fiber, Digital Subscriber Line (DSL), Long-Term Evolution (LTE), or satellite, users may experience significant network latencies. High-latency links reduce the user's quality of experience during web browsing [2] and negatively impact the per-user revenue of online service providers [3]. Thus, optimizing the web performance of such existing high-latency network links is an important task. In this paper, we focus on improving the time to first byte, which contributes up to 21% of the page load time of popular websites [1]. In detail, we improve the delay of QUIC's connection establishment with prior DNS lookup on high-latency links. QUIC replaces the TLS over TCP protocol stack within the upcoming HTTP/3 version [4]. As the web is built upon the Hypertext Transfer Protocol (HTTP) and the standardization of QUIC receives widespread support, the QUIC protocol is expected to be widely deployed on the Internet in the forthcoming years.

Our proposal assumes that Internet Service Providers (ISPs) aim to improve their clients' quality of experience during web browsing. This assumption is substantiated by ISPs providing recursive DNS resolvers to accelerate their clients' DNS lookups. In this work, we propose ISP-provided proxies to reduce the delay of their clients' QUIC connection establishments. Instead of conducting a DNS lookup and waiting for the response, this design allows the client to directly send its initial QUIC messages to the novel QuicSocks proxy. Upon receiving these messages, the proxy resolves the domain name and forwards the messages to the respective QUIC server. After the end-to-end encrypted connection between the client and the server is established, the connection is seamlessly migrated to the direct path between these peers. These novel QuicSocks proxies can accelerate the client's connection establishment to a server if they perform faster DNS lookups and/or have a lower network latency to the QUIC server compared to the client. A favorable network topology would place a QuicSocks proxy colocated with the ISP-provided DNS resolver in an on-path position between the client and the server.
Our proposal can be applied to many high-latency links. Within the next years, enterprises like SpaceX, OneWeb, and Telesat plan to launch thousands of satellites for global broadband connectivity, aiming to provide Internet access to millions of people [5]. This presents a well-suited application area for our proposal because of the significant latencies of about 30 ms between the client and the ISP's ground station [6].

In summary, this paper makes the following contributions:
• We propose the novel QuicSocks design that allows clients to send initial handshake messages without a prior resolution of the domain name. The name resolution is conducted by a QuicSocks proxy from a more favorable position in the ISP's network to accelerate the connection establishment.
• We evaluate our proposal by assuming a colocation of the ISP-provided DNS resolver with the QuicSocks proxy. Results based on our analytical model indicate accelerations of the QUIC connection establishment between 33% and 40% on a U.S. mobile LTE network. Furthermore, our measurements of real-world network topologies indicate the feasibility of significant performance gains for clients on high-latency access networks. For example, 10% of the investigated clients save at least 30 ms to complete their QUIC handshake.
• We implemented a prototype of our proposal to demonstrate its real-world feasibility. Our results indicate that the computations of the QuicSocks proxy itself are lightweight and contribute less than 1.2 ms to a QUIC connection establishment.

The remainder of this paper is structured as follows: Section II introduces the QUIC and the SOCKS protocols and describes the performance problem that we aim to solve. Section III presents the proposed QuicSocks design, and evaluation results are presented in Section IV. Related work is reviewed in Section V, and Section VI concludes the paper.

A. QUIC transport protocol

In this paper, we refer to draft version 20 of the Internet Engineering Task Force (IETF) QUIC specification as the QUIC protocol [7]. The QUIC transport protocol will replace TLS over TCP in the upcoming HTTP/3 network protocol [4], which is closely tied to the world wide web. Thus, the deployment of HTTP/3 on the web will significantly contribute to QUIC's adoption on the Internet in the forthcoming years. Compared to TLS over TCP, the UDP-based QUIC protocol allows for faster connection establishments [8], mitigates head-of-line blocking [9], and can be extended more easily because middleboxes interfere less with it [10]. In the following, we provide details on two mechanisms of the QUIC protocol that our proposed QuicSocks approach makes use of: a challenge-response mechanism in QUIC's connection establishment known as stateless retry, and QUIC's connection migration, which allows transferring an established connection to a new endpoint address.

a) Stateless retry: The stateless retry mechanism can optionally be used by QUIC servers to validate the source address claimed by a client before proceeding with the cryptographic connection establishment. As shown in Figure 1, the server responds to the client's initial connection request with a retry message that contains a source address token. This token is opaque to the client and contains information about the client's source address. Subsequently, the client returns this token together with its previously sent ClientHello message. Upon receiving this message from the client, the server first validates the presented token. A token is valid for a connection request if the client's claimed source address matches the address encoded in the token. In this case, the server assumes that the client can receive packets at the claimed address and proceeds with the cryptographic connection establishment.
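The following toy sketch illustrates the idea behind such a token. It is illustrative only: the QUIC draft leaves the token format opaque, and a real server would additionally authenticate or encrypt the token so that clients cannot forge it:

```rust
use std::net::SocketAddr;
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Toy address-validation token: the claimed source address plus an issue time.
// Real servers protect this cryptographically; here it is a plain struct.
struct RetryToken {
    addr: SocketAddr,
    issued_unix_secs: u64,
}

fn issue_token(client: SocketAddr) -> RetryToken {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    RetryToken { addr: client, issued_unix_secs: now }
}

fn validate_token(token: &RetryToken, claimed: SocketAddr, lifetime: Duration) -> bool {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    // Valid only if the claimed source address matches the encoded one
    // and the token has not expired.
    token.addr == claimed && now.saturating_sub(token.issued_unix_secs) <= lifetime.as_secs()
}

fn main() {
    let client: SocketAddr = "203.0.113.7:4433".parse().unwrap();
    let token = issue_token(client);
    assert!(validate_token(&token, client, Duration::from_secs(30)));
    println!("token valid for matching source address");
}
```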
A stateless retry presents a performance limitation, as it adds a round-trip time to the connection establishment. However, it helps QUIC servers protect themselves against denial-of-service attacks. Therefore, QUIC servers are likely to use these optional stateless retries when experiencing many connection requests from source addresses with unresponsive clients.

Fig. 1. Stateless retry: the client's ClientHello is answered with a retry message carrying a token; the client resends the ClientHello together with the token, and the peers then proceed with the connection establishment.

b) Connection migration: QUIC connections are associated with connection IDs that allow the identification of a connection independently of the used source address and port number. Connection IDs allow connections to survive an endpoint's change of IP address and/or port number, which might occur because of NAT timeouts and rebinding [11], or because clients change their network connectivity. Only QUIC clients can initiate connection migrations to a different endpoint IP address and/or port number. However, the client must wait until the handshake is completed and forward-secure keys are established before initiating the connection migration. The endpoints might use multiple network paths simultaneously during the connection migration. The peers can optionally probe a new path for peer reachability before migrating a connection to it. However, when a connection is migrated to a new path, the server must ensure the client is reachable via this path before sending large amounts of data. Furthermore, the peers need to adapt their sending rate to the new path by resetting their congestion controllers and round-trip time estimators.

B. SOCKS protocol

RFC 1928 describes the current version of the SOCKS protocol [12]. Usages of SOCKS proxies include the traversal of network firewalls [12], the translation between IPv6 and IPv4 address space [13], and privacy-enhancing technologies such as Tor onion routing [14]. Figure 2 provides a schematic of a connection between client and server through a SOCKS proxy. To begin with, the client establishes a TCP connection to the proxy's port 1080. This connection is used by the client and the SOCKS proxy to exchange control messages. For example, the client can use this control channel for authentication or to request a new connection to a server. The SOCKS protocol supports the exchange of UDP datagrams between client and server; as a result, QUIC connections can be established via default SOCKS proxies. For this purpose, the client sends a UDP associate request to the SOCKS proxy. If the client's request is approved, the SOCKS proxy responds with a source address and port number to which the client can send the UDP datagrams to be relayed to the server. Subsequently, the client attaches a SOCKS request header to its UDP datagrams and sends them to the indicated IP address and port number. Upon receiving these UDP datagrams, the proxy removes the request header and sends them from its own source address to the server. The server sends its response to the proxy, which then relays it to the client. Note that the SOCKS protocol allows clients to delegate the task of DNS name resolution: for this purpose, the client includes the domain name within its request header. Subsequently, the SOCKS proxy resolves this domain name and relays the packets to the corresponding destination.
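As a concrete illustration, the sketch below assembles the SOCKS5 UDP request header defined in RFC 1928 with the domain-name address type (ATYP 0x03), which is what allows the client to delegate name resolution to the proxy; the function name is ours:

```rust
// Build a SOCKS5 UDP request datagram (RFC 1928, section 7):
// RSV(2) | FRAG(1) | ATYP(1) | DST.ADDR | DST.PORT(2) | DATA
// Using ATYP = 0x03 (domain name) delegates DNS resolution to the proxy.
fn socks5_udp_datagram(domain: &str, port: u16, payload: &[u8]) -> Vec<u8> {
    assert!(domain.len() <= 255, "SOCKS5 domain names carry a 1-byte length");
    let mut out = Vec::with_capacity(7 + domain.len() + payload.len());
    out.extend_from_slice(&[0x00, 0x00]);       // RSV, reserved
    out.push(0x00);                             // FRAG, 0 = standalone datagram
    out.push(0x03);                             // ATYP, domain name
    out.push(domain.len() as u8);               // address length
    out.extend_from_slice(domain.as_bytes());   // the name the proxy will resolve
    out.extend_from_slice(&port.to_be_bytes()); // destination port, network byte order
    out.extend_from_slice(payload);             // e.g. the QUIC ClientHello datagram
    out
}

fn main() {
    let dgram = socks5_udp_datagram("www.example.com", 443, b"quic-initial-bytes");
    println!("encapsulated datagram: {} bytes", dgram.len());
}
```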
C. Delays caused by high latencies to recursive DNS resolvers

The Domain Name System (DNS) is responsible for resolving domain names into IP addresses. Many operating systems and web browsers have a local DNS cache. However, between 12.9% and 20.4% of a user's established TCP connections directly follow a DNS query [15]. A popular website requires connections to about 20 different hostnames [16]. Hence, the user conducts on average between 2.6 and 4.1 fresh DNS queries per website retrieval. Each of these DNS queries delays the subsequent connection establishment to the server serving the queried hostname. Furthermore, websites usually have a nested hierarchy of requests to different hostnames [16]. If each such nested request requires a DNS query by the client, then the website loading is delayed by the time required for these sequential DNS queries.

In this paper, we assume that the client either resolves a domain name using its local cache or needs to query its DNS resolver recursively, as shown in Figure 3. If the recursive resolver has a cache miss for the queried domain name, it starts an iterative query. Arrows two to seven in Figure 3 indicate such a complete iterative query involving the DNS root server, the Top Level Domain (TLD) server, and finally the authoritative nameserver of the respective domain name. Recursive DNS resolvers make extensive use of caching, with reported hit rates larger than 80% [15]. Thus, the round-trip time (RTT) between the client and the recursive resolver can present a significant source of delay for a DNS query. Studies of home routers in the U.S. indicate typical RTTs between 5 ms and 15 ms to their ISP-provided DNS resolver [1]. However, a fraction of about 5% of the users experience an RTT longer than 20 ms [17]. Studies with the popular third-party resolver Google Public DNS indicate a median RTT of 23 ms, and 25% of the measurement nodes experienced RTTs longer than 50 ms [17]. For users with a downstream throughput of more than 16 Mbit/s, the page load time depends heavily on their network latency and DNS query time rather than on their available throughput [1]. As a result, especially clients with a high network latency to their resolver require technological improvements that reduce the experienced DNS query time to achieve faster website retrievals.

III. QUICSOCKS

In this section, we introduce the QuicSocks design. This novel approach improves the latency of QUIC connection establishments that directly follow a DNS lookup. First, we summarize our design goals before presenting QuicSocks. Finally, we describe the implementation of our QuicSocks prototype.

A. Design goals

We aim to develop a solution that supports the following goals:
1) Deployable on today's Internet, which excludes approaches requiring changes to middleboxes, kernels of client machines, the DNS protocol, or the QUIC protocol.
2) Reduces the latency of QUIC connection establishments that require a prior DNS lookup.
3) Does not make use of IP address spoofing, as this practice conflicts with RFC 2827 [18].
4) Supports clients behind Network Address Translators (NAT).
5) Guarantees confidentiality by ensuring end-to-end encryption between the client and the web server.
6) Limits the consumption of the proxy's bandwidth.
7) Provides privacy assurances similar to using a recursive DNS resolver.
B. Design

In this section, we present the protocol flow of a connection establishment using a QuicSocks proxy. First, the client needs to establish a control channel with the QuicSocks proxy. Via the control channel, the client learns which port of the QuicSocks proxy can be used for its connection establishments. A single control channel can be used to establish several QUIC connections via the proxy. Furthermore, the control channel is used by the proxy to validate the client's claimed source address. Subsequently, the establishment of a single QUIC connection follows the protocol flow shown in Figure 4. Note that each UDP datagram exchanged between client and server is encapsulated and carries a request header, as is common in the SOCKS protocol [12].

To begin the connection establishment, the client sends its QUIC ClientHello message to the QuicSocks proxy, indicating the domain name of the destination server in the SOCKS request header. Upon receiving this message, the proxy authenticates the client based on the datagram's encapsulation and caches the message. Subsequently, the proxy does a DNS lookup for the presented domain name and forwards the ClientHello message to the destination server's IP address. Next, the proxy also forwards the obtained DNS response to the client. Note that the QuicSocks proxy sends all forwarded datagrams from its own source address. Upon receiving the DNS response from the proxy, the client starts probing the direct path to the respective web server to prepare a seamless connection migration to this new path.

Upon receiving the forwarded ClientHello, the server can optionally conduct a stateless retry, as shown in Figure 4. In this case, the server returns a retry message and an address validation token to the proxy. Receiving such a request for a stateless retry, the proxy resends the cached ClientHello message along with the received address validation token. This challenge-response mechanism allows the QUIC server to validate the claimed source address before proceeding with the cryptographic connection establishment. Following the default QUIC handshake, the server proceeds by sending messages including the ServerHello and the FIN, which signals that the server has established forward-secure encryption keys. The proxy forwards these messages of the cryptographic connection establishment to the client. Based on these messages, the client validates the server's identity and computes its forward-secure encryption keys. To complete the handshake, the client sends its FIN message via the proxy to the server. Subsequently, the client migrates the established connection towards the direct path between client and server. The connection migration reduces the system utilization of the QuicSocks proxy and possibly leads to shorter round-trip times between client and server.

C. Implementation

The implementation of our proposal aims to demonstrate its real-world feasibility. Our prototype is capable of establishing a connection via the default SOCKS protocol and subsequently migrating the connection to the direct path between QUIC server and client. Our modified client is written in about 350 lines of Rust code and makes use of the rust-socks (v0.3.2) and quiche (v0.1.0-alpha3) libraries. Rust-socks provides an abstraction of a SOCKS connection with an interface that is similar to the operating system's UDP sockets and allows a client to transparently use a SOCKS connection (see Fig. 4 for the resulting protocol flow, including the seamless connection migration towards the direct path).
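To make the migration step concrete, the following minimal sketch parses the server's address from the SOCKS UDP encapsulation of a forwarded response and opens a fresh operating-system UDP socket for the migrated connection. Names and the IPv4-only parsing are ours; the actual client wires this into quiche's packet processing:

```rust
use std::net::{Ipv4Addr, SocketAddr, SocketAddrV4, UdpSocket};

// Parse the server address from a SOCKS5 UDP reply header (RFC 1928):
// RSV(2) | FRAG(1) | ATYP(1) | ADDR | PORT(2) | DATA.
// For brevity, this sketch handles ATYP = 0x01 (IPv4) only.
fn server_addr_from_socks(datagram: &[u8]) -> Option<(SocketAddr, &[u8])> {
    if datagram.len() < 10 || datagram[3] != 0x01 {
        return None;
    }
    let ip = Ipv4Addr::new(datagram[4], datagram[5], datagram[6], datagram[7]);
    let port = u16::from_be_bytes([datagram[8], datagram[9]]);
    Some((SocketAddr::V4(SocketAddrV4::new(ip, port)), &datagram[10..]))
}

// Open a direct UDP socket to the server for the migrated connection.
fn direct_socket(server: SocketAddr) -> std::io::Result<UdpSocket> {
    let sock = UdpSocket::bind("0.0.0.0:0")?; // fresh local port, as after migration
    sock.connect(server)?;                    // subsequent QUIC packets bypass the proxy
    Ok(sock)
}

fn main() {
    // Hypothetical reply: header for 203.0.113.10:443 followed by two payload bytes.
    let reply = [0u8, 0, 0, 1, 203, 0, 113, 10, 1, 187, 0xde, 0xad];
    if let Some((server, payload)) = server_addr_from_socks(&reply) {
        let sock = direct_socket(server).expect("bind direct UDP socket");
        println!("migrating to {} ({} payload bytes)", server, payload.len());
        drop(sock); // the QUIC connection would now send via this socket
    }
}
```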
5,679
1907.01291
2954153097
A significant number of connection establishments on the web require a prior domain name resolution by the client. Especially on high-latency access networks, these DNS lookups cause a significant delay on the client's connection establishment with a server. To reduce the overhead of QUIC's connection establishment with prior DNS lookup on these networks, we propose a novel QuicSocks proxy. Basically, the client delegates the domain name resolution to the QuicSocks proxy. Our results indicate that colocating our proxy with real-world ISP-provided DNS resolvers provides great performance gains. For example, 10% of our 474 sample nodes distributed across ISPs in Germany would save at least 30 ms per QUIC connection establishment. The design of our proposal aims to be readily deployable on the Internet by avoiding IP address spoofing, anticipating Network Address Translators, and using the standard DNS and QUIC protocols. In summary, our proposal fosters a faster establishment of QUIC connections for clients on high-latency access networks.
Further performance improvements can be achieved by sending replicated DNS queries to several DNS resolvers and occasionally receiving a faster response @cite_8 . Another DNS-based mechanism aiming to reduce latency uses Server Push @cite_5 , where the resolver provides speculative DNS responses prior to the client's query. In total, these approaches trade off a higher system utilization against a possibly reduced latency.
{ "abstract": [ "This document defines a protocol for sending DNS queries and getting DNS responses over HTTPS. Each DNS query-response pair is mapped into an HTTP exchange.", "Low latency is critical for interactive networked applications. But while we know how to scale systems to increase capacity, reducing latency --- especially the tail of the latency distribution --- can be much more difficult. In this paper, we argue that the use of redundancy is an effective way to convert extra capacity into reduced latency. By initiating redundant operations across diverse resources and using the first result which completes, redundancy improves a system's latency even under exceptional conditions. We study the tradeoff with added system utilization, characterizing the situations in which replicating all tasks reduces mean latency. We then demonstrate empirically that replicating all operations can result in significant mean and tail latency reduction in real-world systems including DNS queries, database servers, and packet forwarding within networks." ], "cite_N": [ "@cite_5", "@cite_8" ], "mid": [ "2806616617", "2107276343" ] }
Accelerating QUIC's Connection Establishment on High-Latency Access Networks
Abstract-A significant amount of connection establishments on the web require a prior domain name resolution by the client. Especially on high-latency access networks, these DNS lookups cause a significant delay on the client's connection establishment with a server. To reduce the overhead of QUIC's connection establishment with prior DNS lookup on these networks, we propose a novel QuicSocks proxy. Basically, the client delegates the domain name resolution towards the QuicSocks proxy. Our results indicate, that colocating our proxy with real-world ISPprovided DNS resolvers provides great performance gains. For example, 10% of our 474 sample nodes distributed across ISP's in Germany would save at least 30 ms per QUIC connection establishment. The design of our proposal aims to be readily deployable on the Internet by avoiding IP address spoofing, anticipating Network Address Translators and using the standard DNS and QUIC protocols. In summary, our proposal fosters a faster establishment of QUIC connections for clients on highlatency access networks. Index Terms-QUIC Transport Protocol, SOCKS Proxy, DNS, QuicSocks Proxy I. INTRODUCTION For U.S. households latency is the main web performance bottleneck for broadband access networks exceeding a throughput of 16 Mbit/sec [1]. Depending on the user's location and the deployed access technology like cable, fiber, Digital Subscriber Line (DSL), Long-Term Evolution (LTE), or satellite, users may experience significant network latencies. High-latency links reduce the user's quality of experience during web browsing [2] and negatively impact the per-user revenue of online service provider [3]. Thus, optimizing the web performance on such existing high-latency network links is an important task. In this paper, we focus on improving the time to first byte which contributes up to 21% of the page load time for popular websites [1]. In detail, we improve the delay of QUIC's connection establishment with prior DNS lookup on high-latency links. QUIC replaces the TLS over TCP protocol stack within the upcoming HTTP/3 version [4]. As the web is built upon the Hypertext Transfer Protocol (HTTP) and the standardization of QUIC receives widespread support, the QUIC protocol is expected to be widely deployed on the Internet in the forthcoming years. Our proposal assumes, that Internet Service Providers (ISP) aim to improve their clients' quality of experience during web browsing. This assumption is substantiated by ISPs providing recursive DNS resolvers to accelerate their client's DNS lookups. In this work, we propose ISP-provided proxies to reduce the delay of their client's QUIC connection establishments. Instead of conducting a DNS lookup and waiting for the response, this design allows the client to directly send its initial QUIC messages to the novel QuicSocks proxy. Upon receiving these messages, the proxy resolves the domain name and forwards the messages to the respective QUIC server. After the end-to-end encrypted connection between the client and the server is established, the connection is seamlessly migrated to the direct path between these peers. These novel QuicSocks proxies can accelerate the client's connection establishment to a server, if they perform faster DNS lookups and/or have a lower network latency to the QUIC server compared to the client. A favorable network topology would place a QuicSocks proxy colocated with the ISP-provided DNS resolver in an on-path position between the client and the server. 
Our proposal can be applied to many high-latency links. Within the next years, enterprises like SpaceX, OneWeb, and Telesat plan to launch thousands of satellites for global broadband connectivity aiming to provide Internet access to millions of people [5]. This presents a well-suited application area for our proposal because of the significant latencies of about 30 ms between the client and the ISP's ground station [6]. In summary, this paper makes the following contributions: • We propose the novel QuicSocks design that allows clients to send initial handshake messages without a prior resolution of the domain name. The name resolution is conducted by a QuicSocks proxy from a more favorable position in the ISP's network to accelerate the connection establishment. • We evaluate our proposal by assuming a colocation of the ISP-provided DNS resolver with the QuicSocks proxy. Results based on our analytical model indicate for a QUIC connection establishment accelerations between 33% and 40% on a U.S. mobile LTE network. Furthermore, our measurements of real-world network topologies indicate the feasibility of significant performance gains for clients on high-latency access networks. For example. 10% of the investigated clients save at least 30 ms to complete their QUIC handshake. • We implemented a prototype of our proposal to demonstrate its real-world feasibility. Our results indicate, that the computations of the QuicSocks proxy itself are lightweight and contribute less than 1.2 ms to a QUIC connection establishment. The remainder of this paper is structured as follows: Section II introduces the QUIC and the SOCKS protocol and describes the performance problem that we aim to solve. Section III summarizes the proposed QuicSocks design and evaluation results are presented in Section IV. Related work is reviewed in Section V, and Section VI concludes the paper. A. QUIC transport protocol In this paper, we refer to QUIC's draft version 20 of the Internet Engineering Task Force (IETF) as the QUIC protocol [7]. The QUIC transport protocol will replace TLS over TCP in the upcoming HTTP/3 network protocol [4], which is closely tied to the world wide web. Thus, deployment of HTTP/3 on the web will significantly contribute to QUIC's adoption on the Internet in the forthcoming years. Compared to TLS over TCP, the UDP-based QUIC protocol allows for faster connection establishments [8], mitigates head-ofline blocking [9], and can be extended because of a lower interference through middleboxes [10]. In the following, we provide details on two mechanisms of the QUIC protocol that our proposed QuicSocks approach makes use of. These are a challenge-response mechanism in QUIC's connection establishment known as stateless retry and QUIC's connection migration that allows transferring an established connection to a new endpoint address. a) Stateless retry: The stateless retry mechanism can be optionally used by QUIC servers to validate the source address claimed by a client before proceeding with the cryptographic connection establishment. As shown in Figure 1, the server responds to the client's initial connection request with a retry message that contains a source address token. This token is opaque to the client and contains information about the client's source address. Subsequently, the client returns this token together with its previously sent ClientHello message. Upon receiving this message from the client, the server first validates the presented token. 
b) Connection migration: QUIC connections are associated with connection IDs that allow the identification of a connection independently of the used source address and port number. Connection IDs allow connections to survive an endpoint's change of IP address and/or port number, which might occur because of NAT timeouts and rebinding [11], or because clients change their network connectivity. Only QUIC clients can initiate connection migrations to a different endpoint IP address and/or port number. However, the client must wait until the handshake is completed and forward-secure keys are established before initiating the connection migration. The endpoints might use multiple network paths simultaneously during the connection migration. The peers can optionally probe a new path for peer reachability before migrating a connection to it. However, when a connection is migrated to a new path, the server must ensure the client is reachable via this path before sending large amounts of data. Furthermore, the peers need to adapt their sending rate to the new path by resetting their congestion controller and round-trip time estimator. B. SOCKS protocol. RFC 1928 describes the current version of the SOCKS protocol [12]. Usages of SOCKS proxies include the traversal of network firewalls [12], the translation between IPv6 and IPv4 address space [13], and privacy-enhancing technologies such as Tor onion routing [14]. Figure 2 provides a schematic of a connection between client and server through a SOCKS proxy. To begin with, the client establishes a TCP connection to the proxy's port 1080. This connection is used by the client and the SOCKS proxy to exchange control messages. For example, the client can use this control channel for authentication or to request a new connection to a server. The SOCKS protocol supports the exchange of UDP datagrams between client and server. As a result, QUIC connections can be established via default SOCKS proxies. For this purpose, the client sends a UDP associate request to the SOCKS proxy. If the client's request is approved, the SOCKS proxy responds with a source address and port number to which the client can send the UDP datagrams to be relayed to the server. Subsequently, the client attaches a SOCKS request header to its UDP datagrams and sends them to the indicated IP address and port number. Upon receiving these UDP datagrams, the proxy removes the request header and sends them from its own source address to the server. The server sends its response to the proxy, which then relays it to the client. Note that the SOCKS protocol allows clients to delegate the task of DNS name resolution. For this purpose, the client includes the domain name within its request header. Subsequently, the SOCKS proxy resolves this domain name and relays the packets to the corresponding destination.
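To illustrate the encapsulation just described, the following Python sketch builds the RFC 1928 UDP request header with a domain name as the destination address (address type 0x03), which is the field that lets a client delegate name resolution to the proxy. The domain, port, and payload are placeholders.

```python
import struct

def socks5_udp_datagram(domain: str, port: int, payload: bytes) -> bytes:
    """Encapsulate a UDP payload with a SOCKS5 request header (RFC 1928, sec. 7).

    Using address type 0x03 (domain name) leaves the DNS lookup to the proxy.
    """
    name = domain.encode("idna")
    header = struct.pack("!HBB", 0, 0, 0x03)       # RSV=0x0000, FRAG=0, ATYP=domain
    header += struct.pack("!B", len(name)) + name  # length-prefixed domain name
    header += struct.pack("!H", port)              # destination port
    return header + payload

# e.g. a QUIC Initial packet destined for an example server:
datagram = socks5_udp_datagram("www.example.com", 443, b"<QUIC ClientHello bytes>")
```

The proxy strips this header before relaying the datagram, so the server only ever sees the raw QUIC payload arriving from the proxy's source address.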
C. Delays caused by high latencies to recursive DNS resolvers. The Domain Name System (DNS) is responsible for resolving domain names into IP addresses. Many operating systems and web browsers have a local DNS cache. However, between 12.9% and 20.4% of a user's established TCP connections directly follow a DNS query [15]. A popular website requires connections to about 20 different hostnames [16]. Hence, the user conducts on average between 2.6 and 4.1 fresh DNS queries per website retrieval. Each of these DNS queries delays the subsequent connection establishment to the server serving the queried hostname. Furthermore, websites usually have a nested hierarchy of requests to different hostnames [16]. If such nested requests to different hostnames each require a DNS query by the client, then the website loading is delayed by the time required for these sequential DNS queries. In this paper, we assume that the client can either resolve a domain name using its local cache or needs to query its recursive DNS resolver, as shown in Figure 3. If the recursive resolver has a cache miss for the queried domain name, it starts an iterative query. The arrows two to seven in Figure 3 indicate such a complete iterative query involving the DNS root server, the Top-Level Domain (TLD) server, and finally the authoritative nameserver of the respective domain name. Recursive DNS resolvers make extensive use of caching, with reported hit rates larger than 80% [15]. Thus, the round-trip time (RTT) between the client and the recursive resolver can present a significant source of delay for a DNS query. Studies of home routers in the U.S. indicate typical RTTs between 5 ms and 15 ms to their ISP-provided DNS resolver [1]. However, a fraction of about 5% of the users experience an RTT longer than 20 ms [17]. Studies with the popular third-party resolver Google Public DNS indicate a median RTT of 23 ms, and 25% of the measurement nodes experienced RTTs longer than 50 ms [17]. For users having a downstream throughput of more than 16 Mbit/sec, the page load time depends more heavily on their network latency and DNS query time than on their available throughput [1]. As a result, especially clients with a high network latency to their resolver require technological improvements that reduce their experienced DNS query time and thus achieve faster website retrievals. III. QUICSOCKS. In this section, we introduce the QuicSocks design. This novel approach improves the latency of QUIC connection establishments that directly follow a DNS lookup. First, we summarize our design goals, before we present QuicSocks. Finally, we describe the implementation of our QuicSocks prototype. A. Design goals. We aim to develop a solution that supports the following goals: 1) Deployable on today's Internet, which excludes approaches requiring changes to middleboxes, kernels of client machines, the DNS protocol, or the QUIC protocol. 2) Reduces the latency of QUIC connection establishments that require a prior DNS lookup. 3) Does not make use of IP address spoofing, as this practice conflicts with RFC 2827 [18]. 4) Supports clients behind Network Address Translators (NATs). 5) Guarantees confidentiality by assuring end-to-end encryption between the client and the web server. 6) Limits the consumption of the proxy's bandwidth. 7) Provides privacy assurances similar to those of using a recursive DNS resolver. B. Design
In this section, we present the protocol flow of a connection establishment using a QuicSocks proxy. First, the client establishes a control channel with the QuicSocks proxy. Via the control channel, the client learns which port of the QuicSocks proxy can be used for its connection establishments. A single control channel can be used to establish several QUIC connections via the proxy. Furthermore, the control channel is used by the proxy to validate the client's claimed source address. Subsequently, the establishment of a single QUIC connection follows the protocol flow shown in Figure 4. [Fig. 4: Protocol flow of a QUIC handshake via the proposed QuicSocks proxy, which resolves the server's IP address. The handshake includes an optional stateless retry initiated by the server. After the server and client have established forward-secure keys and exchanged FIN messages, the client initiates the connection migration towards the direct path.] Note that each UDP datagram exchanged between the client and server is encapsulated and carries a request header, as is common in the SOCKS protocol [12]. To begin the connection establishment, the client sends its QUIC ClientHello message to the QuicSocks proxy, indicating the domain name of the destination server in the SOCKS request header. Upon receiving this message, the proxy authenticates the client based on the datagram's encapsulation and caches the message. Subsequently, the proxy performs a DNS lookup for the presented domain name and forwards the ClientHello message to the destination server's IP address. The proxy also forwards the obtained DNS response to the client. Note that the QuicSocks proxy sends all forwarded datagrams from its own source address. Upon receiving the DNS response from the proxy, the client starts probing the direct path to the respective web server to prepare a seamless connection migration to this new path. Upon receiving the forwarded ClientHello, the server can optionally conduct a stateless retry, as shown in Figure 4. In this case, the server returns a retry message and an address validation token to the proxy. Upon receiving such a request for a stateless retry, the proxy resends the cached ClientHello message along with the received address validation token. This challenge-response mechanism allows the QUIC server to validate the claimed source address before proceeding with the cryptographic connection establishment. Following the default QUIC handshake, the server proceeds by sending messages including the ServerHello and the FIN, which signals that the server has established forward-secure encryption keys. Upon receiving these messages of the cryptographic connection establishment, the proxy forwards them to the client. Based on these messages, the client validates the server's identity and computes its forward-secure encryption keys. To complete the handshake, the client sends its FIN message via the proxy to the server. Subsequently, the client migrates the established connection towards the direct path between client and server. The connection migration reduces the system utilization of the QuicSocks proxy and possibly leads to shorter round-trip times between client and server.
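To make the proxy's role in this flow concrete, the following Python sketch outlines the datagram handling described above. It is a simplified reading of the design under several assumptions: blocking sockets, a single unfragmented datagram, no client authentication, no caching for stateless retries, and an ad-hoc notification message for the resolved address. It is not the authors' implementation.

```python
import socket

def handle_client_datagram(data: bytes, client_addr, proxy_sock: socket.socket):
    """Resolve the domain from the SOCKS header, then forward the ClientHello."""
    # Parse the RFC 1928 UDP request header (assuming ATYP=0x03, domain name):
    # RSV (2 bytes), FRAG (1), ATYP (1), name length (1), name, port (2), data.
    name_len = data[4]
    domain = data[5:5 + name_len].decode()
    port = int.from_bytes(data[5 + name_len:7 + name_len], "big")
    quic_payload = data[7 + name_len:]

    # DNS lookup performed from the proxy's (more favorable) network position.
    server_ip = socket.getaddrinfo(domain, port, type=socket.SOCK_DGRAM)[0][4][0]

    # Forward the ClientHello from the proxy's own source address, and report
    # the resolved address back to the client so it can start probing the
    # direct path in preparation for the later connection migration.
    proxy_sock.sendto(quic_payload, (server_ip, port))
    proxy_sock.sendto(f"RESOLVED {domain} {server_ip}".encode(), client_addr)
```

The essential point is that the DNS lookup and the first forwarding hop both happen at the proxy, so the client never waits a full client-resolver round trip before its ClientHello is on its way to the server.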
C. Implementation. The implementation of our proposal aims to demonstrate its real-world feasibility. Our implemented prototype is capable of establishing a connection via the default SOCKS protocol and subsequently migrating the connection to the direct path between QUIC server and client. Our modified client is written in about 350 lines of Rust code and makes use of the rust-socks (v0.3.2) and quiche (v0.1.0-alpha3) libraries. Rust-socks provides an abstraction of a SOCKS connection with an interface similar to the operating system's UDP sockets, which allows the client to use a SOCKS connection transparently. Quiche is an experimental QUIC implementation that separates protocol messages from socket operations, which accommodates our use case of switching between SOCKS sockets and the operating system's UDP sockets within the same QUIC connection. In detail, we modified quiche's example client implementation to perform a QUIC handshake through a rust-socks socket. Once the connection establishment is completed, we switch to a new operating system UDP socket to communicate with the QUIC server over the direct path. Note that the server's IP address required to conduct this switch is provided by the datagram header of the default SOCKS protocol. Furthermore, we adapted our client implementation to measure the time required for a connection establishment. The time is measured from the request to establish a connection until the QUIC handshake is completed. In total, we implemented these time measurements for three different connection situations. The first situation additionally includes the overhead required to establish the connection with the SOCKS proxy. The second situation assumes an established SOCKS connection and measures only the time required to complete a QUIC handshake via the SOCKS proxy. The last situation measures a plain QUIC connection establishment without a SOCKS proxy. Note that our prototype does not provide a complete QuicSocks implementation, because we did not modify the SOCKS proxy itself. As a result, the used proxy does not support the stateless retry mechanism as proposed. Furthermore, our proxy does not provide the client with the resolved QUIC server address directly after the DNS lookup. Instead, within our test setup, the client retrieves the QUIC server address from the SOCKS encapsulation of the forwarded server response. Hence, our client implementation does not start validating the direct path between client and server before migrating the connection. IV. EVALUATION. In this section, we evaluate the proposed connection establishment via QuicSocks proxies. To begin with, we investigate the feasible performance improvements of our proposal compared to the status quo via an analytical model. Then, we conduct latency measurements between clients, servers, and DNS resolvers to approximate real-world delays for QuicSocks proxies that are colocated with the respective DNS resolver. Finally, we present performance measurements using our QuicSocks prototype. A. Analytical evaluation. The performance benefit of employing a QuicSocks proxy for the connection establishment depends on the network topology. For reasons of clarity, we assume in our analytical model a colocation of the DNS resolver and the QuicSocks proxy (see Figure 5). Furthermore, our model is reduced to the network latency between the involved peers. As shown in Figure 5, we denote the round-trip time between client and DNS resolver/QuicSocks proxy as RTT_DNS; RTT_direct and RTT_Server denote the round-trip times between client and server, and between QuicSocks proxy and server, respectively.
Table I presents the evaluation results for our analytical model. Using the status quo, a connection without stateless retry requires RTT_DNS to resolve the domain name and subsequently RTT_direct to establish the connection between client and QUIC server. Establishing the same connection via a QuicSocks proxy requires the sum of RTT_DNS and RTT_Server. Note that we define a connection to be established when the client and the server have computed their forward-secure encryption keys and are ready to send application data. Thus, we may count a connection as established before the client's FIN message has been processed by the server. With respect to stateless retries, we observe that the delay of the connection establishment increases by RTT_direct for the status quo and by RTT_Server for our proposal. In total, our analytical model indicates that our proposal outperforms the status quo if RTT_Server is smaller than RTT_direct. In this case, the saving for a connection establishment without stateless retry equals the difference between RTT_direct and RTT_Server, and the benefit of our proposal doubles if the connection establishment requires a stateless retry. Our proposal achieves its worst performance when the client is colocated with the server; the best performance is realized when the DNS resolver, the QuicSocks proxy, and the server are colocated. [Fig. 5: Network model with the client, the QuicSocks proxy colocated with the DNS resolver, and the server, annotated with RTT_direct, RTT_Server, and RTT_DNS.] In the following, we assume an ISP provides a DNS resolver/QuicSocks proxy half-way, on-path between client and server. Note that the client's latency to the first IP hop (last-mile latency) contributes between 40% and 80% of a typical RTT_direct [19]. A typical RTT_direct in the U.S. LTE mobile network is 60 ms to reach popular online services [20]. For this example, we assume RTT_DNS and RTT_Server to each be 30 ms, while RTT_direct is 60 ms. Table II provides the results for this example; the sketch below reproduces these numbers. We find that our proposal accelerates the connection establishment by 30 ms or 60 ms, depending on whether a stateless retry is required. In summary, the total delay of the connection establishment is reduced by at least 33.3% and up to 40%. Note that the absolute benefit of our proposal is even higher in 3G networks, where RTT_direct in the U.S. is on average between 86 ms and 137 ms depending on the mobile network provider [20].
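The following minimal Python sketch reproduces the analytical model and the example above; the function mirrors the Table I formulas as described in the text, and the RTT values are the illustrative LTE numbers from this section.

```python
def handshake_delay(rtt_dns, rtt_server, rtt_direct, via_proxy, stateless_retry):
    """Network latency until forward-secure keys are established (Table I)."""
    if via_proxy:  # QuicSocks: DNS lookup and forwarding happen at the proxy
        delay = rtt_dns + rtt_server
        if stateless_retry:
            delay += rtt_server  # retry round trip runs between proxy and server
    else:          # status quo: client resolves first, then contacts the server
        delay = rtt_dns + rtt_direct
        if stateless_retry:
            delay += rtt_direct
    return delay

# Example: ISP resolver/proxy half-way on-path in a U.S. LTE network.
rtt_dns, rtt_server, rtt_direct = 30, 30, 60
for retry in (False, True):
    quo = handshake_delay(rtt_dns, rtt_server, rtt_direct, False, retry)
    prox = handshake_delay(rtt_dns, rtt_server, rtt_direct, True, retry)
    print(f"retry={retry}: status quo {quo} ms, QuicSocks {prox} ms, "
          f"saving {quo - prox} ms ({(quo - prox) / quo:.1%})")
```

Running this yields savings of 30 ms (33.3%) without and 60 ms (40.0%) with a stateless retry, matching the figures reported in the text.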
B. Real-world network topologies. Our analytical evaluation indicates that our proposal can significantly reduce the latency of a QUIC connection establishment with a prior DNS query if the QuicSocks proxy has a favorable position in the network topology. In this section, we investigate real-world network topologies to approximate the feasible performance benefit of QuicSocks proxies when they are colocated with ISP-provided DNS resolvers. We begin by describing the methodology applied to measure real-world network topologies and subsequently evaluate the QuicSocks proposal based on the collected data. a) Data collection: Our data collection aims to measure RTT_DNS, RTT_Server, and RTT_direct for different real-world clients. We use nodes of the RIPE Atlas network [21] to represent our clients. These RIPE Atlas nodes allow us to conduct custom ping measurements and DNS queries. The selected nodes are in different autonomous systems all over Germany, including home networks and data centers. Our test server is likewise located in a data center in Germany, operated by Hetzner Online GmbH. The aim of this test setup is to be representative of typical Internet connections in countries with an infrastructure similar to Germany's. To measure the RTTs between the involved peers, we require the IP address of each peer to conduct the corresponding ping measurements. While we have access to the IP addresses of our clients and the test server, we cannot directly look up the address of a client's locally configured DNS resolver. Furthermore, a DNS resolver might use an anycast service for its IP address [22], which may return different physical endpoints when pinged from the client and the server, respectively. To learn the IP address of the recursive DNS resolver, we used message 6 in Figure 3, where the recursive resolver sends a request to the authoritative nameserver. In detail, we announced a DNS authority section at our test server for a subdomain such as dnstest.example.com. Then, we conducted a DNS query from the client to a random subdomain in our authority section, such as foobar.dnstest.example.com (this probing step is sketched below). At the same time, we captured the network traffic on the server and found a DNS query for this subdomain foobar.dnstest.example.com. We reasoned that the sender address of this DNS query belongs to the resolver handling the client's DNS query. Depending on the DNS setup, the IP address of the locally configured DNS resolver might differ from the address sending the query to the authoritative nameserver. For these cases, we assume that both DNS resolvers are colocated, yielding about the same RTT_DNS and RTT_Server with respect to our measurements. In total, we used 800 RIPE Atlas nodes in Germany to conduct our data collection on the 13th of June 2019. A successful measurement includes RTT_DNS, RTT_Server, and RTT_direct for a node, where we averaged five ping measurements to determine the respective RTT. We obtained successful results for 650 nodes. Failures can mainly be attributed to DNS resolvers that did not respond to ping measurements; a small fraction of measurements also experienced failures during the DNS measurements. To focus our data collection on ISP-provided DNS resolvers, we investigated the autonomous system numbers of the observed IP addresses. We assume that an ISP-provided DNS resolver uses an IP address from the same autonomous system as the node. This approach allows us to sort out configured public DNS resolvers such as Google DNS, which usually operate from an IP address assigned to a different autonomous system than the node's. In total, our data collection successfully obtained measurements from 474 nodes in Germany, each using an ISP-provided DNS resolver.
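A hedged sketch of the resolver-discovery probe described above: the client resolves a random, unique subdomain under a zone whose authoritative nameserver we control, so the query observed at that nameserver reveals the egress address of the client's recursive resolver. The zone name is a placeholder, and the server-side capture (e.g. tcpdump on the authoritative nameserver) is not shown.

```python
import socket
import uuid

ZONE = "dnstest.example.com"  # placeholder zone whose authority section we announce

def probe_resolver() -> str:
    """Trigger a cache-missing lookup; the authoritative nameserver then logs
    the source address of the recursive resolver that forwards this query."""
    label = uuid.uuid4().hex  # random label guarantees a cache miss
    qname = f"{label}.{ZONE}"
    try:
        socket.getaddrinfo(qname, None)  # query via the locally configured resolver
    except socket.gaierror:
        pass  # NXDOMAIN is fine; the query still reached our nameserver
    return qname  # match this name against the server-side packet capture
```

Matching the returned name against the capture links each client to the resolver address that can then be pinged from both the client and the test server.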
b) Results: To accelerate a connection establishment via our proposal, RTT_Server must be smaller than RTT_direct. Our results indicate that for almost all clients, RTT_Server is significantly smaller than RTT_direct. For 51% of the considered RIPE Atlas nodes, RTT_Server is at least 5 ms smaller than RTT_direct, and 36.7% of the nodes experience an RTT_Server at least 10 ms smaller than RTT_direct. As can be observed in Figure 6, almost no node experiences an RTT_Server longer than 40 ms, while a tail of 10% of the respective RIPE Atlas nodes observes a longer RTT_direct. In this long tail, we find 7.2% and 3.8% of the nodes to have an RTT_Server that outperforms RTT_direct by at least 40 ms and 50 ms, respectively. Furthermore, Figure 6 provides a plot of RTT_DNS. We find that 60% of the nodes have an RTT of less than 10 ms to their ISP-provided DNS resolver. Moreover, RTT_DNS is almost always smaller than RTT_direct for a given node. This can be explained by RIPE Atlas nodes being located towards the periphery of the Internet, while their ISP-provided DNS resolvers hold a position closer to its core. To evaluate our proposal against the status quo, we combine the equations provided in Table I with the measured RTTs. Figure 7 plots these results as a cumulative distribution of the RIPE Atlas nodes in Germany using an ISP-provided DNS resolver over the network latency required to complete the QUIC connection establishment. In total, Figure 7 contains four plots. In the scenario of a QUIC connection establishment using a stateless retry, the solid and dashed lines represent the status quo and our proposed solution, respectively. In the scenario of a QUIC handshake without stateless retry, the status quo and our proposal are marked as dash-dotted and dotted lines, respectively. In total, our results indicate that our proposal accelerates the connection establishment for the great majority of the investigated RIPE Atlas nodes. Furthermore, we observe the trend that performance improvements are higher for nodes requiring a longer network latency to complete the handshake. For example, we find that approximately 10% of the nodes save at least 30 ms establishing a connection without stateless retry, and at least 60 ms with a stateless retry. Moreover, 24.3% of the investigated nodes save at least 15 ms without and 30 ms with a stateless retry during the connection establishment. Note that approximately a third of the nodes complete a connection establishment with stateless retry faster via our proposal than a status quo handshake without stateless retry. C. Prototype-based Measurements. In this section, we compare the delay of a default QUIC connection establishment with handshakes using our proposal. Since the performance of our proposal significantly depends on the network topology of the test setup, this measurement neglects network topology and investigates the delay caused by the computational overhead of introducing a QuicSocks proxy on a network link. a) Data collection: For our test setup, we use a publicly accessible QUIC server, a Dante SOCKS proxy (v1.4.2), and our implemented prototype representing the client. Our prototype and the Dante SOCKS proxy run on the same virtual machine, which is equipped with 1 vCPU and 0.6 GB RAM and runs Debian 9.9 (Stretch). The colocation of our client implementation with the proxy ensures that measurements using the proxy traverse the same network path as measurements conducted without the proxy. In detail, we conducted three different types of measurements on the 25th of June 2019, repeating each measurement 1,000 times. The default measurements do not employ our proxy and capture the time required to establish a QUIC connection with the server. The cold start measurements include the time required to establish the SOCKS connection and the subsequent QUIC handshake via the proxy; note that a single SOCKS connection can be used to establish several QUIC connections. The warm start measurements include the time to establish a QUIC connection via our proxy but exclude the delay incurred by establishing the SOCKS connection.
b) Results: Our data collection provided 1,000 values for each of the three measurement types. To evaluate the collected data, we retrieve the minimum and the median value of each measurement type. The default measurement has a minimum of 49.145 ms and a median of 51.309 ms. The warm start measurement has a minimum of 49.708 ms and a median of 52.471 ms. These values are between 1.1% and 2.3% higher than the default measurement, which can be explained by the additional overhead caused by the interaction with the proxy. Furthermore, these values indicate an absolute overhead of using a SOCKS proxy of less than 1.2 ms on the median, if the SOCKS connection is already established. The cold start measurement yields a minimum of 52.073 ms and a median of 54.772 ms. Comparing both measurements using the SOCKS proxy, we can attribute an additional overhead of about 2.3 ms in our test setup to establishing the SOCKS connection. As a result, we recommend that clients establish their SOCKS connection early and use the warm start approach to reduce the delays during their QUIC connection establishments. V. RELATED WORK. There is much previous work on accelerating connection establishments on the web. For example, Google launched its Chrome Lite Pages feature in 2019 [23]. Lite Pages runs a proxy server that prefetches a website and forwards a compressed version of it to the client. This approach leads to significant performance improvements for clients experiencing high network latencies, as they only need to establish a single connection to the proxy server to retrieve the website. However, as major disadvantages compared to our proposal, it places a significant load on the proxy server and breaks the principle of end-to-end transport encryption between the client and the web server. Furthermore, Miniproxy [24] can be used to accelerate TCP's connection establishment. This approach places a proxy between the client and the web server, which doubles the number of required TCP handshakes. Miniproxy can provide a faster TCP connection establishment in the case of a favorable network topology and significant RTTs between client and web server. However, the QUIC protocol includes computationally expensive cryptographic handshakes that cause a significant delay compared to TCP's handshake [25]; therefore, this approach seems less feasible when QUIC is used. The ASAP protocol [26] piggybacks the first transport packet on the client's DNS query, and the DNS server forwards it to the web server after resolving the IP address. However, this approach requires the DNS server to spoof the client's IP address, which violates the Best Current Practice RFC 2827 [18]. Furthermore, a deployment of ASAP requires significant infrastructural changes to the Internet because it uses a custom transport protocol. Further performance improvements can be achieved by sending replicated DNS queries to several DNS resolvers and occasionally receiving a faster response [27]. Another DNS-based mechanism aiming to reduce latency uses Server Push [28], where the resolver provides speculative DNS responses prior to the client's query. In total, these approaches trade a higher system utilization for a possibly reduced latency. VI. CONCLUSION. We expect high-latency access networks to remain a web performance bottleneck for a significant number of users throughout the forthcoming years. The QUIC protocol aims to reduce the delay of connection establishments on the web.
However, our measurements across a wide variety of access networks in Germany indicate that a tail of users is affected by significant delays beyond 100 ms to complete a DNS lookup with a subsequent QUIC connection establishment. Our proposal exploits the fact that ISP-provided DNS resolvers are typically located further towards the core of the Internet than clients. We find that colocating a proxy with the ISP-provided DNS resolver provides significant performance gains for clients on high-latency access networks. For example, a client can delegate the task of DNS lookups to the proxy in its more favorable network position. Furthermore, the QUIC protocol provides features such as connection migration and the concept of stateless retries that allow further performance optimizations when employing a proxy. We hope that our work leads to an increased awareness of the performance problems experienced by a significant tail of users on high-latency access networks and spurs further research to reduce this web performance bottleneck.
5,679
1907.01417
2954765265
Knowledge base construction is crucial for summarising, understanding and inferring relationships between biomedical entities. However, for many practical applications such as drug discovery, the scarcity of relevant facts (e.g. gene X is a therapeutic target for disease Y) severely limits a domain expert's ability to create a usable knowledge base, either directly or by training a relation extraction model. In this paper, we present a simple and effective method of extracting new facts with a pre-specified binary relationship type from the biomedical literature, without requiring any training data or hand-crafted rules. Our system discovers, ranks and presents the most salient patterns to domain experts in an interpretable form. By marking patterns as compatible with the desired relationship type, experts indirectly batch-annotate candidate pairs whose relationship is expressed with such patterns in the literature. Even with a complete absence of seed data, experts are able to discover thousands of high-quality pairs with the desired relationship within minutes. When a small number of relevant pairs do exist - even when their relationship is more general (e.g. gene X is biologically associated with disease Y) than the relationship of interest - our system leverages them in order to i) learn a better ranking of the patterns to be annotated or ii) generate weakly labelled pairs in a fully automated manner. We evaluate our method both intrinsically and via a downstream knowledge base completion task, and show that it is an effective way of constructing knowledge bases when few or no relevant facts are already available.
The idea of extracting entity pairs by discovering textual patterns dates back to early work on bootstrapping for relation extraction with the DIPRE system @cite_1 . This system was designed to find co-occurrences of seed entity pairs of a known relationship type inside unlabelled text, then extract simple patterns (exact string matches) from these occurrences and use them to discover new entity pairs. Follow-up work introduced a pattern evaluation methodology based on the precision of a pattern on the set of entity pairs that had already been discovered, and used the dot product between word vectors instead of an exact string match to allow for slight variations in text. Later work @cite_3 @cite_2 @cite_5 proposed more sophisticated pattern extraction methods (based on dependency graphs or kernel methods on word vectors) and different pattern evaluation frameworks (document relevance scores).
{ "abstract": [ "", "The World Wide Web is a vast resource for information. At the same time it is extremely distributed. A particular type of data such as restaurant lists may be scattered across thousands of independent information sources in many different formats. In this paper, we consider the problem of extracting a relation for such a data type from all of these sources automatically. We present a technique which exploits the duality between sets of patterns and relations to grow the target relation starting from a small sample. To test our technique we use it to extract a relation of (author,title) pairs from the World Wide Web.", "This paper presents a novel approach to the semi-supervised learning of Information Extraction patterns. The method makes use of more complex patterns than previous approaches and determines their similarity using a measure inspired by recent work using kernel methods (Culotta and Sorensen, 2004). Experiments show that the proposed similarity measure outperforms a previously reported measure based on cosine similarity when used to perform binary relation extraction.", "Text documents often contain valuable structured data that is hidden Yin regular English sentences. This data is best exploited infavailable as arelational table that we could use for answering precise queries or running data mining tasks.We explore a technique for extracting such tables from document collections that requires only a handful of training examples from users. These examples are used to generate extraction patterns, that in turn result in new tuples being extracted from the document collection.We build on this idea and present our Snowball system. Snowball introduces novel strategies for generating patterns and extracting tuples from plain-text documents.At each iteration of the extraction process, Snowball evaluates the quality of these patterns and tuples without human intervention,and keeps only the most reliable ones for the next iteration. In this paper we also develop a scalable evaluation methodology and metrics for our task, and present a thorough experimental evaluation of Snowball and comparable techniques over a collection of more than 300,000 newspaper documents." ], "cite_N": [ "@cite_5", "@cite_1", "@cite_3", "@cite_2" ], "mid": [ "", "1489949474", "2146960529", "2103931177" ] }
0
1907.01417
2954765265
Knowledge base construction is crucial for summarising, understanding and inferring relationships between biomedical entities. However, for many practical applications such as drug discovery, the scarcity of relevant facts (e.g. gene X is a therapeutic target for disease Y) severely limits a domain expert's ability to create a usable knowledge base, either directly or by training a relation extraction model. In this paper, we present a simple and effective method of extracting new facts with a pre-specified binary relationship type from the biomedical literature, without requiring any training data or hand-crafted rules. Our system discovers, ranks and presents the most salient patterns to domain experts in an interpretable form. By marking patterns as compatible with the desired relationship type, experts indirectly batch-annotate candidate pairs whose relationship is expressed with such patterns in the literature. Even with a complete absence of seed data, experts are able to discover thousands of high-quality pairs with the desired relationship within minutes. When a small number of relevant pairs do exist - even when their relationship is more general (e.g. gene X is biologically associated with disease Y) than the relationship of interest - our system leverages them in order to i) learn a better ranking of the patterns to be annotated or ii) generate weakly labelled pairs in a fully automated manner. We evaluate our method both intrinsically and via a downstream knowledge base completion task, and show that it is an effective way of constructing knowledge bases when few or no relevant facts are already available.
A well known body of work, OpenIE @cite_18 @cite_0 @cite_10 @cite_15 aims to extract patterns between entity mentions in sentences, thereby discovering new surface forms which can be clustered @cite_12 @cite_9 in order to reveal new meaningful relationship types. In the biomedical domain, Percha and Altman attempt something similar by extracting and clustering dependency patterns between pairs of biomedical entities (e.g. chemical-gene, chemical-disease, gene-disease). Our work differs from these approaches in that we extract pairs for a pre-specified relationship type (either from scratch or by augmenting existing data written with specific guidelines), which is not guaranteed to correspond to a cluster of discovered surface forms.
{ "abstract": [ "Traditionally, Information Extraction (IE) has focused on satisfying precise, narrow, pre-specified requests from small homogeneous corpora (e.g., extract the location and time of seminars from a set of announcements). Shifting to a new domain requires the user to name the target relations and to manually create new extraction rules or hand-tag new training examples. This manual labor scales linearly with the number of target relations. This paper introduces Open IE (OIE), a new extraction paradigm where the system makes a single data-driven pass over its corpus and extracts a large set of relational tuples without requiring any human input. The paper also introduces TEXTRUNNER, a fully implemented, highly scalable OIE system where the tuples are assigned a probability and indexed to support efficient extraction and exploration via user queries. We report on experiments over a 9,000,000 Web page corpus that compare TEXTRUNNER with KNOWITALL, a state-of-the-art Web IE system. TEXTRUNNER achieves an error reduction of 33 on a comparable set of extractions. Furthermore, in the amount of time it takes KNOWITALL to perform extraction for a handful of pre-specified relations, TEXTRUNNER extracts a far broader set of facts reflecting orders of magnitude more relations, discovered on the fly. We report statistics on TEXTRUNNER's 11,000,000 highest probability tuples, and show that they contain over 1,000,000 concrete facts and over 6,500,000 more abstract assertions.", "We propose a demonstration of PATTY, a system for learning semantic relationships from the Web. PATTY is a collection of relations learned automatically from text. It aims to be to patterns what WordNet is to words. The semantic types of PATTY relations enable advanced search over subject-predicate-object data. With the ongoing trends of enriching Web data (both text and tables) with entity-relationship-oriented semantic annotations, we believe a demo of the PATTY system will be of interest to the database community.", "Open Information Extraction (IE) is the task of extracting assertions from massive corpora without requiring a pre-specified vocabulary. This paper shows that the output of state-of-the-art Open IE systems is rife with uninformative and incoherent extractions. To overcome these problems, we introduce two simple syntactic and lexical constraints on binary relations expressed by verbs. We implemented the constraints in the ReVerb Open IE system, which more than doubles the area under the precision-recall curve relative to previous extractors such as TextRunner and woepos. More than 30 of ReVerb's extractions are at precision 0.8 or higher---compared to virtually none for earlier systems. The paper concludes with a detailed analysis of ReVerb's errors, suggesting directions for future work.", "Relation triples produced by open domain information extraction (open IE) systems are useful for question answering, inference, and other IE tasks. Traditionally these are extracted using a large set of patterns; however, this approach is brittle on out-of-domain text and long-range dependencies, and gives no insight into the substructure of the arguments. We replace this large pattern set with a few patterns for canonically structured sentences, and shift the focus to a classifier which learns to extract self-contained clauses from longer sentences. We then run natural logic inference over these short clauses to determine the maximally specific arguments for each candidate triple. 
We show that our approach outperforms a state-of-the-art open IE system on the end-to-end TAC-KBP 2013 Slot Filling task.", "Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, state-of-the-art Open IE systems such as ReVerb and woe share two important weaknesses -- (1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents ollie, a substantially improved Open IE system that addresses both these limitations. First, ollie achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. ollie obtains 2.7 times the area under precision-yield curve (AUC) compared to ReVerb and 1.9 times the AUC of woeparse.", "Traditional approaches to Relation Extraction from text require manually defining the relations to be extracted. We propose here an approach to automatically discovering relevant relations, given a large text corpus plus an initial ontology defining hundreds of noun categories (e.g., Athlete, Musician, Instrument). Our approach discovers frequently stated relations between pairs of these categories, using a two step process. For each pair of categories (e.g., Musician and Instrument) it first co-clusters the text contexts that connect known instances of the two categories, generating a candidate relation for each resulting cluster. It then applies a trained classifier to determine which of these candidate relations is semantically valid. Our experiments apply this to a text corpus containing approximately 200 million web pages and an ontology containing 122 categories from the NELL system [, 2010b], producing a set of 781 proposed candidate relations, approximately half of which are semantically valid. We conclude this is a useful approach to semi-automatic extension of the ontology for large-scale information extraction systems such as NELL." ], "cite_N": [ "@cite_18", "@cite_9", "@cite_0", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "1493490255", "2083545228", "2167187514", "2251913848", "2129842875", "2109718074" ] }
0
1907.01150
2953924693
We propose a novel multi-scale template matching method which is robust against both scaling and rotation in unconstrained environments. The key component behind it is a similarity measure referred to as scalable diversity similarity (SDS). Specifically, SDS exploits the bidirectional diversity of the nearest neighbor (NN) matches between two sets of points. To achieve scale robustness of the similarity measure, local appearance and rank information are jointly used for the NN search. Furthermore, by introducing a penalty term on the scale change and a polar radius term into the similarity measure, SDS is shown to be a well-performing similarity measure against overall size and rotation changes, as well as non-rigid geometric deformations, background clutter, and occlusions. The properties of SDS are statistically justified, and experiments on both synthetic and real-world data show that SDS can significantly outperform state-of-the-art methods.
In unconstrained environments, a key cue for dealing with nonrigid transformations and other noise is to involve global information, instead of pixel-wise local information, when designing a robust similarity. Histogram matching (HM) @cite_18 @cite_21 @cite_17 , which measures the similarity between two color histograms, is not restricted by geometric transformations. However, it is usually not a good choice when background clutter and occlusions appear within the windows. Earth mover's distance (EMD) @cite_9 is proposed to measure the similarity between two probability distributions, and a more robust approach @cite_6 measures the EMD on a spatial-appearance representation. A tone mapping similarity measure @cite_12 , approximated by a piece-wise constant linear function, is proposed for handling noise. Asymmetric correlation @cite_7 is proposed to deal with both noise and illumination changes. Other measures focus on improving robustness against noise, such as M-estimators @cite_0 @cite_5 and Hamming-based distances @cite_10 @cite_14 . We refer the interested readers to a comprehensive survey @cite_4 .
{ "abstract": [ "In image retrieval based on color, the weighted distance between color histograms of two images, represented as a quadratic form, may be defined as a match measure. However, this distance measure is computationally expensive and it operates on high dimensional features (O(N)). We propose the use of low-dimensional, simple to compute distance measures between the color distributions, and show that these are lower bounds on the histogram distance measure. Results on color histogram matching in large image databases show that prefiltering with the simpler distance measures leads to significantly less time complexity because the quadratic histogram distance is now computed on a smaller set of images. The low-dimensional distance measure can also be used for indexing into the database. >", "This paper describes a method for robust real-time pattern matching. We first introduce a family of image distance measures, the Image Hamming Distance Family. Members of this family are robust to occlusion, small geometrical transforms, light changes, and nonrigid deformations. We then present a novel Bayesian framework for sequential hypothesis testing on finite populations. Based on this framework, we design an optimal rejection acceptance sampling algorithm. This algorithm quickly determines whether two images are similar with respect to a member of the Image Hamming Distance Family. We also present a fast framework that designs a near- optimal sampling algorithm. Extensive experimental results show that the sequential sampling algorithm's performance is excellent. Implemented on a Pentium IV 3 GHz processor, the detection of a pattern with 2,197 pixels in 640times480 pixel frames, where in each frame the pattern rotated and was highly occluded, proceeds at only 0.022 seconds per frame.", "", "We present an efficient and noise robust template matching method based on asymmetric correlation (ASC). The ASC similarity function is invariant to affine illumination changes and robust to extreme noise. It correlates the given non-normalized template with a normalized version of each image window in the frequency domain. We show that this asymmetric normalization is more robust to noise than other cross correlation variants, such as the correlation coefficient. Direct computation of ASC is very slow, as a DFT needs to be calculated for each image window independently. To make the template matching efficient, we develop a much faster algorithm, which carries out a prediction step in linear time and then computes DFTs for only a few promising candidate windows. We extend the proposed template matching scheme to deal with partial occlusion and spatially varying light change. Experimental results demonstrate the robustness of the proposed ASC similarity measure compared to state-of-the-art template matching methods.", "We investigate the properties of a metric between two distributions, the Earth Mover's Distance (EMD), for content-based image retrieval. The EMD is based on the minimal cost that must be paid to transform one distribution into the other, in a precise sense, and was first proposed for certain vision problems by Peleg, Werman, and Rom. For image retrieval, we combine this idea with a representation scheme for distributions that is based on vector quantization. This combination leads to an image comparison framework that often accounts for perceptual similarity better than other previously proposed methods. 
The EMD is based on a solution to the transportation problem from linear optimization, for which efficient algorithms are available, and also allows naturally for partial matching. It is more robust than histogram matching techniques, in that it can operate on variable-length representations of the distributions that avoid quantization and other binning problems typical of histograms. When used to compare distributions with the same overall mass, the EMD is a true metric. In this paper we focus on applications to color and texture, and we compare the retrieval performance of the EMD with that of other distances.", "A new method for real time tracking of non-rigid objects seen from a moving camera is proposed. The central computational module is based on the mean shift iterations and finds the most probable target position in the current frame. The dissimilarity between the target model (its color distribution) and the target candidates is expressed by a metric derived from the Bhattacharyya coefficient. The theoretical analysis of the approach shows that it relates to the Bayesian framework while providing a practical, fast and efficient solution. The capability of the tracker to handle in real time partial occlusions, significant clutter, and target scale variations, is demonstrated for several image sequences.", "Locally Orderless Tracking (LOT) is a visual tracking algorithm that automatically estimates the amount of local (dis)order in the target. This lets the tracker specialize in both rigid and deformable objects on-line and with no prior assumptions. We provide a probabilistic model of the target variations over time. We then rigorously show that this model is a special case of the Earth Mover's Distance optimization problem where the ground distance is governed by some underlying noise model. This noise model has several parameters that control the cost of moving pixels and changing their color. We develop two such noise models and demonstrate how their parameters can be estimated on-line during tracking to account for the amount of local (dis)order in the target. We also discuss the significance of this on-line parameter update and demonstrate its contribution to the performance. Finally we show LOT's tracking capabilities on challenging video sequences, both commonly used and new, displaying performance comparable to state-of-the-art methods.", "We propose a fast algorithm for speeding up the process of template matching that uses M-estimators for dealing with outliers. We propose a particular image hierarchy called the p-pyramid that can be exploited to generate a list of ascending lower bounds of the minimal matching errors when a nondecreasing robust error measure is adopted. Then, the set of lower bounds can be used to prune the search of the p-pyramid, and a fast algorithm is thereby developed in this paper. This fast algorithm ensures finding the global minimum of the robust template matching problem in which a nondecreasing M-estimator serves as an error measure. Experimental results demonstrate the effectiveness of our method.", "This paper proposes a new template matching method that is robust to outliers and fast enough for real-time operation. The template and image are densely transformed in binary code form by projecting and quantizing histograms of oriented gradients. The binary codes are matched by a generic method of robust similarity applicable to additive match measures, such as L p - and Hamming distances. 
The robust similarity map is computed efficiently via a proposed Inverted Location Index structure that stores pixel locations indexed by their values. The method is experimentally justified in large image patch datasets. Challenging applications, such as intra-category object detection, object tracking, and multimodal image matching are demonstrated.", "In the majority of robot applications, including human-computer interaction, template matching is used to find a specific area in a given image or a frame of video stream. Flexible and robust template matching algorithm necessitates feature extraction, for example gradient calculation. This requires complex calculation which causes bad response time of the system. An alternative solution is the use of index table, which stores coordinates that have the same grey level. However, due to the mechanism of the matching algorithm, it is necessary to have several disadvantages in the algorithm. But these restrictions are less important, and there is an idea that copes with these limitations. This paper proposes fast and robust template matching algorithm that uses grey level index table and image rank technique. This algorithm can find specific area under the given template query image with 30 Gaussian noise.", "A fast pattern matching scheme termed matching by tone mapping (MTM) is introduced which allows matching under nonlinear tone mappings. We show that, when tone mapping is approximated by a piecewise constant linear function, a fast computational scheme is possible requiring computational time similar to the fast implementation of normalized cross correlation (NCC). In fact, the MTM measure can be viewed as a generalization of the NCC for nonlinear mappings and actually reduces to NCC when mappings are restricted to be linear. We empirically show that the MTM is highly discriminative and robust to noise with comparable performance capability to that of the well performing mutual information, but on par with NCC in terms of computation time.", "Color-based trackers recently proposed in [3,4,5] have been proved robust and versatile for a modest computational cost. They are especially appealing for tracking tasks where the spatial structure of the tracked objects exhibits such a dramatic variability that trackers based on a space-dependent appearance reference would break down very fast. Trackers in [3,4,5] rely on the deterministic search of a window whose color content matches a reference histogram color model.Relying on the same principle of color histogram distance, but within a probabilistic framework, we introduce a new Monte Carlo tracking technique. The use of a particle filter allows us to better handle color clutter in the background, as well as complete occlusion of the tracked entities over a few frames.This probabilistic approach is very flexible and can be extended in a number of useful ways. In particular, we introduce the following ingredients: multi-part color modeling to capture a rough spatial layout ignored by global histograms, incorporation of a background color model when relevant, and extension to multiple objects." ], "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_7", "@cite_9", "@cite_21", "@cite_6", "@cite_0", "@cite_5", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2118783153", "2137277421", "", "2019704568", "2143668817", "2159128898", "2034938692", "2171194336", "2082769906", "2168327389", "2070240696", "1513768190" ] }
Multi-scale Template Matching with Scalable Diversity Similarity in an Unconstrained Environment
Template matching is a basic component in a variety of computer vision applications. In this paper, we address the problem of template matching in unconstrained scenarios. That is, a rigid/nonrigid object moves in 3D space with a variant/invariant background, and the object may undergo rigid/nonrigid deformations and partial occlusions, as demonstrated in Figure 1. As the most crucial technique in template matching tasks, similarity measures have been studied for decades, yielding various methods from the classic sum of absolute differences (SAD) and sum of squared distances (SSD) to the recent best buddies similarity (BBS) [10] and deformable diversity similarity (DDIS) [18]. However, several aspects still need to be improved: (1) Most real applications prefer matching results with bounding boxes of variable size that enclose object regions, rather than boxes of a fixed size. Nevertheless, adding geometric parameters expands the set of candidates to be evaluated, which requires a similarity measure that remains distinctive under scaling changes. (2) Template matching is usually dense, and all the pixels/features within the template and a candidate window are taken into account to measure the similarity, even when some parts are not desirable (e.g., occlusions, appearance changes brought by deformation); this requires a similarity measure that can deal with noise and outliers. (3) Due to possible deformations of the template, a good similarity measure is expected to be independent of the spatial correlation (e.g., when the object within a candidate window is strongly rotated compared to the template, the spatial correlation between the template and the candidate in raster-scan order can become untrustworthy). In this paper, scalable diversity similarity (SDS) is proposed to address the above problems. SDS can be applied with a multi-scale sliding window and is not limited to any specific parametric deformation model. Both BBS and DDIS focus on settling problem (2) above by exploiting the properties of the nearest neighbor (NN). Each NN is defined by a pair of patches between template and target. In BBS, a match is defined if and only if each patch in a pair is the NN of the other, and the number of such matches determines the BBS score. DDIS further improves BBS by introducing the relevant diversity of patch subsets between the target and template, which makes it robust against occlusions and deformation. Although these methods can deal with deformation within a window to some extent, there are limitations, especially regarding problems (1) and (3). We extend DDIS to propose SDS based on relevant diversity statistics. SDS has the following two advantages concerning problems (1) and (3). The first is that SDS allows similarity measurement between two sets of points of different sizes, and the magnitude of the score is scale-robust. Usually, the magnitude of the DDIS or BBS score grows as the point set's scale increases, which makes larger candidate windows more likely to be selected as final results. To alleviate this unfairness, SDS introduces bidirectional relevant diversity and penalizes changes of scale, which makes the employment of multi-scale sliding windows feasible and lets the score converge to the correct scale. This property of SDS is statistically justified in Sec. 2.4.
The second advantage of SDS is its robustness to the intense rotation. Both BBS and DDIS involve a spatial distance term in NN search or the final similarity calculation, which poses a limitation that the NN of a point must be spatially close. The limitation is a strong prior that can indeed reduce the number of outliers, but at the same time decrease the robustness against intense rotation. In this paper, instead of Cartesian coordinate, we exploit the polar angle of the polar coordinate for the calculation of spatial distance, which releases the limitation brought by the prior. Besides, rank information of appearance within a local circle is employed for searching NN along with local appearance, which helps to find more confident NN and yields in a significant improvement for intense rotation cases. This property of SDS is also statistically justified in Sec. 2.4. To summarize, the main contributions of this paper can be concluded as (a) SDS introduces bidirectional relevant diversity and penalizes on the change of scales to deal with scaling. (b) The rank of local appearance information and the polar radius is exploited to make the SDS robust against intense rotation change. (c) We originally collect a comprehensive dataset with 498 template-target pairs in the unconstrained environments for testing the matching performance, which includes 166 image pairs for scaling, rotation, scal-ing+rotation, respectively. Methodology Given a template cropped from a reference image and a target image related by unknown geometric and photometric transformations, our purpose is to design a similarity measure, which can distinctively localize a region in the target image that includes the same object with the template by finding the maximum value. Each candidate region in the target image is defined by a rectangular window, and the candidate windows in the target image are generated in a multiple-scale sliding window fashion. Taking the template image T = {t i } n i=1 and a candidate window Q = q j m j=1 from target image Q = {q l } M l=1 as inputs, a SDS score in real number can be calculated, where the t i and q j represent non-overlapped patch from the template and a candidate window, respectively. t i and q j can also be treated as points when T and Q are treated as point sets. Q ⊆ Q, and m ≤ M. Nearest neighbor has been shown to be a strong feature for designing similarity measure in some prior researches. To better address the difference, we first recall BBS [3] which counts the number of bidirectional NN matches between T and Q: BBS = c {∃t i ∈ T, ∃q j ∈ Q : NN(t i , Q) = q j ∧ NN(q j , T ) = t i } ,(1) where NN(t i , Q) = argmin q j ∈Q d(t i , q j ) is a function returns the NN of t i with respect to Q, and the d(·) is a distance function. The |{·}| denotes the size of a set, and the c = 1/min{n, m} is a normalization factor. We are now ready to introduce our method in a bottom-up fashion: from NN search to bidirectional diversity, and finally the SDS similarity. Rank of Local Appearance for Rotation Robust NN search The distance function in Eq. 1 is defined by d (p i , q j ) = p (A) i − q (A) j 2 2 + λ p (L) i − q (L) j 2 2 ,(2) where (A) denotes pixel appearance (e.g., RGB) and (L) denotes pixel location (x, y) within the patch normalized to the range [0, 1]. 
In the stage of NN searching, under the assumption that intense deformation such as rotation do not occur within the patch, the spatial term can contribute to improving the confidence of NN by confirming the consistency of appearance and position. We propose d (p i , q j ) = p (A) i − q (A) j 2 2 + λ p (R) i − q (R) j 2 2 ,(3) to incorporate (R) instead of (L), which denotes the rank with respect to the appearance of pixels within a circle. The origin of the circle is p i , with a support radius of r. Specifically, p (R) i = ∑ p∈circle(p i ,r) I p (A) i ≥ p (A) /r 2 ,(4) where I(·) is an indicator function that turns true and false into 1 and 0. Equation in the same form is applied to q j . Unlike pixel location, the appearance rank defined by Eq. 4 is invariant to rotation, which can also be considered as structural information (e.g., the shape of the distribution of pixel values) extracted from a local region. As the rotation will not destroy the structure, it is reasonable to explain its invariance against rotation. Furthermore, the Euclidean distance of orders emphasizes the influence of local extremes, which also contributes to keeping the local features well. Bidirectional Diversity for Discriminative Similarity Measure We first extend the diversity similarity (DIS) defined in [18] to a bidirectional way. The DIS is defined as DIS = c {t i ∈ T : ∃q j ∈ Q, NN(q j , T ) = t i } ,(5) which counts the types of points in T that have NN in Q with the same pixel type (i.e., defined as diversity in direction T → Q). The authors claim that this one direction diversity provides a good approximation to BBS with less computation. However, the number of candidates increase explosively by allowing multi-scale candidate windows Q, therefore a more discriminative similarity measure is needed. We exploit both diversity calculated with respect to T and Q (i.e., T → Q and Q → T ). Specifically, we first define the following function ε(t i ) which indicates the number of points q j ∈ Q whose NNs are equal to t i in direction T → Q, ε(t i ) = q j ∈ Q : NN(q j , T ) = t i ,(6) where NN(·) here is calculated with distance defined in Eq. 3. To understand the equation, we analyze its relationship with diversity from two situations. For |T | = |Q|: (1) When ε(t i ) ≥ 1, the value is inversely proportional to the diversity contribution. That is, large value of ε(t i ) indicates that many points in Q have the same NN of t i , which will lower the diversity defined in Eq. 5. (2) When ε(t i ) = 0, it indicates that a t i is not a NN of any q i , which also hinders the increase of diversity. An ideal situation is that for each t i , ε(t i ) = 1. For s|T | = |Q|, the situations become more complex. (1) when ε(t i ) = 0, similarly it means low contribution to the diversity. (2) Due to the scaling s between Q and T , one point can be the NN of multiple points, when 1 ≤ ε(t i ) ≤ s, it contributes to the diversity. (3) When ε(t i ) > s, it will lower the diversity. We propose to simultaneously introduce this statistic to direction Q → T . However, it is not straightforward in the case of template matching. Because the candidate window Q usually belongs to a target image Q, where |Q| |Q|. That is, when finding NNs in the case of T → Q, as T is fixed and the preprocessing (e.g., sorting for brute force search, building kd-tree, etc.) only need to be conducted once. In the case of Q → T , as such preprocessing for NN search has to be conducted over each Q, it will suffer from time cost. 
To tackle this problem, we pose an assumption that NN(t i , Q) has a high probability to be included in the set of k approximate NNs with respect to Q, which is denoted by ANN(t i , Q). Formally, we define the following function which counts the number of points (i.e., patches in the image) t i ∈ T whose ANNs include q j in direction Q → T , τ(q j ) = t i ∈ T, Q ∈ Q : q j ∈ ANN k (t i , Q) .(7) Scalable Diversity Similarity With bidirectional diversity ε(t i ) and τ(q j ) defined, we define the SDS to quantify the the similarity between template T and candidate Q with given target image Q and scaling s as follows, where s can be calculated from T and Q, SDS(T, Q, s, Q) = λ 1 ∑ q j I(τ(q j ) = 0) ∑ t i I(ε(t i ) = 0) ∑ q j |ρ(q j ) − sρ(NN(q j , T ))| U.(8) Where parameter λ 1 is a normalization factor inversely proportional to the increase of s (e.g., λ 1 = s −1 ). As analyzed in Sec. 2.2, only points in T which hold ε(t i ) = 0, and points in Q which hold τ(q j ) = 0 can possibly contribute to the increase of the diversity. ρ(·) returns the radius of a pixel in polor coordinate, with the pole set as the according geometric center of T and Q. The denominator of Eq. 8 penalizes the spatial consistency in polar coordinate, to further increase the robustness against in-plan rotation. Term U is a normalization term for the number of NNs with respect to scaling. Following the analysis in Sec. 2.2, in our implementation, U is defined as ∑ t i ,ε(t i )>0 exp (I(s/ε(t i ) ≥ 1) + I(s/ε(t i ) < 1)s/ε(t i ) − 1) , which increases when more t i holds s/ε(t i ) ≥ 1. In conclusion, SDS can be viewed as a cooperation of three terms: (1) The numerator term to evaluate the bidirectional diversity, (2) the denominator term to evaluate the spatial consistency, (3) the U term to normalize the number of NNs with respect to s . Statistical Analysis Analysis of scaling-robustness. To assert the effectiveness of SDS in measuring the similarity between scale-variant point sets, we first provide a 1D statistical analysis following [3,18]. The expectations of similarity between two point sets drawn from two 1D Gaussian models are calculated for comparison, where point sets are cast as template/candidate window, points are cast as patches. Our goal is to show that the expectation of SDS is maximal when the two Gaussian models are the same and decrease fastest when models separate. We further analyze the expectations of point sets in different scaling size to show the scalingrobustness of SDS. As suggested by [18], Monte-Carlo integration is exploited for approximating the expectation. Figure 2 (a) to Figure. 2 (d) show the illustration of approximated expectation maps when two point sets have same size (s = 1). It can be obviously observed that the expectation of SDS drops faster than either SSD, BBS, or DDIS when the parameters of the second Gaussian (µ and σ ) get away from the parameters of the first Gaussian (µ = 0 and σ = 1). Figure 2 (d) to Figure. 2 (f) show the comparison of expectation map when two point sets are in different sizes (s = 1, 0.5, 2), which provides a strong evidence that SDS is highly robust against scaling as the expectation maps almost remain the same despite the scaling change. To further show that the scale of target with respect to T can be estimated by maximizing SDS, we provide a statistical result in Figure. 3. Similar with Figure. 2, T is drawn from N(0, 1) and Q is generated for expectation approximation. 
The difference is, we further prepare Q which involves background points to simulate the template matching task. It can be clearly observed that the expectation of SDS is almost invariant to rotation while BBS drops most when T and Q overlap least (i.e., θ = −π/2). Here, Q = T ∪ B, GT s |T | + |B| = |Q| and B is composed of background points drawn from N(µ, σ ), with µ ∈ [0, 10], σ ∈ [0, 10]. In this demonstration, |T | and |Q| are set to 100 and 200 respectively. |Q| = s|T | and s varies from 0.5 to 2 with step of 0.1. The Q can be treated as a candidate window in the template matching task and is sampled from Q by preferentially sample points in T (i.e., nearest neighbor interpolation). For example, when s = 1.5, 150 points need to be sampled to formulate Q, with 100 points from T and 50 points from B. Estimatedŝ = arg max s SDS(T, Q, s, Q) is supposed to approximate the ground truth scale GT s well. This statistical analysis clearly prove the robustness of SDS against scaling, and the ability for estimating proper scale of the target. Analysis of rotation robustness. To show the robustness against rotation, we analyze the expectation of similarity between two sets T and Q drawn from 2D Gaussian models, as shown in Figure. 4, we fix the parameters except θ and σ 2 to validate the effect of rotation angle along with the shape of the Gaussian. In the case of BBS, as we can observe from Figure. 4 (c), when σ 2 is extremely small, the points drawn are likely to form a line, which is sensitive to rotation as lines overlap little after rotation. This is also the case when σ 2 σ 1 , as it can be observed that the expectation decreases gradually with the increase of σ 2 . Also, isotropic Gaussian is supposed to be unaffected by the rotation, which can be convinced from Figure. 4 (c) that when σ 1 = σ 2 = 1, the expectation keeps well with respect to the rotation. On the other hand, SDS shows the invariance to the rotation despite the shape change of distribution in Figure. 4 (d). Experiment Results We conduct a comprehensive experiment with both qualitative and quantitative tests to validate the superiority of SDS comparing with the state-of-the-art methods BBS [3,10] and DDIS [18], as well as several conventional methods. We follow the same procedure as suggested in [3,18] for a fair comparison. Note that as SDS can be employed with multi-scale windows, we simultaneously compare the performance of SDS with fixed scale, which is referred to as NSDS. In addition, similar to SDS, we also employed DDIS to the multi-scale candidate windows for comparison, denoted as SDDIS. Multiple datasets are utilized for comparison. We originally collected 42 videos under different unconstrained environments and extract frames to create a benchmark for evaluating the performance of template matching involving overall rotation and scaling on the object. Ground truths are scale-variable and annotated manually image by image. Besides, this benchmark also includes other challenges like complex deformations, occlusion, background clutter, etc. The benchmark is subdivided into three datasets: (1) rotation dataset, (2) scaling dataset and (3) rotation-scaling dataset for detail evaluation, each of them includes 166 reference-target image pairs, respectively. It is noteworthy that each dataset also includes other photometric and geometric transformations as they are taken under unconstrained environments. 
As to the evaluation criteria, following previous works [3,18], we employ the success ratio based on the overlap rate between ground truth W g and matching result W r to measure the accuracy, which is defined as: W r ∩W g / W r ∪W g . Here, the operator |·| is to count the number of pixels within a window. We compare our proposed methods (SDS and NSDS) to DDIS and its multi-scale imple- Figure. 6. 1st and 2nd rows show that SDS is robust against overall rotation. 3rd and 4th rows demonstrate that SDS can deal with scaling problem well. The likelihood maps show that SDS/NSDS is more distinct and yields in better-localized modes compared to other methods. Conclusion We proposed a novel multi-scale template matching method in unconstrained environments, which is robust against overall scaling, intense rotation while taking advantage of global statistic based similarity measure to deal with complex deformations, occlusions, etc. Extended bidirectional diversity combined with rank based nearest neighbor search forms a scale-robust similarity measure, and the exploit of polar coordinate further improves the robustness against rotation. The experimental results have shown that SDS can remarkably outperform other competitive methods. On the other hand, SDS may fail when the template is too small to achieve a statistical score. The remained future work is to add a rotation parameter to the candidate windows to achieve rotation-specific matching results.
3,341
1907.01150
2953924693
We propose a novel multi-scale template matching method which is robust against both scaling and rotation in unconstrained environments. The key component behind it is a similarity measure referred to as scalable diversity similarity (SDS). Specifically, SDS exploits the bidirectional diversity of the nearest neighbor (NN) matches between two sets of points. To address the scale-robustness of the similarity measure, local appearance and rank information are jointly used for the NN search. Furthermore, by introducing a penalty term on the scale change and a polar radius term into the similarity measure, SDS is shown to be a well-performing similarity measure against overall size and rotation changes, as well as non-rigid geometric deformations, background clutter, and occlusions. The properties of SDS are statistically justified, and experiments on both synthetic and real-world data show that SDS can significantly outperform state-of-the-art methods.
An eye-catching family of similarity measures in recent years explores global statistical properties of the two point sets. Bi-directional similarity (BDS) @cite_1 proposes that two point sets are considered similar if all points of one set are contained in the other, and vice versa. Best-buddies similarity (BBS) @cite_8 @cite_3 counts the two-sided NNs as a similarity statistic. Deformable diversity similarity (DDIS) @cite_11 measures the diversity of feature matches between the two sets and is reported to outperform BBS by revealing the "deformation" of the NN field. Despite the robustness of BBS and DDIS against transformations within the search window, scaling and rotation of the whole search window have not been considered. In this paper, we propose a scaling- and rotation-independent similarity measure which leads to a significant improvement and allows multi-scale template matching in unconstrained environments.
{ "abstract": [ "We propose a novel measure for template matching named Deformable Diversity Similarity &#x2013; based on the diversity of feature matches between a target image window and the template. We rely on both local appearance and geometric information that jointly lead to a powerful approach for matching. Our key contribution is a similarity measure, that is robust to complex deformations, significant background clutter, and occlusions. Empirical evaluation on the most up-to-date benchmark shows that our method outperforms the current state-of-the-art in its detection accuracy while improving computational complexity.", "We propose a principled approach to summarization of visual data (images or video) based on optimization of a well-defined similarity measure. The problem we consider is re-targeting (or summarization) of image video data into smaller sizes. A good ldquovisual summaryrdquo should satisfy two properties: (1) it should contain as much as possible visual information from the input data; (2) it should introduce as few as possible new visual artifacts that were not in the input data (i.e., preserve visual coherence). We propose a bi-directional similarity measure which quantitatively captures these two requirements: Two signals S and T are considered visually similar if all patches of S (at multiple scales) are contained in T, and vice versa. The problem of summarization re-targeting is posed as an optimization problem of this bi-directional similarity measure. We show summarization results for image and video data. We further show that the same approach can be used to address a variety of other problems, including automatic cropping, completion and synthesis of visual data, image collage, object removal, photo reshuffling and more.", "", "We propose a novel method for template matching in unconstrained environments. Its essence is the Best-Buddies Similarity (BBS), a useful, robust, and parameter-free similarity measure between two sets of points. BBS is based on counting the number of Best-Buddies Pairs (BBPs)—pairs of points in source and target sets, where each point is the nearest neighbor of the other. BBS has several key features that make it robust against complex geometric deformations and high levels of outliers, such as those arising from background clutter and occlusions. We study these properties, provide a statistical analysis that justifies them, and demonstrate the consistent success of BBS on a challenging real-world dataset." ], "cite_N": [ "@cite_11", "@cite_1", "@cite_3", "@cite_8" ], "mid": [ "2560741668", "2115273023", "", "1913744585" ] }
Multi-scale Template Matching with Scalable Diversity Similarity in an Unconstrained Environment
Template matching is a basic component in a variety of computer vision applications. In this paper, we address the problem of template matching in unconstrained scenarios: a rigid/nonrigid object moves in 3D space against a variant/invariant background, and the object may undergo rigid/nonrigid deformations and partial occlusions, as demonstrated in Figure 1. As the most crucial technique in template matching tasks, the similarity measure has been studied for decades, yielding various methods from the classic sum of absolute differences (SAD) and sum of squared distances (SSD) to the recent best-buddies similarity (BBS) [10] and deformable diversity similarity (DDIS) [18]. However, several aspects still need to be improved: (1) Most real applications prefer matching results reported as bounding boxes of variable size that tightly include the object region, rather than boxes of a fixed size. Nevertheless, introducing geometric parameters expands the set of candidates to be evaluated, which requires a similarity measure that remains distinctive under scaling changes. (2) Template matching is usually dense, and all pixels/features within the template and a candidate window are taken into account when measuring similarity, even when some parts are undesirable (e.g., occlusions, or appearance changes brought by deformation); this requires a similarity measure that can deal with noise and outliers. (3) Due to possible deformation of the template, a good similarity measure is expected to be independent of spatial correlation (e.g., when the object within a candidate window is strongly rotated compared to the template, the spatial correlation between the template and the candidate in raster-scan order can become untrustworthy). In this paper, scalable diversity similarity (SDS) is proposed to address the above problems. SDS can be applied with a multi-scale sliding window and is not limited to any specific parametric deformation model. Both BBS and DDIS focus on settling problem (2) by exploiting properties of the nearest neighbor (NN). Each NN is defined by a pair of patches between template and target. In BBS, a match is defined if and only if each patch in a patch pair is the NN of the other, and the number of such matches determines the BBS score. DDIS further improves on BBS by introducing the relevant diversity of patch subsets between the target and template, which adds robustness against occlusions and deformation. Although these methods can deal with deformation within a window to some extent, there are limitations, especially regarding problems (1) and (3). We extend DDIS and propose SDS based on relevant diversity statistics. SDS has the following two advantages concerning problems (1) and (3). The first is that SDS allows similarity measurement between two point sets of different sizes, and the magnitude of the score is scale-robust. Usually, the magnitude of the DDIS or BBS score grows with the size of the point set, which makes larger candidate windows more likely to be selected as the final result. To alleviate this unfairness, SDS introduces bidirectional relevant diversity and penalizes changes of scale, which makes the employment of a multi-scale sliding window feasible and lets the score converge to the correct scale. This property of SDS is statistically justified in Sec. 2.4.
The second advantage of SDS is its robustness to intense rotation. Both BBS and DDIS involve a spatial distance term in the NN search or in the final similarity calculation, which imposes the limitation that the NN of a point must be spatially close. This limitation is a strong prior that can indeed reduce the number of outliers, but at the same time it decreases robustness against intense rotation. In this paper, instead of Cartesian coordinates, we exploit the polar radius of polar coordinates for the calculation of spatial distance, which relaxes the limitation brought by this prior. Besides, rank information of the appearance within a local circle is employed alongside the local appearance when searching for NNs, which helps to find more confident NNs and yields a significant improvement in intense-rotation cases. This property of SDS is also statistically justified in Sec. 2.4. To summarize, the main contributions of this paper are: (a) SDS introduces bidirectional relevant diversity and penalizes changes of scale to deal with scaling. (b) The rank of local appearance information and the polar radius are exploited to make SDS robust against intense rotation changes. (c) We collect a comprehensive dataset with 498 template-target pairs in unconstrained environments for testing matching performance, which includes 166 image pairs each for scaling, rotation, and scaling+rotation.

Methodology

Given a template cropped from a reference image and a target image related to it by unknown geometric and photometric transformations, our purpose is to design a similarity measure that can distinctively localize a region in the target image containing the same object as the template by finding the maximum value. Each candidate region in the target image is defined by a rectangular window, and the candidate windows are generated in a multi-scale sliding-window fashion. Taking the template image $T = \{t_i\}_{i=1}^{n}$ and a candidate window $Q = \{q_j\}_{j=1}^{m}$ from the target image $\mathcal{Q} = \{q_l\}_{l=1}^{M}$ as inputs, a real-valued SDS score is calculated, where $t_i$ and $q_j$ represent non-overlapping patches from the template and a candidate window, respectively. $t_i$ and $q_j$ can also be treated as points when $T$ and $Q$ are treated as point sets; $Q \subseteq \mathcal{Q}$ and $m \leq M$. The nearest neighbor has been shown to be a strong feature for designing similarity measures in prior research. To better highlight the differences, we first recall BBS [3], which counts the number of bidirectional NN matches between $T$ and $Q$:

$$\mathrm{BBS} = c \, \big|\{(t_i, q_j) : t_i \in T,\; q_j \in Q,\; \mathrm{NN}(t_i, Q) = q_j \,\wedge\, \mathrm{NN}(q_j, T) = t_i\}\big|, \tag{1}$$

where $\mathrm{NN}(t_i, Q) = \operatorname{arg\,min}_{q_j \in Q} d(t_i, q_j)$ is a function that returns the NN of $t_i$ with respect to $Q$, and $d(\cdot,\cdot)$ is a distance function. $|\{\cdot\}|$ denotes the size of a set, and $c = 1/\min\{n, m\}$ is a normalization factor. We are now ready to introduce our method in a bottom-up fashion: from the NN search to bidirectional diversity, and finally the SDS similarity.

Rank of Local Appearance for Rotation-Robust NN Search

The distance function in Eq. 1 is defined by

$$d(p_i, q_j) = \big\|p_i^{(A)} - q_j^{(A)}\big\|_2^2 + \lambda \big\|p_i^{(L)} - q_j^{(L)}\big\|_2^2, \tag{2}$$

where $(A)$ denotes pixel appearance (e.g., RGB) and $(L)$ denotes the pixel location $(x, y)$ within the patch, normalized to the range $[0, 1]$.
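For concreteness, the following minimal NumPy sketch evaluates Eqs. 1-2 for two small patch sets; the helper names and the value of the spatial weight λ are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def pairwise_sqdist(A, B):
    """Squared Euclidean distances between the rows of A (n, d) and B (m, d)."""
    return ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)

def bbs_score(T_app, T_loc, Q_app, Q_loc, lam=0.25):
    """Best-buddies similarity (Eq. 1) with the distance of Eq. 2.

    T_app/Q_app hold per-patch appearance vectors, T_loc/Q_loc the
    normalized (x, y) patch locations; lam is an assumed spatial weight.
    """
    D = pairwise_sqdist(T_app, Q_app) + lam * pairwise_sqdist(T_loc, Q_loc)
    nn_of_t = D.argmin(axis=1)  # NN(t_i, Q) for every template patch
    nn_of_q = D.argmin(axis=0)  # NN(q_j, T) for every candidate patch
    # A best-buddies pair exists iff each patch is the other's NN.
    mutual = np.sum(nn_of_q[nn_of_t] == np.arange(len(T_app)))
    return mutual / min(len(T_app), len(Q_app))  # c = 1 / min{n, m}
```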
In the stage of NN searching, under the assumption that intense deformation such as rotation do not occur within the patch, the spatial term can contribute to improving the confidence of NN by confirming the consistency of appearance and position. We propose d (p i , q j ) = p (A) i − q (A) j 2 2 + λ p (R) i − q (R) j 2 2 ,(3) to incorporate (R) instead of (L), which denotes the rank with respect to the appearance of pixels within a circle. The origin of the circle is p i , with a support radius of r. Specifically, p (R) i = ∑ p∈circle(p i ,r) I p (A) i ≥ p (A) /r 2 ,(4) where I(·) is an indicator function that turns true and false into 1 and 0. Equation in the same form is applied to q j . Unlike pixel location, the appearance rank defined by Eq. 4 is invariant to rotation, which can also be considered as structural information (e.g., the shape of the distribution of pixel values) extracted from a local region. As the rotation will not destroy the structure, it is reasonable to explain its invariance against rotation. Furthermore, the Euclidean distance of orders emphasizes the influence of local extremes, which also contributes to keeping the local features well. Bidirectional Diversity for Discriminative Similarity Measure We first extend the diversity similarity (DIS) defined in [18] to a bidirectional way. The DIS is defined as DIS = c {t i ∈ T : ∃q j ∈ Q, NN(q j , T ) = t i } ,(5) which counts the types of points in T that have NN in Q with the same pixel type (i.e., defined as diversity in direction T → Q). The authors claim that this one direction diversity provides a good approximation to BBS with less computation. However, the number of candidates increase explosively by allowing multi-scale candidate windows Q, therefore a more discriminative similarity measure is needed. We exploit both diversity calculated with respect to T and Q (i.e., T → Q and Q → T ). Specifically, we first define the following function ε(t i ) which indicates the number of points q j ∈ Q whose NNs are equal to t i in direction T → Q, ε(t i ) = q j ∈ Q : NN(q j , T ) = t i ,(6) where NN(·) here is calculated with distance defined in Eq. 3. To understand the equation, we analyze its relationship with diversity from two situations. For |T | = |Q|: (1) When ε(t i ) ≥ 1, the value is inversely proportional to the diversity contribution. That is, large value of ε(t i ) indicates that many points in Q have the same NN of t i , which will lower the diversity defined in Eq. 5. (2) When ε(t i ) = 0, it indicates that a t i is not a NN of any q i , which also hinders the increase of diversity. An ideal situation is that for each t i , ε(t i ) = 1. For s|T | = |Q|, the situations become more complex. (1) when ε(t i ) = 0, similarly it means low contribution to the diversity. (2) Due to the scaling s between Q and T , one point can be the NN of multiple points, when 1 ≤ ε(t i ) ≤ s, it contributes to the diversity. (3) When ε(t i ) > s, it will lower the diversity. We propose to simultaneously introduce this statistic to direction Q → T . However, it is not straightforward in the case of template matching. Because the candidate window Q usually belongs to a target image Q, where |Q| |Q|. That is, when finding NNs in the case of T → Q, as T is fixed and the preprocessing (e.g., sorting for brute force search, building kd-tree, etc.) only need to be conducted once. In the case of Q → T , as such preprocessing for NN search has to be conducted over each Q, it will suffer from time cost. 
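A small sketch of the two ingredients above, assuming a scalar (e.g., grayscale) appearance; the disc rasterization and the bincount trick are implementation choices of ours, not prescribed by the paper.

```python
import numpy as np

def appearance_rank(img, i, j, r):
    """Appearance rank p^(R) of Eq. 4 at pixel (i, j).

    Counts the pixels inside the radius-r disc around (i, j) whose
    appearance does not exceed img[i, j], normalized by r**2.
    """
    h, w = img.shape
    count = 0
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            y, x = i + di, j + dj
            if di * di + dj * dj <= r * r and 0 <= y < h and 0 <= x < w:
                count += img[i, j] >= img[y, x]
    return count / r ** 2

def epsilon_counts(D):
    """Eq. 6: for each template point t_i, the number of candidate points
    q_j whose nearest neighbor in T is t_i; D is the (n, m) distance
    matrix computed with the distance of Eq. 3."""
    nn_of_q = D.argmin(axis=0)  # NN(q_j, T) for every q_j
    return np.bincount(nn_of_q, minlength=D.shape[0])
```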
To tackle this problem, we pose the assumption that $\mathrm{NN}(t_i, Q)$ has a high probability of being included in the set of $k$ approximate NNs with respect to $\mathcal{Q}$, denoted by $\mathrm{ANN}_k(t_i, \mathcal{Q})$. Formally, we define the following function, which counts the number of points (i.e., patches in the image) $t_i \in T$ whose ANNs include $q_j$, in direction $Q \rightarrow T$:

$$\tau(q_j) = \big|\{t_i \in T : q_j \in \mathrm{ANN}_k(t_i, \mathcal{Q})\}\big|. \tag{7}$$

Scalable Diversity Similarity

With the bidirectional diversities $\varepsilon(t_i)$ and $\tau(q_j)$ defined, we define SDS to quantify the similarity between a template $T$ and a candidate $Q$, given the target image $\mathcal{Q}$ and the scaling $s$ (which can be calculated from $T$ and $Q$):

$$\mathrm{SDS}(T, Q, s, \mathcal{Q}) = \lambda_1 \, \frac{\sum_{q_j} \mathbb{I}(\tau(q_j) \neq 0) \cdot \sum_{t_i} \mathbb{I}(\varepsilon(t_i) \neq 0)}{\sum_{q_j} \big|\rho(q_j) - s\,\rho(\mathrm{NN}(q_j, T))\big|} \; U, \tag{8}$$

where the parameter $\lambda_1$ is a normalization factor inversely proportional to the increase of $s$ (e.g., $\lambda_1 = s^{-1}$). As analyzed in Sec. 2.2, only points in $T$ with $\varepsilon(t_i) \neq 0$ and points in $Q$ with $\tau(q_j) \neq 0$ can possibly contribute to an increase of the diversity. $\rho(\cdot)$ returns the radius of a pixel in polar coordinates, with the pole set to the geometric center of $T$ and $Q$, respectively. The denominator of Eq. 8 penalizes spatial inconsistency in polar coordinates, which further increases the robustness against in-plane rotation. The term $U$ is a normalization term for the number of NNs with respect to scaling. Following the analysis in Sec. 2.2, in our implementation $U$ is defined as

$$U = \sum_{t_i : \varepsilon(t_i) > 0} \exp\big(\mathbb{I}(s/\varepsilon(t_i) \geq 1) + \mathbb{I}(s/\varepsilon(t_i) < 1)\, s/\varepsilon(t_i) - 1\big),$$

which increases as more $t_i$ satisfy $s/\varepsilon(t_i) \geq 1$. In conclusion, SDS can be viewed as the cooperation of three terms: (1) the numerator evaluates the bidirectional diversity, (2) the denominator evaluates the spatial consistency, and (3) the term $U$ normalizes the number of NNs with respect to $s$.
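Putting Eq. 8 together, the following sketch assembles an SDS score from precomputed ingredients; the small constant added to the denominator is our own safeguard against division by zero and is not part of the paper's formulation.

```python
import numpy as np

def sds_score(eps, tau, rho_q, rho_nn, s):
    """SDS of Eq. 8 from precomputed statistics.

    eps    : (n,) values of epsilon(t_i), Eq. 6
    tau    : (m,) values of tau(q_j), Eq. 7
    rho_q  : (m,) polar radii of the q_j about the center of Q
    rho_nn : (m,) polar radii of NN(q_j, T) about the center of T
    s      : scale ratio between Q and T
    """
    lam1 = 1.0 / s  # lambda_1 = s^{-1}
    numerator = np.count_nonzero(tau) * np.count_nonzero(eps)
    denominator = np.abs(rho_q - s * rho_nn).sum() + 1e-8  # guard, ours
    # U term: exp(1 - 1) = 1 when s/eps >= 1, exp(s/eps - 1) otherwise.
    ratios = s / eps[eps > 0]
    U = np.exp(np.minimum(ratios, 1.0) - 1.0).sum()
    return lam1 * numerator / denominator * U
```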
Statistical Analysis

Analysis of scaling robustness. To assess the effectiveness of SDS in measuring the similarity between scale-variant point sets, we first provide a 1D statistical analysis following [3, 18]. The expectations of the similarity between two point sets drawn from two 1D Gaussian models are calculated for comparison, where the point sets play the roles of template and candidate window and the points play the role of patches. Our goal is to show that the expectation of SDS is maximal when the two Gaussian models coincide and decreases fastest when the models separate. We further analyze the expectations for point sets of different scaling sizes to show the scaling robustness of SDS. As suggested by [18], Monte Carlo integration is exploited to approximate the expectation. Figures 2(a) to 2(d) illustrate the approximated expectation maps when the two point sets have the same size ($s = 1$). It can clearly be observed that the expectation of SDS drops faster than that of SSD, BBS, or DDIS as the parameters of the second Gaussian ($\mu$ and $\sigma$) move away from the parameters of the first Gaussian ($\mu = 0$ and $\sigma = 1$). Figures 2(d) to 2(f) compare the expectation maps when the two point sets have different sizes ($s = 1, 0.5, 2$), which provides strong evidence that SDS is highly robust against scaling, as the expectation maps remain almost unchanged despite the scaling change. To further show that the scale of the target with respect to $T$ can be estimated by maximizing SDS, we provide a statistical result in Figure 3. As in Figure 2, $T$ is drawn from $N(0, 1)$; the difference is that we additionally prepare a target set $\mathcal{Q}$ that involves background points to simulate the template matching task. Here, $\mathcal{Q} = T \cup B$ with $GT_s \cdot |T| + |B| = |\mathcal{Q}|$, where $B$ is composed of background points drawn from $N(\mu, \sigma)$ with $\mu \in [0, 10]$ and $\sigma \in [0, 10]$. In this demonstration, $|T|$ and $|\mathcal{Q}|$ are set to 100 and 200, respectively; $|Q| = s|T|$, and $s$ varies from 0.5 to 2 with a step of 0.1. $Q$ can be treated as a candidate window in the template matching task and is sampled from $\mathcal{Q}$ by preferentially sampling points in $T$ (i.e., nearest-neighbor interpolation). For example, when $s = 1.5$, 150 points are sampled to form $Q$: 100 points from $T$ and 50 points from $B$. The estimate $\hat{s} = \arg\max_s \mathrm{SDS}(T, Q, s, \mathcal{Q})$ is supposed to approximate the ground-truth scale $GT_s$ well. This statistical analysis clearly demonstrates the robustness of SDS against scaling and its ability to estimate the proper scale of the target.

Analysis of rotation robustness. To show the robustness against rotation, we analyze the expectation of the similarity between two sets $T$ and $Q$ drawn from 2D Gaussian models. As shown in Figure 4, we fix all parameters except $\theta$ and $\sigma_2$ to validate the effect of the rotation angle together with the shape of the Gaussian. In the case of BBS, as observed from Figure 4(c), when $\sigma_2$ is extremely small the drawn points are likely to form a line, which is sensitive to rotation since lines overlap little after rotation. This is also the case when $\sigma_2 \gg \sigma_1$, where the expectation decreases gradually with the increase of $\sigma_2$. An isotropic Gaussian, in contrast, should be unaffected by rotation, which Figure 4(c) confirms: when $\sigma_1 = \sigma_2 = 1$, the expectation is well preserved under rotation. SDS, on the other hand, shows invariance to rotation despite the shape change of the distribution in Figure 4(d): the expectation of SDS is almost invariant to rotation, while BBS drops most when $T$ and $Q$ overlap least (i.e., $\theta = -\pi/2$).
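The expectation maps above can be approximated with plain Monte Carlo sampling; the sketch below shows the 1D setting, where the grid ranges, trial count, and seed are illustrative values of ours.

```python
import numpy as np

def expectation_map(similarity, n=100, s=1.0, trials=200, seed=0):
    """Monte Carlo approximation of E[similarity(T, Q)] over a (mu, sigma)
    grid, with T ~ N(0, 1) (n points) and Q ~ N(mu, sigma) (round(s*n)
    points); `similarity` is any set-to-set measure on 1D arrays."""
    rng = np.random.default_rng(seed)
    mus = np.linspace(0.0, 10.0, 21)
    sigmas = np.linspace(0.1, 10.0, 21)
    E = np.zeros((len(mus), len(sigmas)))
    for a, mu in enumerate(mus):
        for b, sigma in enumerate(sigmas):
            E[a, b] = np.mean([
                similarity(rng.normal(0.0, 1.0, n),
                           rng.normal(mu, sigma, round(s * n)))
                for _ in range(trials)
            ])
    return E
```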
Experiment Results

We conduct a comprehensive experiment with both qualitative and quantitative tests to validate the superiority of SDS compared with the state-of-the-art methods BBS [3, 10] and DDIS [18], as well as several conventional methods. We follow the same procedure as suggested in [3, 18] for a fair comparison. Note that, since SDS can be employed with multi-scale windows, we additionally compare the performance of SDS with a fixed scale, which is referred to as NSDS. Similarly to SDS, we also apply DDIS to the multi-scale candidate windows for comparison, denoted as SDDIS. Multiple datasets are utilized for the comparison. We collected 42 videos under different unconstrained environments and extracted frames to create a benchmark for evaluating the performance of template matching involving overall rotation and scaling of the object. Ground truths are scale-variable and annotated manually, image by image. Besides, this benchmark also includes other challenges such as complex deformations, occlusion, and background clutter. The benchmark is subdivided into three datasets for detailed evaluation: (1) a rotation dataset, (2) a scaling dataset, and (3) a rotation-scaling dataset; each of them includes 166 reference-target image pairs. It is noteworthy that each dataset also includes other photometric and geometric transformations, as the videos were taken in unconstrained environments.

As for the evaluation criteria, following previous works [3, 18], we employ the success ratio based on the overlap rate between the ground truth $W_g$ and the matching result $W_r$ to measure the accuracy, defined as $|W_r \cap W_g| / |W_r \cup W_g|$, where the operator $|\cdot|$ counts the number of pixels within a window. We compare our proposed methods (SDS and NSDS) to DDIS and its multi-scale implementation SDDIS; qualitative results are shown in Figure 6, where the 1st and 2nd rows show that SDS is robust against overall rotation, and the 3rd and 4th rows demonstrate that SDS deals well with scaling. The likelihood maps show that SDS/NSDS is more distinctive and yields better-localized modes compared to the other methods.

Conclusion

We proposed a novel multi-scale template matching method for unconstrained environments that is robust against overall scaling and intense rotation, while taking advantage of a global-statistics-based similarity measure to deal with complex deformations, occlusions, etc. The extended bidirectional diversity combined with the rank-based nearest neighbor search forms a scale-robust similarity measure, and the use of polar coordinates further improves the robustness against rotation. The experimental results show that SDS can remarkably outperform other competitive methods. On the other hand, SDS may fail when the template is too small to yield a reliable statistical score. Remaining future work is to add a rotation parameter to the candidate windows to achieve rotation-specific matching results.
3,341
1907.01377
2955307856
Terahertz (THz) sensing is a promising imaging technology for a wide variety of different applications. Extracting the interpretable and physically meaningful parameters for such applications, however, requires solving an inverse problem in which a model function determined by these parameters needs to be fitted to the measured data. Since the underlying optimization problem is nonconvex and very costly to solve, we propose learning the prediction of suitable parameters from the measured data directly. More precisely, we develop a model-based autoencoder in which the encoder network predicts suitable parameters and the decoder is fixed to a physically meaningful model function, such that we can train the encoding network in an unsupervised way. We illustrate numerically that the resulting network is more than 140 times faster than classical optimization techniques while making predictions with only slightly higher objective values. Using such predictions as starting points of local optimization techniques allows us to converge to better local minima about twice as fast as optimization without the network-based initialization.
Due to the revolutionary success (convolutional) neural networks have had on computer vision problems over the last decade, researchers have extended the fields of applications of neural networks significantly. A particularly interesting concept is to learn the solution of complex, possibly nonconvex, optimization problems. Different lines of research have considered directly learning the optimizer itself, e.g. modelled as a recurrent neural network @cite_12 , or rolling out optimization algorithms and learning the incremental steps, e.g. in the form of parameterized proximal operators in @cite_10 . Further hybrid approaches include optimization problems in the networks' architecture, e.g. @cite_29 , or combining optimizers with networks that have been trained individually @cite_25 @cite_2 . The recent work of @cite_19 trains a network to predict descent directions to a given energy in order to give provable convergence results on the learned optimizer.
{ "abstract": [ "This paper presents OptNet, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end train-able deep networks. These layers encode constraints and complex dependencies between the hidden states that traditional convolutional and fully-connected layers often cannot capture. In this paper, we explore the foundations for such an architecture: we show how techniques from sensitivity analysis, bilevel optimization, and implicit differentiation can be used to exactly differentiate through these layers and with respect to layer parameters; we develop a highly efficient solver for these layers that exploits fast GPU-based batch solves within a primal-dual interior point method, and which provides backpropagation gradients with virtually no additional cost on top of the solve; and we highlight the application of these approaches in several problems. In one notable example, we show that the method is capable of learning to play mini-Sudoku (4x4) given just input and output games, with no a priori information about the rules of the game; this highlights the ability of our architecture to learn hard constraints better than other neural architectures.", "", "While variational methods have been among the most powerful tools for solving linear inverse problems in imaging, deep (convolutional) neural networks have recently taken the lead in many challenging benchmarks. A remaining drawback of deep learning approaches is their requirement for an expensive retraining whenever the specific problem, the noise level, noise type, or desired measure of fidelity changes. On the contrary, variational methods have a plug-and-play nature as they usually consist of separate data fidelity and regularization terms. In this paper we study the possibility of replacing the proximal operator of the regularization used in many convex energy minimization algorithms by a denoising neural network. The latter therefore serves as an implicit natural image prior, while the data term can still be chosen independently. Using a fixed denoising neural network in exemplary problems of image deconvolution with different blur kernels and image demosaicking, we obtain state-of-the-art reconstruction results. These indicate the high generalizability of our approach and a reduction of the need for problem-specific training. Additionally, we discuss novel results on the analysis of possible optimization algorithms to incorporate the network into, as well as the choices of algorithm parameters and their relation to the noise level the neural network is trained on.", "In this paper, we introduce variational networks (VNs) for image reconstruction. VNs are fully learned models based on the framework of incremental proximal gradient methods. They provide a natural transition between classical variational methods and state-of-the-art residual neural networks. Due to their incremental nature, VNs are very efficient, but only approximately minimize the underlying variational model. Surprisingly, in our numerical experiments on image reconstruction problems it turns out that giving up exact minimization leads to a consistent performance increase, in particular in the case of convex models.", "While deep learning methods have achieved state-of-the-art performance in many challenging inverse problems like image inpainting and super-resolution, they invariably involve problem-specific training of the networks. Under this approach, each inverse problem requires its own dedicated network. In scenarios where we need to solve a wide variety of problems, e.g., on a mobile camera, it is inefficient and expensive to use these problem-specific networks. On the other hand, traditional methods using analytic signal priors can be used to solve any linear inverse problem; this often comes with a performance that is worse than learning-based methods. In this work, we provide a middle ground between the two kinds of methods — we propose a general framework to train a single deep neural network that solves arbitrary linear inverse problems. We achieve this by training a network that acts as a quasi-projection operator for the set of natural images and show that any linear inverse problem involving natural images can be solved using iterative methods. We empirically show that the proposed framework demonstrates superior performance over traditional methods using wavelet sparsity prior while achieving performance comparable to specially-trained networks on tasks including compressive sensing and pixel-wise inpainting.", "The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art." ], "cite_N": [ "@cite_29", "@cite_19", "@cite_2", "@cite_10", "@cite_25", "@cite_12" ], "mid": [ "2963970238", "", "2605960636", "2746434776", "2604885021", "2963775850" ] }
Training Auto-Encoder-Based Optimizers for Terahertz Image Reconstruction
Terahertz (THz) imaging is an emerging sensing technology with great potential for hidden object imaging, contact-free analysis, non-destructive testing and stand-off detection in various application fields, including the semiconductor industry, biological and medical analysis, material and quality control, and safety and security [1,2,3]. The physically interpretable quantities relevant to the aforementioned applications, however, cannot always be measured directly. Instead, in THz imaging systems, each pixel contains implicit information about such quantities, making the inverse problem of inferring these physical quantities a challenging problem with high practical relevance. As we will discuss in Sec. 2, at each pixel location $\vec{x}$ the relation between the desired (unknown) parameters $p(\vec{x}) = (\hat{e}(\vec{x}), \sigma(\vec{x}), \mu(\vec{x}), \phi(\vec{x})) \in \mathbb{R}^4$, i.e., the electric field amplitude $\hat{e}$, the position of the surface $\mu$, the width of the reflected pulse $\sigma$, and the phase $\phi$, and the actual measurements $g(\vec{x}) \in \mathbb{R}^{n_z}$ can be modelled via the equation

$$g(\vec{x}, z) = \big(f_{\hat{e},\sigma,\mu,\phi}(z_i)\big)_{i \in \{1,\dots,n_z\}} + \text{noise}, \quad \text{where} \quad f_{\hat{e},\sigma,\mu,\phi}(z) = \hat{e}\,\mathrm{sinc}\big(\sigma(z - \mu)\big)\exp\big(-i(\omega z - \phi)\big), \quad \mathrm{sinc}(t) = \begin{cases} \frac{\sin(\pi t)}{\pi t} & t \neq 0, \\ 1 & t = 0, \end{cases} \tag{1}$$

and $(z_i)_{i \in \{1,\dots,n_z\}}$ is a device-dependent sampling grid $z_{\text{grid}}$. More details of the THz model are described in [4]. Thus, the crucial step in THz imaging is the solution of an optimization problem of the form

$$\min_{\hat{e},\sigma,\mu,\phi} \ \mathrm{Loss}\big(f_{\hat{e},\sigma,\mu,\phi}(z_{\text{grid}}),\, g(\vec{x})\big) \tag{3}$$

at each pixel $\vec{x}$, possibly along with additional regularizers on the unknown parameters. Even with simple choices of the loss function, such as an $\ell_2$-squared loss, the resulting fitting problem is highly nonconvex, and global solutions become rather expensive. Considering that the number ($n_x \cdot n_y$) of pixels, i.e., of instances of optimization problem (3) to be solved, is typically in the order of hundreds of thousands to millions, even local first-order or quasi-Newton methods become quite costly: for example, running the built-in Trust-Region solver of MATLAB to reconstruct a 446 × 446 THz image takes over 170 minutes. In this paper, we propose to train a neural network to solve the per-pixel optimization problem (3) directly. We formulate the training of the network as a model-based autoencoder (AE), which allows us to train the corresponding network with real data in an unsupervised way, i.e., without ground truth. We demonstrate that the resulting optimization network yields parameters $(\hat{e}, \sigma, \mu, \phi)$ that result in only slightly higher losses than actually running an optimization algorithm, despite the advantage of being more than 140 times faster. Moreover, we demonstrate that our network can serve as an excellent initialization scheme for classical optimizers. By using the network's prediction as a starting point for a gradient-based optimizer, we obtain lower losses and converge more than 2x faster than classical optimization approaches, while benefiting from all theoretical guarantees of the respective minimization algorithm. This paper is organized as follows: Sec. 2 gives more details on how THz imaging systems work. Sec. 3 summarizes related work on learning optimizers, machine learning for THz imaging techniques, and model-based autoencoders. Sec. 4 describes model-based AEs in contrast to classical supervised learning approaches in detail, before Sec. 5 summarizes our implementation. Sec. 6 compares the proposed approaches to classical (optimization-based) reconstruction techniques in terms of speed and accuracy, before Sec. 7 draws conclusions.
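To make the model concrete, here is a minimal NumPy sketch of the model function (1) and the per-pixel data-fitting objective (3); we treat the angular frequency ω as a known device constant, and all function names are our own.

```python
import numpy as np

def f_model(z, e, sigma, mu, phi, omega):
    """THz model function of Eq. (1) on the sampling grid z.

    np.sinc(t) = sin(pi t) / (pi t) with np.sinc(0) = 1, matching Eq. (1).
    """
    return e * np.sinc(sigma * (z - mu)) * np.exp(-1j * (omega * z - phi))

def per_pixel_loss(params, z_grid, g_pixel, omega):
    """l2-squared data-fit of problem (3) for a single pixel.

    params : (e, sigma, mu, phi) for this pixel
    g_pixel: complex measurement vector of length n_z
    """
    e, sigma, mu, phi = params
    residual = f_model(z_grid, e, sigma, mu, phi, omega) - g_pixel
    return np.sum(np.abs(residual) ** 2)
```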
THz Imaging Systems

There are several approaches to realizing THz imaging, e.g., femtosecond-laser-based scanning systems [5,6], synthetic aperture systems [7,8], and hybrid systems [9]. A typical approach to THz imaging is based on the Frequency Modulated Continuous Wave (FMCW) concept [8], which uses active frequency-modulated THz signals to sense the signals reflected from the object. The reflected energy and the phase shifts due to the signal path length make 3D THz imaging possible. Figure 1 shows the setup of our electronic FMCW-THz 3D imaging system; more details on the system are described in [8]. In this paper, we denote by $g_t(\vec{x}, t)$ the measured demodulated time-domain signal of the reflected electric field amplitude of the FMCW system at lateral position $\vec{x} \in \mathbb{R}^2$. In FMCW radar signal processing, this continuous-wave temporal signal is converted into the frequency domain by a Fourier transform [10,11]. Since the linear frequency sweep has a unique frequency at each spatial position in the z-direction, the converted frequency-domain signal directly relates to the spatial azimuth (z-direction) domain signal

$$g_c(\vec{x}, z) = \mathcal{F}\{g_t(\vec{x}, t)\}. \tag{4}$$

The resulting 3D image $g_c \in \mathbb{C}^{n_x \times n_y \times n_z}$ is complex data in the spatial domain, representing the per-pixel complex reflectivity of the THz energy. The quantities $n_x$, $n_y$, $n_z$ correspond to the discretization in the vertical, horizontal, and depth directions, respectively. Equivalently, we may represent $g_c$ by considering the real and imaginary parts as two separate channels, resulting in a 4D real data tensor $g \in \mathbb{R}^{n_x \times n_y \times n_z \times 2}$. Since the system is calibrated by amplitude normalization with respect to an ideal metallic reflector, a rectangular frequency response is ensured for the FMCW frequency dependence [8]. After the FFT in (4), the z-direction signal envelope is an ideal sinc function in the continuous spatial signal amplitude, giving rise to the physical model given in (1) in the introduction. In (1), the electric field amplitude $\hat{e}$ is the reflection coefficient of the material, which depends on the complex dielectric constant of the material and helps to identify and classify materials. The depth position $\mu$ is the position at which maximum reflection occurs, i.e., the position of the surface reflecting the THz energy. $\sigma$ is the width of the reflected pulse, which includes information on the dispersion characteristics of the material. The phase $\phi$ of the reflected wave depends on the ratio of the real to the imaginary part of the dielectric properties of the material. Thus, the parameters $p = (\hat{e}, \sigma, \mu, \phi)$ contain important information about the geometry as well as the material of the imaged object, which is of interest in a wide variety of applications.

A Model-Based Autoencoder for THz Image Reconstruction

Let us denote the THz input data by $g \in \mathbb{R}^{n_x \times n_y \times n_z \times 2}$, and consider our four unknown parameters $(\hat{e}, \sigma, \mu, \phi)$ to be $\mathbb{R}^{n_x \times n_y}$ matrices, allowing each parameter to change at each pixel. Under slight abuse of notation, we can interpret all operations in (1) pointwise, yielding $f_{\hat{e},\sigma,\mu,\phi}(z_{\text{grid}}) \in \mathbb{R}^{n_x \times n_y \times n_z \times 2}$, where $z_{\text{grid}} = (z_i)_{i \in \{1,\dots,n_z\}}$ denotes the depth sampling grid. Concatenating all four matrix-valued parameters into a single parameter tensor $P \in \mathbb{R}^{n_y \times n_x \times 4}$, our goal can be formalized as finding $P$ such that $f_P(z_{\text{grid}}) \approx g$. (Figure 2: the model function (1) is used to simulate data $g$, which can subsequently be fed into a network trained to reproduce the simulation parameters in a supervised way.)
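A short sketch of the preprocessing implied by Eq. (4) and the two-channel representation of $g_c$ used above; the FFT axis convention and the function names are our own assumptions.

```python
import numpy as np

def to_azimuth_domain(g_t):
    """Eq. (4): convert the demodulated FMCW time signal g_t(x, t), shaped
    (n_x, n_y, n_t), into the complex z-direction signal g_c(x, z) via an
    FFT along the time axis."""
    return np.fft.fft(g_t, axis=-1)

def to_two_channel(g_c):
    """Stack real and imaginary parts as a trailing channel, giving the
    4D real tensor g in R^{n_x x n_y x n_z x 2}."""
    return np.stack([g_c.real, g_c.imag], axis=-1)
```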
A classical supervised machine learning approach to problems with a known forward operator is illustrated in Figure 2 for the example of THz image reconstruction: the explicit forward model $f$ is used to simulate a large set of images $g$ from known parameters $P$, which can subsequently be used as training data for predicting $P$ via a neural network $G(g; \theta)$ depending on weights $\theta$. Such supervised approaches with simulated training data are frequently used in other image reconstruction areas, e.g., super-resolution [23,24] or image deblurring [25,26]. The accuracy of networks trained on simulated data, however, crucially relies on precise knowledge of the forward model and of the simulated noise. Slight deviations thereof can significantly degrade a network's performance, as demonstrated in [27], where deep denoising networks trained on Gaussian noise were outperformed by BM3D when applied to realistic sensor noise. Instead of pursuing the supervised learning approach described above, we replace $p = (\hat{e}, \sigma, \mu, \phi)$ in the optimization approach (3) by a suitable network $G(g; \theta)$ that depends on the raw input data $g$ and learnable parameters $\theta$ and that can be trained in an unsupervised way on real data. Assuming we have multiple examples $g_k$ of THz data, and choosing the loss function in (3) as an $\ell_2$-squared loss, gives rise to the unsupervised training problem

$$\min_\theta \sum_k \big\| f_{G(g_k;\theta)}(z_{\text{grid}}) - g_k \big\|_F^2, \tag{5}$$

where the sum runs over all training examples $k$. As illustrated in Figure 3, this training resembles an AE architecture: the input to the network is data $g_k$, which gets mapped to parameters $P$ that, when fed into the model function $f$, ought to reproduce $g_k$ again. (Figure 3: the input data $g$ is fed into a network $G$ whose parameters $\theta$ are trained in such a way that feeding the network's prediction $G(g; \theta)$ into the model function $f$ again reproduces the input data $g$; such an architecture resembles an AE with a learnable encoder and a model-based decoder and allows unsupervised training on real data.) Opposed to the straightforward supervised learning approach, the proposed approach (5) has two significant advantages, and a sketch of the resulting training objective follows this list:

• It allows us to train the network in an unsupervised way, i.e., on real data, and therefore to learn to deal with measurement-specific distortions.

• The cost function in (5) implicitly handles the scaling of the different parameters and therefore circumvents the problem of defining meaningful cost functions on the parameter space: simple parameter discrepancies such as $\|P_1 - P_2\|_2^2$ for two different parameter sets $P_1$ and $P_2$ largely depend on the scaling of the individual parameters and might even be meaningless, e.g., for cyclic parameters such as the phase offset $\phi$.
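A minimal PyTorch sketch of objective (5) with the physics-based decoder fixed to Eq. (1); batching per pixel, the known constant ω, and all function names are our assumptions.

```python
import torch

def model_decoder(P, z_grid, omega):
    """Fixed decoder: evaluates Eq. (1) for per-pixel parameters
    P = (e, sigma, mu, phi) of shape (batch, 4) and returns the
    real/imaginary channels with shape (batch, n_z, 2)."""
    e, sigma, mu, phi = P.unbind(dim=1)
    sinc = torch.sinc(sigma[:, None] * (z_grid[None, :] - mu[:, None]))
    phase = omega * z_grid[None, :] - phi[:, None]
    re = e[:, None] * sinc * torch.cos(phase)   # Re{e sinc exp(-i phase)}
    im = -e[:, None] * sinc * torch.sin(phase)  # Im{e sinc exp(-i phase)}
    return torch.stack([re, im], dim=-1)

def autoencoder_loss(encoder, g, z_grid, omega):
    """Unsupervised objective (5): squared residual of the reconstructed
    signal; g holds per-pixel signals of shape (batch, n_z, 2)."""
    P = encoder(g)  # predicted (e, sigma, mu, phi) per pixel
    return ((model_decoder(P, z_grid, omega) - g) ** 2).sum(dim=(1, 2)).mean()
```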
Encoder Network Architecture and Training

Data Preprocessing

As illustrated by the magnitude of an exemplary measured THz signal shown in Figure 4, the THz energy is mainly focused in the main lobe and the first side lobes of the sinc function. Because the physical model remains valid in close proximity to the main lobe only, we preprocess the data to reduce the impressively large range of 12600 measurements per pixel: we crop out 91 measurements per pixel, centered around the main lobe, whose position is related to the object distance and to the parameter $\mu$. Details of the cropping window are described in [4]. We represent the THz data as a 4D real tensor $g \in \mathbb{R}^{n_x \times n_y \times n_z \times 2}$, where $n_x = n_y = 446$ and $n_z$ is the size of the cropping window, i.e., 91 in our case.

Encoder Architecture and Training

For the encoder network $G(g; \theta)$ we pick a spatially decoupled architecture using 1 × 1 convolutions on $g$ only, leading to a signal-by-signal reconstruction mechanism that allows a high level of parallelism and therefore maximizes the reconstruction speed on a GPU. The specific architecture (illustrated in Figure 5) applies a first set of convolutional filters to the real and imaginary parts separately, before concatenating the activations and applying three further convolutional filters to the concatenated structure. We apply batch normalization (BN) [28] after each convolution and use leaky rectified linear units (LeReLU) [29] as activations. Finally, a fully connected layer reduces the dimension to the desired size of four output parameters per pixel. To ensure that the amplitude is physically meaningful, i.e., non-negative, we apply an absolute value function to the first output component. (Figure 5: architecture of the encoding network $G(g; \theta)$ that predicts the parameters: at each pixel, the real and imaginary parts are extracted, convolved, concatenated, and processed via three convolutional layers and one fully connected layer; the absolute value of the first component yields non-negative amplitudes.) We train our model by optimizing (5) with the Adam optimizer [30] on 80% of the 446 × 446 pixels of a real (measured) THz image for 1200 epochs; the remaining 20% of the pixels serve as a validation set. The batch size is set to 4096. The initial learning rate is set to 0.005 and is reduced by a factor of 0.99 every 20 epochs. Figure 6 illustrates the decay of the training and validation losses over the 1200 epochs; the validation loss closely tracks the training loss, with almost no generalization gap.
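The following PyTorch sketch mirrors the described per-pixel design; the channel widths and the LeakyReLU slope are illustrative choices of ours (the text does not specify them), and the final fully connected layer is realized as an equivalent 1 × 1 convolution.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Per-pixel encoder G(g; theta) built from 1x1 convolutions, so each
    pixel's n_z-sample signal is processed independently of its neighbors."""

    def __init__(self, n_z=91, width=64):
        super().__init__()
        # Separate first-stage filters for the real and imaginary channels.
        self.real_branch = nn.Sequential(
            nn.Conv2d(n_z, width, 1), nn.BatchNorm2d(width), nn.LeakyReLU(0.2))
        self.imag_branch = nn.Sequential(
            nn.Conv2d(n_z, width, 1), nn.BatchNorm2d(width), nn.LeakyReLU(0.2))
        body = []
        for _ in range(3):  # three further convolutions on the concatenation
            body += [nn.Conv2d(2 * width, 2 * width, 1),
                     nn.BatchNorm2d(2 * width), nn.LeakyReLU(0.2)]
        self.body = nn.Sequential(*body)
        self.head = nn.Conv2d(2 * width, 4, 1)  # (e, sigma, mu, phi)

    def forward(self, g):
        # g: (batch, n_x, n_y, n_z, 2) -> channels-first per-pixel signals
        re = g[..., 0].permute(0, 3, 1, 2)
        im = g[..., 1].permute(0, 3, 1, 2)
        h = torch.cat([self.real_branch(re), self.imag_branch(im)], dim=1)
        p = self.head(self.body(h))  # (batch, 4, n_x, n_y)
        # Absolute value keeps the predicted amplitude non-negative.
        return torch.cat([p[:, :1].abs(), p[:, 1:]], dim=1)
```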
For comparison with classical optimization methods, the parameters are also estimated using the Trust-Region Algorithm (TRA) [32], as implemented in MATLAB. The TRA optimization requires a proper definition of the parameter ranges. Furthermore, it is very sensitive with respect to the initial parameter set. We therefore carefully select the initial parameters by sequentially estimating them from the source data (see [4] for more details). Still, the optimization may result in a parameter set with significant loss values; see Sec. 6.2. The trained encoder network, by contrast, is independent of any initialization scheme, as it directly predicts parameters from the input data. While the network alone gives remarkably good results with significantly lower runtimes than the optimization method, there is no guarantee that the network's predictions are critical points of the energy to be minimized. This motivates using the encoder network as an initialization scheme for the TRA, specifically because the TRA guarantees a monotonic decrease of the objective function, such that running the TRA on top of the network can only improve the results. We abbreviate this approach as AE+TRA for the rest of this paper. To fairly compare all three approaches, the optimization time of the TRA and the inference time of the AE are both recorded on an Intel i7-8700K CPU, while the AE is trained on an NVIDIA GTX 1080 GPU. The PyTorch source code is available at https://github.com/tak-wong/THz-AutoEncoder.

Loss and Timing

In Table 1, the average loss in (5) and the timings are shown for the Trust-Region Algorithm (TRA), the autoencoder (AE), and the joint AE+TRA approach, respectively. While the proposed encoder network achieves a lower average loss than the TRA in the Metal region of the MetalPCB dataset, it yields higher average losses than the TRA overall on both datasets. It is encouraging to see that, although the AE was trained on the MetalPCB dataset only, its relative performance compared to the TRA does not decay too significantly when changing to an entirely unseen dataset with a different material: the AE loss is 21.7% and 25.9% higher than the TRA loss on the MetalPCB and StepChart datasets, respectively. If such a sacrifice in accuracy is acceptable, the speed-up in runtime is tremendous, with the AE being over 140 times faster than the TRA (both methods evaluated on a CPU). Note that even the sum of training and inference time is smaller for the proposed AE than the runtime of the TRA on the MetalPCB dataset. Interestingly, the combined AE+TRA approach of initializing the TRA with the encoder network's prediction leads to better losses than the TRA alone in all regions. Additionally, the AE-initialized TRA converges more than 2 times faster, because the stopping criterion is reached earlier. We note that the losses of all approaches are significantly higher on the StepChart dataset than on the MetalPCB dataset. This is because the aluminum StepChart object (Figure 7b) has a more complex physical structure than the MetalPCB object, which results in a mixture of scattered THz pulses caused by multi-path interference effects in all object regions. Incorporating such effects into the reflection model (1) could therefore be an interesting direction of future research for improving the explainability of the measured data with the physical model.
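To make the AE+TRA refinement concrete, the sketch below polishes a single pixel's prediction with a trust-region least-squares solver. The paper uses MATLAB's Trust-Region Algorithm; SciPy's trust-region-reflective method ('trf') stands in here as an illustrative substitute, and all names are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p, z_grid, omega, g_pixel):
    # Stacked real/imaginary residuals of the model (1) against one pixel's
    # cropped data g_pixel of shape (n_z, 2). np.sinc is the normalized sinc.
    e_hat, sigma, mu, phi = p
    f = e_hat * np.sinc(sigma * (z_grid - mu)) * np.exp(-1j * (omega * z_grid - phi))
    return np.concatenate([f.real - g_pixel[..., 0], f.imag - g_pixel[..., 1]])

def refine_pixel(p_init, z_grid, omega, g_pixel):
    # p_init is the encoder's prediction (e_hat, sigma, mu, phi) for this pixel;
    # 'trf' is SciPy's trust-region reflective algorithm.
    result = least_squares(residuals, p_init, args=(z_grid, omega, g_pixel),
                           method='trf')
    return result.x
```

Since a trust-region method only accepts steps that decrease the objective, the refined loss can never be worse than that of the encoder's initialization, mirroring the monotonicity argument above.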
Quality Assessment of THz Images

In THz imaging, the intensity image I, which is equal to the squared amplitude, i.e., $I = \hat{e}^2$, is the most important criterion for quality assessment. Note that the intensity could also be inferred directly from the data by considering that (1) yields

$$f_{\hat{e},\sigma,\mu,\phi}(\mu) \cdot f^{*}_{\hat{e},\sigma,\mu,\phi}(\mu) = \hat{e}^2 \cdot \mathrm{sinc}^2(0) = \hat{e}^2 = I, \qquad (6)$$

where $f^{*}$ is the complex conjugate of f. As we illustrate in Figure 8, the model-based approach is not only capable of extracting all relevant parameters, i.e., ê, µ, σ, and φ, but, compared to values directly extracted from the source data, the resulting intensity I is also more homogeneous in homogeneous material regions. The inhomogeneity of the directly extracted intensity results from the very low depth of field of THz imaging systems in general, combined with the slight non-planarity of the MetalPCB target. As depicted in Figure 8c, the intensity variations along the selected line in the homogeneous copper region are reduced by all three model-based methods, i.e., TRA, AE, and AE+TRA. However, due to the crucial selection of the initial parameters (see the discussion at the beginning of Sec. 6), the TRA optimization results exhibit significant amplitude fluctuations and loss values (Figure 8d) in the two horizontal sub-regions x ∈ [150, 200] and x > 430. The proposed AE and AE+TRA methods, by contrast, deliver superior results with respect to the main quality measures applied in THz imaging, i.e., intensity homogeneity and model-fitting loss. Still, the AE approach shows a few extreme loss values, while the AE+TRA method's loss values are consistently low along the selected line in the homogeneous copper region.

Conclusions and Future Work

In this paper, we propose a model-based autoencoder for THz image reconstruction. Compared to a classical Trust-Region optimizer, the proposed autoencoder gets within a 25% margin of the optimizer's objective value while being more than 140 times faster. Using the network's prediction as an initialization to a gradient-based optimization scheme improves the result over a plain optimization scheme in terms of objective values while still being two times faster. We believe that these are very promising results for training optimizers and initialization schemes for parameter identification problems in general by exploiting the idea of model-based autoencoders for unsupervised learning. Future research will include exploiting spatial information during the reconstruction, as well as considering joint parameter identification and reconstruction problems such as denoising, sharpening, and super-resolving parameter images, e.g., the amplitude images shown in Figure 8b.
3,288
1907.01377
2955307856
Terahertz (THz) sensing is a promising imaging technology for a wide variety of different applications. Extracting the interpretable and physically meaningful parameters for such applications, however, requires solving an inverse problem in which a model function determined by these parameters needs to be fitted to the measured data. Since the underlying optimization problem is nonconvex and very costly to solve, we propose learning the prediction of suitable parameters from the measured data directly. More precisely, we develop a model-based autoencoder in which the encoder network predicts suitable parameters and the decoder is fixed to a physically meaningful model function, such that we can train the encoding network in an unsupervised way. We illustrate numerically that the resulting network is more than 140 times faster than classical optimization techniques while making predictions with only slightly higher objective values. Using such predictions as starting points of local optimization techniques allows us to converge to better local minima about twice as fast as optimization without the network-based initialization.
The most related prior work is the 3D face reconstruction network of Tewari et al. @cite_9 . They aimed at finding a semantic code vector from a given facial image such that feeding this code vector into a rendering engine yields an image similar to the input image itself. While this problem had been addressed with optimization algorithms a long time ago @cite_23 (also known under the name of analysis-by-synthesis approaches), Tewari et al. @cite_9 replaced the optimizer with a neural network and kept the original cost function to train the network in an unsupervised way. The resulting structure resembles an AE in which the decoder is fixed to the forward model, and it was therefore coined a model-based AE. As we will discuss in the next section, the idea of model-based AEs generalizes far beyond 3D face reconstruction and can be used to significantly accelerate the THz parameter identification problem.
{ "abstract": [ "In this work we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image. To this end, we combine a convolutional encoder network with an expert-designed generative model that serves as decoder. The core innovation is the differentiable parametric decoder that encapsulates image formation analytically based on a generative model. Our decoder takes as input a code vector with exactly defined semantic meaning that encodes detailed face pose, shape, expression, skin reflectance and scene illumination. Due to this new way of combining CNN-based with model-based face reconstruction, the CNN-based encoder learns to extract semantically meaningful parameters from a single monocular input image. For the first time, a CNN encoder and an expert-designed generative model can be trained end-to-end in an unsupervised manner, which renders training on very large (unlabeled) real world data feasible. The obtained reconstructions compare favorably to current state-of-the-art approaches in terms of quality and richness of representation.", "In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness." ], "cite_N": [ "@cite_9", "@cite_23" ], "mid": [ "2604672468", "2237250383" ] }
TRAINING AUTO-ENCODER-BASED OPTIMIZERS FOR TERAHERTZ IMAGE RECONSTRUCTION
Terahertz (THz) imaging is an emerging sensing technology with great potential for hidden object imaging, contact-free analysis, non-destructive testing, and stand-off detection in various application fields, including the semiconductor industry, biological and medical analysis, material and quality control, and safety and security [1][2][3]. The physically interpretable quantities relevant to the aforementioned applications, however, cannot always be measured directly. Instead, in THz imaging systems, each pixel contains implicit information about such quantities, making the inverse problem of inferring these physical quantities a challenging problem with high practical relevance. As we will discuss in Sec. 2, at each pixel location $\vec{x}$ the relation between the desired (unknown) parameters $p(\vec{x}) = (\hat{e}(\vec{x}), \sigma(\vec{x}), \mu(\vec{x}), \phi(\vec{x})) \in \mathbb{R}^4$, i.e., the electric field amplitude ê, the position of the surface µ, the width of the reflected pulse σ, and the phase φ, and the actual measurements $g(\vec{x}) \in \mathbb{R}^{n_z}$ can be modelled via the equation

$$g(\vec{x}, z) = \left(f_{\hat{e},\sigma,\mu,\phi}(z_i)\right)_{i \in \{1,\dots,n_z\}} + \text{noise},$$

where

$$f_{\hat{e},\sigma,\mu,\phi}(z) = \hat{e}\,\mathrm{sinc}\left(\sigma(z - \mu)\right)\exp\left(-i(\omega z - \phi)\right), \qquad \mathrm{sinc}(t) = \begin{cases} \frac{\sin(\pi t)}{\pi t} & t \neq 0, \\ 1 & t = 0, \end{cases} \qquad (1)$$

and $(z_i)_{i \in \{1,\dots,n_z\}}$ is a device-dependent sampling grid $z_{\mathrm{grid}}$. More details of the THz model are described in [4]. Thus, the crucial step in THz imaging is the solution of an optimization problem of the form

$$\min_{\hat{e},\sigma,\mu,\phi} \mathrm{Loss}\left(f_{\hat{e},\sigma,\mu,\phi}(z_{\mathrm{grid}}),\, g(\vec{x})\right) \qquad (3)$$

at each pixel $\vec{x}$, possibly along with additional regularizers on the unknown parameters. Even with simple choices of the loss function, such as an $\ell_2$-squared loss, the resulting fitting problem is highly nonconvex, and global solutions become rather expensive. Considering that the number $n_x \cdot n_y$ of pixels, i.e., of instances of optimization problem (3) to be solved, is typically in the order of hundreds of thousands to millions, even local first-order or quasi-Newton methods become quite costly: for example, running the built-in Trust-Region solver of MATLAB to reconstruct a 446 × 446 THz image takes over 170 minutes. In this paper, we propose to train a neural network to solve the per-pixel optimization problem (3) directly. We formulate the training of the network as a model-based autoencoder (AE), which allows us to train the corresponding network on real data in an unsupervised way, i.e., without ground truth. We demonstrate that the resulting optimization network yields parameters (ê, σ, µ, φ) that result in only slightly higher losses than actually running an optimization algorithm, despite the advantage of being more than 140 times faster. Moreover, we demonstrate that our network can serve as an excellent initialization scheme for classical optimizers. By using the network's prediction as a starting point for a gradient-based optimizer, we obtain lower losses and converge more than 2x faster than classical optimization approaches, while benefiting from all theoretical guarantees of the respective minimization algorithm. This paper is organized as follows: Sec. 2 gives more details on how THz imaging systems work. Sec. 3 summarizes the related work on learning optimizers, machine learning for THz imaging techniques, and model-based autoencoders. Sec. 4 describes model-based AEs in contrast to classical supervised learning approaches in detail, before Sec. 5 summarizes our implementation. Sec. 6 compares the proposed approaches to classical (optimization-based) reconstruction techniques in terms of speed and accuracy, before Sec.
7 draws conclusions.

THz Imaging Systems

There are several approaches to realizing THz imaging, e.g., femtosecond-laser-based scanning systems [5,6], synthetic aperture systems [7,8], and hybrid systems [9]. A typical approach to THz imaging is based on the Frequency Modulated Continuous Wave (FMCW) concept [8], which uses active frequency-modulated THz signals to sense reflected signals from the object. The reflected energy and the phase shifts due to the signal path length make 3D THz imaging possible. In Figure 1, the setup of our electronic FMCW-THz 3D imaging system is shown. More details on the THz imaging system are described in [8]. In this paper, we denote by $g_t(\vec{x}, t)$ the measured demodulated time-domain signal of the reflected electric field amplitude of the FMCW system at lateral position $\vec{x} \in \mathbb{R}^2$. In FMCW radar signal processing, this continuous-wave temporal signal is converted into the frequency domain by a Fourier transform [10,11]. Since the linear frequency sweep has a unique frequency at each spatial position in the z-direction, the converted frequency-domain signal directly relates to the spatial azimuth (z-direction) domain signal

$$g_c(\vec{x}, z) = \mathcal{F}\{g_t(\vec{x}, t)\}. \qquad (4)$$

The resulting 3D image $g_c \in \mathbb{C}^{n_x \times n_y \times n_z}$ is complex-valued data in the spatial domain, representing the per-pixel complex reflectivity of THz energy. The quantities $n_x$, $n_y$, $n_z$ represent the discretization in the vertical, horizontal, and depth directions, respectively. Equivalently, we may represent $g_c$ by considering the real and imaginary parts as two separate channels, resulting in a 4D real data tensor $g \in \mathbb{R}^{n_x \times n_y \times n_z \times 2}$. Since the system is calibrated by amplitude normalization with respect to an ideal metallic reflector, a rectangular frequency signal response is ensured for the FMCW frequency dependence [8]. After the FFT in (4), the z-direction signal envelope is an ideal sinc function of the continuous spatial signal amplitude, giving rise to the physical model given in (1) in the introduction. In (1), the electric field amplitude ê is the reflection coefficient of the material, which depends on the complex dielectric constant of the material and helps to identify and classify materials. The depth position µ is the position at which maximum reflection occurs, i.e., the position of the surface reflecting the THz energy. σ is the width of the reflected pulse, which includes information on the dispersion characteristics of the material. The phase φ of the reflected wave depends on the ratio of the real to imaginary parts of the dielectric properties of the material. Thus, the parameters p = (ê, σ, µ, φ) contain important information about the geometry as well as the material of the imaged object, which is of interest in a wide variety of applications.

A Model-Based Autoencoder for THz Image Reconstruction

Let us denote the THz input data by $g \in \mathbb{R}^{n_x \times n_y \times n_z \times 2}$, and consider our four unknown parameters (ê, σ, µ, φ) to be $\mathbb{R}^{n_x \times n_y}$ matrices, allowing each parameter to change at each pixel. Under slight abuse of notation, we can interpret all operations in (1) pointwise, such that $f_{\hat{e},\sigma,\mu,\phi}(z_{\mathrm{grid}}) \in \mathbb{R}^{n_x \times n_y \times n_z \times 2}$, where $z_{\mathrm{grid}} = (z_i)_{i \in \{1,\dots,n_z\}}$ denotes the depth sampling grid. Concatenating all four matrix-valued parameters into a single parameter tensor $P \in \mathbb{R}^{n_y \times n_x \times 4}$, our goal can be formalized as finding P such that $f_P(z_{\mathrm{grid}}) \approx g$.

Figure 2: In the classical supervised learning approach, the explicit forward model (1) is used to simulate data g, which can subsequently be fed into a network to be trained to reproduce the simulation parameters in a supervised way.
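As a minimal illustration of how (4) and the preprocessing of Sec. 5.1 fit together, the following numpy sketch converts a raw per-pixel time signal into the cropped 4D tensor g. The single global cropping window is a simplification of the per-pixel windowing described in [4], and all names are illustrative:

```python
import numpy as np

def preprocess(g_t, window=91):
    """Convert an FMCW time signal g_t of shape (n_x, n_y, n_t) into the
    cropped 4D real tensor g of shape (n_x, n_y, window, 2)."""
    g_c = np.fft.fft(g_t, axis=-1)                  # eq. (4): time -> z-domain
    # Locate the main lobe from the mean magnitude profile (simplified to one
    # global window; in practice the lobe position varies per pixel with mu).
    center = int(np.abs(g_c).mean(axis=(0, 1)).argmax())
    lo = max(center - window // 2, 0)
    g_crop = g_c[..., lo:lo + window]               # 91 samples around the lobe
    return np.stack([g_crop.real, g_crop.imag], axis=-1)
```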
3,288
1907.01377
2955307856
Finally, recent work @cite_20 has exploited deep learning techniques in Terahertz imaging, but the considered application of super-resolving the THz amplitude image @math by training a convolutional neural network on synthetically blurred images is not directly related to our proposed approach.
{ "abstract": [ "We propose an effective and robust method for terahertz (THz) image super-resolution based on a deep convolutional neural network (CNN). A deep CNN model is designed. It learns an end-to-end mapping between the low- and high-resolution images. Blur kernels with multiple width and noise with multiple levels are taken into the training set so that the network can handle THz images very well. Quantitative comparison of the proposed method and other super-resolution methods on the synthetic THz images indicates that the proposed method performs better than other methods in accuracy and visual improvements. Experimental results on real THz images show that the proposed method significantly improves the quality of THz images with increased resolution and decreased noise, which proves the practicability and exactitude of the proposed method." ], "cite_N": [ "@cite_20" ], "mid": [ "2927145094" ] }
3,288
1812.02899
2903664137
While much progress has been made in capturing high-quality facial performances using motion capture markers and shape-from-shading, high-end systems typically also rely on rotoscope curves hand-drawn on the image. These curves are subjective and difficult to draw consistently; moreover, ad-hoc procedural methods are required for generating matching rotoscope curves on synthetic renders embedded in the optimization used to determine three-dimensional facial pose and expression. We propose an alternative approach whereby these curves and other keypoints are detected automatically on both the image and the synthetic renders using trained neural networks, eliminating artist subjectivity and the ad-hoc procedures meant to mimic it. More generally, we propose using machine learning networks to implicitly define deep energies which when minimized using classical optimization techniques lead to three-dimensional facial pose and expression estimation.
The earliest approaches to regression-based face alignment trained a cascade of regressors to detect face landmarks @cite_67 @cite_58 @cite_36 @cite_16 @cite_35 . More recently, deep convolutional neural networks (CNNs) have been used for both 2D and 3D facial landmark detection from 2D images @cite_27 @cite_7 . These methods are generally classified into coordinate regression models @cite_37 @cite_27 @cite_41 @cite_60 , where a direct mapping is learned between the image and the landmark coordinates, and heatmap regression models @cite_30 @cite_0 @cite_48 , where prediction heatmaps are learned for each landmark. Heatmap-based architectures are generally derived from stacked hourglass @cite_30 @cite_0 @cite_69 @cite_55 or convolutional pose machine @cite_59 architectures used for human body pose estimation. Pixel coordinates can be obtained from the heatmaps by applying the argmax operation; however, @cite_68 @cite_11 use soft-argmax to achieve end-to-end differentiability. A more comprehensive overview of face alignment methods can be found in @cite_49 .
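To illustrate the differentiability point, here is a minimal soft-argmax sketch for heatmap-based landmark models, assuming heatmaps of shape (batch, landmarks, H, W); the temperature beta is an assumed hyperparameter, not a value from the cited works:

```python
import torch

def soft_argmax(heatmaps, beta=100.0):
    # heatmaps: (batch, n_landmarks, H, W). A softmax over all pixels turns
    # each heatmap into a probability map; the expected (x, y) coordinate
    # under that map is a differentiable surrogate for the hard argmax.
    b, k, h, w = heatmaps.shape
    probs = torch.softmax(beta * heatmaps.reshape(b, k, -1), dim=-1)
    probs = probs.reshape(b, k, h, w)
    ys = torch.arange(h, dtype=probs.dtype).reshape(1, 1, h, 1)
    xs = torch.arange(w, dtype=probs.dtype).reshape(1, 1, 1, w)
    x = (probs * xs).sum(dim=(-2, -1))   # expected column index
    y = (probs * ys).sum(dim=(-2, -1))   # expected row index
    return torch.stack([x, y], dim=-1)   # (batch, n_landmarks, 2)
```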
{ "abstract": [ "This paper investigates how far a very deep neural network is from attaining close to saturating performance on existing 2D and 3D face alignment datasets. To this end, we make the following 5 contributions: (a) we construct, for the first time, a very strong baseline by combining a state-of-the-art architecture for landmark localization with a state-of-the-art residual block, train it on a very large yet synthetically expanded 2D facial landmark dataset and finally evaluate it on all other 2D facial landmark datasets. (b)We create a guided by 2D landmarks network which converts 2D landmark annotations to 3D and unifies all existing datasets, leading to the creation of LS3D-W, the largest and most challenging 3D facial landmark dataset to date ( 230,000 images). (c) Following that, we train a neural network for 3D face alignment and evaluate it on the newly introduced LS3D-W. (d) We further look into the effect of all “traditional” factors affecting face alignment performance like large pose, initialization and resolution, and introduce a “new” one, namely the size of the network. (e) We show that both 2D and 3D face alignment networks achieve performance of remarkable accuracy which is probably close to saturating the datasets used. Training and testing code as well as the dataset can be downloaded from https: www.adrianbulat.com face-alignment", "We present a practical approach to address the problem of unconstrained face alignment for a single image. In our unconstrained problem, we need to deal with large shape and appearance variations under extreme head poses and rich shape deformation. To equip cascaded regressors with the capability to handle global shape variation and irregular appearance-shape relation in the unconstrained scenario, we partition the optimisation space into multiple domains of homogeneous descent, and predict a shape as a composition of estimations from multiple domain-specific regressors. With a specially formulated learning objective and a novel tree splitting function, our approach is capable of estimating a robust and meaningful composition. In addition to achieving state-of-the-art accuracy over existing approaches, our framework is also an efficient solution (350 FPS), thanks to the on-the-fly domain exclusion mechanism and the capability of leveraging the fast pixel feature.", "We present a new Cascaded Shape Regression (CSR) architecture, namely Dynamic Attention-Controlled CSR (DAC-CSR), for robust facial landmark detection on unconstrained faces. Our DAC-CSR divides facial landmark detection into three cascaded sub-tasks: face bounding box refinement, general CSR and attention-controlled CSR. The first two stages refine initial face bounding boxes and output intermediate facial landmarks. Then, an online dynamic model selection method is used to choose appropriate domain-specific CSRs for further landmark refinement. The key innovation of our DAC-CSR is the fault-tolerant mechanism, using fuzzy set sample weighting for attention-controlled domain-specific model training. Moreover, we advocate data augmentation with a simple but effective 2D profile face generator, and context-aware feature extraction for better facial feature representation. Experimental results obtained on challenging datasets demonstrate the merits of our DAC-CSR over the state-of-the-art.", "We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. 
We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-art or better performance on four academic benchmarks of diverse real-world images.", "We present a new state-of-the-art approach for face detection. The key idea is to combine face alignment with detection, observing that aligned face shapes provide better features for face classification. To make this combination more effective, our approach learns the two tasks jointly in the same cascade framework, by exploiting recent advances in face alignment. Such joint learning greatly enhances the capability of cascade detection and still retains its realtime performance. Extensive experiments show that our approach achieves the best accuracy on challenging datasets, where all existing solutions are either inaccurate or too slow.", "We present a very efficient, highly accurate, \"Explicit Shape Regression\" approach for face alignment. Unlike previous regression-based approaches, we directly learn a vectorial regression function to infer the whole facial shape (a set of facial landmarks) from the image and explicitly minimize the alignment errors over the training data. The inherent shape constraint is naturally encoded into the regressor in a cascaded learning framework and applied from coarse to fine during the test, without using a fixed parametric shape model as in most previous methods. To make the regression more effective and efficient, we design a two-level boosted regression, shape indexed features and a correlation-based feature selection method. This combination enables us to learn accurate models from large training data in a short time (20 min for 2,000 training images), and run regression extremely fast in test (15 ms for a 87 landmarks shape). Experiments on challenging data show that our approach significantly outperforms the state-of-the-art in terms of both accuracy and efficiency.", "3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty. Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general these methods require complex and inefficient pipelines for model building and fitting. In this work, we propose to address many of these limitations by training a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. Our CNN works with just a single 2D facial image, does not require accurate alignment nor establishes dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face) bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model. We achieve this via a simple CNN architecture that performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image.
We also demonstrate how the related task of facial landmark localization can be incorporated into the proposed framework and help improve reconstruction quality, especially for the cases of large poses and facial expressions. Code and models will be made available at http://aaronsplace.co.uk", "", "Constrained Local Models (CLMs) are a well-established family of methods for facial landmark detection. However, they have recently fallen out of favor to cascaded regression-based approaches. This is in part due to the inability of existing CLM local detectors to model the very complex individual landmark appearance that is affected by expression, illumination, facial hair, makeup, and accessories. In our work, we present a novel local detector -- Convolutional Experts Network (CEN) -- that brings together the advantages of neural architectures and mixtures of experts in an end-to-end framework. We further propose a Convolutional Experts Constrained Local Model (CE-CLM) algorithm that uses CEN as local detectors. We demonstrate that our proposed CE-CLM algorithm outperforms competitive state-of-the-art baselines for facial landmark detection by a large margin on four publicly-available datasets. Our approach is especially accurate and robust on challenging profile images.", "Over the last two decades, face alignment or localizing fiducial facial points on 2D images has received increasing attention owing to its comprehensive applications in automatic face analysis. However, such a task has proven extremely challenging in unconstrained environments due to many confounding factors, such as pose, occlusions, expression and illumination. While numerous techniques have been developed to address these challenges, this problem is still far away from being solved. In this survey, we present an up-to-date critical review of the existing literatures on face alignment, focusing on those methods addressing overall difficulties and challenges of this topic under uncontrolled conditions. Specifically, we categorize existing face alignment techniques, present detailed descriptions of the prominent algorithms within each category, and discuss their advantages and disadvantages. Furthermore, we organize special discussions on the practical aspects of face alignment in-the-wild, towards the development of a robust face alignment system. In addition, we show performance statistics of the state of the art, and conclude this paper with several promising directions for future research.", "In this paper, we present supervision-by-registration, an unsupervised approach to improve the precision of facial landmark detectors on both images and video. Our key observation is that the detections of the same landmark in adjacent frames should be coherent with registration, i.e., optical flow. Interestingly, coherency of optical flow is a source of supervision that does not require manual labeling, and can be leveraged during detector training. For example, we can enforce in the training loss function that a detected landmark at frame t−1 followed by optical flow tracking from frame t−1 to frame t should coincide with the location of the detection at frame t. Essentially, supervision-by-registration augments the training loss function with a registration loss, thus training the detector to have output that is not only close to the annotations in labeled images, but also consistent with registration on large amounts of unlabeled videos.
End-to-end training with the registration loss is made possible by a differentiable Lucas-Kanade operation, which computes optical flow registration in the forward pass, and back-propagates gradients that encourage temporal coherency in the detector. The output of our method is a more precise image-based facial landmark detector, which can be applied to single images or video. With supervision-by-registration, we demonstrate (1) improvements in facial landmark detection on both images (300W, ALFW) and video (300VW, Youtube-Celebrities), and (2) significant reduction of jittering in video detections.", "To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60°. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of landmarks and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction, extension to multi-view reconstruction, temporal integration for videos and 3D head-pose estimation. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org. 3D cascade regression approach is proposed in which facial landmarks remain invariant. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. Multi-view reconstruction and temporal integration for videos are presented. Method is robust for 3D head-pose estimation under various conditions.", "We present a novel boundary-aware face alignment algorithm by utilising boundary lines as the geometric structure of a human face to help facial landmark localisation. Unlike the conventional heatmap based method and regression based method, our approach derives face landmarks from boundary lines which remove the ambiguities in the landmark definition. Three questions are explored and answered by this work: 1. Why using boundary? 2. How to use boundary? 3. What is the relationship between boundary estimation and landmarks localisation? Our boundary-aware face alignment algorithm achieves 3.49% mean error on 300-W Fullset, which outperforms state-of-the-art methods by a large margin. Our method can also easily integrate information from other datasets. By utilising boundary information of 300-W dataset, our method achieves 3.92% mean error with 0.39% failure rate on COFW dataset, and 1.25% mean error on AFLW-Full dataset. Moreover, we propose a new dataset WFLW to unify training and testing across different factors, including poses, expressions, illuminations, makeups, occlusions, and blurriness. Dataset and model will be publicly available at this https URL", "This work introduces a novel convolutional network architecture for the task of human pose estimation.
Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a \"stacked hourglass\" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "", "This paper addresses the problem of Face Alignment for a single image. We show how an ensemble of regression trees can be used to estimate the face's landmark positions directly from a sparse subset of pixel intensities, achieving super-realtime performance with high quality predictions. We present a general framework based on gradient boosting for learning an ensemble of regression trees that optimizes the sum of square error loss and naturally handles missing or partially labelled data. We show how using appropriate priors exploiting the structure of image data helps with efficient feature selection. Different regularization strategies and its importance to combat overfitting are also investigated. In addition, we analyse the effect of the quantity of training data on the accuracy of the predictions and explore the effect of data augmentation using synthesized data.", "", "Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.", "State-of-the-art human pose estimation methods are based on heat map representation. In spite of the good performance, the representation has a few issues in nature, such as non-differentiable post-processing and quantization error. This work shows that a simple integral operation relates and unifies the heat map representation and joint regression, thus avoiding the above issues. It is differentiable, efficient, and compatible with any heat map based methods. Its effectiveness is convincingly validated via comprehensive ablation experiments under various settings, specifically on 3D pose estimation, for the first time." 
], "cite_N": [ "@cite_30", "@cite_35", "@cite_36", "@cite_41", "@cite_58", "@cite_67", "@cite_69", "@cite_60", "@cite_48", "@cite_49", "@cite_68", "@cite_37", "@cite_7", "@cite_55", "@cite_27", "@cite_16", "@cite_0", "@cite_59", "@cite_11" ], "mid": [ "2605105738", "2462523589", "2951244631", "2113325037", "204612701", "1990937109", "2599226450", "", "2738134571", "2962714886", "2798730128", "2399512112", "2798980566", "2950762923", "", "2087681821", "", "2255781698", "2963598138" ] }
Deep Energies for Estimating Three-Dimensional Facial Pose and Expression
For high-end face performance capture, either motion capture markers [4] or markerless techniques such as shape-from-shading [1] or optical flow [64] are typically used; however, these methods are generally unable to capture the intricacies of the performance, especially around the lips. To obtain high-end results, artists hand-draw rotoscope curves on the captured image; then, a variety of techniques are used to construct similar curves on the synthetic render of the estimated pose and to determine correspondences between the hand-drawn and synthetically generated curves. The simplest such approach would be to use a predefined contour on the three-dimensional face model, clip it for occlusions, and create correspondences in a length-proportional way; although this provides some consistency to the curves generated on the synthetic render, it is quite difficult for an artist to emulate these curves. Thus, practical systems implement a number of ad-hoc methods in order to match the artist's more subjective interpretation. The inability to embed the artist's subjectivity into the optimization loop and onto the synthetic render, coupled with the artist's inability to faithfully reproduce procedurally generated curves, leaves a gaping chasm in the uncanny valley. Although one might debate what works best in order to align a three-dimensional virtual model with an image, it is clearly the case that a consistent metric should be applied to evaluate whether the synthetic render and image are aligned. This motivates the employment of a machine learning algorithm to draw the rotoscope curves on both the captured image and the synthetic render, hoping that accurately representing the real-world pose would lead to a negligible difference between the two curves. Although one might reasonably expect that the differences between real and synthetic camera behavior, albedo, lighting, etc. may lead to different rotoscope curves being generated by the deep learning algorithm, GAN-like approaches [22,36] could be used to rectify such issues by training a network to draw the curves such that a discriminator cannot tell which curves were generated on real images versus synthetic renders.

Figure 1 (caption fragment): We feed that synthetic render through the network to produce a set of outputs (e), which are then compared to the outputs produced by the same network when feeding it the captured image (f), embedding deep network "evaluations" into classical optimization approaches that estimate facial pose and expression.

Overview

In this paper, we advocate for a general strategy that uses classical optimization where the energy to be minimized is based on metrics ascertained from deep learning neural networks. In particular, this removes the rotoscope artist and the ad-hoc contour drawing procedures meant to match the artist's work, or vice versa, from the pipeline.

Figure 2 (caption): A visual overview of our approach applied to facial landmark detection. We pass the full resolution image through a facial detector and crop the face out of the image. This crop is then resized to pass through the neural network which outputs, in this case, heatmaps for every landmark. These heatmaps are processed using a soft-argmax operation to get facial landmark coordinates on the cropped and resized image. These positions are then transformed back onto the full resolution image before being used as part of the objective function. An identical process is performed for the synthetic render.
This pins advancements in three-dimensional facial pose and expression estimation to those being made in machine learning, which are advancing at a fast pace. Generally, we take the following approach: First, we estimate an initial rigid alignment of the three-dimensional face model to the two-dimensional image using a facial alignment network. Then, we estimate an initial guess for the jaw and mouth expression using the same network. Finally, we temporally refine the results and insert/repair failed frames (if/when necessary) using an optical flow network.

We use a blendshape model hybridized with linear blend skinning for a six-degree-of-freedom jaw joint [33]; let w denote the parameters that drive the face triangulated surface x(w). The resulting surface has a rigid frame given by Euler angles θ, rotation matrix R(θ), and a translation t such that the final vertex positions are

x_R(θ, t, w) = R(θ) x(w) + t.  (1)

We note that other geometry such as the teeth can be trivially handled by Equation 1 as well. The geometry x_R is rendered using OpenDR [39], obtaining a rendered image F(x_R). As a precomputation, we estimate the face's albedo and nine coefficients for a spherical harmonics light [48] on a frame where the face is close to neutral; however, we note that a texture captured using a light stage [11] (for example) would potentially work just as well if not better. Then, our goal is to determine the parameters θ, t, and w that best match a given captured image F*. Both the pixels of the captured image F* and the pixels of the rendered image F(x_R) are fed through the same deep network to get two sets of landmark positions N(F*) and N(F). See Figure 1. We use the L2 norm of the difference between them,

‖N(F*) − N(F(x_R(θ, t, w)))‖_2,  (2)

as the objective function to minimize via nonlinear least squares, which is solved using the Dogleg method [40] as implemented by Chumpy [38]. This requires computing the Jacobian of the energy function via the chain rule; ∂N/∂F, ∂F/∂x_R, and ∂x_R/∂p, where p is one of θ, t, and w, all need to be evaluated. We use OpenDR to compute ∂F/∂x_R, and Equation 1 yields ∂x_R/∂p. ∂N/∂F is computed by backpropagating through the trained network using one's deep learning library of choice; in this paper, we use PyTorch [47]. Note that for computational efficiency, we do not compute ∂N/∂F explicitly, but instead compute the Jacobian of Equation 2 with respect to the rendered image pixels output by F.

Rigid Alignment

We first solve for the initial estimate of the rigid alignment of the face, i.e. θ and t, using the pre-trained 3D-FAN network [7]. Note that 3D-FAN, like most other facial alignment networks, requires taking a cropped and resized image of the face as input; we denote these two operations as C and S respectively. The cropping function requires the bounding box output of a face detector D; we use the CNN-based face detector implemented by Dlib [30]. The resize function resizes the crop to a resolution of 256 × 256 to feed into the network. The final image passed to the network is thus S(C(F(x_R), D(F(x_R))), D(F(x_R))), where we note that both the crop and resize functions depend on the output of the face detector; however, aggressively assuming that ∂D/∂F = 0 did not impede our ability to estimate a reasonable facial pose. Given D, C is merely a subset of the pixels of F, so ∂C/∂p = ∂F/∂p for all the pixels within the detected bounding box and ∂C/∂p = 0 for all pixels outside.
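To make this concrete, here is a minimal, hypothetical PyTorch sketch of the energy of Equation 2; `ToyLandmarkNet` and `landmark_energy` are our own placeholder names (a stand-in for a pre-trained detector such as 3D-FAN, not the authors' code), and autograd supplies the Jacobian with respect to the rendered pixels:

```python
import torch

class ToyLandmarkNet(torch.nn.Module):
    """Placeholder for a pre-trained detector such as 3D-FAN (68 landmarks)."""
    def __init__(self):
        super().__init__()
        self.pool = torch.nn.AdaptiveAvgPool2d(8)
        self.fc = torch.nn.Linear(3 * 8 * 8, 68 * 2)

    def forward(self, img):                # img: (1, 3, 256, 256)
        return self.fc(self.pool(img).flatten(1)).view(1, 68, 2)

def landmark_energy(net, rendered, captured):
    """Equation 2: squared L2 difference between landmark sets N(F) and N(F*)."""
    with torch.no_grad():                  # captured-image landmarks are constant
        target = net(captured)
    pred = net(rendered)                   # rendered image stays on the graph
    return ((pred - target) ** 2).sum()

net = ToyLandmarkNet()
rendered = torch.rand(1, 3, 256, 256, requires_grad=True)  # pixels from the renderer
captured = torch.rand(1, 3, 256, 256)
energy = landmark_energy(net, rendered, captured)
energy.backward()   # rendered.grad holds dE/dF; chaining with dF/dx_R and
                    # dx_R/dp from Equation 1 completes the Jacobian.
```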
S resizes the crop using bilinear interpolation, so ∂S/∂C can be computed using the size of the detected bounding box. 3D-FAN outputs a tensor of size 68 × 64 × 64, i.e. each of the 68 landmarks has a 64 × 64 heatmap specifying the likelihood of a particular pixel containing that landmark. While one might difference the heatmaps directly, it is unlikely that this would sufficiently capture correspondences. Instead, we follow the approach of [15,58] and apply a differentiable soft-argmax function to the heatmaps, obtaining pixel coordinates for each of the 68 landmarks. That is, given the marker position m_i computed using the argmax function on heatmap H_i, we use a 3 × 3 patch of pixels M_i around m_i to compute the soft-argmax position as

m̂_i = ( Σ_{m∈M_i} m e^{β H_i(m)} ) / ( Σ_{m∈M_i} e^{β H_i(m)} ),  (3)

where β = 50 is set experimentally and H_i(m) returns the heatmap value at a pixel coordinate m. We found that using a small patch around the argmax landmark positions gives better results than running the soft-argmax operation on the entire heatmap. The soft-argmax function returns an image coordinate on the 64 × 64 image, and these image coordinates need to be remapped to the full resolution image to capture translation between the synthetic face render and the captured image. Thus, we apply inverse rescale S_m^{-1} and crop C_m^{-1} operations, i.e. the full-resolution position is C_m^{-1}(S_m^{-1}(4 m̂_i, D), D), where the factor of 4 maps the 64 × 64 heatmap coordinates onto the 256 × 256 crop.
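A sketch of this patch-based soft-argmax, assuming heatmaps of shape (68, 64, 64); `patch_soft_argmax` is a hypothetical helper and the bounding-box convention in the final comment is our own assumption. The hard argmax only selects the 3 × 3 window, so the returned coordinates remain differentiable in the heatmap values:

```python
import torch

def patch_soft_argmax(heatmaps, beta=50.0, patch=1):
    """Equation 3 over a (2*patch+1)^2 window around each hard argmax.

    heatmaps: (68, 64, 64) tensor. Returns (68, 2) positions in (x, y)
    heatmap coordinates, differentiable with respect to the heatmap values.
    """
    n, h, w = heatmaps.shape
    flat_idx = heatmaps.flatten(1).argmax(dim=1)      # hard argmax per landmark
    ys, xs = flat_idx // w, flat_idx % w
    coords = []
    for i in range(n):
        y0 = int(ys[i].clamp(patch, h - 1 - patch))   # keep window inside image
        x0 = int(xs[i].clamp(patch, w - 1 - patch))
        win = heatmaps[i, y0 - patch:y0 + patch + 1, x0 - patch:x0 + patch + 1]
        wts = torch.softmax(beta * win.flatten(), dim=0).view_as(win)
        gy, gx = torch.meshgrid(
            torch.arange(y0 - patch, y0 + patch + 1, dtype=torch.float32),
            torch.arange(x0 - patch, x0 + patch + 1, dtype=torch.float32),
            indexing="ij")
        coords.append(torch.stack([(wts * gx).sum(), (wts * gy).sum()]))
    return torch.stack(coords)

# Remap to the full-resolution image with a hypothetical bbox (left, top, size):
# full = patch_soft_argmax(H) * 4.0 * (size / 256.0) + torch.tensor([left, top])
```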
Expression Estimation

After the rigid alignment determines θ and t, we solve for an initial estimate of the mouth and jaw blendshape parameters (a subset of w). Generally, one would use hand-drawn rotoscope curves around the lips to accomplish this as discussed in Section 2; however, given the multitude of problems associated with this method as discussed in Section 1, we instead turn to deep networks to accomplish the same goal. We use 3D-FAN in the same manner as discussed in Section 4 to solve for a subset of the blendshape weights w, keeping the rigid parameters θ and t fixed. It is sometimes beneficial or even preferred to also allow θ and t to be modified somewhat at this stage, although a prior energy term that penalizes these deviations from the values computed during the rigid alignment stage is often useful. We note that the ideal solution would be to instead create new network architectures and train new models designed specifically for detecting lip/mouth contours, especially since the 64 × 64 heatmaps generated by 3D-FAN are generally too low-resolution to detect fine mouth movements such as when the lips pucker. However, since our goal in this paper is to show how to leverage existing architectures and pre-trained networks, especially so one can benefit from the plethora of existing literature, for now we bootstrap the mouth and jaw estimation using the existing facial landmark detection in 3D-FAN.

Optical Flow for Missing Frames

The face detector used in Section 4 can sometimes fail; e.g. on our test sequence, Dlib's HOG-based detector failed on 20 frames while Dlib's CNN-based detector succeeded on all frames. We thus propose using optical flow networks to infer the rigid and blendshape parameters for failed frames by "flowing" these parameters from surrounding frames where the face detector succeeded. This is accomplished by assuming that the optical flow of the synthetic render from one frame to the next should be identical to the corresponding optical flow of the captured image. That is, given two synthetic renders F_1 and F_2 and two captured images F*_1 and F*_2, we can compute two optical flow fields N(F_1, F_2) and N(F*_1, F*_2) using FlowNet2 [23]. We resize the synthetic renders and captured images to a resolution of 512 × 512 before feeding them through the optical flow network. Assuming that F*_2 is the image the face detector failed on, we solve for the parameters p_2 of F_2, starting with an initial guess p_1 (the parameters of F_1), by minimizing the L2 difference between the flow field vectors, ‖N(F*_1, F*_2) − N(F_1, F_2)‖_2. ∂N/∂F_2 can be computed by back-propagating through the network.

Temporal Refinement

Since we solve for the rigid alignment and expression for all captured images in parallel, adjacent frames may produce visually disjointed results, either because of noisy facial landmarks detected by 3D-FAN or due to the nonlinear optimization converging to different local minima. Thus, we also use optical flow to refine temporal inconsistencies between adjacent frames. We adopt a method that can be run in parallel. Given three sequentially captured images F*_1, F*_2, and F*_3, we compute two optical flow fields N(F*_1, F*_2) and N(F*_2, F*_3). Similarly, we can compute N(F_1, F_2) and N(F_2, F_3). Then, we solve for the parameters p_2 of F_2 by minimizing the sum of two L2 norms, ‖N(F*_1, F*_2) − N(F_1, F_2)‖_2 and ‖N(F*_2, F*_3) − N(F_2, F_3)‖_2. The details for computing the Jacobian follow those in Section 6. Optionally, one may also wish to add a prior penalizing the parameters p_2 from deviating too far from their initial value. Here, step k of smoothing to obtain a new set of parameters p_i^k uses the parameters p_{i±1}^{k−1} from the last step; however, one could also use the updated parameter values p_{i±1}^k whenever available in a Gauss–Seidel style approach.

Alternatively, one could adopt a self-smoothing approach by ignoring the captured image's optical flow and solving for the parameters p_2 that minimize ‖N(F_1, F_2) − N(F_2, F_3)‖_2. Such an approach in effect minimizes the second derivative of the motion of the head in the image plane, causing any sudden motions to be smoothed out; however, since the energy function contains no knowledge of the data being targeted, it is possible for such a smoothing operation to cause the model to deviate from the captured image. While we focus on exploring deep learning based techniques, more traditional smoothing/interpolation techniques can also be applied in place of, or in addition to, the proposed optical flow approaches. Such methods include: spline fitting the rigid parameters and blendshape weights, smoothing the detected landmarks/bounding boxes on the captured images as a preprocess, smoothing each frame's parameters using the adjacent frames' estimations, etc.

Figure 3 (caption): Left: the initial state where the face is front-facing and centered (note the figure is cropped) in the image plane. Right: the initial translation stage in the rigid alignment step roughly aligns the synthetic render of the face to the face in the captured image. (Note that we display the synthetic render without the estimated albedo for clarity, but the network sees the version with the albedo as in Figures 1d and 1f, not 1c.)
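A sketch of the flow-matching energy shared by the infill and refinement steps, assuming a generic two-frame flow interface `flow_net(a, b)` that returns a (1, 2, H, W) field; `flow_matching_energy` and the toy flow function are our own stand-ins so the example runs, not FlowNet2's API:

```python
import torch

def flow_matching_energy(flow_net, f1, f2, f1_star, f2_star):
    """||N(F1*, F2*) - N(F1, F2)||^2: match the render's flow to the plate's.

    f2 depends on the unknown parameters p_2 through the renderer, so the
    energy is differentiable with respect to its pixels.
    """
    with torch.no_grad():                  # plate flow is a constant target
        target = flow_net(f1_star, f2_star)
    flow = flow_net(f1, f2)                # f2 stays on the autograd graph
    return ((flow - target) ** 2).sum()

# Toy stand-in for an optical flow network so the sketch runs end-to-end.
toy_flow = lambda a, b: (b - a).mean(dim=1, keepdim=True).repeat(1, 2, 1, 1)

f1 = torch.rand(1, 3, 512, 512)                        # render at frame 1
f2 = torch.rand(1, 3, 512, 512, requires_grad=True)    # render at the failed frame
f1_star, f2_star = torch.rand(1, 3, 512, 512), torch.rand(1, 3, 512, 512)
energy = flow_matching_energy(toy_flow, f1, f2, f1_star, f2_star)
energy.backward()                          # f2.grad holds dE/dF2
# Temporal refinement sums two such terms, for frames (1, 2) and (2, 3).
```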
Results

We estimate the facial pose and expression on a moderately challenging performance captured by a single ARRI Alexa XT Studio running at 24 frames per second with a 180 degree shutter angle at ISO 800, where numerous captured images exhibit motion blur. These images are captured at a resolution of 2880 × 2160, but we downsample them to 720 × 540 before feeding them through our pipeline. We assume that the camera intrinsics and extrinsics have been pre-calibrated, the captured images have been undistorted, and that the face model described in Equation 1 has already been created. Furthermore, we assume that the face's rigid transform has been set such that the rendered face is initially visible and forward-facing in all the captured viewpoints.

Rigid Alignment

We estimate the rigid alignment (i.e. θ and t) of the face using 3D-FAN. We use an energy E_1 = ‖W(N(F) − N(F*))‖_2, where N are the image space coordinates of the facial landmarks as described in Section 4 and W is a per-landmark weighting matrix. Furthermore, we use an edge-preserving energy E_2 = Σ_i ‖(m̂_i^{F*} − m̂_{i−1}^{F*}) − (m̂_i^{F} − m̂_{i−1}^{F})‖_2, where m̂_i^{F*} and m̂_i^{F} are the landmark positions detected on the captured image and on the synthetic render respectively; this term matches the vectors between consecutive landmarks rather than their absolute positions. First, we only solve for t using all the landmarks except for those around the jaw to bring the initial state of the face into the general area of the face on the captured image. See Figure 3. We prevent the optimization from overfitting to the landmarks by limiting the maximum number of iterations. Next, we solve for both θ and t in three steps: using the non-jaw markers, using only the jaw markers, and using all markers. We perform these steps in stages as we generally found the non-jaw markers to be more reliable and use them to guide the face model to the approximate location before trying to fit to all existing markers.

Expression Estimation

We run a similar multi-stage process to estimate facial expression using the detected 3D-FAN landmarks. We use the same energy term E_1 as Section 8.1, but also introduce L2 regularization on the blendshape weights, E_2 = λ‖w‖_2, with λ = 1 × 10^2 set experimentally. In the first stage, we weight the landmarks around the mouth and lips more heavily and estimate only the jaw open parameter along with the rigid alignment. The next stage estimates all available jaw-related blendshape parameters using the same set of landmarks. The final stage estimates all available jaw and mouth-related blendshapes as well as the rigid alignment using all available landmarks. See Figure 5. This process will also generally correct any overfitting introduced during the rigid alignment due to not being able to fully match the markers along the mouth. See Figure 6. Our approach naturally depends on the robustness of 3D-FAN's landmark detection on both the captured images and the synthetic renders. As seen in Figure 8, the optimization will try to target the erroneous markers, producing inaccurate θ, t, and w which overfit to the markers. Such frames should be considered a failure case and thus require using the optical flow approach described in Section 6 for infill. Alternatively, one could manually modify the multi-stage process for rigid alignment and expression estimation to remove the erroneous markers around the jaw; however, such an approach may then overfit to the potentially inaccurate mouth markers. We note that such concerns will gradually become less prominent as these networks improve.

Figure 8 (caption): From left to right: the result after rigid alignment, after expression estimation, and the captured image. Erroneous markers such as those around the jaw cause the optimization to land in an inaccurate local minimum.
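A sketch of the edge-preserving term under our reconstruction of E_2 above (our reading of the garbled equation, not the authors' exact code):

```python
import torch

def edge_preserving_energy(m_render, m_plate):
    """E2 sketch: match differences between consecutive landmarks, preserving
    the shape of the landmark polyline even where absolute positions disagree.

    m_render, m_plate: (68, 2) soft-argmax landmarks from the synthetic
    render and the captured image respectively."""
    edges_render = m_render[1:] - m_render[:-1]   # (67, 2) consecutive vectors
    edges_plate = m_plate[1:] - m_plate[:-1]
    return ((edges_plate - edges_render) ** 2).sum()

m_render = torch.rand(68, 2, requires_grad=True)
m_plate = torch.rand(68, 2)
edge_preserving_energy(m_render, m_plate).backward()
```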
Optical Flow Infill

Consider, for example, Figure 7, where frames 1142 and 1146 were solved for successfully and we wish to fill frames 1143, 1144, and 1145. We visualize the optical flow fields using the coloring scheme of [3]. We adopt our proposed approach from Section 6, whereby the parameters of frames 1143, 1144, and 1145 are first solved for sequentially starting from frame 1142. Then, the frames are solved again in reverse order starting from frame 1146. This back-and-forth process, which can be repeated multiple times, ensures that the infilled frames at the end of the sequence have not accumulated so much error that they no longer match the other known frame. Using optical flow information is preferable to using simple interpolation as it is able to more accurately capture any nonlinear motion in the captured images (e.g. the mouth staying open and then suddenly closing). In Figure 9, we compare the results of our optical flow approach to using linear interpolation for t and w and spherical linear interpolation for θ.

Multi-Camera

Our approach can trivially be extended to multiple calibrated camera viewpoints as it only entails adding a duplicate set of energy terms to the nonlinear least squares objective function. We demonstrate the effectiveness of this approach by applying the method of Sections 8.1 and 8.2 to the same performance captured using an identical ARRI Alexa XT Studio from another viewpoint. See Figure 10. We also compare the rigid alignment estimated by our automatic method to the rigid alignment created by a skilled matchmove artist for the same performance. The manual rigid alignment was performed by tracking the painted black dots on the face along with other manually tracked facial features. In comparison, our rigid alignment was done using only the markers detected by 3D-FAN on both the captured images and the synthetic renders. See Figure 11. Our approach using only features detected by 3D-FAN produces visually comparable results. In Figure 13, we assume the manually done rigid alignment is the "ground truth" and quantitatively evaluate the rigid alignment computed by the monocular and stereo solves. Both the monocular and stereo solves are able to recover similar rotation parameters, and the stereo solve is able to much more accurately determine the rigid translation. We note, however, that it is unlikely that the manually done rigid alignment can be considered "ground truth" as it more than likely contains errors as well.

Temporal Refinement

As seen in the supplementary video, the facial pose and expression estimations are generally temporally inconsistent. We adopt our proposed approach from Section 7. This attempts to mimic the captured temporal performance, which not only helps to better match the synthetic render to the captured image but also introduces temporal consistency between renders. While this is theoretically susceptible to noise in the optical flow field, we did not find this to be a problem. See Figure 12. We explore additional methods of performing temporal refinement in the supplementary material.

Figure 13 (caption): Assuming the manually done rigid alignment is the "ground truth," we measure the errors for the rigid parameters for the monocular and stereo cases.

Conclusion and Future Work

We have proposed and demonstrated the efficacy of a fully automatic pipeline for estimating facial pose and expression using pre-trained deep networks as the objective functions in traditional nonlinear optimization. Such an approach is advantageous as it removes the subjectivity and inconsistency of the artist.
Our approach heavily depends upon the robustness of the face detector and the facial alignment networks, and any failures in those cause the optimization to fail. Currently, we use optical flow to fix such problematic frames, and we leave exploring methods to automatically avoid problematic areas of the search space for future work. Furthermore, as the quality of these networks improves, our proposed approach would similarly benefit, leading to higher-fidelity results. While we have only explored using pre-trained facial alignment and optical flow networks, using other types of networks (e.g. face segmentation, face recognition, etc.) and using networks trained specifically on the vast repository of data from decades of visual effects work are exciting avenues for future work.

Acknowledgments

We would like to thank Industrial Light & Magic for supporting our efforts into facial performance capture. M.B. was supported in part by The VMWare Fellowship in Honor of Ole Agesen. J.W. was supported in part by the Stanford School of Engineering Fellowship. We would also like to thank Paul Huston for his acting.

Appendix A. Temporal Smoothing Alternatives

Figure 14 (third row) shows the results obtained by matching the synthetic render's optical flow to the captured image's optical flow (denoted plate flow in Figure 14). Although this generally produces accurate results when looking at each frame in isolation, adjacent frames may still obtain visually disjoint results (see the accompanying video). Thus, we explore additional temporal smoothing methods.

We first explore temporally smoothing the parameters (θ, t, and w) by computing a weighted average over a three-frame window centered at every frame. We weigh the current frame more heavily and use the method of [44] to average the rigid rotation parameters. While this approach produces temporally smooth parameters, it generally causes the synthetic render to no longer match the captured image. This inaccuracy is demonstrated in Figure 14 (top row, denoted as averaging) and is especially apparent around the nose (frames 1147 and 1148) and around the lower right cheek (frame 1150). One could also carry out averaging using an optical flow network. This can be accomplished by finding the parameters p_2 that minimize the difference in optical flow fields between the current frame's synthetic render and the adjacent frames' synthetic renders, i.e. ‖N(F_1, F_2) − N(F_2, F_3)‖_2. See Figure 14 (second row, designated self flow). This aims to minimize the second derivative of the motion of the head in the image plane; however, in practice, we found this method to have little effect on temporal noise while still causing the synthetic render to deviate from the captured image. These inaccuracies are most noticeable around the right cheek and lips. We found the most effective approach to temporal refinement to be a two-step process: First, we use averaging to produce temporally consistent parameter values. Then, starting from those values, we use the optical flow approach to make the synthetic render's flow better target that of the plate. See Figure 14 (bottom row, denoted hybrid). This hybrid approach produces temporally consistent results with synthetic renders that still match the captured image. Figure 15 shows the rigid parameters before and after using this hybrid approach, along with those obtained manually by a matchmove artist for reference.
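The three-frame rotation averaging referenced above might be sketched as follows, assuming the cited method [44] is an eigenvalue-based quaternion average and that the Euler angles θ are first converted to quaternions; the helper name and window weights are our own:

```python
import torch

def average_rotations(quats, weights):
    """Weighted quaternion average via the largest eigenvector of
    M = sum_i w_i q_i q_i^T; using q q^T makes the result invariant to the
    sign ambiguity q ~ -q of unit quaternions.

    quats: (n, 4) unit quaternions; weights: (n,) nonnegative weights."""
    m = (weights[:, None, None] * quats[:, :, None] * quats[:, None, :]).sum(0)
    _, eigvecs = torch.linalg.eigh(m)      # eigenvalues in ascending order
    return eigvecs[:, -1]                  # unit quaternion for the average

# Three-frame window with the current frame weighted more heavily
# (the exact weights are an assumption):
quats = torch.nn.functional.normalize(torch.rand(3, 4), dim=1)
q_avg = average_rotations(quats, torch.tensor([0.25, 0.5, 0.25]))
```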
Assuming the manual rigid alignment is the "ground truth," Figure 16 compares how far the rigid parameters are from their manually solved-for values both before and after the hybrid smoothing approach.

A.1. Expression Re-estimation

The expression estimation and temporal smoothing steps can be repeated multiple times until convergence to produce more accurate results. To demonstrate the potential of this approach, we re-estimate the facial expression by solving for the mouth and jaw blendshape parameters (a subset of w) while keeping the rigid parameters fixed after temporal smoothing. As seen in Figure 18, the resulting facial expression is generally more accurate than the pre-temporal-smoothing result. Furthermore, in the case where temporal smoothing dampens the performance, performing expression re-estimation will once again capture the desired expression (frame 1159).
4,267
1812.02899
2903664137
While much progress has been made in capturing high-quality facial performances using motion capture markers and shape-from-shading, high-end systems typically also rely on rotoscope curves hand-drawn on the image. These curves are subjective and difficult to draw consistently; moreover, ad-hoc procedural methods are required for generating matching rotoscope curves on synthetic renders embedded in the optimization used to determine three-dimensional facial pose and expression. We propose an alternative approach whereby these curves and other keypoints are detected automatically on both the image and the synthetic renders using trained neural networks, eliminating artist subjectivity and the ad-hoc procedures meant to mimic it. More generally, we propose using machine learning networks to implicitly define deep energies which when minimized using classical optimization techniques lead to three-dimensional facial pose and expression estimation.
Neural networks have been used for various other face image analysis tasks such as gender determination @cite_13 and face detection @cite_18 . More recently, deep CNNs have been used to improve face detection results especially in uncontrolled environments and with more extreme poses @cite_31 @cite_50 . Additionally, CNNs have been employed for face segmentation @cite_17 @cite_22 @cite_51 , facial pose and reflectance acquisition @cite_10 @cite_54 , and face recognition @cite_33 @cite_65 .
{ "abstract": [ "", "We show that even when face images are unconstrained and arbitrarily paired, face swapping between them is quite simple. To this end, we make the following contributions. (a) Instead of tailoring systems for face segmentation, as others previously proposed, we show that a standard fully convolutional network (FCN) can achieve remarkably fast and accurate segmentations, provided that it is trained on a rich enough example set. For this purpose, we describe novel data collection and generation routines which provide challenging segmented face examples. (b) We use our segmentations for robust face swapping under unprecedented conditions. (c) Unlike previous work, our swapping is robust enough to allow for extensive quantitative tests. To this end, we use the Labeled Faces in the Wild (LFW) benchmark and measure the effect of intra- and inter-subject face swapping on recognition. We show that our intra-subject swapped faces remain as recognizable as their sources, testifying to the effectiveness of our method. In line with established perceptual studies, we show that better face swapping produces less recognizable inter-subject results. This is the first time this effect was quantitatively demonstrated by machine vision systems.", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "We present a real-time deep learning framework for video-based facial performance capture---the dense 3D tracking of an actor's face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5--10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular video sequence of that subject. Since this 3D facial performance capture is fully automated, our system can drastically reduce the amount of labor involved in the development of modern narrative-driven video games or films involving realistic digital doubles of actors and potentially hours of animated dialogue per character. We compare our results with several state-of-the-art monocular real-time facial capture techniques and demonstrate compelling animation inference in challenging areas such as eyes and lips.", "", "In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. 
Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4,000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance.", "Face detection and alignment in unconstrained environment are challenging due to various poses, illuminations, and occlusions. Recent studies show that deep learning approaches can achieve impressive performance on these two tasks. In this letter, we propose a deep cascaded multitask framework that exploits the inherent correlation between detection and alignment to boost up their performance. In particular, our framework leverages a cascaded architecture with three stages of carefully designed deep convolutional networks to predict face and landmark location in a coarse-to-fine manner. In addition, we propose a new online hard sample mining strategy that further improves the performance in practice. Our method achieves superior accuracy over the state-of-the-art techniques on the challenging face detection dataset and benchmark and WIDER FACE benchmarks for face detection, and annotated facial landmarks in the wild benchmark for face alignment, while keeps real-time performance.", "We introduce the concept of unconstrained real-time 3D facial performance capture through explicit semantic segmentation in the RGB input. To ensure robustness, cutting edge supervised learning approaches rely on large training datasets of face images captured in the wild. While impressive tracking quality has been demonstrated for faces that are largely visible, any occlusion due to hair, accessories, or hand-to-face gestures would result in significant visual artifacts and loss of tracking accuracy. The modeling of occlusions has been mostly avoided due to its immense space of appearance variability. To address this curse of high dimensionality, we perform tracking in unconstrained images assuming non-face regions can be fully masked out. Along with recent breakthroughs in deep learning, we demonstrate that pixel-level facial segmentation is possible in real-time by repurposing convolutional neural networks designed originally for general semantic segmentation. We develop an efficient architecture based on a two-stream deconvolution network with complementary characteristics, and introduce carefully designed training samples and data augmentation strategies for improved segmentation accuracy and robustness. We adopt a state-of-the-art regression-based facial tracking framework with segmented face images as training, and demonstrate accurate and uninterrupted facial performance capture in the presence of extreme occlusion and even side views. Furthermore, the resulting segmentation can be directly used to composite partial 3D face models on the input images and enable seamless facial manipulation tasks, such as virtual make-up or face replacement.", "In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive.
To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.", "Sex identification in animals has biological importance. Humans are good at making this determination visually, but machines have not matched this ability. A neural network was trained to discriminate sex in human faces, and performed as well as humans on a set of 90 exemplars. Images sampled at 30×30 were compressed using a 900×40×900 fully-connected back-propagation network; activities of hidden units served as input to a back-propagation \"SexNet\" trained to produce values of 1 for male and 0 for female faces. The network's average error rate of 8.1% compared favorably to humans, who averaged 11.6%. Some SexNet errors mimicked those of humans.", "This paper formulates face labeling as a conditional random field with unary and pairwise classifiers. We develop a novel multi-objective learning method that optimizes a single unified deep convolutional network with two distinct non-structured loss functions: one encoding the unary label likelihoods and the other encoding the pairwise label dependencies. Moreover, we regularize the network by using a nonparametric prior as new input channels in addition to the RGB image, and show that significant performance improvements can be achieved with a much smaller network size. Experiments on both the LFW and Helen datasets demonstrate state-of-the-art results of the proposed algorithm, and accurate labeling results on challenging images can be obtained by the proposed algorithm for real-world applications."
Deep Energies for Estimating Three-Dimensional Facial Pose and Expression
These images are captured at a resolution of 2880 × 2160, but we downsample them to 720×540 before feeding them through our pipeline. We assume that the camera intrinsics and extrinsics have been pre-calibrated, the captured images have been undistorted, and that the face model described in Equation 1 has already been created. Furthermore, we assume that the face's rigid transform has been set such that the rendered face is initially visible and forward-facing in all the captured viewpoints. Rigid Alignment We estimate the rigid alignment (i.e. θ and t) of the face using 3D-FAN. We use an energy E 1 = W (N (F ) − N (F * )) where N are the image space coordinates of the facial landmarks as described in Section 4 and W is a perlandmark weighting matrix. Furthermore, we use an edgepreserving energy working. First, we only solve for t using all the landmarks except for those around the jaw to bring the initial state of the face into the general area of the face on the captured image. See Figure 3. We prevent the optimization from overfitting to the landmarks by limiting the maximum number of iterations. Next, we solve for both θ and t in three steps: using the non-jaw markers, using only the jaw markers, and using all markers. We perform these steps in stages as we generally found the non-jaw markers to be more reliable and use them to guide the face model to the approximate location before trying to fit to all existing markers. See E 2 = i (m F * i −m F * i−1 )−(m F i −m F i−1 ) wherem F * Expression Estimation We run a similar multi-stage process to estimate facial expression using the detected 3D-FAN landmarks. We use the same energy term E 1 as Section 8.1, but also introduce L2 regularization on the blendshape weights E 2 = λw with λ = 1 × 10 2 set experimentally. In the first stage, we weight the landmarks around the mouth and lips more heavily and estimate only the jaw open parameter along with Figure 8. From left to right: the result after rigid alignment, after expression estimation, and the captured image. Erroneous markers such as those around the jaw cause the optimization to land in an inaccurate local minima. the rigid alignment. The next stage estimates all available jaw-related blendshape parameters using the same set of landmarks. The final stage estimates all available jaw and mouth-related blendshapes as well as the rigid alignment using all available landmarks. See Figure 5. This process will also generally correct any overfitting introduced during the rigid alignment due to not being able to fully match the markers along the mouth. See Figure 6. Our approach naturally depends on the robustness of 3D-FAN's landmark detection on both the captured images and synthetic renders. As seen in Figure 8, the optimization will try to target the erroneous markers producing inaccurate θ, t, and w which overfit to the markers. Such frames should be considered a failure case and thus require using the optical flow approach described in Section 6 for infill. Alternatively, one could manually modify the multi-stage process for rigid alignment and expression estimation to remove the erroneous markers around the jaw; however, such an approach may then overfit to the potentially inaccurate mouth markers. We note that such concerns will gradually become less prominent as these networks improve. Optical Flow Infill Consider, for example, Figure 7 where frames 1142 and 1146 were solved for successfully and we wish to fill frames 1143, 1144, and 1145. 
We visualize the optical flow fields using the coloring scheme of [3]. We adopt our proposed approach from Section 6 whereby the parameters of frames 1143, 1144, and 1145 are first solved for sequentially starting from frame 1142. Then, the frames are solved again in reverse order starting from frame 1146. This back-and-forth process which can be repeated multiple times ensures that the infilled frames at the end of the sequence have not accumulated so much error that they no longer match the other known frame. Using optical flow information is preferable to using simple interpolation as it is able to more accurately capture any nonlinear motion in the captured images (e.g. the mouth staying open and then suddenly closing). We compare the results of our approach of using optical flow to using linear interpolation for t and w and spherical linear interpolation for θ in Figure 9. Multi-Camera Our approach can trivially be extended to multiple calibrated camera viewpoints as it only entails adding another duplicate set of energy terms to the nonlinear least squares objective function. We demonstrate the effectiveness of this approach by applying our approach from Sections 8.1 and 8.2 to the same performance captured using an identical ARRI Alexa XT Studio from another viewpoint. See Figure 10. We also compare the rigid alignment estimated by our automatic method to the rigid alignment created by a skilled matchmove artist for the same performance. The manual rigid alignment was performed by tracking the painted black dots on the face along with other manually tracked facial features. In comparison, our rigid alignment was done using only the markers detected by 3D-FAN on both the captured images and the synthetic renders. See Figure 11. Our approach using only features detected by 3D-FAN produces visually comparable results. In Figure 13, we assume the manually done rigid alignment is the "ground truth" and quantitatively evaluate the rigid alignment computed by the monocular and stereo solves. Both the monocular and stereo solves are able to recover similar rotation parameters, and the stereo solve is able to much more accurately determine the rigid translation. We note, however, that it is unlikely that the manually done rigid alignment can be con-Optical Flow sidered "ground truth" as it more than likely contains errors as well. Temporal Refinement As seen in the supplementary video, the facial pose and expression estimations are generally temporally inconsistent. We adopt our proposed approach from Section 7. This attempts to mimic the captured temporal performance which not only helps to better match the synthetic render to the captured image but also introduces temporal consistency between renders. While this is theoretically susceptible to noise in the optical flow field, we did not find this to be a problem. See Figure 12. We explore additional methods of performing temporal refinement in the supplementary ma- Figure 13. Assuming the manually done rigid alignment is the "ground truth," we measure the errors for rigid parameters for the monocular and stereo case. terial. Conclusion and Future Work We have proposed and demonstrated the efficacy of a fully automatic pipeline for estimating facial pose and expression using pre-trained deep networks as the objective functions in traditional nonlinear optimization. Such an approach is advantageous as it removes the subjectivity and inconsistency of the artist. 
Our approach heavily depends upon the robustness of the face detector and the facial alignment networks, and any failures in those cause the optimization to fail. Currently, we use optical flow to fix such problematic frames, and we leave exploring methods to automatically avoid problematic areas of the search space for future work. Furthermore, as the quality of these networks improve, our proposed approach would similarly benefit, leading to higher-fidelity results. While we have only explored using pre-trained facial alignment and optical flow networks, using other types of networks (e.g. face segmentation, face recognition, etc.) and using networks trained specifically on the vast repository of data from decades of visual effects work are exciting avenues for future work. Magic for supporting our efforts into facial performance capture. M.B. was supported in part by The VMWare Fellowship in Honor of Ole Agesen. J.W. was supported in part by the Stanford School of Engineering Fellowship. We would also like to thank Paul Huston for his acting. Figure 14 (third row) shows the results obtained by matching the synthetic render's optical flow to the captured image's optical flow (denoted plate flow in Figure 14). Although this generally produces accurate results when looking at each frame in isolation, adjacent frames may still obtain visually disjoint results (see the accompanying video). Thus, we explore additional temporal smoothing methods. Appendix A. Temporal Smoothing Alternatives We first explore temporally smoothing the parameters (θ, t, and w) by computing a weighted average over a three frame window centered at every frame. We weigh the current frame more heavily and use the method of [44] to average the rigid rotation parameters. While this approach produces temporally smooth parameters, it generally causes the synthetic render to no longer match the captured image. This inaccuracy is demonstrated in Figure 14 (top row, denoted as averaging) and is especially apparent around the nose (frames 1147 and 1148) and around the lower right cheek (frame 1150). One could also carry out averaging using an optical flow network. This can be accomplished by finding the parameters p 2 that minimize the difference in optical flow fields between the current frame's synthetic render and the adjacent frames' synthetic renders, i.e. N (F 1 , F 2 ) − N (F 2 , F 3 ) 2 . See Figure 14 (second row, designated self flow). This aims to minimize the second derivative of the motion of the head in the image plane; however, in practice, we found this method to have little effect on temporal noise while still causing the synthetic render to deviate from the captured image. These inaccuracies are most noticeable around the right cheek and lips. We found the most effective approach to temporal refinment to be a two step process: First, we use averaging to produce temporally consistent parameter values. Then, starting from those values, we use the optical flow approach to make the synthetic render flow better target that of the plate. See Figure 14 (bottom row, denoted hybrid). This hybrid approach produces temporally consistent results with synthetic renders that still match the captured image. Figure 15 shows the rigid parameters before and after using this hybrid approach, along with that obtained manually by a matchmove artist for reference. 
Assuming the manual rigid alignment is the "ground truth," Figure 16 compares how far the rigid parameters are from their manually solved for values both before and after the hybrid smoothing approach. A.1. Expression Reestimation The expression estimation and temporal smoothing steps can be repeated multiple times until convergence to produce more accurate results. To demonstrate the potential of this approach, we reestimate the facial expression by solving for the mouth and jaw blendshape parameters (a subset of w) while keeping the rigid parameters fixed after temporal smoothing. As seen in Figure 18, the resulting facial expression is generally more accurate than the pre-temporal smoothing result. Furthermore, in the case where temporal smoothing dampens the performance, performing expression re-estimation will once again capture the desired expression (frame 1159).
4,267
1812.02899
2903664137
While much progress has been made in capturing high-quality facial performances using motion capture markers and shape-from-shading, high-end systems typically also rely on rotoscope curves hand-drawn on the image. These curves are subjective and difficult to draw consistently; moreover, ad-hoc procedural methods are required for generating matching rotoscope curves on synthetic renders embedded in the optimization used to determine three-dimensional facial pose and expression. We propose an alternative approach whereby these curves and other keypoints are detected automatically on both the image and the synthetic renders using trained neural networks, eliminating artist subjectivity and the ad-hoc procedures meant to mimic it. More generally, we propose using machine learning networks to implicitly define deep energies which, when minimized using classical optimization techniques, lead to three-dimensional facial pose and expression estimation.
Using deep networks such as VGG-16 @cite_15 for losses has been shown to be effective for training other deep networks for tasks such as style transfer and super-resolution @cite_26. Such techniques have also been used for image generation @cite_61 and face swapping @cite_56. Furthermore, deep networks have been used in energies for traditional optimization problems for style transfer @cite_52, texture synthesis @cite_62, and image generation @cite_19 @cite_14. While @cite_52 @cite_62 use the L-BFGS method @cite_34 to minimize the resulting objective, @cite_19 @cite_14 use gradient descent methods @cite_32.
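For illustration, the following is a minimal PyTorch sketch of such a deep-feature ("perceptual") energy minimized by classical optimization over the image itself; the VGG-16 layer cut, variable names, and optimizer choice are assumptions for illustration rather than any cited paper's exact setup.

```python
import torch
import torchvision.models as models

# A frozen pre-trained network only *defines* the energy; it is not trained here.
vgg = models.vgg16(pretrained=True).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_energy(image, target):
    """L2 distance in VGG-16 feature space rather than pixel space."""
    return (vgg(image) - vgg(target)).pow(2).mean()

# Usage sketch: optimize the image itself with L-BFGS, as in style transfer.
# y = some 1x3x224x224 target image tensor
# x = torch.randn(1, 3, 224, 224, requires_grad=True)
# opt = torch.optim.LBFGS([x])
# def closure():
#     opt.zero_grad()
#     e = perceptual_energy(x, y)
#     e.backward()
#     return e
# opt.step(closure)
```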
{ "abstract": [ "Image-generating machine learning models are typically trained with loss functions based on distance in the image space. This often leads to over-smoothed results. We propose a class of loss functions, which we call deep perceptual similarity metrics (DeePSiM), that mitigate this problem. Instead of computing distances in the image space, we compute distances between image features extracted by deep neural networks. This metric better reflects perceptually similarity of images and thus leads to better results. We show three applications: autoencoder training, a modification of a variational autoencoder, and inversion of deep convolutional networks. In all cases, the generated images look sharp and resemble natural images.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "Example-based texture synthesis has been an active research problem for over two decades. Still, synthesizing textures with nonlocal structures remains a challenge. In this article, we present a texture synthesis technique that builds upon convolutional neural networks and extracted statistics of pretrained deep features. We introduce a structural energy, based on correlations among deep features, which capture the self-similarities and regularities characterizing the texture. Specifically, we show that our technique can synthesize textures that have structures of various scales, local and nonlocal, and the combination of the two.", "Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs. Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity. 
Code and supplementary material are available at this https URL .", "Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.", "Gradient descent optimization algorithms, while increasingly popular, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by. This article aims to provide the reader with intuitions with regard to the behaviour of different algorithms that will allow her to put them to use. In the course of this overview, we look at different variants of gradient descent, summarize challenges, introduce the most common optimization algorithms, review architectures in a parallel and distributed setting, and investigate additional strategies for optimizing gradient descent.", "", "Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG and SIFT more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. 
We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "" ], "cite_N": [ "@cite_61", "@cite_26", "@cite_62", "@cite_14", "@cite_52", "@cite_32", "@cite_56", "@cite_19", "@cite_15", "@cite_34" ], "mid": [ "2259643685", "2950689937", "2737664244", "2771305881", "2475287302", "2523246573", "", "2949987032", "1686810756", "" ] }
Deep Energies for Estimating Three-Dimensional Facial Pose and Expression
For high-end face performance capture, either motion capture markers [4] or markerless techniques such as shape-from-shading [1] or optical flow [64] are typically used; however, these methods are generally unable to capture the intricacies of the performance, especially around the lips. To obtain high-end results, artists hand-draw rotoscope curves on the captured image; then, a variety of techniques are used to construct similar curves on the synthetic render of the estimated pose and to determine correspondences between the hand-drawn and synthetically generated curves. The simplest such approach would be to use a predefined contour on the three-dimensional face model, clip it for occlusions, and create correspondences in a length-proportional way; although this provides some consistency to the curves generated on the synthetic render, it is quite difficult for an artist to emulate these curves. Thus, practical systems implement a number of ad-hoc methods in order to match the artist's more subjective interpretation. The inability to embed the artist's subjectivity into the optimization loop and onto the synthetic render, coupled with the artist's inability to faithfully reproduce procedurally generated curves, leaves a gaping chasm in the uncanny valley.

Although one might debate what works best in order to align a three-dimensional virtual model with an image, it is clearly the case that a consistent metric should be applied to evaluate whether the synthetic render and image are aligned. This motivates the employment of a machine learning algorithm to draw the rotoscope curves on both the captured image and the synthetic render, hoping that accurately representing the real-world pose would lead to a negligible difference between the two curves. Although one might reasonably expect that the differences between real and synthetic camera behavior, albedo, lighting, etc. may lead to different rotoscope curves being generated by the deep learning algorithm, GAN-like approaches [22,36] could be used to rectify such issues by training a network to draw the curves such that a discriminator cannot tell which curves were generated on real images versus synthetic renders. In this spirit, we embed deep network "evaluations" into classical optimization approaches that estimate facial pose and expression.

Figure 1 (excerpt). We feed that synthetic render through the network to produce a set of outputs (e), which are then compared to the outputs produced by the same network when feeding it the captured image (f).

Overview
In this paper, we advocate for a general strategy that uses classical optimization where the energy to be minimized is based on metrics ascertained from deep learning neural networks. In particular, this removes the rotoscope artist, and the ad-hoc contour drawing procedures meant to match the artist's work (or vice versa), from the pipeline.

Figure 2. A visual overview of our approach applied to facial landmark detection. We pass the full resolution image through a facial detector and crop the face out of the image. This crop is then resized to pass through the neural network, which outputs, in this case, heatmaps for every landmark. These heatmaps are processed using a soft-argmax operation to get facial landmark coordinates on the cropped and resized image. These positions are then transformed back onto the full resolution image before being used as part of the objective function. An identical process is performed for the synthetic render.
This pins advancements in three-dimensional facial pose and expression estimation to those being made in machine learning, which are advancing at a fast pace. Generally, we take the following approach: First, we estimate an initial rigid alignment of the three-dimensional face model to the two-dimensional image using a facial alignment network. Then, we estimate an initial guess for the jaw and mouth expression using the same network. Finally, we temporally refine the results and insert/repair failed frames (if/when necessary) using an optical flow network.

We use a blendshape model hybridized with linear blend skinning for a six degree of freedom jaw joint [33]; let w denote the parameters that drive the triangulated face surface x(w). The resulting surface has a rigid frame given by Euler angles θ, rotation matrix R(θ), and a translation t such that the final vertex positions are

x_R(θ, t, w) = R(θ) x(w) + t.  (1)

We note that other geometry such as the teeth can be trivially handled by Equation 1 as well. The geometry x_R is rendered using OpenDR [39], obtaining a rendered image F(x_R). As a precomputation, we estimate the face's albedo and nine coefficients for a spherical harmonics light [48] on a frame where the face is close to neutral; however, we note that a texture captured using a light stage [11] (for example) would potentially work just as well if not better. Then, our goal is to determine the parameters θ, t, and w that best match a given captured image F*. Both the pixels of the captured image F* and the pixels of the rendered image F(x_R) are fed through the same deep network to get two sets of landmark positions N(F*) and N(F). See Figure 1. We use the L2 norm of the difference between them,

‖N(F*) − N(F(x_R(θ, t, w)))‖²,  (2)

as the objective function to minimize via nonlinear least squares, which is solved using the Dogleg method [40] as implemented by Chumpy [38]. This requires computing the Jacobian of the energy function via the chain rule; ∂N/∂F, ∂F/∂x_R, and ∂x_R/∂p, where p is one of θ, t, and w, all need to be evaluated. We use OpenDR to compute ∂F/∂x_R, and Equation 1 yields ∂x_R/∂p. ∂N/∂F is computed by backpropagating through the trained network using one's deep learning library of choice; in this paper, we use PyTorch [47]. Note that for computational efficiency, we do not compute ∂N/∂F explicitly, but instead compute the Jacobian of Equation 2 with respect to the rendered image pixels output by F.
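The structure of this residual and its backpropagated derivative is easy to sketch in PyTorch. The following is illustrative rather than the paper's code: `net` is a hypothetical wrapper bundling detection, cropping, resizing, 3D-FAN, and the soft-argmax described shortly, and the per-entry backward loop is only meant to make the chain rule explicit.

```python
import torch

def landmark_residual(net, rendered, captured):
    """Residual of Equation 2: the same network N applied to the
    synthetic render and to the captured image."""
    with torch.no_grad():
        target = net(captured)          # N(F*) stays fixed during the solve
    return (net(rendered) - target).flatten()

def jacobian_wrt_pixels(net, rendered, captured):
    """dr/dF by backpropagation (one backward pass per residual entry),
    to be chained with dF/dx_R from OpenDR and dx_R/dp from Equation 1."""
    rendered = rendered.detach().clone().requires_grad_(True)
    r = landmark_residual(net, rendered, captured)
    rows = [torch.autograd.grad(r[i], rendered, retain_graph=True)[0].flatten()
            for i in range(r.numel())]
    return torch.stack(rows)            # shape: num_residuals x num_pixels
```

In practice, forming the full Jacobian entry by entry like this is expensive; the loop is shown only to expose where the Jacobian with respect to the rendered pixels enters the chain.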
Rigid Alignment
We first solve for an initial estimate of the rigid alignment of the face, i.e. θ and t, using the pre-trained 3D-FAN network [7]. Note that 3D-FAN, like most other facial alignment networks, requires a cropped and resized image of the face as input; we denote these two operations as C and S respectively. The cropping function requires the bounding box output of a face detector D; we use the CNN-based face detector implemented by Dlib [30]. The resize function resizes the crop to a resolution of 256 × 256 to feed into the network. The final image passed to the network is thus S(C(F(x_R), D(F(x_R))), D(F(x_R))), where we note that both the crop and resize functions depend on the output of the face detector; however, aggressively assuming that ∂D/∂F = 0 did not impede our ability to estimate a reasonable facial pose. Given D, C is merely a subset of the pixels of F, so ∂C/∂p = ∂F/∂p for all pixels within the detected bounding box and ∂C/∂p = 0 for all pixels outside. S resizes the crop using bilinear interpolation, so ∂S/∂C can be computed using the size of the detected bounding box.

3D-FAN outputs a tensor of size 68 × 64 × 64, i.e. each of the 68 landmarks has a 64 × 64 heatmap specifying the likelihood of a particular pixel containing that landmark. While one might difference the heatmaps directly, it is unlikely that this would sufficiently capture correspondences. Instead, we follow the approach of [15,58] and apply a differentiable soft-argmax function to the heatmaps, obtaining pixel coordinates for each of the 68 landmarks. That is, given the marker position m_i computed using the argmax function on heatmap H_i, we use a 3 × 3 patch of pixels M_i around m_i to compute the soft-argmax position as

m̂_i = ( Σ_{m ∈ M_i} m e^{β H_i(m)} ) / ( Σ_{m ∈ M_i} e^{β H_i(m)} ),  (3)

where β = 50 is set experimentally and H_i(m) returns the heatmap value at a pixel coordinate m. We found that using a small patch around the argmax landmark positions gives better results than running the soft-argmax operation on the entire heatmap. The soft-argmax function returns an image coordinate on the 64 × 64 heatmap, and these coordinates need to be remapped to the full resolution image to capture translation between the synthetic face render and the captured image. Thus, we apply inverse rescale S_m^{−1} and crop C_m^{−1} operations, i.e. m_i = C_m^{−1}(S_m^{−1}(4 m̂_i, D), D).
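A compact sketch of the patch-restricted soft-argmax of Equation 3, assuming tensor shapes matching 3D-FAN's 68 × 64 × 64 output (illustrative, not the paper's implementation):

```python
import torch

def patch_soft_argmax(heatmaps, beta=50.0, radius=1):
    """Equation 3: soft-argmax over a (2*radius+1)^2 window around each
    heatmap's argmax; differentiable with respect to the heatmap values."""
    L, H, W = heatmaps.shape                      # 68 x 64 x 64 for 3D-FAN
    peaks = heatmaps.view(L, -1).argmax(dim=1).tolist()
    coords = []
    for i in range(L):
        cy, cx = peaks[i] // W, peaks[i] % W
        y0, y1 = max(cy - radius, 0), min(cy + radius + 1, H)
        x0, x1 = max(cx - radius, 0), min(cx + radius + 1, W)
        win = heatmaps[i, y0:y1, x0:x1]
        wgt = torch.softmax((beta * win).flatten(), 0).view_as(win)
        xs = torch.arange(x0, x1, dtype=wgt.dtype)
        ys = torch.arange(y0, y1, dtype=wgt.dtype)
        coords.append(torch.stack([(wgt.sum(0) * xs).sum(),   # expected x
                                   (wgt.sum(1) * ys).sum()])) # expected y
    return torch.stack(coords)   # L x 2 coordinates on the 64 x 64 heatmap
```

The returned coordinates live on the 64 × 64 heatmap; as described above, they are scaled by 4 (64 → 256) and passed through the inverse crop to reach the full resolution image.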
Expression Estimation
After the rigid alignment determines θ and t, we solve for an initial estimate of the mouth and jaw blendshape parameters (a subset of w). Generally, one would use hand-drawn rotoscope curves around the lips to accomplish this as discussed in Section 2; however, given the multitude of problems associated with this method as discussed in Section 1, we instead turn to deep networks to accomplish the same goal. We use 3D-FAN in the same manner as discussed in Section 4 to solve for a subset of the blendshape weights w, keeping the rigid parameters θ and t fixed. It is sometimes beneficial or even preferred to also allow θ and t to be modified somewhat at this stage, although a prior energy term that penalizes these deviations from the values computed during the rigid alignment stage is often useful. We note that the ideal solution would be to instead create new network architectures and train new models designed specifically for detecting lip/mouth contours, especially since the 64 × 64 heatmaps generated by 3D-FAN are generally too low-resolution to detect fine mouth movements such as when the lips pucker. However, since our goal in this paper is to show how to leverage existing architectures and pre-trained networks, especially so one can benefit from the plethora of existing literature, for now we bootstrap the mouth and jaw estimation using the existing facial landmark detection in 3D-FAN.

Optical Flow for Missing Frames
The face detector used in Section 4 can sometimes fail; e.g. on our test sequence, Dlib's HOG-based detector failed on 20 frames while Dlib's CNN-based detector succeeded on all frames. We thus propose using optical flow networks to infer the rigid and blendshape parameters for failed frames by "flowing" these parameters from surrounding frames where the face detector succeeded. This is accomplished by assuming that the optical flow of the synthetic render from one frame to the next should be identical to the corresponding optical flow of the captured image. That is, given two synthetic renders F_1 and F_2 and two captured images F*_1 and F*_2, we can compute two optical flow fields N(F_1, F_2) and N(F*_1, F*_2) using FlowNet2 [23]. We resize the synthetic renders and captured images to a resolution of 512 × 512 before feeding them through the optical flow network. Assuming that F*_2 is the image the face detector failed on, we solve for the parameters p_2 of F_2, starting with an initial guess p_1 (the parameters of F_1), by minimizing the L2 difference between the flow field vectors, ‖N(F*_1, F*_2) − N(F_1, F_2)‖². ∂N/∂F_2 can be computed by backpropagating through the network.

Temporal Refinement
Since we solve for the rigid alignment and expression for all captured images in parallel, adjacent frames may produce visually disjointed results, either because of noisy facial landmarks detected by 3D-FAN or due to the nonlinear optimization converging to different local minima. Thus, we also use optical flow to refine temporal inconsistencies between adjacent frames. We adopt a method that can be run in parallel. Given three sequentially captured images F*_1, F*_2, and F*_3, we compute two optical flow fields N(F*_1, F*_2) and N(F*_2, F*_3). Similarly, we can compute N(F_1, F_2) and N(F_2, F_3). Then, we solve for the parameters p_2 of F_2 by minimizing the sum of the two L2 norms ‖N(F*_1, F*_2) − N(F_1, F_2)‖² and ‖N(F*_2, F*_3) − N(F_2, F_3)‖². The details for computing the Jacobian follow those in Section 6. Optionally, one may also wish to add a prior penalizing the parameters p_2 from deviating too far from their initial values. Here, step k of smoothing obtains a new set of parameters p_i^k using the parameters p_{i±1}^{k−1} from the last step; however, one could also use the updated parameter values p_{i±1}^k whenever available, in a Gauss-Seidel style approach. Alternatively, one could adopt a self-smoothing approach by ignoring the captured image's optical flow and solving for the parameters p_2 that minimize ‖N(F_1, F_2) − N(F_2, F_3)‖². Such an approach in effect minimizes the second derivative of the motion of the head in the image plane, causing any sudden motions to be smoothed out; however, since the energy function contains no knowledge of the data being targeted, it is possible for such a smoothing operation to cause the model to deviate from the captured image.

Figure 3. Left: The initial state, where the face is front-facing and centered in the image plane (note the figure is cropped). Right: The initial translation stage in the rigid alignment step roughly aligns the synthetic render of the face to the face in the captured image. (Note that we display the synthetic render without the estimated albedo for clarity, but the network sees the version with the albedo, as in Figures 1d and 1f, not 1c.)

While we focus on exploring deep learning based techniques, more traditional smoothing/interpolation techniques can also be applied in place of, or in addition to, the proposed optical flow approaches. Such methods include: spline fitting the rigid parameters and blendshape weights, smoothing the detected landmarks/bounding boxes on the captured images as a preprocess, smoothing each frame's parameters using the adjacent frames' estimations, etc.
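As a rough sketch of the temporal refinement residual (again not the paper's code), assuming a hypothetical `flow_net` wrapper around FlowNet2 operating on 512 × 512 inputs:

```python
import torch

def temporal_flow_residual(flow_net, renders, plates):
    """Render-to-render flows on both sides of frame 2 should match the
    corresponding plate-to-plate flows (Section 7); dropping the second
    term recovers the single-sided infill energy of Section 6."""
    F1, F2, F3 = renders                   # synthetic renders, frames 1..3
    P1, P2, P3 = plates                    # captured images ("plates")
    with torch.no_grad():                  # captured-image flows are fixed
        t12, t23 = flow_net(P1, P2), flow_net(P2, P3)
    r12 = (flow_net(F1, F2) - t12).flatten()
    r23 = (flow_net(F2, F3) - t23).flatten()
    return torch.cat([r12, r23])           # minimized over p_2 driving F2
```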
Results
We estimate the facial pose and expression on a moderately challenging performance captured by a single ARRI Alexa XT Studio running at 24 frames per second with a 180 degree shutter angle at ISO 800, where numerous captured images exhibit motion blur. These images are captured at a resolution of 2880 × 2160, but we downsample them to 720 × 540 before feeding them through our pipeline. We assume that the camera intrinsics and extrinsics have been pre-calibrated, that the captured images have been undistorted, and that the face model described in Equation 1 has already been created. Furthermore, we assume that the face's rigid transform has been set such that the rendered face is initially visible and forward-facing in all the captured viewpoints.

Rigid Alignment
We estimate the rigid alignment (i.e. θ and t) of the face using 3D-FAN. We use an energy E_1 = W(N(F) − N(F*)), where N are the image space coordinates of the facial landmarks as described in Section 4 and W is a per-landmark weighting matrix. Furthermore, we use an edge-preserving energy E_2 = Σ_i ‖(m̂_i^{F*} − m̂_{i−1}^{F*}) − (m̂_i^F − m̂_{i−1}^F)‖, where m̂^{F*} and m̂^F denote the soft-argmax landmark positions on the captured image and the synthetic render, respectively. First, we solve only for t, using all the landmarks except for those around the jaw, to bring the initial state of the face into the general area of the face on the captured image. See Figure 3. We prevent the optimization from overfitting to the landmarks by limiting the maximum number of iterations. Next, we solve for both θ and t in three steps: using the non-jaw markers, using only the jaw markers, and using all markers. We perform these steps in stages as we generally found the non-jaw markers to be more reliable, and we use them to guide the face model to the approximate location before trying to fit to all existing markers.

Expression Estimation
We run a similar multi-stage process to estimate facial expression using the detected 3D-FAN landmarks. We use the same energy term E_1 as in Section 8.1, but also introduce L2 regularization on the blendshape weights, E_2 = λw, with λ = 1 × 10² set experimentally. In the first stage, we weight the landmarks around the mouth and lips more heavily and estimate only the jaw open parameter along with the rigid alignment. The next stage estimates all available jaw-related blendshape parameters using the same set of landmarks. The final stage estimates all available jaw and mouth-related blendshapes as well as the rigid alignment using all available landmarks. See Figure 5. This process will also generally correct any overfitting introduced during the rigid alignment due to not being able to fully match the markers along the mouth. See Figure 6.

Figure 8. From left to right: the result after rigid alignment, after expression estimation, and the captured image. Erroneous markers, such as those around the jaw, cause the optimization to land in an inaccurate local minimum.

Our approach naturally depends on the robustness of 3D-FAN's landmark detection on both the captured images and synthetic renders. As seen in Figure 8, the optimization will try to target the erroneous markers, producing inaccurate θ, t, and w that overfit to the markers. Such frames should be considered a failure case and thus require using the optical flow approach described in Section 6 for infill. Alternatively, one could manually modify the multi-stage process for rigid alignment and expression estimation to remove the erroneous markers around the jaw; however, such an approach may then overfit to the potentially inaccurate mouth markers. We note that such concerns will gradually become less prominent as these networks improve.
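A minimal sketch of how one solve stage might stack these energies for the nonlinear least-squares solver; the function and variable names are assumptions, and stage selection is modeled here simply by zeroing rows of W:

```python
import numpy as np

def stage_residual(W, n_render, n_plate, blend_w, lam=1e2):
    """One solve stage (Sections 8.1-8.2 style): the per-landmark weighted
    term E1 = W (N(F) - N(F*)) stacked with the blendshape regularizer
    E2 = lam * w. Zeroing rows of W excludes markers from a stage, e.g.
    jaw-only or non-jaw-only fits."""
    e1 = W @ (n_render - n_plate).ravel()   # weighted landmark residual
    e2 = lam * blend_w                      # pulls unused shapes toward zero
    return np.concatenate([e1, e2])
```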
Optical Flow Infill
Consider, for example, Figure 7, where frames 1142 and 1146 were solved for successfully and we wish to fill frames 1143, 1144, and 1145. We visualize the optical flow fields using the coloring scheme of [3]. We adopt our proposed approach from Section 6, whereby the parameters of frames 1143, 1144, and 1145 are first solved for sequentially starting from frame 1142; then, the frames are solved again in reverse order starting from frame 1146. This back-and-forth process, which can be repeated multiple times, ensures that the infilled frames at the end of the sequence have not accumulated so much error that they no longer match the other known frame. Using optical flow information is preferable to using simple interpolation, as it is able to more accurately capture any nonlinear motion in the captured images (e.g. the mouth staying open and then suddenly closing). In Figure 9, we compare the results of our optical flow approach to using linear interpolation for t and w and spherical linear interpolation for θ.
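The back-and-forth schedule is easy to express; this sketch assumes a hypothetical per-frame `solve` that runs the flow-based optimization warm-started from a neighboring frame's parameters:

```python
def infill_back_and_forth(solve, params, first, last, sweeps=2):
    """Solve the missing frames between two known frames forward from the
    earlier one, then backward from the later one; repeated sweeps keep
    error from accumulating toward either end of the gap."""
    missing = list(range(first + 1, last))
    for s in range(sweeps):
        order = missing if s % 2 == 0 else missing[::-1]
        prev = first if s % 2 == 0 else last
        for f in order:
            params[f] = solve(f, init=params[prev])  # warm-start from neighbor
            prev = f
    return params
```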
Multi-Camera
Our approach can trivially be extended to multiple calibrated camera viewpoints, as it only entails adding another duplicate set of energy terms to the nonlinear least squares objective function. We demonstrate the effectiveness of this approach by applying the method from Sections 8.1 and 8.2 to the same performance captured using an identical ARRI Alexa XT Studio from another viewpoint. See Figure 10. We also compare the rigid alignment estimated by our automatic method to the rigid alignment created by a skilled matchmove artist for the same performance. The manual rigid alignment was performed by tracking the painted black dots on the face along with other manually tracked facial features; in comparison, our rigid alignment was done using only the markers detected by 3D-FAN on both the captured images and the synthetic renders. See Figure 11. Our approach using only features detected by 3D-FAN produces visually comparable results. In Figure 13, we assume the manually done rigid alignment is the "ground truth" and quantitatively evaluate the rigid alignment computed by the monocular and stereo solves. Both the monocular and stereo solves are able to recover similar rotation parameters, and the stereo solve is able to much more accurately determine the rigid translation. We note, however, that it is unlikely that the manually done rigid alignment can be considered "ground truth," as it more than likely contains errors as well.

Temporal Refinement
As seen in the supplementary video, the facial pose and expression estimations are generally temporally inconsistent. We adopt our proposed approach from Section 7. This attempts to mimic the captured temporal performance, which not only helps to better match the synthetic render to the captured image but also introduces temporal consistency between renders. While this is theoretically susceptible to noise in the optical flow field, we did not find this to be a problem. See Figure 12. We explore additional methods of performing temporal refinement in the supplementary material.

Figure 13. Assuming the manually done rigid alignment is the "ground truth," we measure the errors in the rigid parameters for the monocular and stereo cases.

Conclusion and Future Work
We have proposed and demonstrated the efficacy of a fully automatic pipeline for estimating facial pose and expression using pre-trained deep networks as the objective functions in traditional nonlinear optimization. Such an approach is advantageous as it removes the subjectivity and inconsistency of the artist. Our approach heavily depends upon the robustness of the face detector and the facial alignment networks, and any failures in those cause the optimization to fail. Currently, we use optical flow to fix such problematic frames, and we leave exploring methods to automatically avoid problematic areas of the search space for future work. Furthermore, as the quality of these networks improves, our proposed approach would similarly benefit, leading to higher-fidelity results. While we have only explored using pre-trained facial alignment and optical flow networks, using other types of networks (e.g. face segmentation, face recognition, etc.) and using networks trained specifically on the vast repository of data from decades of visual effects work are exciting avenues for future work.

Acknowledgments
We would like to thank Industrial Light & Magic for supporting our efforts into facial performance capture. M.B. was supported in part by The VMWare Fellowship in Honor of Ole Agesen. J.W. was supported in part by the Stanford School of Engineering Fellowship. We would also like to thank Paul Huston for his acting.

Appendix A. Temporal Smoothing Alternatives
Figure 14 (third row) shows the results obtained by matching the synthetic render's optical flow to the captured image's optical flow (denoted plate flow in Figure 14). Although this generally produces accurate results when looking at each frame in isolation, adjacent frames may still obtain visually disjoint results (see the accompanying video). Thus, we explore additional temporal smoothing methods.

We first explore temporally smoothing the parameters (θ, t, and w) by computing a weighted average over a three-frame window centered at every frame. We weight the current frame more heavily and use the method of [44] to average the rigid rotation parameters. While this approach produces temporally smooth parameters, it generally causes the synthetic render to no longer match the captured image. This inaccuracy is demonstrated in Figure 14 (top row, denoted as averaging) and is especially apparent around the nose (frames 1147 and 1148) and around the lower right cheek (frame 1150).

One could also carry out averaging using an optical flow network. This can be accomplished by finding the parameters p_2 that minimize the difference in optical flow fields between the current frame's synthetic render and the adjacent frames' synthetic renders, i.e. ‖N(F_1, F_2) − N(F_2, F_3)‖². See Figure 14 (second row, designated self flow). This aims to minimize the second derivative of the motion of the head in the image plane; however, in practice, we found this method to have little effect on temporal noise while still causing the synthetic render to deviate from the captured image. These inaccuracies are most noticeable around the right cheek and lips.

We found the most effective approach to temporal refinement to be a two-step process: First, we use averaging to produce temporally consistent parameter values. Then, starting from those values, we use the optical flow approach to make the synthetic render's flow better target that of the plate. See Figure 14 (bottom row, denoted hybrid). This hybrid approach produces temporally consistent results with synthetic renders that still match the captured image. Figure 15 shows the rigid parameters before and after using this hybrid approach, along with those obtained manually by a matchmove artist for reference.
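The averaging step of the hybrid approach might look as follows; this is a sketch with an assumed center weight, and the rotation parameters would be averaged with the method of [44] rather than linearly:

```python
import numpy as np

def window_average(values, w_center=0.6):
    """Three-frame weighted average of translation and blendshape
    parameters, weighting the current frame more heavily (w_center is an
    assumed value); endpoints are left untouched."""
    w_side = (1.0 - w_center) / 2.0
    vals = np.asarray(values, dtype=float)   # num_frames x num_params
    out = vals.copy()
    for i in range(1, len(vals) - 1):
        out[i] = w_side * vals[i - 1] + w_center * vals[i] + w_side * vals[i + 1]
    return out
```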
Assuming the manual rigid alignment is the "ground truth," Figure 16 compares how far the rigid parameters are from their manually solved-for values both before and after the hybrid smoothing approach.

A.1. Expression Re-estimation
The expression estimation and temporal smoothing steps can be repeated multiple times until convergence to produce more accurate results. To demonstrate the potential of this approach, we re-estimate the facial expression by solving for the mouth and jaw blendshape parameters (a subset of w) while keeping the rigid parameters fixed after temporal smoothing. As seen in Figure 18, the resulting facial expression is generally more accurate than the pre-temporal-smoothing result. Furthermore, in the case where temporal smoothing dampens the performance, performing expression re-estimation will once again capture the desired expression (frame 1159).
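The alternation can be summarized as follows; `smooth` and `solve_expression` are hypothetical stand-ins for the hybrid smoothing pass and the expression solve with rigid parameters held fixed:

```python
def smooth_then_reestimate(params, smooth, solve_expression, rounds=2):
    """Alternate temporal smoothing with expression re-estimation; the
    mouth/jaw subset of w is re-solved while the smoothed rigid
    parameters theta and t stay fixed."""
    for _ in range(rounds):
        params = smooth(params)            # e.g. the hybrid approach above
        params = solve_expression(params)  # rigid parameters held fixed
    return params
```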
4,267
1812.02868
2904815624
Deep reinforcement-learning methods have achieved remarkable performance on challenging control tasks. Observations of the resulting behavior give the impression that the agent has constructed a generalized representation that supports insightful action decisions. We re-examine what is meant by generalization in RL, and propose several definitions based on an agent's performance in on-policy, off-policy, and unreachable states. We propose a set of practical methods for evaluating agents with these definitions of generalization. We demonstrate these techniques on a common benchmark task for deep RL, and we show that the learned networks make poor decisions for states that differ only slightly from on-policy states, even though those states are not selected adversarially. Taken together, these results call into question the extent to which deep Q-networks learn generalized representations, and suggest that more experimentation and analysis is necessary before claims of representation learning can be supported.
Generalization has been cast as avoiding overfitting to a particular training environment, implying that sampling from diverse environments is necessary for generalization @cite_8 @cite_2. Other work has focused on generalization as improved performance in off-policy states, a framework much closer to standard approaches in supervised learning. Techniques such as adding stochasticity to the policy @cite_7, having the agent take random steps, no-ops, or steps from human play @cite_11, or probabilistically repeating the agent's previous action @cite_6 all force the agent to transition to off-policy states.
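For illustration, a minimal environment wrapper in the spirit of sticky actions might look as follows; the class name, interface, and the value p = 0.25 are assumptions, not any specific benchmark's API:

```python
import random

class StickyActionEnv:
    """With probability p, repeat the previous action instead of the
    agent's chosen one, pushing the agent into off-policy states."""
    def __init__(self, env, p=0.25):
        self.env, self.p, self.prev = env, p, None

    def reset(self):
        self.prev = None
        return self.env.reset()

    def step(self, action):
        if self.prev is not None and random.random() < self.p:
            action = self.prev          # "sticky" repeat of last action
        self.prev = action
        return self.env.step(action)
```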
{ "abstract": [ "Pseudo-random number generation on the Atari 2600 was commonly accomplished using a Linear Feedback Shift Register (LFSR). One drawback was that the initial seed for the LFSR had to be hard-coded into the ROM. To overcome this constraint, programmers sampled from the LFSR once per frame, including title and end screens. Since a human player will have some random amount of delay between seeing the title screen and starting to play, the LFSR state was effectively randomized at the beginning of the game despite the hard-coded seed. Other games used the player’s actions as a source of randomness. Notable pseudo-random games include Adventure in which a bat randomly steals and hides items around the game world and River Raid which used randomness to make enemy movements less predictable. Relying on the player to provide a source of randomness is not sufficient for computer controlled agents which are capable of memorizing and repeating pre-determined sequences of actions. Ideally, the games themselves would provide stochasticity generated from an external source such as the CPU clock. In practice, this was not an option presented by the hardware. Atari games are deterministic given a fixed policy leading to a set sequence of actions. This article discusses different approaches for adding stochasticity to Atari games and examines how effective each approach is at derailing an agent known to memorize action sequences. Additionally it is the authors’ hope that this article will spark discussion in the community over the following questions:", "", "The Arcade Learning Environment (ALE) is an evaluation platform that poses the challenge of building AI agents with general competency across dozens of Atari 2600 games. It supports a variety of different problem settings and it has been receiving increasing attention from the scientific community, leading to some high-profile success stories such as the much publicized Deep Q-Networks (DQN). In this article we take a big picture look at how the ALE is being used by the research community. We show how diverse the evaluation methodologies in the ALE have become with time, and highlight some key concerns when evaluating agents in the ALE. We use this discussion to present some methodological best practices and provide new benchmark results using these best practices. To further the progress in the field, we introduce a new version of the ALE that supports multiple game modes and provides a form of stochasticity we call sticky actions. We conclude this big picture look by revisiting challenges posed when the ALE was introduced, summarizing the state-of-the-art in various problems and highlighting problems that remain open.", "Recent years have witnessed significant progresses in deep Reinforcement Learning (RL). Empowered with large scale neural networks, carefully designed architectures, novel training algorithms and massively parallel computing devices, researchers are able to attack many challenging RL problems. However, in machine learning, more training power comes with a potential risk of more overfitting. As deep RL techniques are being applied to critical problems such as healthcare and finance, it is important to understand the generalization behaviors of the trained agents. In this paper, we conduct a systematic study of standard RL agents and find that they could overfit in various ways. 
Moreover, overfitting could happen \"robustly\": commonly used techniques in RL that add stochasticity do not necessarily prevent or detect overfitting. In particular, the same agents and learning algorithms could have drastically different test performance, even when all of them achieve optimal rewards during training. The observations call for more principled and careful evaluation protocols in RL. We conclude with a general discussion on overfitting in RL and a study of the generalization behaviors from the perspective of inductive bias.", "We present the first massively distributed architecture for deep reinforcement learning. This architecture uses four main components: parallel actors that generate new behaviour; parallel learners that are trained from stored experience; a distributed neural network to represent the value function or behaviour policy; and a distributed store of experience. We used our architecture to implement the Deep Q-Network algorithm (DQN). Our distributed algorithm was applied to 49 games from Atari 2600 games from the Arcade Learning Environment, using identical hyperparameters. Our performance surpassed non-distributed DQN in 41 of the 49 games and also reduced the wall-time required to achieve these results by an order of magnitude on most games." ], "cite_N": [ "@cite_7", "@cite_8", "@cite_6", "@cite_2", "@cite_11" ], "mid": [ "2100506021", "", "2963403143", "2797527950", "1658008008" ] }
Measuring and Characterizing Generalization in Deep Reinforcement Learning
Deep reinforcement learning (RL) has produced agents that can perform complex tasks using only pixel-level visual input data. Given the apparent competence of some of these agents, it is tempting to see them as possessing a deep understanding of their environments. Unfortunately, this intuition can be shown to be very wrong in some circumstances. Consider a deep RL agent responsible for controlling a self-driving car. Suppose the agent is trained on typical road surfaces but one day it needs to travel on a newly paved roadway. If the agent operates the vehicle erratically in this scenario, we would conclude that this agent has not formed a sufficiently general policy for driving. We provide a conceptual framework for thinking about generalization in RL. We contend that traditional notions that separate a training and testing set are misleading in RL because of the close relationship between the experience gathered during training and evaluations of the learned policy. With this context in mind, we address the question: To what extent do the accomplishments of deep RL agents demonstrate generalization, and how can we recognize such a capability when presented with only a black-box controller? We propose a view of generalization in RL based on an agent's performance in states it could not have encountered during training, yet that differ from on-policy states only in minor ways. Our approach requires only knowledge of the training environment, not access to the actual training episodes. The intuition is simple: To understand how an agent will perform across parts of the state space it could easily encounter and should be able to handle, expose it to states it could never have observed and measure its performance. Agents that perform well under this notion of generalization could rightfully be viewed as having mastered their environment. In this work, we make the following contributions: Recasting generalization. We define a range of types of generalization for value-based RL agents, based on an agent's performance in on-policy, off-policy, and unreachable states. We do so by establishing a correspondence between the well-understood notions of interpolation and extrapolation in prediction tasks and off-policy and unreachable states in RL. Empirical methodology. We propose a set of practical methods to: (1) produce off-policy evaluation states; and (2) use parameterized simulators and controlled experiments to produce unreachable states. Analysis case study. We demonstrate these techniques on a custom implementation of a common benchmark task for deep RL, the Atari 2600 game of AMIDAR. Our version, INTERVENIDAR, is fully parameterized, allowing us to manipulate the game's latent state, thus enabling an unprecedented set of experiments on a state-of-the-art deep Q-network architecture. We provide evidence that DQNs trained on pixel-level input can fail to generalize in the presence of non-adversarial, semantically meaningful, and plausible changes in an environment. Example. In AMIDAR, a Pac-Man-like video game, an agent moves a player around a two-dimensional grid, accumulating reward for each vertical and horizontal line segment the first time the player traverses it. An episode terminates when the player makes contact with one of the five enemies that also move along the grid. Consider the two executions of an agent's learned policy in Figure 1, starting from two distinct states, default and modified.
The default condition places the trained agent in the deterministic start position it experienced during training. The modified condition is identical, except that a single line segment has been filled in. While this exact state could never be observed during training, we would expect an agent that has learned appropriate representations and a generalized policy to perform well. Indeed, with a segment filled in, the agent is at least as close to completing the level as in the default condition. However, this small modification causes the agent to obtain an order of magnitude smaller reward. Importantly, this perturbation differs from an adversarial attack (Huang et al. 2017) on deep agents in that it influences the latent semantics of the state, not solely the agent's perception of that state. Our experiments expand on this representative example, enumerating a set of agents and perturbations. Recasting Generalization Using existing notions of generalization, such as held-out set performance, is complicated in RL for two reasons: (1) the training data is dependent on the agent's policy; and (2) the vastness of the state space in real-world applications means novel states are likely to be encountered at deployment time. One could imagine a procedure in RL that directly mimics evaluation on held-out samples by omitting some subset of training data from any learning steps. However, this methodology only evaluates the ability of a model to use data after it is collected, and ignores the effect of exploration on generalization. Under such a definition, we could incorrectly claim that an agent has learned a general policy, even if this policy performs well on only a very small subset of states. Instead, we focus on a definition that encapsulates the trained agent as a standalone entity, agnostic to the specific data it encountered during training. Generalization via State-Space Partitioning. We partition the universe of possible input states to a trained agent into three sets, according to how the agent can encounter them following its learned policy π from s_0 ∈ S_0. Here, Π is the set of all policy functions, and α, δ, and β are small positive values close to 0. We can think of δ and β as thresholds on estimation accuracy and optimality performance. The set of reachable states, S_reachable, is the set of states that an agent encounters with probability greater than α by following any π ∈ Π. The set of on-policy states, S_on, is the set of states that the agent encounters with probability greater than α by following π from s_0 ∈ S_0. The set of off-policy states, S_off, is defined as S_reachable \ S_on, and the set of unreachable states, S_unreachable, is defined as S \ S_reachable. Definition 1 (Repetition). An RL agent has high repetition performance, G_R, if δ > |v̂(s) − v_π(s)| and β > v*(s) − v_π(s) for all s ∈ S_on. Definition 2 (Interpolation). An RL agent has high interpolation performance, G_I, if δ > |q̂(s, a) − q_π(s, a)| and β > q*(s, a) − q_π(s, a) for all s ∈ S_off and a ∈ A. Definition 3 (Extrapolation). An RL agent has high extrapolation performance, G_E, if δ > |q̂(s, a) − q_π(s, a)| and β > q*(s, a) − q_π(s, a) for all s ∈ S_unreachable and a ∈ A. Note that S only includes states that are in the domain of T(s, a, s′). In other words, the specification of the transition function implicitly defines S, and by extension S_unreachable.
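For readability, the three definitions can be restated compactly in LaTeX (hats denote the agent's own estimates; all symbols as defined above):

```latex
% S_off = S_reachable \setminus S_on;  S_unreachable = S \setminus S_reachable
\begin{align*}
G_R:\quad & \delta > |\hat{v}(s) - v_\pi(s)| \;\text{and}\; \beta > v^*(s) - v_\pi(s),
            && \forall s \in S_{\mathrm{on}} \\
G_I:\quad & \delta > |\hat{q}(s,a) - q_\pi(s,a)| \;\text{and}\; \beta > q^*(s,a) - q_\pi(s,a),
            && \forall s \in S_{\mathrm{off}},\ a \in A \\
G_E:\quad & \text{same conditions as } G_I,
            && \forall s \in S_{\mathrm{unreachable}},\ a \in A
\end{align*}
```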
This definition is particularly important in the context of deep RL, as the dimensionality of the observable input space is typically much larger than |S|. If we wish to demonstrate that an agent generalizes well for AMIDAR, T(s, a, s′) would need to be well defined with respect to latent state variables in the AMIDAR game, such as player and enemy position. If we wish to demonstrate that an agent generalizes well for all Atari games, we would need T(s, a, s′) to be well defined with respect to latent state variables in other Atari games as well, such as the paddle position in Breakout. Given any reasonable bound on the MDP, we would not expect the agent to perform well when exposed to random configurations of pixels. Note that a large body of work implicitly uses G_R as a criterion for performance, even though this is the weakest of the generalization capabilities: it is what you get when testing a learned policy in the environment in which it was trained. Some readers may doubt that it is possible to learn policies that extrapolate well. However, Kansky et al. (2017) show that, with an appropriate representation, reinforcement learning can produce policies that extrapolate well under conditions similar to those we describe in this paper. What has not been shown to date is that deep RL agents can learn policies that generalize well from pixel-level input. We demonstrate a simple example of this state-space partition in Figure 2, a classic GRIDWORLD benchmark. In this environment, the agent begins each episode in a deterministic start position; can take the actions right, right-and-up, and right-and-down; and obtains a reward of +1 when it arrives at the goal state, s_g. Note that the agent must move right at every step; therefore, there are three regions that are unreachable from the agent's fixed start position: the upper left corner, the lower left corner, and the lower left corner after the wall. While unreachable, the upper left corner is a valid state that does not restrict the agent's ability to reach the goal state and obtain a large reward. Note that an agent interacting in the GRIDWORLD environment learns tabular Q-values, so we should not expect it to satisfy any reasonable definition of generalization. However, given an adequate exploration strategy, an agent could conceivably visit every off-policy state during training, resulting in v̂(s) converging to v*(s) for all s ∈ S_reachable. This agent would satisfy G_R and G_I for arbitrarily small values of δ and β. Despite this positive outcome, most observers would not say that this agent "generalizes", because it lacks any function-approximation method. Only the definition G_E is consistent with this conclusion. With the emergence of RL-as-a-service and concerns over proprietary RL technology, evaluators may not have access to an agent's training episodes, even if they have access to the training environments. In this context, the distinction between G_I and G_E is particularly important when measuring an agent's generalization performance, as off-policy states may have unknowingly been visited during training. Quantifying Generalization Error. Generalization in Q-value-based RL can be encapsulated by two measurements for off-policy and unreachable states: one that accounts for the condition δ > |q̂(s, a) − q_π(s, a)| (whether the agent's estimate is close to the actual Q-value after executing π), and another for the condition β > q*(s, a) − q_π(s, a) (whether the actual Q-value is close to the optimal Q-value).
In our work, we use the value estimate error, VEE_π(s) = v̂(s) − v_π(s), and the total accumulated reward, TAR_π(s) = E_π[ Σ_{k=1}^{∞} R(s_{t+k}) | s_t = s ], respectively. In most situations, q*(s, a) is not known explicitly; however, TAR_π(s) can be used to evaluate the relative generalization ability of two agents, as the optimal value for a given state is fixed by definition. Unlike TAR_π(s), which, when measured in isolation, can depend on the inherent difficulty of s, VEE_π(s) has the advantage of consistency. For example, if an agent is placed in a state such that v*(s) = 0, TAR_π(s) alone does not capture the model's ability to generalize; VEE_π(s) may, however, if v̂(s) ≈ 0. We address this limitation of TAR_π(s) in our experiments by training benchmark (BM) agents on each of the evaluation conditions. Empirical Methodology In this section, we describe specific techniques for producing off-policy states and a general methodology for producing unreachable states based on parameterized simulators and controlled experiments. Off-Policy States It is helpful to think of off-policy states as the set of states that a particular agent could encounter, but does not when executing its policy from s_0. Framed in this way, the task of generating off-policy states in practice is equivalent to finding agents with policies that differ from the policy of the agent under inspection. We present three distinct categories of alternative policies for producing off-policy states, which we believe encapsulate a broad set of historical methods for measuring generalization in RL. Stochasticity. One method for producing off-policy states is to introduce stochasticity into the policy of the agent under inspection (Machado et al. 2017). We present a representative method we call k off-policy actions (k-OPA), which causes the agent to execute some sequence of on-policy actions and then take k random actions to place it in an off-policy state. This method is scalable to large and complex environments, but care must be taken to avoid overlap between states, as well as to ensure that the episode does not terminate before the k actions are completed. It is easy to imagine other variations, where the k actions are not selected randomly but according to some other mechanism inconsistent with greedy action selection. Human Agents. The use of human agents has become a standard method for evaluating the generalization capabilities of RL agents. The most common method is known as human starts (HS) and is defined as exposing the agent to a state recorded by a human user interacting with an interface to the MDP environment (Mnih et al. 2015). One could easily imagine desirable variations on human starts within this general category, such as passing control back and forth between an agent and a human user. Human agents differ from other alternative agents in that they may not be motivated by the explicit reward function specified in the MDP, instead focusing on novelty or entertainment. Synthetic Agents. Synthetic agents are commonly used during training in multiagent scenarios, although to our knowledge they have not previously been used to evaluate an agent's generalization ability. We present a representative method we call agent swaps (AS), where the agent is exposed to a state midway through an alternative agent's trajectory.
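To make the k-OPA procedure and the two error measures concrete, here is a minimal sketch in Python. It assumes a Gym-style environment and a hypothetical `agent` object exposing `act(obs)` (greedy action) and `value(obs)` (the estimate v̂); none of these names come from the paper.

```python
import random

def k_opa_state(env, agent, n_onpolicy, k):
    """k off-policy actions (k-OPA): run the agent's greedy policy for
    n_onpolicy steps, then take k random actions to land in an off-policy
    state. Returns the resulting observation, or None if the episode ends
    before the k random actions complete."""
    obs = env.reset()
    for _ in range(n_onpolicy):
        obs, _, done, _ = env.step(agent.act(obs))   # on-policy steps
        if done:
            return None
    for _ in range(k):
        obs, _, done, _ = env.step(env.action_space.sample())  # random steps
        if done:
            return None
    return obs

def evaluate_from_state(env, agent, obs):
    """Roll out the agent's policy from the environment's current state to
    measure TAR_pi(s) (undiscounted accumulated reward, as in the text) and
    VEE_pi(s) = v_hat(s) - TAR_pi(s); agent.value(obs) is an assumed hook."""
    v_hat = agent.value(obs)
    tar, done = 0.0, False
    while not done:
        obs, reward, done, _ = env.step(agent.act(obs))
        tar += reward
    return tar, v_hat - tar   # (TAR, VEE)
```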
This method has the potential to be significantly more scalable than human starts in large and complex environments, but attention must be paid to avoiding overlap between the alternative agents and the agent under inspection. This method may also be useful in applications not amenable to a user interface, or in which human data is otherwise challenging to gather. Unreachable States Unreachable states are unlike off-policy states, which can be produced using carefully selected alternative agents. By definition, unreachable states require some modification to the training environment. We propose a methodology that is particularly well suited to applications of deep RL, where agents often only have access to low-level observable effects, rather than what we would typically describe as a semantically meaningful or high-level representation. In the case of AMIDAR and other Atari games, for example, the positions of individual entities can be described as latent state, and the rendered pixels are their observable effects. Intervening on Latent State. We present two distinct classes of interventions on latent state: existential, adding or removing entities, and parameterized, varying the value of an input parameter for an entity. The particular design of intervention categories and magnitudes should be based on expected sources of variation in the deployment environment, and will likely need to be customized for individual benchmarks. To facilitate this kind of intervention on latent state, we implemented INTERVENIDAR, an AMIDAR simulator. INTERVENIDAR closely mimics the Atari 2600 AMIDAR's behavior, while allowing users to modify board configurations, sprite positions, enemy movement behavior, and other features of gameplay without modifying INTERVENIDAR source code. Some manipulable features that we use in our experiments are: Enemy existence and movement. The five enemies in AMIDAR move at a constant speed along a fixed track. By default, INTERVENIDAR also has five enemies whose movement behavior is a time-based lookup table that mimics enemy position and speed in AMIDAR. Other distinct enemy movement behaviors include the following-the-perimeter and alternative movement protocols. These enemy behaviors are implemented as functions of the enemy's local board configuration and are used for our transfer learning experiments. Line segment existence and predicates. A line segment is any piece of track that intersects with another piece of track at both endpoints. Line segments may be filled or unfilled; the player's objective is to fill all of them. In INTERVENIDAR, users may specify which of the 88 line segments are filled at any timestep. Furthermore, INTERVENIDAR allows users to customize the quantity and position of line segments. Player/enemy positions. Player and enemy entities always begin a game in the same start positions in AMIDAR, but they may be moved to arbitrary locations at any point in INTERVENIDAR. We included these features in the experiments because they encapsulate what we believe to be the fundamental components of AMIDAR gameplay: avoiding death and navigating the board to accumulate reward. The scale of these interventions was selected to reflect a small change from the original environment, and is detailed in the case-study section. Control. In addition to producing unreachable states, parameterizable simulators enable fine control of experiments, informing researchers and practitioners about where agents fail to generalize, not simply that they fail macroscopically.
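As an illustration of the two intervention classes (existential vs. parameterized), the sketch below manipulates a hypothetical latent-state dictionary. INTERVENIDAR's real interface is not public, so the `state` layout and helper names here are assumptions, not the simulator's API.

```python
import random

def remove_enemies(state, n):
    """Existential intervention: delete n randomly chosen enemies (cf. ER)."""
    for enemy in random.sample(state["enemies"], n):
        state["enemies"].remove(enemy)
    return state

def shift_enemy(state, max_shift=20):
    """Parameterized intervention: advance one randomly chosen enemy by a
    random number of steps along its fixed track (cf. ES)."""
    enemy = random.choice(state["enemies"])
    enemy["track_index"] += random.randint(1, max_shift)
    return state

def fill_line_segments(state, n):
    """Parameterized intervention: mark n unfilled segments as filled (cf. FLS)."""
    unfilled = [seg for seg in state["segments"] if not seg["filled"]]
    for seg in random.sample(unfilled, n):
        seg["filled"] = True
    return state
```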
One limitation of using exclusively off-policy states is that multiple components of latent state may be confounded, making it challenging to disentangle the causes of brittleness from other differences between on-policy and off-policy states. Controlled experiments avoid this problem of confounding by modifying only a single component of latent state. Analysis Case Study: AMIDAR We trained a suite of agents and evaluated them on a series of on-policy, off-policy, and unreachable INTERVENIDAR states. Using our proposed partitioning of states and empirical methodology, we ran a series of experiments on these agents' ability to generalize. In this section, we discuss how we generated off-policy and unreachable states for the AMIDAR problem domain. We used the standard AMIDAR MDP specification for state: a three-dimensional tensor composed of greyscale pixel values for the current, and three previous, frames during gameplay (Mnih et al. 2015). There are five movement actions. The transition function is deterministic and entirely encapsulated by the AMIDAR game. The reward function is the difference between successive scores, truncated such that positive differences in score result in a reward of 1. There are no negative rewards, and state transitions with no change in score result in a reward of 0. We trained all agents using the state-of-the-art dueling network architecture, double Q-loss function, and prioritized experience replay (Van Hasselt, Guez, and Silver 2016; Wang et al. 2015). All of the training sessions in this paper used the same hyperparameters as in Mnih et al.'s work, and we use the OpenAI Baselines implementation (Dhariwal et al. 2017). AMIDAR Agents. We explored three types of modifications to network architecture and training regimens in an attempt to produce more generalized agents: (1) increasing dataset size by increasing training time; (2) broadening the support of the training data by increasing exploration at the start of each episode; and (3) reducing model capacity by decreasing network size and number of layers. To establish performance benchmarks for unreachable states, we trained an agent on each of the experimental extrapolation configurations. Training Time. To understand the effect of training-set size on generalization performance, we saved checkpoints of the parameters of the baseline DQN after 10, 20, 30, and 40 million training actions, before the model's training reward converged at approximately 50 million actions. This process differs from increasing training dataset size in prediction tasks in that increasing the number of training episodes simultaneously changes the distribution of states in the agent's experience replay. Exploring Starts. To increase the diversity of the agent's experience, we trained agents with 30 and 50 random actions at the beginning of each training episode before returning to the agent's standard ε-greedy exploration strategy. Model Capacity. To reduce the capacity of the Q-value function, we explored three architectural variations from the state-of-the-art dueling architecture: (1) reducing the size of the fully connected layers by half (256-HU), (2) reducing the number of channels in each of the three convolutional filters by half (HC), and (3) removing the last convolutional layer of the network (TL).
Recent work on deep networks for computer vision suggests that deeper architectures produce more hierarchical representations, enabling a higher degree of generalization (Krizhevsky, Sutskever, and Hinton 2012). Off-policy States. We employed three strategies to generate off-policy states for an agent: human starts, agent swaps, and k-OPA. None of these methods require the INTERVENIDAR system. In each case, we ran an agent nine times, for n steps, where n ∈ {100, 200, . . . , 900}. Human starts. Four individuals played 30 INTERVENIDAR games each. We randomly selected 75 action sequences lasting more than 1000 steps and extracted 9 states, taken at each of the n time steps (Nair et al. 2015). Agent swaps. We designated five of the trained agents as alternative agents: (1) the baseline agent, (2) the agent that starts with 50 random actions, (3) the agent with half the convolutional channels of the original architecture, (4) the agent with only two convolutional layers, and (5) the agent with 256 hidden units. We chose these agents with the belief that their policies would be sufficiently different from each other to provide some variation in off-policy states. k-OPA. Unlike the previous two cases, where states came from sources external to the agent, in this case we had every agent play the game for n steps before taking k random actions, where k was set to 10 and 20. Unreachable States. With INTERVENIDAR, we generated unreachable states, guaranteeing that the agent begins an episode in a state it has never encountered during training. All modifications to the board happen before gameplay. Modifications to enemies. We make one existential and one parameterized modification to enemies: we randomly remove between one and four enemies from the board (ER), and we shift one randomly selected enemy by n steps along its path, where n is drawn randomly between 1 and 20 (ES). Modifications to line segments. We make one existential and one parameterized modification to line segments: we add one new vertical line segment at a random location on the board (ALS), and we randomly fill between one and four non-adjacent unfilled line segments (FLS). Modification to player start position. We start the player in a randomly chosen unoccupied tile location that has at least one tile of buffer between the player and any enemies (PRS). Transfer Learning: Assessing Representations. We conducted a series of transfer learning experiments (Oquab et al. 2014), freezing the convolutional layers and retraining the fully connected layers for 25 million steps. We use these results to understand how the learned representations in the convolutional layers relate to overall generalization performance. We train each of the agents using the alternative enemy movement protocol, so that enemies move on the basis of local track features rather than using a lookup table. If an agent has learned useful representations in the convolutional layers, then we expect that agent to learn a new policy using those representations for the alternative movement protocol. [Figure 5: TAR and average VEE for control, extrapolation, and interpolation experiments. The agent consistently overestimates the state value. TAR and VEE are strongly anti-correlated. All TAR bars are normalized by the TAR of the control condition; all VEE bars are normalized by their respective TAR.]
Results Our experiments demonstrate that: (1) the state-of-the-art DQN has poor generalization performance for AMIDAR gameplay; (2) distance in the network's learned representation is strongly anti-correlated with generalization performance; (3) modifications to training volume, model capacity, and exploration have minor and sometimes counterintuitive effects on generalization performance; and (4) generalization performance does not necessarily correlate with an agent's ability to transfer representations to a new environment. Poor Generalization Performance. Figures 4 and 5 show that the fully trained state-of-the-art DQN dueling architecture produces a policy that is exceptionally brittle to small non-adversarial changes in the environment. The most egregious examples can be seen in Figure 5, in the filling line segments (FLS) and player random starts (PRS) interventions. Visual inspection of the action sequences preceding these states showed the agent predominantly remaining stationary, often terminating the episode without traversing a single line segment. This behavior can be seen in Figure 4, where PRS and FLS episodes terminate prematurely. Videos displaying this behavior can be found in the supplementary materials. Furthermore, Figure 5 shows that VEE and TAR are very highly anti-correlated across the experiments, indicating that the agent's ability to select appropriate actions is related to its ability to correctly measure the value of a particular state. We observe that the model always overestimates the value of off-policy and unreachable states. In contrast, the agent's value estimates are small and approximately symmetrically distributed around 0 in the control condition. Distance in Representation. By extracting the activations of the last layer of the DQN, we are able to observe the distance between training and evaluation states with respect to the network's learned representation. Figure 6 depicts the density estimates for the distribution of these distances. We find that the agent does not "recognize" the unreachable states where generalization is the worst, such as PRS and FLS, implying that the learned representation is inconsistent with these components of latent state. Alternatively, one could imagine a network that performs poorly by conflating states that are meaningfully different. Training Agents for Generalization. We take inspiration from well-established methods in supervised learning: increasing training set size, broadening the support of the training distribution, and reducing model capacity. We propose the following analogs to each of these methods, respectively: increasing the number of training episodes, introducing additional exploration, and removing layers and nodes. These experiments indicate that: (1) naïvely increasing the number of training episodes until training set performance converges reduces generalization; (2) some reductions to model capacity induce improvements to generalization; and (3) increasing exploration and otherwise diversifying training experience results in more generalized policies. These results are shown in Figure 3. Training Episodes. While increasing training time clearly increases the total accumulated reward in the control condition, shorter training times appear to contribute to increased generalization ability. This increase is minimal, but it does illustrate that naïvely increasing training time until convergence of training rewards may not be the best strategy for producing generalized agents. Model Capacity.
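The distance-in-representation probe described above can be sketched as follows, assuming a hypothetical `model.embed(state)` hook that returns the activations of the network's last hidden layer (the paper's analysis is over distributions of such distances):

```python
import numpy as np

def representation_distances(model, train_states, eval_states):
    """Embed states with the network's penultimate-layer activations and,
    for each evaluation state, measure the distance to its nearest training
    embedding; large distances suggest the network does not 'recognize'
    the evaluation state. `model.embed` is an assumed interface."""
    train_z = np.stack([model.embed(s) for s in train_states])  # (T, d)
    eval_z = np.stack([model.embed(s) for s in eval_states])    # (E, d)
    # Pairwise L2 distances via broadcasting: (E, T)
    dists = np.linalg.norm(eval_z[:, None, :] - train_z[None, :, :], axis=-1)
    return dists.min(axis=1)   # nearest-neighbour distance per eval state
```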
Of the reductions to model capacity, we find that shrinking the size of the fully connected layers results in the greatest increase in generalization performance across perturbations. Reducing the number of convolutional layers also results in improvements in generalization performance, particularly for the enemy perturbation experiments. Exploration Starts. We find that increasing the diversity of training experience has the greatest effect on generalization performance, particularly for the agent with 50 random actions. This agent experiences almost a twofold increase in total accumulated reward for human starts and all of the extrapolation experiments, and outperforms the baseline agent in every condition. Of particular interest is its performance on the enemy shift experiments, where its total accumulated reward approaches the reward achieved by an agent trained entirely in that scenario. Hierarchical Representations and Generalization. While the agents with increased exploration demonstrate a clear improvement in generalization ability over the baseline, this improvement is not mirrored in their ability to accumulate large reward with the alternative enemy-movement protocol after retraining. This finding contradicts results on representations in computer vision, where the transferability of representations corresponds directly to generalization ability. Conclusions Generalization in RL needs to be discussed more broadly, as a capability of an arbitrary agent. We propose framing generalization as the performance metric of the researcher's choice over a partition of on-policy, off-policy, and unreachable states. Our custom, parameterizable AMIDAR simulator is a proof of concept of the type of simulation environment needed for generating unreachable states and training truly general agents.
4,628
1812.02605
2903301099
In this paper, a unified approach to transfer learning is presented that addresses several source and target domain label-space and annotation assumptions with a single model. It is particularly effective in handling the challenging case where source and target label-spaces are disjoint, and it outperforms alternatives in both unsupervised and semi-supervised settings. The key ingredient is a common representation termed the Common Factorised Space. It is shared between source and target domains, and trained with an unsupervised factorisation loss and a graph-based loss. With a wide range of experiments, we demonstrate the flexibility, relevance and efficacy of our method, both in the challenging cases with disjoint label spaces, and in the more conventional cases such as unsupervised domain adaptation, where the source and target domains share the same label-sets.
Transfer learning (TL) aims to transfer knowledge from one domain/task to improve performance on another @cite_6 . The most widely used TL technique for deep networks is fine-tuning @cite_9 @cite_0 @cite_7 . Instead of training a target network from scratch, its weights are initialised from a model pre-trained on another task such as ImageNet @cite_19 classification. While fine-tuning reduces the labelling requirement compared to learning the target problem from scratch, it is prone to over-fitting if target labels are very few @cite_9 . Therefore, it is ineffective for very sparsely supervised DLSTL, and not applicable to unsupervised DLSTL. Moreover, vanilla TL does not exploit available unlabelled samples for the target problem (i.e. semi-supervised TL). The method most related to ours is @cite_17 , which does exploit both unlabelled and few labelled data, i.e., semi-supervised DLSTL. However, like other TL methods, it does not generalise to the unsupervised DLSTL setting where no target annotations are available.
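For illustration, a minimal PyTorch sketch of the vanilla fine-tuning recipe discussed above: initialise from a pretrained backbone, freeze the transferred weights, and train only a new target head. The backbone choice, 10-class head, and learning rate are placeholder assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Freeze an ImageNet-pretrained backbone and train only a new target head.
backbone = models.resnet50(pretrained=True)
for param in backbone.parameters():
    param.requires_grad = False               # keep transferred weights fixed
backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # new target classifier

optimiser = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
# ...standard supervised training loop over the small labelled target set...
```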
{ "abstract": [ "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.", "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. 
This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "Recent advances in object detection are mainly driven by deep learning with large-scale detection benchmarks. However, the fully-annotated training set is often limited for a target detection task, which may deteriorate the performance of deep detectors. To address this challenge, we propose a novel low-shot transfer detector (LSTD) in this paper, where we leverage rich source-domain knowledge to construct an effective target-domain detector with very few training examples. The main contributions are described as follows. First, we design a flexible deep architecture of LSTD to alleviate transfer difficulties in low-shot detection. This architecture can integrate the advantages of both SSD and Faster RCNN in a unified deep framework. Second, we introduce a novel regularized transfer learning framework for low-shot detection, where the transfer knowledge (TK) and background depression (BD) regularizations are proposed to leverage object knowledge respectively from source and target domains, in order to further enhance fine-tuning with a few target images. Finally, we examine our LSTD on a number of challenging low-shot detection experiments, where LSTD outperforms other state-of-the-art approaches. The results demonstrate that LSTD is a preferable deep detector for low-shot scenarios.", "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "" ], "cite_N": [ "@cite_7", "@cite_9", "@cite_6", "@cite_0", "@cite_19", "@cite_17" ], "mid": [ "2613718673", "2149933564", "2165698076", "2788210750", "2108598243", "" ] }
Disjoint Label Space Transfer Learning with Common Factorised Space
Deep learning methods are now widely used in diverse applications. However, their efficacy is largely contingent on a large amount of labelled data in the target task and domain of interest. This issue continues to motivate intense interest in cross-task and cross-domain knowledge transfer. A wide range of transfer learning settings have been considered, which differ in whether the label spaces of the source and target domains overlap (i.e., aligned or disjoint), as well as in the amount of supervision/labelled training samples available in the target domain (see Figure 1). The standard practice of fine-tuning (Yosinski et al. 2014) treats a pre-trained source model as a good initialisation for training a target problem model. It is adopted when the label spaces of the two domains are either aligned or disjoint, but always requires a significant amount of labelled data from the target, albeit less than learning from scratch. Another popular problem is unsupervised domain adaptation (UDA), where knowledge is transferred from a labelled source domain to an unlabelled target domain (Tzeng et al. 2017; Ganin et al. 2016; Cao, Long, and Wang 2018). UDA makes the simplifying assumption that the label spaces of the source and target domains are the same, and focuses on narrowing the distribution gap between source and target domains without any labelled samples from the target. An important but less-studied transfer learning setting is one where the source and target domains have disjoint label spaces, recently highlighted by (Luo et al. 2017). In these problems, which we term Disjoint Label Space Transfer Learning (DLSTL), there is both a domain shift between source and target and a new set of target classes to recognise, with few (semi-supervised case) or no (unsupervised case) labelled samples per category. Thus, two main challenges exist simultaneously. On one hand, there are few or no target labels to drive the adaptation. On the other hand, no clear path is provided to transfer source supervision to the target domain, due to the disjoint label spaces. As an example, consider object recognition in two cameras (domains) where the object categories (label-spaces) are different in each camera, and the source camera has dense labels, while the target camera has data but few or no labels. Traditional fine-tuning (Yosinski et al. 2014) and multi-domain training (Rebuffi, Bilen, and Vedaldi 2017) can address the supervised (few-label) DLSTL variant, but they break down if the labels are very few, and they cannot exploit unlabelled data in the target camera, i.e., semi-supervised learning. Meanwhile, UDA approaches (Ganin et al. 2016) based on distribution alignment are ineffective since the label-spaces are disjoint, and the feature distributions thus should not be made indistinguishable. One approach that has the potential to handle DLSTL under both unsupervised and semi-supervised settings is based on modelling attributes, which can serve as a bridge across domains for better transferring the discriminative power (Gebru, Hoffman, and Li 2017). Source and target data can be aligned within the attribute space, in order to alleviate the impact of disjoint label spaces in DLSTL problems. Nevertheless, attributes can be expensive to acquire, which prevents this approach from being widely applicable.
In this paper, a novel transfer learning model is proposed, which focuses on handling the most challenging setting, unsupervised DLSTL, but is applicable to other settings including semi-supervised DLSTL and UDA. The model, termed the common factorised space model (CFSM), is developed based on the simple idea that recognition should be performable in a shared latent factor space for both domains, where each factor can be interpreted as a latent attribute (Fu et al. 2014; Rastegari, Farhadi, and Forsyth 2012). In order to automatically discover such discriminative latent factors and align them for transferring knowledge across datasets/domains, our inductive bias is that input samples from both domains should generate low-entropy codes in this common space, i.e., near-binary codes (Salakhutdinov and Hinton 2009). This is a weaker assumption than distribution matching, but it does provide a criterion that can be optimised to align the two domains in the absence of a common label space and/or labelled target domain training samples. Specifically, both domains should be explainable in terms of the same set of discriminative latent factors with high certainty. As a result, discriminative information from the source domain can be more effectively transferred to the target through this common factorised space. To implement this model in a neural network architecture, a common factorised space (CFS) layer is inserted between the feature output layer (the penultimate layer) and the classification layer (the final layer). This layer is shared between both domains and thus forms a common space. An unsupervised factorisation loss is then derived and applied to this common space, serving to optimise the low-entropy criterion for discriminative latent factor discovery. Somewhat uniquely, cross-domain knowledge transfer in the proposed CFSM occurs at a relatively high layer (i.e., the CFS layer). Particularly when the target domain problem is a retrieval one, it is important that this knowledge is propagated down from the CFS to feature extraction for effective knowledge transfer. To assist this process, we define a novel graph Laplacian-based loss, which builds a graph in the higher-level CFS and regularises the lower-level network feature output to have a matching similarity structure, i.e., inter-sample similarity structure in the shared latent factor space should be reflected in earlier feature extraction. This top-down regularisation is opposite to the use of Laplacian regularisation in existing works (Belkin, Niyogi, and Sindhwani 2006), which are bottom-up, i.e., a graph from the lower level regularises the higher-level features. This unique design is due to the fact that, although both spaces (CFS and feature) are latent, the former is closer to the supervision (e.g., from the labelled source data) and more aligned thanks to the factorisation loss, and thus more discriminative and 'trustworthy'. Contributions of the paper are as follows: 1. A unified approach to transfer learning is proposed. It can be applied to different transfer learning settings but is particularly attractive in handling the most challenging setting of unsupervised DLSTL. This setting is under-studied, with the latest efforts focusing on the easier semi-supervised DLSTL setting (Luo et al. 2017) with partially labelled target data.
Several topical applications in computer vision, such as person re-identification (Re-ID) and sketch-based image retrieval (SBIR), can be interpreted as unsupervised DLSTL, which underlines its research and application value. 2. We propose a deep neural network based model, called the common factorised space model (CFSM), that provides the first simple yet effective method for unsupervised DLSTL; it can be easily extended to semi-supervised DLSTL as well as conventional UDA problems. 3. A novel graph Laplacian-based loss is proposed to better exploit the more aligned and discriminative supervision at the higher level to improve deep feature learning. Finally, comprehensive experiments on various transfer learning settings, from UDA to DLSTL, are conducted. CFSM achieves state-of-the-art results on both unsupervised and semi-supervised DLSTL problems and performs competitively in standard UDA. The effectiveness and flexibility of the proposed model on transfer learning problems are thus demonstrated. Deep Binary Representation Learning The use of binary codes for hashing with deep networks goes back to (Salakhutdinov and Hinton 2009). In computer vision, hashing layers were inserted between feature and classification layers to provide a hashing code (Lin et al. 2015). To produce a binary representation for fast retrieval, a threshold is applied to the sigmoid-activated hashing layer (Lin et al. 2015). Our method is similar in working with a near-binary penultimate layer. However, there are several key differences. First, our CFS serves a very different purpose to a hash code. We focus on TL to a new domain with a new label-space, and the role of our CFS is to provide a representation with which different domains can be more aligned for knowledge transfer, rather than for efficient retrieval. In contrast, existing hashing methods follow the conventional supervised learning paradigm within a single domain. Second, the proposed CFS is only near-binary due to a low-entropy loss, rather than sacrificing representation power for an exactly binary code. Semi-supervised Learning Graph-based regularisation is popular for semi-supervised learning (SSL), which uses both labelled and unlabelled data to achieve better performance than learning with labelled data only (Zhu 2006; Belkin, Niyogi, and Sindhwani 2006). In SSL, graph-based regularisation is applied to regularise model predictions to respect the feature-space manifold (Yue et al. 2017; Nadler, Srebro, and Zhou 2009; Belkin, Niyogi, and Sindhwani 2006). Moreover, exploiting a graph from the lower level to regularise higher-level features is widely adopted in other scenarios, e.g., unsupervised learning (Jia et al. 2015). Due to the source→target knowledge transfer, the more 'trustworthy' layer in our method is the penultimate CFS layer rather than the feature layer, as it is closer to the supervision. Therefore, our regularisation encourages the feature extractor to learn representations that respect the CFS manifold shared by both domains; i.e., the regularisation direction is opposite to that in existing models. Entropy loss for unlabelled data is another widely used SSL regulariser (Zhu 2006). It is applied at the classification layer in problems where the unlabelled and labelled data share the same label-space, and reflects the inductive bias that a classification boundary should not cut through dense unlabelled data regions. Its typical use is on softmax classifier outputs, where it encourages the classifier to pick a single label.
In contrast, we use the entropy loss to solve DLSTL problems by applying it element-wise to our intermediate CFS layer, weakly aligning the domains by encouraging them to share a near-binary representation. Methodology Definition and notation For Disjoint Label Space Transfer Learning (DLSTL), there is a source (labelled) domain S and a target (unlabelled or partially labelled) domain T. The key characteristic of DLSTL is the disjoint label space assumption, i.e., the source Y_S and target Y_T label spaces are potentially disjoint: Y_S ∩ Y_T = ∅. Instances from the source/target domains are denoted X_S and X_T respectively. The combined inputs {X_S, X_T} are denoted as X. To present our model, we stick mainly to the most challenging unsupervised DLSTL setting, where target labels are totally absent. The easier cases, e.g., semi-supervised DLSTL and UDA, can then be handled with minor modifications. Model Architecture The proposed model architecture consists of three modules: a feature extractor F = Φ_{θ_M}(X), which can be any deep neural network and is shared between all domains; a fully connected layer with sigmoid activation σ, which defines the Common Factorised Space (CFS) layer and provides a representation of dimension d_C, F_C = Ψ_{θ_C}(·) = σ(W Φ_{θ_M}(·) + b); and, for the labelled source domain only, a softmax classifier χ_{θ_S} with cross-entropy loss applied to the pre-activated F_C. Recall that the goal of the CFS is to learn a latent factor (low-entropy) representation for both source and target domains. The sigmoid activation means that the layer's outputs satisfy F_C ∈ (0, 1)^{d_C}, so activations near 0 or 1 can be interpreted as the corresponding latent factor being present or absent. To encourage a near-binary representation, the unsupervised factorisation loss is applied. The overall architecture is illustrated in Figure 2. Regularised Model Optimisation The parameters of the proposed CFSM are θ := {θ_M, θ_C, θ_S}, comprising the parameters of the feature extractor θ_M, the CFS layer θ_C and the source classifier θ_S. The training procedure can be formulated as maximum a posteriori (MAP) learning given labelled source data {X_S, Y_S} and unlabelled target data X_T: θ̂ = argmax_θ p(θ | X_S, Y_S, X_T), (1) where p(θ | X_S, Y_S, X_T) is the posterior of the model parameters θ given the data X_S, Y_S, X_T. This can be rewritten as p(θ | X_S, Y_S, X_T) ∝ p(θ, X_S, Y_S, X_T) ∝ p(Y_S | X_S, X_T, θ) p(θ | X_S, X_T). (2) So the optimisation in Eq. 1 is equivalently θ̂ = argmax_θ p(Y_S | X_S, θ) p(θ | X). (3) [Figure 2 (annotated 'Sampling mini-batch' in the original): The proposed model architecture. Different colours correspond to different data streams: green indicates source data, blue indicates target data, and purple indicates joint data from both source and target domains.] The first term p(Y_S | X_S, θ) in Eq. 3 represents the likelihood of the source labels w.r.t. θ. Optimising this term is a conventional supervised learning task, with a loss denoted ℓ_sup(X_S, Y_S; θ). The second term p(θ | X) in Eq. 3 is a prior depending on the input data X of both the source and target datasets. From an optimisation perspective, this is the regulariser, and it plays the key role in solving unsupervised DLSTL problems since it requires no labels. Given the model architecture, it can be further decomposed as p(θ | X) = p(θ_M, θ_C | X) = p(θ_C | θ_M, X) p(θ_M | X), (4) where θ_S is excluded since no supervision is used.
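A minimal PyTorch sketch of this architecture follows; the backbone, dimensions and names are placeholder assumptions. Note that, as stated above, the source classifier consumes the pre-activation CFS codes.

```python
import torch
import torch.nn as nn

class CFSM(nn.Module):
    """Sketch of the CFSM architecture described above: a shared feature
    extractor Phi, a sigmoid CFS layer Psi, and a source-domain softmax
    classifier chi. `backbone` and the dimensions are placeholders."""

    def __init__(self, backbone, feat_dim, d_c, n_source_classes):
        super().__init__()
        self.backbone = backbone                  # Phi_{theta_M}, shared
        self.cfs = nn.Linear(feat_dim, d_c)       # W, b of the CFS layer
        self.src_head = nn.Linear(d_c, n_source_classes)  # chi_{theta_S}

    def forward(self, x):
        feat = self.backbone(x)                   # F = Phi(x)
        pre = self.cfs(feat)                      # pre-activation CFS codes
        f_c = torch.sigmoid(pre)                  # F_C in (0, 1)^{d_C}
        logits = self.src_head(pre)               # classify pre-activations
        return feat, f_c, logits
```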
Specifically, the first term p(θ_C | θ_M, X) serves as the regulariser on the CFS layer, while the second term p(θ_M | X) regularises the deep feature extractor Φ_{θ_M}. Low-Entropy Regularisation: Unsupervised Adaptation We first discuss how to define the prior p(θ_C | θ_M, X) regulariser for the CFS layer. The sigmoid-activated outputs F_C of the CFS layer Ψ_{θ_C} can be interpreted as multi-label predictions over latent factors. The uncertainty of these predictions can be measured by their entropy: h(θ_C | θ_M, X) = − Σ_{i=1}^{N} ⟨F_{C,i}, log(F_{C,i})⟩ = − Σ_{i=1}^{N} ⟨Ψ_{θ_C}(x_i), log(Ψ_{θ_C}(x_i))⟩, (5) where F_{C,i} denotes the common factor representation Ψ_{θ_C}(x_i) of instance x_i ∈ X. This is applied to both source and target data, so N is the number of instances in both datasets; log(·) is applied element-wise, and ⟨·, ·⟩ is the vector inner product. According to the low-uncertainty criterion (Carlucci et al. 2017), optimising the prior term p(θ_C | θ_M, X) can be achieved by minimising this uncertainty measure; Eq. 5 is thus the regulariser corresponding to the prior p(θ_C | θ_M, X). Specifically, this loss biases the representation F_C towards more certain predictions, i.e., values closer to 0 or 1 for each discovered latent factor. We therefore call it the unsupervised factorisation loss. In summary, the low-entropy regulariser on the CFS is built upon the assumption that the two domains share a set of latent attributes, and that if a source classifier is well adapted to the target, then the presence or absence of these attributes should be certain for each instance. It thus generalises the low-uncertainty principle, widely used in the existing unsupervised and semi-supervised learning literature, to the disjoint label space setting. Graph Regularisation: Robust Feature Learning The second prior in Eq. 4 is p(θ_M | X), which acts as the regulariser for the feature extractor Φ_{θ_M}. The unique property of our setup so far is that knowledge transfer into the target domain happens via the CFS layer; we are therefore interested in ensuring that the feature extractor produces features whose similarity structure reflects that of the latent factors in the CFS layer. Unlike conventional graph Laplacian losses, which regularise higher-level features with a graph built on lower-level features (Belkin, Niyogi, and Sindhwani 2006; Zhu 2006), we do the reverse and regularise the feature extractor Φ_{θ_M} to reflect the similarity structure in F_C. This is particularly important for applications where the target problem is retrieval, because we use the deep features F = Φ_{θ_M}(·) as the image representation. The proposed graph loss is expressed as Tr(F^T Δ_{F_C} F), (6) where Δ_{F_C} is the graph Laplacian (Cai et al. 2011) built on the common space features F_C. Summary We unify the proposed model architecture θ := {θ_M, θ_C, θ_S} with source {X_S, Y_S} and target {X_T} data for unsupervised DLSTL problems from a maximum a posteriori (MAP) perspective. This decomposes into a standard supervised term p(Y_S | X_S, θ) (source data only) and data-driven priors for the CFS layer and the feature extraction module. They correspond to the supervised loss ℓ_sup(X_S, Y_S; θ), the unsupervised factorisation loss (Eq. 5) and the graph loss (Eq. 6) respectively. Taking all terms into account, the final optimisation objective of Eq. 3 is L(θ) = ℓ_sup(X_S, Y_S; θ) + β_M Tr(F^T Δ_{F_C} F) − β_C (1/N) Σ_{i=1}^{N} ⟨F_{C,i}, log(F_{C,i})⟩, (7) where β_C and β_M are balancing hyper-parameters.
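A minimal PyTorch sketch of the two regularisers and the combined objective (Eqs. 5-7) follows. The heat-kernel affinity used to build the mini-batch graph is an illustrative assumption; the paper specifies only that a graph Laplacian is built on the CFS codes (Cai et al. 2011).

```python
import torch

def factorisation_loss(f_c, eps=1e-8):
    """Unsupervised factorisation loss (Eq. 5, averaged over the batch):
    the element-wise term -<F_C, log F_C>, which pushes the sigmoid CFS
    activations towards 0/1. Applied to source and target samples alike."""
    return -(f_c * torch.log(f_c + eps)).sum(dim=1).mean()

def graph_loss(feat, f_c, sigma=1.0):
    """Graph loss (Eq. 6): Tr(F^T Delta_{F_C} F), with the Laplacian built
    on the CFS codes of the current mini-batch (the paper builds the graph
    per mini-batch). The heat-kernel affinity is an illustrative choice."""
    sq_dists = torch.cdist(f_c, f_c) ** 2        # pairwise distances in CFS
    w = torch.exp(-sq_dists / (2 * sigma ** 2))  # affinity matrix W
    lap = torch.diag(w.sum(dim=1)) - w           # Laplacian: Delta = D - W
    return torch.trace(feat.t() @ lap @ feat)

def total_loss(logits_src, y_src, feat_all, f_c_all, beta_m, beta_c):
    """Total objective (Eq. 7) on a mini-batch with equal numbers of source
    and target samples; the cross-entropy term sees only the source half."""
    sup = torch.nn.functional.cross_entropy(logits_src, y_src)
    return (sup + beta_m * graph_loss(feat_all, f_c_all)
                + beta_c * factorisation_loss(f_c_all))
```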
In order to select β C and β M , the model is first run by setting all weights to 1; after the first few iterations, we check the values of each loss. We then set the two hyper-parameters to rescale the losses to a similar range so that all three terms contribute approximately equally to the training. Mini-batch organisation Convolutional Neural Networks (CNNs) are usually trained with SGD mini-batch optimisation, but Eq. 7 is expressed in a full-batch fashion. Converting Eq. 7 to mini-batch optimisation is straightforward. However, it is worth mentioning the mini-batch scheduling: each mini-batch contains samples from both source and target domains. The supervised loss is applied only to source samples with corresponding supervision, the entropy and graph losses are applied to both, and the graph is built per-mini-batch. In this work, the number of source and target samples are equally balanced in a mini-batch. Experiments The proposed model is evaluated on progressively more challenging problems. First, we evaluate CFSM on unsupervised domain adaptation (UDA). Second, different DLSTL settings are considered, including semi-supervised DLSTL classification and unsupervised DLSTL retrieval. CFSM handles all these scenarios with minor modifications. The effectiveness CFSM is demonstrated by its superior performance compared to the existing work. Finally insight is provided through ablation study and visualisation analysis. (Luo et al. 2017). Our CFSM is pre-trained on the source dataset with cross-entropy supervision and d C = 50, followed by joint training on source and target with our regularisers as in Eq. 7. Since the labelspace is shared in UDA, we also apply entropy loss on the softmax classification of the target . We set β M = 0.001 and β C = 0.01. Results We compare our method with two baselines. Source only: Supervised training on the source and directly apply to target data. Joint FT: Model is initialised with source pre-train, and fine-tuning on both domains with supervised loss for source and semi-supervised entropy loss for target. We also compare several deep UDA methods including Gradient Reversal (Ganin et al. 2016), Domain Confusion (Tzeng et al. 2015), ADDA (Tzeng et al. 2017), Label Efficient Transfer (LET) (Luo et al. 2017), Asym. tri-training (Saito, Ushiku, and Harada 2017) and Respara (Rozantsev, Salzmann, and Fua 2018). As shown in Table 1, CFSM boosts the performance on both baselines with clear margin (25.5% and 9.3% vs. Source only and Joint FT respectively). Moreover, it is 5.5% higher than LET (Luo et al. 2017), the nearest competitor and only alternative that also addresses the DLSTL setting. Semi-supervised DLSTL: Digit Recognition Dataset and Settings We follow the semi-supervised DL-STL recognition experiment of (Luo et al. 2017) where again two digit datasets, SVHN and MNIST, are used. Images of digits 0 to 4 from SVHN are fully labelled as source data while images of digits 5 to 9 from MNIST are target data. The target dataset has sparse labels (k labels per class) and unlabelled images available. Thus we also add a classifier χ θ T after the CFS layer Ψ θ C for the target categories. The feature extractor architecture Φ θ M is exactly the same as in (Luo et al. 2017) for fair comparison. We pre-train CFSM on source data as initialisation, and then train it with both source and target data using only loss in Eq. 7. We set d C = 10, β M = β C = 0.01. The learning rate is 0.001 and the Adam (Kingma and Ba 2014) optimiser is used. 
Results The results for several degrees of target label sparsity k = 2, 3, 4, 5 (corresponding to 10, 15, 20, 25 labelled samples, or 0.034%, 0.050%, 0.066%, 0.086% of total target training data respectively), are reported in Table 2. Results are averaged over ten random splits as in (Luo et al. 2017). Besides the FT matching nets (Vinyals et al. 2016) and state-of-the-art LET results from (Luo et al. 2017), we run two baselines: Train Target: Training CFSM architecture from scratch with partially labelled target data only, and FT Target: The standard pre-train/fine-tune pipeline, i.e., pretrain on the labelled source, and fine-tune on the labelled target samples only. As shown in Table 2, the performances of baseline models are significantly lower than LET and the proposed CFSM. The Train Target baseline performs poorly as it is hard to achieve good performance with few target samples and no knowledge transfer from source. The Fine-Tune Target baseline performs poorly as the annotation here is too sparse for effective fine-tuning on the target problem. Fine-Tune matching nets follows the 5-way (k − 1)-shot learning with sparsely labelled target data only, but no improvement is shown over the other baselines. Our proposed CFSM consistently outperforms the state-of-the-art LET alternative. For example, under the most challenging setting (k = 2), CFSM is 1.8% higher than LET on mean accuracy and 0.2% lower on standard error. Unsupervised DLSTL: ReID and SBIR ReID The person re-identification (ReID) problem is to match person detections across camera views. Annotating person image identities in every camera in a camera network for training supervised models is infeasible. This motivates the topical unsupervised Re-ID problem of adapting a Re-ID model trained on one dataset with annotation to a new dataset without annotation. Although they are evaluated with retrieval metrics, contemporary Re-ID models are trained using identity prediction (classification) losses. This means that unsupervised Re-ID fits the unsupervised DLSTL setting, as the label-spaces (person identities) are different in different Re-ID datasets, and the target dataset has no labels. We adopt two highly contested large-scale benchmarks k = 2 k = 3 k = 4 k = 5 Train Target 66.5 ± 1.7 77.2 ± 1.1 83.0 ± 0.9 88. for unsupervised person Re-ID: Market (Zheng et al. 2015) and Duke . ImageNet pretrained Resnet50 (He et al. 2016) is used as the feature extractor Φ θ M . Cross-entropy loss with label smoothing and triplet loss are used for the source domain as supervised learning objectives. We set d C = 2048, β M = 2.0, β C = 0.01. Adam optimiser is used with learning rate 3.5e −4 . We treat each dataset in turn as source/target and perform unsupervised transfer from one to the other. Rank 1 (R1) accuracy and mean Average Precision (mAP) results on the target datasets are used as evaluation metrics. In Table 3, We show that our method outperforms the state-of-the-art alternatives purpose-designed for unsupervised person Re-ID: UMDL (Peng et al. 2016), PT-GAN (Wei et al. 2018), PUL (Fan, Zheng, and Yang 2017), CAMEL (Yu, Wu, and Zheng 2017), TJ-AIDL (Wang et al. 2018), SPGAN ) and MMFA (Lin et al. 2018). Note that TJ-AIDL and MMFA exploit attribute labels to help alignment and adaptation. The proposed method automatically discovers latent factors with no additional annotation. However, CFSM improves at least 3.0% over TJ-AIDL and MMFA on the R1 accuracy of both settings. 
FG-SBIR Fine-grained Sketch Based Image Retrieval (SBIR) focuses on matching a sketch with its corresponding photo (Sangkloy et al. 2016). As demonstrated in (Sangkloy et al. 2016), object category labels play an important role in retrieval performance, so existing studies make a closed world assumption, i.e., all testing categories overlap with training categories. However, if deploying SBIR in a real application such as e-commerce (Yu et al. 2016), one would like to train the SBIR system once on some source object categories, and then deploy it to provide sketch-based image retrieval of new categories without annotating new data and re-training for the target object category. Unsupervised adaptation to new categories without sketch-photo pairing labels is therefore another example of the unsupervised DL-STL problem. Comparing to Re-ID, where instances are person images in different camera views, instances in SBIR are either photos or hand-drawn sketches of objects. There are 125 object classes in the Sketchy dataset (Sangkloy et al. 2016). We randomly split 75 classes as a labelled source domain and use the remaining 50 classes to define an unlabelled target domain with disjoint label space. ImageNet pre-trained Inception-V3 (Szegedy et al. 2016) is used as the feature extractor Φ θ M . Cross-entropy and triplet loss are used for source supervision. We set d C = 512, β M = 10 −3 , β C = 0.1. Adam optimiser with learning rate 10 −4 is used. As a baseline, Source Only is the direct transfer alternative that uses the same architecture but trains on the source labelled data only, and is applied directly to the target without adaptation. The retrieval performance on unseen classes (tar. cls.) are reported. Results are averaged over 10 random splits. As shown in Table 4, the proposed CFSM improves the retrieval accuracy on unseen cases by 2.48%. Source only CFSM tar. cls. 23.74 ± 0.24 26.22 ± 0.25 Table 4: SBIR: Sketch-photo retrieval results (%). Averaged Rank 1 accuracy and standard error. Further Analysis Ablation study Unsupervised person Re-ID is chosen as the main benchmark for an ablation study. Firstly because it is a challenging and realistic large-scale problem in the unsupervised DLSTL setting, and secondly because it provides a bidirectional evaluation for more comprehensive analysis. The following ablated variants are proposed and compared with the full CFSM. Source Only: The proposed architecture is learned with source data and supervised losses only. Source+Regs: The regularisers, unsupervised factorisation and graph losses can be added with source dataset only. CFSM−Graph: Our method without the proposed graph loss. CFSM+ClassicGraph: Replacing our proposed graph loss with a conventional Laplacian graph (i.e., graphs constructed in lower-level feature space extracted by Φ θ M to regularise the proposed CFS). AE: Other regularisers such as feature reconstruction as in autoencoder (AE) is used to provide the prior term p(θ|X). We reconstruct the deep features F using the outputs of CFS layer as hidden representations. In this case both source and target data are used and the reconstruction error provides the regularisation loss. The results are shown in Table 5. Firstly, by comparing the variants that use source data only (Source Only and Source+Regs) with the joint training methods, we find they are consistently inferior. This illustrates that it is crucial to leverage target domain data for adaptation. 
Secondly, CFSM and its variants consistently achieve better results than AE, illustrating that our unsupervised factorisation loss and graph losses provide better regularisation for cross-domain/cross-task adaptation. The effectiveness of our graph loss is illustrated by two comparisons: (1) CFSM−Graph is worse than CFSM, showing the contribution of the graph loss; and (2) replacing our graph loss with the conventional Laplacian graph loss (CFSM+ClassicGraph) shows worse results than ours, justifying our choice of regularisation direction. Finally, we note that applying our regularisers to the source only (Source+Regs) still improves the performance slightly on target dataset vs Source Only. This shows that training with these regularisers has a small benefit to representation transferability even without adaptation. Visualisation analysis To understand the impact of unsupervised factorisation loss, Figure 3 illustrates the distribution of target CFS activations in the semi-supervised DLSTL setting (SVHN→MNIST). The left plot shows the activations without any such loss, leading to a distribution of moderate predictions peaked around 0.5. In contrast, the right plot shows the activation distribution on the target dataset of CFSM. We can see that our regulariser has indeed induced the target dataset to represent images with a low-entropy near-binary code. We also compare training a source model by adding low-entropy CFS loss, and then applying it to the target data. This leads to a low-entropy representation of the source data, but the middle plot shows that when transferred to the target dataset or adaptation the representation becomes high-entropy. That is, joint training with our losses is crucial to drive the adaptation that allows target dataset to be represented with near-binary latent factor codes. Qualitative Analysis We visualise the discovered latent attributes qualitatively. For each element in F C , we rank images in both source and target domains by their activation. Person images corresponding to the highest ten values of a specific F C are recorded. Figure 4 shows two example factors with images from the source (first row) and target (second row) dataset. We can see that the first example in Figure 4 ering both people's bags and clothes. The second example in Figure 4(b) is a higher-level latent attribute that is selective for both females, as well as textured clothes and bagcarrying. Importantly, these factors have become selective for the same latent factors across datasets, although the target dataset has no supervision (i.e., unsupervised DLSTL). Conclusion We studied a challenging transfer learning setting DLSTL, where the label space between source and target labels are disjoint, and the target dataset has few or no labels. In order to transfer the discriminative cues from the labelled source to the target, we propose a simple yet effective model which uses an unsupervised factorisation loss to discover a common set of discriminative latent factors between source and target datasets. And to improve feature learning for subsequent tasks such as retrieval, a novel graph-based loss is further proposed. Our method is both the first solution to the unsupervised DLSTL, and also uniquely provides a single framework that is effective at both unsupervised and semisupervised DLSTL as well as the standard UDA.
5,054
1812.02605
2903301099
In this paper, a unified approach to transfer learning is presented that addresses several source and target domain label-space and annotation assumptions with a single model. It is particularly effective in handling the challenging case where source and target label-spaces are disjoint, and outperforms alternatives in both unsupervised and semi-supervised settings. The key ingredient is a common representation termed the Common Factorised Space. It is shared between source and target domains, and trained with an unsupervised factorisation loss and a graph-based loss. With a wide range of experiments, we demonstrate the flexibility, relevance and efficacy of our method, both in the challenging cases with disjoint label spaces, and in the more conventional cases such as unsupervised domain adaptation, where the source and target domains share the same label-sets.
Entropy loss for unlabelled data is another widely used SSL regulariser @cite_42 @cite_12 . It is applied at the classification layer in problems where the unlabelled and labelled data share the same label-space -- and reflects the inductive bias that a classification boundary should not cut through the dense unlabelled data regions. Its typical use is on softmax classifier outputs where it encourages a classifier to pick a single label. In contrast we use entropy-loss to solve DLSTL problems by applying it element-wise on our intermediate CFS layer in order to weakly align domains by encouraging them to share a near-binary representation.
{ "abstract": [ "Door lock apparatus in which a door latch mechanism is operated by inner and outer door handles coupled to a latch shaft extending through the latch mechanism. Handles are coupled to ends of latch shaft by coupling devices enabling door to be locked from the inside to prevent entry from the outside but can still be opened from the inside by normal operation of outside handle. Inside coupling device has limited lost-motion which is used to operate cam device to unlock the door on actuation of inner handles.", "The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks." ], "cite_N": [ "@cite_42", "@cite_12" ], "mid": [ "2136504847", "2279034837" ] }
Disjoint Label Space Transfer Learning with Common Factorised Space
Deep learning methods are now widely used in diverse applications. However, their efficacy is largely contingent on a large amount of labelled data in the target task and domain of interest. This issue continues to motivate intense interest in cross-task and cross-domain knowledge transfer. A wide range of transfer learning settings have been considered, which differ in whether the label spaces of the source and target domains overlap (i.e., are aligned or disjoint), as well as in the amount of supervision/labelled training samples available in the target domain (see Figure 1). The standard practice of fine-tuning (Yosinski et al. 2014) treats a pre-trained source model as a good initialisation for training a target problem model. It is adopted when the label spaces of the two domains are either aligned or disjoint, but always requires a significant amount of labelled data from the target, albeit less than learning from scratch. Another popular problem is unsupervised domain adaptation (UDA), where knowledge is transferred from a labelled source domain to an unlabelled target domain (Tzeng et al. 2017; Ganin et al. 2016; Cao, Long, and Wang 2018). UDA makes the simplifying assumption that the label spaces of the source and target domains are the same, and focuses on narrowing the distribution gap between the source and target domains without any labelled samples from the target. An important but less-studied transfer learning setting is one where the source and target domains have disjoint label spaces, recently highlighted by (Luo et al. 2017). In these problems, which we term Disjoint Label Space Transfer Learning (DLSTL), there is both a domain shift between source and target, and a new set of target classes to recognise with few (semi-supervised case) or no (unsupervised case) labelled samples per category. Thus, two main challenges exist simultaneously. On one hand, there are few or no target labels to drive the adaptation. On the other hand, no clear path is provided to transfer source supervision to the target domain due to the disjoint label spaces. As an example, consider object recognition in two cameras (domains) where the object categories (label spaces) are different in each camera, and the source camera has dense labels, while the target camera has data but few or no labels. Traditional fine-tuning (Yosinski et al. 2014) and multi-domain training (Rebuffi, Bilen, and Vedaldi 2017) can address the supervised (few-label) DLSTL variant, but break down if the labels are very few, and cannot exploit unlabelled data in the target camera, i.e., semi-supervised learning. Meanwhile, UDA approaches (Ganin et al. 2016) based on distribution alignment are ineffective, since the label spaces are disjoint and the feature distributions thus should not be indistinguishable. One approach that has the potential to handle DLSTL under both unsupervised and semi-supervised settings is based on modelling attributes, which can serve as a bridge across domains for better transferring the discriminative power (Gebru, Hoffman, and Li 2017). Source and target data can be aligned within the attribute space, in order to alleviate the impact of the disjoint label spaces in DLSTL problems. Nevertheless, attributes can be expensive to acquire, which prevents this approach from being widely applicable.
In this paper, a novel transfer learning model is proposed, which focuses on handling the most challenging setting, unsupervised DLSTL, but is applicable to other settings including semi-supervised DLSTL and UDA. The model, termed the common factorised space model (CFSM), is developed based on the simple idea that recognition should be performable in a shared latent factor space for both domains, where each factor can be interpreted as a latent attribute (Fu et al. 2014; Rastegari, Farhadi, and Forsyth 2012). In order to automatically discover such discriminative latent factors and align them for transferring knowledge across datasets/domains, our inductive bias is that input samples from both domains should generate low-entropy codes in this common space, i.e., near-binary codes (Salakhutdinov and Hinton 2009). This is a weaker assumption than distribution matching, but does provide a criterion that can be optimised to align the two domains in the absence of a common label space and/or labelled target domain training samples. Specifically, both domains should be explainable in terms of the same set of discriminative latent factors with high certainty. As a result, discriminative information from the source domain can be more effectively transferred to the target through this common factorised space. To implement this model in a neural network architecture, a common factorised space (CFS) layer is inserted between the feature output layer (the penultimate layer) and the classification layer (the final layer). This layer is shared between both domains and thus forms a common space. An unsupervised factorisation loss is then derived and applied on this common space, which serves the purpose of optimising the low-entropy criterion for discriminative latent factor discovery. Somewhat uniquely, cross-domain knowledge transfer in the proposed CFSM occurs at a relatively high layer (i.e., the CFS layer). Particularly when the target domain problem is a retrieval one, it is important that this knowledge is propagated down from the CFS to feature extraction for effective knowledge transfer. To assist this process we define a novel graph Laplacian-based loss, which builds a graph in the higher-level CFS and regularises the lower-level network feature output to have matching similarity structure, i.e., the inter-sample similarity structure in the shared latent factor space should be reflected in earlier feature extraction. This top-down regularisation is opposite to the use of Laplacian regularisation in existing works (Belkin, Niyogi, and Sindhwani 2006), which are bottom-up, i.e., a graph built on lower-level features regularises the higher-level features. This unique design is due to the fact that, although both spaces (CFS and feature) are latent, the former is closer to the supervision (e.g., from the labelled source data) and more aligned thanks to the factorisation loss, and thus more discriminative and 'trustworthy'. Contributions of the paper are as follows: 1. A unified approach to transfer learning is proposed. It can be applied to different transfer learning settings but is particularly attractive in handling the most challenging setting of unsupervised DLSTL. This setting is under-studied, with the latest efforts focusing on the easier semi-supervised DLSTL setting (Luo et al. 2017) with partially labelled target data.
Several topical applications in computer vision, such as person re-identification (Re-ID) and sketch-based image retrieval (SBIR), can be interpreted as unsupervised DLSTL, which reveals its vital research and application value. 2. We propose a deep neural network based model, called the common factorised space model (CFSM), that provides the first simple yet effective method for unsupervised DLSTL; it can be easily extended to semi-supervised DLSTL as well as conventional UDA problems. 3. A novel graph Laplacian-based loss is proposed to better exploit the more aligned and discriminative supervision from the higher level to improve deep feature learning. Finally, comprehensive experiments on various transfer learning settings, from UDA to DLSTL, are conducted. CFSM achieves state-of-the-art results on both unsupervised and semi-supervised DLSTL problems and performs competitively in standard UDA. The effectiveness and flexibility of the proposed model on transfer learning problems are thus demonstrated. Deep Binary Representation Learning The use of binary codes for hashing with deep networks goes back to (Salakhutdinov and Hinton 2009). In computer vision, hashing layers were inserted between feature and classification layers to provide a hashing code (Lin et al. 2015). To produce a binary representation for fast retrieval, a threshold is applied on the sigmoid-activated hashing layer (Lin et al. 2015). Our method is similar in working with a near-binary penultimate layer. However, there are several key differences: First, our CFS serves a very different purpose to a hash code. We focus on transfer learning to a new domain with a new label space, and the role of our CFS is to provide a representation with which different domains can be more aligned for knowledge transfer, rather than for efficient retrieval. In contrast, existing hashing methods follow the conventional supervised learning paradigm within a single domain. Second, the proposed CFS is only near-binary due to a low-entropy loss, rather than sacrificing representation power for an exactly binary code. Semi-supervised Learning Graph-based regularisation is popular for semi-supervised learning (SSL), which uses both labelled and unlabelled data to achieve better performance than learning with labelled data only (Zhu 2006; Belkin, Niyogi, and Sindhwani 2006). In SSL, graph-based regularisation is applied to regularise model predictions to respect the feature-space manifold (Yue et al. 2017; Nadler, Srebro, and Zhou 2009; Belkin, Niyogi, and Sindhwani 2006). Moreover, exploiting a graph from the lower level to regularise higher-level features is widely adopted in other scenarios, e.g., unsupervised learning (Jia et al. 2015). Due to the source→target knowledge transfer, the more 'trustworthy' layer in our method is the penultimate CFS layer, as it is closer to the supervision, rather than the feature-space layer. Therefore our regularisation is applied to encourage the feature extractor to learn representations that respect the CFS manifold shared by both domains, i.e., the regularisation direction is opposite to that in existing models. Entropy loss for unlabelled data is another widely used SSL regulariser (Zhu 2006). It is applied at the classification layer in problems where the unlabelled and labelled data share the same label space, and reflects the inductive bias that a classification boundary should not cut through dense unlabelled data regions. Its typical use is on softmax classifier outputs, where it encourages a classifier to pick a single label.
In contrast, we use the entropy loss to solve DLSTL problems by applying it element-wise on our intermediate CFS layer, in order to weakly align domains by encouraging them to share a near-binary representation. Methodology Definition and notation For Disjoint Label Space Transfer Learning (DLSTL), there is a source (labelled) domain $S$ and a target (unlabelled or partially labelled) domain $T$. The key characteristic of DLSTL is the disjoint label space assumption, i.e., the source $Y_S$ and target $Y_T$ label spaces are potentially disjoint: $Y_S \cap Y_T = \emptyset$. Instances from the source/target domains are denoted $X_S$ and $X_T$ respectively. The combined inputs $\{X_S, X_T\}$ are denoted as $X$. To present our model, we stick mainly to the most challenging unsupervised DLSTL setting, where target labels are totally absent. The easier cases, e.g., semi-supervised DLSTL and UDA, can then be handled with minor modifications. Model Architecture The proposed model architecture consists of three modules. The first is a feature extractor $F = \Phi_{\theta_M}(X)$ that can be any deep neural network and is shared between all domains. This is followed by a fully connected layer and sigmoid activation $\sigma$, which define the Common Factorised Space (CFS) layer. This provides a representation of dimension $d_C$, $F_C = \Psi_{\theta_C}(\cdot) = \sigma(W \Phi_{\theta_M}(\cdot) + b)$. Recall that the goal of the CFS is to learn a latent factor (low-entropy) representation for both source and target domains. The sigmoid activation means that the layer's range is $F_C \in (0, 1)^{d_C}$, so activations near 0 or 1 can be interpreted as the corresponding latent factor being present or absent. To encourage a near-binary representation, an unsupervised factorisation loss is applied. For the labelled source domain only, the pre-activated $F_C$ are then classified by a softmax classifier $\chi_{\theta_S}$ with cross-entropy loss. The overall architecture is illustrated in Figure 2. Regularised Model Optimisation The parameters of the proposed CFSM are $\theta := \{\theta_M, \theta_C, \theta_S\}$, including the parameters of the feature extractor $\theta_M$, CFS layer $\theta_C$ and source classifier $\theta_S$. The training procedure can be formulated as maximum a posteriori (MAP) learning given labelled source data $\{X_S, Y_S\}$ and unlabelled target data $X_T$: $\hat{\theta} = \operatorname{argmax}_{\theta}\, p(\theta \mid X_S, Y_S, X_T), \quad (1)$ where $p(\theta \mid X_S, Y_S, X_T)$ is the posterior of the model parameters $\theta$ given the data $X_S, Y_S, X_T$. This can be rewritten as $p(\theta \mid X_S, Y_S, X_T) \propto p(\theta, X_S, Y_S, X_T) \propto p(Y_S \mid X_S, X_T, \theta)\, p(\theta \mid X_S, X_T). \quad (2)$ So the optimisation in Eq. 1 is equivalently $\hat{\theta} = \operatorname{argmax}_{\theta}\, p(Y_S \mid X_S, \theta)\, p(\theta \mid X). \quad (3)$ [Figure 2: The proposed model architecture. Different colours correspond to different data streams: green indicates source data, blue target data, and purple joint data from both source and target domains.] The first term $p(Y_S \mid X_S, \theta)$ in Eq. 3 represents the likelihood of the source labels w.r.t. $\theta$. Optimising this term is a conventional supervised learning task with a loss denoted $\ell_{\mathrm{sup}}(X_S, Y_S; \theta)$. The second term $p(\theta \mid X)$ in Eq. 3 is a prior depending on the input data $X$ of both source and target datasets. From an optimisation perspective, this is the regulariser that will play the key role in solving unsupervised DLSTL problems, since it requires no labels. Given the model architecture, it can be further decomposed as $p(\theta \mid X) = p(\theta_M, \theta_C \mid X) = p(\theta_C \mid \theta_M, X)\, p(\theta_M \mid X), \quad (4)$ where $\theta_S$ is excluded since no supervision is used.
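To make the architecture concrete, below is a minimal PyTorch sketch of the three modules. The backbone network, feature dimension, and class count are placeholders rather than values fixed by the text.

```python
import torch
import torch.nn as nn

class CFSM(nn.Module):
    """Minimal sketch of the CFSM architecture; the backbone and sizes are
    placeholders, since the text allows any deep feature extractor."""
    def __init__(self, backbone, feat_dim, d_c, n_source_classes):
        super().__init__()
        self.backbone = backbone                    # Phi_{theta_M}, shared by all domains
        self.cfs = nn.Linear(feat_dim, d_c)         # CFS layer Psi_{theta_C} (pre-sigmoid)
        self.classifier = nn.Linear(d_c, n_source_classes)  # chi_{theta_S}, source only

    def forward(self, x):
        f = self.backbone(x)                        # deep features F = Phi_{theta_M}(x)
        z = self.cfs(f)                             # pre-activation of the CFS layer
        fc = torch.sigmoid(z)                       # F_C in (0, 1)^{d_C}: near-binary code
        logits = self.classifier(z)                 # softmax classifier on pre-activated F_C
        return f, fc, logits
```

Consistent with the description above, the softmax classifier operates on the pre-activation of the CFS layer, while the sigmoid output $F_C$ feeds the unsupervised factorisation loss.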
Specifically, the first term $p(\theta_C \mid \theta_M, X)$ serves as the regulariser on the CFS layer, while the second term $p(\theta_M \mid X)$ regularises the deep feature extractor $\Phi_{\theta_M}$. Low-Entropy Regularisation: Unsupervised Adaptation We first discuss how to define the prior $p(\theta_C \mid \theta_M, X)$, the regulariser for the CFS layer. The sigmoid-activated outputs $F_C$ from the CFS layer $\Psi_{\theta_C}$ can be interpreted as multi-label predictions on latent factors. The uncertainty of these label predictions can be measured by their entropy $h(\theta_C \mid \theta_M, X) = -\sum_{i=1}^{N} \langle F_{C,i}, \log(F_{C,i}) \rangle = -\sum_{i=1}^{N} \langle \Psi_{\theta_C}(x_i), \log(\Psi_{\theta_C}(x_i)) \rangle, \quad (5)$ where $F_{C,i}$ denotes the common factor representation $\Psi_{\theta_C}(x_i)$ of instance $x_i \in X$. This is applied on both source and target data, so $N$ is the number of instances in both datasets. $\log(\cdot)$ is applied element-wise, and $\langle \cdot, \cdot \rangle$ is the vector inner product. According to the low-uncertainty criterion (Carlucci et al. 2017), optimising the prior term $p(\theta_C \mid \theta_M, X)$ can be achieved by minimising this uncertainty measure. Eq. 5 is thus the regulariser corresponding to the prior $p(\theta_C \mid \theta_M, X)$. Specifically, this loss biases the representation $F_C$ towards more certain predictions, e.g., closer to 0 or 1 for each discovered latent factor. Therefore, we denote it the unsupervised factorisation loss. In summary, the low-entropy regulariser on the CFS is built upon the assumption that the two domains share a set of latent attributes, and that if a source classifier is well adapted to the target, then the presence/absence of these attributes should be certain for each instance. Therefore, it essentially generalises the low-uncertainty principle (widely used in the existing unsupervised and semi-supervised learning literature) to the disjoint label space setting. Graph Regularisation: Robust Feature Learning The second prior in Eq. 4 is $p(\theta_M \mid X)$, which acts as the regulariser for the feature extractor $\Phi_{\theta_M}$. The unique property of our setup so far is that the knowledge transfer into the target domain is via the CFS layer; therefore we are interested in ensuring that the feature extractor network extracts features whose similarity structure reflects that of the latent factors in the CFS layer. Unlike conventional graph Laplacian losses that regularise higher-level features with a graph built on lower-level features (Belkin, Niyogi, and Sindhwani 2006; Zhu 2006), we do the reverse and regularise the feature extractor $\Phi_{\theta_M}$ to reflect the similarity structure in $F_C$. This is particularly important for applications where the target problem is retrieval, because we use the deep features $F = \Phi_{\theta_M}(\cdot)$ as the image representation. The proposed graph loss is expressed as $\operatorname{Tr}(F^{T} \Delta_{F_C} F), \quad (6)$ where $\Delta_{F_C}$ is the graph Laplacian (Cai et al. 2011) built on the common-space features $F_C$. Summary We unify the proposed model architecture $\theta := \{\theta_M, \theta_C, \theta_S\}$ with source $\{X_S, Y_S\}$ and target $\{X_T\}$ data for unsupervised DLSTL problems from a maximum a posteriori (MAP) perspective. This decomposes into a standard supervised term $p(Y_S \mid X_S, \theta)$ (source data only) and data-driven priors for the CFS layer and the feature extraction module. They correspond to the supervised loss $\ell_{\mathrm{sup}}(X_S, Y_S; \theta)$, the unsupervised factorisation loss (Eq. 5) and the graph loss (Eq. 6), respectively. Taking all terms into account, the final optimisation objective of Eq. 3 is $L(\theta) = \ell_{\mathrm{sup}}(X_S, Y_S; \theta) + \beta_M \operatorname{Tr}(F^{T} \Delta_{F_C} F) - \beta_C \frac{1}{N} \sum_{i=1}^{N} \langle F_{C,i}, \log(F_{C,i}) \rangle, \quad (7)$ where $\beta_C$ and $\beta_M$ are balancing hyper-parameters.
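A minimal sketch of the two regularisers follows, operating on mini-batch tensors f (deep features $F$) and fc (CFS codes $F_C$). The Gaussian affinity used to build the Laplacian, and detaching fc when building the graph, are our assumptions: the text only cites Cai et al. (2011) for the construction.

```python
import torch

def factorisation_loss(fc, eps=1e-8):
    # Eq. 5 as written: -<F_C, log F_C>, here averaged over the mini-batch;
    # eps avoids log(0) at saturated sigmoid outputs.
    return -(fc * torch.log(fc + eps)).sum(dim=1).mean()

def graph_loss(f, fc, sigma=1.0):
    # Eq. 6: Tr(F^T Delta_{F_C} F) with the Laplacian built on the CFS codes.
    fc = fc.detach()                          # assumption: no gradient through the graph
    d2 = torch.cdist(fc, fc) ** 2             # pairwise squared distances in the CFS
    w = torch.exp(-d2 / (2 * sigma ** 2))     # Gaussian affinity (an assumption)
    lap = torch.diag(w.sum(dim=1)) - w        # unnormalised Laplacian Delta = D - W
    return torch.trace(f.t() @ lap @ f) / f.shape[0]   # scaled by batch size
```

The full objective of Eq. 7 is then obtained as loss_sup + β_M · graph_loss(f, fc) + β_C · factorisation_loss(fc).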
In order to select $\beta_C$ and $\beta_M$, the model is first run with all weights set to 1; after the first few iterations, we check the value of each loss. We then set the two hyper-parameters to rescale the losses to a similar range, so that all three terms contribute approximately equally to the training. Mini-batch organisation Convolutional Neural Networks (CNNs) are usually trained with mini-batch SGD optimisation, but Eq. 7 is expressed in a full-batch fashion. Converting Eq. 7 to mini-batch optimisation is straightforward. However, it is worth mentioning the mini-batch scheduling: each mini-batch contains samples from both source and target domains. The supervised loss is applied only to source samples with corresponding supervision, the entropy and graph losses are applied to both, and the graph is built per mini-batch. In this work, the numbers of source and target samples in a mini-batch are equally balanced. Experiments The proposed model is evaluated on progressively more challenging problems. First, we evaluate CFSM on unsupervised domain adaptation (UDA). Second, different DLSTL settings are considered, including semi-supervised DLSTL classification and unsupervised DLSTL retrieval. CFSM handles all these scenarios with minor modifications. The effectiveness of CFSM is demonstrated by its superior performance compared to existing work. Finally, insight is provided through an ablation study and visualisation analysis. Unsupervised Domain Adaptation Dataset and Settings We follow the UDA protocol of (Luo et al. 2017), using the digit datasets SVHN and MNIST. Our CFSM is pre-trained on the source dataset with cross-entropy supervision and $d_C = 50$, followed by joint training on source and target with our regularisers as in Eq. 7. Since the label space is shared in UDA, we also apply the entropy loss on the softmax classification of the target data. We set $\beta_M = 0.001$ and $\beta_C = 0.01$. Results We compare our method with two baselines. Source only: supervised training on the source, directly applied to target data. Joint FT: the model is initialised with source pre-training, then fine-tuned on both domains with a supervised loss for the source and a semi-supervised entropy loss for the target. We also compare several deep UDA methods, including Gradient Reversal (Ganin et al. 2016), Domain Confusion (Tzeng et al. 2015), ADDA (Tzeng et al. 2017), Label Efficient Transfer (LET) (Luo et al. 2017), Asym. tri-training (Saito, Ushiku, and Harada 2017) and Respara (Rozantsev, Salzmann, and Fua 2018). As shown in Table 1, CFSM boosts performance over both baselines by a clear margin (25.5% and 9.3% vs. Source only and Joint FT, respectively). Moreover, it is 5.5% higher than LET (Luo et al. 2017), the nearest competitor and the only alternative that also addresses the DLSTL setting. Semi-supervised DLSTL: Digit Recognition Dataset and Settings We follow the semi-supervised DLSTL recognition experiment of (Luo et al. 2017), where again two digit datasets, SVHN and MNIST, are used. Images of digits 0 to 4 from SVHN are fully labelled as source data, while images of digits 5 to 9 from MNIST are target data. The target dataset has sparse labels (k labels per class) and unlabelled images available. Thus we also add a classifier $\chi_{\theta_T}$ after the CFS layer $\Psi_{\theta_C}$ for the target categories. The feature extractor architecture $\Phi_{\theta_M}$ is exactly the same as in (Luo et al. 2017) for fair comparison. We pre-train CFSM on source data as initialisation, and then train it with both source and target data using the loss in Eq. 7. We set $d_C = 10$, $\beta_M = \beta_C = 0.01$. The learning rate is 0.001 and the Adam (Kingma and Ba 2014) optimiser is used.
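The scheduling just described can be sketched as a single joint training step, reusing the model and loss sketches above. The equal source/target halves, the supervised loss on the source half only, and the per-mini-batch graph are as stated in the text; everything else (optimiser, batch sizes) is an assumption.

```python
import torch
import torch.nn.functional as F_nn

def train_step(model, opt, xs, ys, xt, beta_m=0.001, beta_c=0.01):
    """One joint mini-batch step (hypothetical helper; reuses the CFSM,
    factorisation_loss and graph_loss sketches above)."""
    opt.zero_grad()
    x = torch.cat([xs, xt], dim=0)                   # equal source and target halves
    f, fc, logits = model(x)
    n_s = xs.shape[0]
    loss_sup = F_nn.cross_entropy(logits[:n_s], ys)  # supervised: source half only
    loss_ent = factorisation_loss(fc)                # entropy loss: both domains
    loss_graph = graph_loss(f, fc)                   # graph built per mini-batch
    loss = loss_sup + beta_m * loss_graph + beta_c * loss_ent
    loss.backward()
    opt.step()
    return loss.item()
```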
Results The results for several degrees of target label sparsity, k = 2, 3, 4, 5 (corresponding to 10, 15, 20, 25 labelled samples, or 0.034%, 0.050%, 0.066%, 0.086% of the total target training data, respectively), are reported in Table 2. Results are averaged over ten random splits as in (Luo et al. 2017). Besides the FT matching nets (Vinyals et al. 2016) and state-of-the-art LET results from (Luo et al. 2017), we run two baselines. Train Target: training the CFSM architecture from scratch with the partially labelled target data only. FT Target: the standard pre-train/fine-tune pipeline, i.e., pre-train on the labelled source, and fine-tune on the labelled target samples only. As shown in Table 2, the performance of the baseline models is significantly lower than LET and the proposed CFSM. The Train Target baseline performs poorly as it is hard to achieve good performance with few target samples and no knowledge transfer from the source. The FT Target baseline performs poorly as the annotation here is too sparse for effective fine-tuning on the target problem. FT matching nets follows 5-way (k − 1)-shot learning with the sparsely labelled target data only, but shows no improvement over the other baselines. Our proposed CFSM consistently outperforms the state-of-the-art LET alternative. For example, under the most challenging setting (k = 2), CFSM is 1.8% higher than LET in mean accuracy and 0.2% lower in standard error. Unsupervised DLSTL: ReID and SBIR ReID The person re-identification (ReID) problem is to match person detections across camera views. Annotating person image identities in every camera in a camera network for training supervised models is infeasible. This motivates the topical unsupervised Re-ID problem of adapting a Re-ID model trained on one dataset with annotation to a new dataset without annotation. Although they are evaluated with retrieval metrics, contemporary Re-ID models are trained using identity prediction (classification) losses. This means that unsupervised Re-ID fits the unsupervised DLSTL setting, as the label spaces (person identities) are different in different Re-ID datasets, and the target dataset has no labels. [Table 2 fragment, Train Target accuracy: k = 2: 66.5 ± 1.7, k = 3: 77.2 ± 1.1, k = 4: 83.0 ± 0.9, k = 5: 88. (truncated).] We adopt two highly contested large-scale benchmarks for unsupervised person Re-ID: Market (Zheng et al. 2015) and Duke. ImageNet-pretrained ResNet50 (He et al. 2016) is used as the feature extractor $\Phi_{\theta_M}$. Cross-entropy loss with label smoothing and triplet loss are used for the source domain as supervised learning objectives. We set $d_C = 2048$, $\beta_M = 2.0$, $\beta_C = 0.01$. The Adam optimiser is used with learning rate $3.5 \times 10^{-4}$. We treat each dataset in turn as source/target and perform unsupervised transfer from one to the other. Rank 1 (R1) accuracy and mean Average Precision (mAP) on the target datasets are used as evaluation metrics. In Table 3, we show that our method outperforms the state-of-the-art alternatives purpose-designed for unsupervised person Re-ID: UMDL (Peng et al. 2016), PT-GAN (Wei et al. 2018), PUL (Fan, Zheng, and Yang 2017), CAMEL (Yu, Wu, and Zheng 2017), TJ-AIDL (Wang et al. 2018), SPGAN and MMFA (Lin et al. 2018). Note that TJ-AIDL and MMFA exploit attribute labels to help alignment and adaptation, whereas the proposed method automatically discovers latent factors with no additional annotation. Even so, CFSM improves R1 accuracy by at least 3.0% over TJ-AIDL and MMFA in both settings.
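For concreteness, the following sketches show (i) the source supervised objective described above, cross-entropy with label smoothing plus a triplet loss, where the smoothing value, margin, and pre-mined triplets are assumptions, and (ii) standard Rank-1 and mAP retrieval metrics in simplified form; real Re-ID protocols additionally filter same-camera gallery matches.

```python
import numpy as np
import torch.nn.functional as F_nn

def source_supervised_loss(logits, labels, anchor, positive, negative,
                           smoothing=0.1, margin=0.3):
    # Cross-entropy with label smoothing plus triplet loss, as described for
    # the source domain; smoothing, margin and the triplets are assumptions.
    ce = F_nn.cross_entropy(logits, labels, label_smoothing=smoothing)
    tri = F_nn.triplet_margin_loss(anchor, positive, negative, margin=margin)
    return ce + tri

def rank1_and_map(qf, gf, q_ids, g_ids):
    # Simplified Rank-1 / mAP over L2-normalised features (numpy arrays).
    qf = qf / np.linalg.norm(qf, axis=1, keepdims=True)
    gf = gf / np.linalg.norm(gf, axis=1, keepdims=True)
    sims = qf @ gf.T
    r1, aps = 0.0, []
    for i in range(len(q_ids)):
        order = np.argsort(-sims[i])              # gallery ranked by similarity
        matches = g_ids[order] == q_ids[i]
        r1 += float(matches[0])
        hits = np.where(matches)[0]               # ranks of the true matches
        aps.append(((np.arange(len(hits)) + 1) / (hits + 1)).mean())
    return r1 / len(q_ids), float(np.mean(aps))
```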
FG-SBIR Fine-grained Sketch-Based Image Retrieval (SBIR) focuses on matching a sketch with its corresponding photo (Sangkloy et al. 2016). As demonstrated in (Sangkloy et al. 2016), object category labels play an important role in retrieval performance, so existing studies make a closed-world assumption, i.e., all testing categories overlap with training categories. However, when deploying SBIR in a real application such as e-commerce (Yu et al. 2016), one would like to train the SBIR system once on some source object categories, and then deploy it to provide sketch-based image retrieval for new categories without annotating new data and re-training for the target object categories. Unsupervised adaptation to new categories without sketch-photo pairing labels is therefore another example of the unsupervised DLSTL problem. Compared to Re-ID, where instances are person images in different camera views, instances in SBIR are either photos or hand-drawn sketches of objects. There are 125 object classes in the Sketchy dataset (Sangkloy et al. 2016). We randomly split 75 classes as a labelled source domain and use the remaining 50 classes to define an unlabelled target domain with disjoint label space. ImageNet-pretrained Inception-V3 (Szegedy et al. 2016) is used as the feature extractor $\Phi_{\theta_M}$. Cross-entropy and triplet loss are used for source supervision. We set $d_C = 512$, $\beta_M = 10^{-3}$, $\beta_C = 0.1$. The Adam optimiser with learning rate $10^{-4}$ is used. As a baseline, Source Only is the direct transfer alternative that uses the same architecture but trains on the source labelled data only, and is applied directly to the target without adaptation. The retrieval performance on unseen classes (tar. cls.) is reported. Results are averaged over 10 random splits. As shown in Table 4, the proposed CFSM improves the retrieval accuracy on unseen classes by 2.48%. [Table 4: SBIR sketch-photo retrieval results (%), averaged Rank 1 accuracy and standard error. tar. cls.: Source only 23.74 ± 0.24, CFSM 26.22 ± 0.25.] Further Analysis Ablation study Unsupervised person Re-ID is chosen as the main benchmark for the ablation study, firstly because it is a challenging and realistic large-scale problem in the unsupervised DLSTL setting, and secondly because it provides a bidirectional evaluation for more comprehensive analysis. The following ablated variants are compared with the full CFSM. Source Only: the proposed architecture is learned with source data and supervised losses only. Source+Regs: the regularisers (unsupervised factorisation and graph losses) are added, but computed on the source dataset only. CFSM−Graph: our method without the proposed graph loss. CFSM+ClassicGraph: our graph loss is replaced with a conventional one (i.e., a graph constructed in the lower-level feature space extracted by $\Phi_{\theta_M}$ is used to regularise the proposed CFS). AE: an alternative regulariser, feature reconstruction as in an autoencoder (AE), is used to provide the prior term $p(\theta \mid X)$. We reconstruct the deep features $F$ using the outputs of the CFS layer as hidden representations. In this case both source and target data are used, and the reconstruction error provides the regularisation loss. The results are shown in Table 5. Firstly, by comparing the variants that use source data only (Source Only and Source+Regs) with the joint training methods, we find the former are consistently inferior. This illustrates that it is crucial to leverage target domain data for adaptation.
Secondly, CFSM and its variants consistently achieve better results than AE, illustrating that our unsupervised factorisation and graph losses provide better regularisation for cross-domain/cross-task adaptation. The effectiveness of our graph loss is illustrated by two comparisons: (1) CFSM−Graph is worse than CFSM, showing the contribution of the graph loss; and (2) replacing our graph loss with the conventional graph Laplacian loss (CFSM+ClassicGraph) gives worse results than ours, justifying our choice of regularisation direction. Finally, we note that applying our regularisers to the source only (Source+Regs) still slightly improves performance on the target dataset vs. Source Only. This shows that training with these regularisers has a small benefit to representation transferability even without adaptation. Visualisation analysis To understand the impact of the unsupervised factorisation loss, Figure 3 illustrates the distribution of target CFS activations in the semi-supervised DLSTL setting (SVHN→MNIST). The left plot shows the activations without any such loss, leading to a distribution of moderate predictions peaked around 0.5. In contrast, the right plot shows the activation distribution on the target dataset of CFSM. We can see that our regulariser has indeed induced the target dataset to represent images with a low-entropy, near-binary code. We also compare training a source model with the low-entropy CFS loss added, and then applying it to the target data. This leads to a low-entropy representation of the source data, but the middle plot shows that when transferred to the target dataset without adaptation, the representation becomes high-entropy. That is, joint training with our losses is crucial to drive the adaptation that allows the target dataset to be represented with near-binary latent factor codes. Qualitative Analysis We visualise the discovered latent attributes qualitatively. For each element in $F_C$, we rank images in both source and target domains by their activation. Person images corresponding to the highest ten values of a specific $F_C$ element are recorded. Figure 4 shows two example factors with images from the source (first row) and target (second row) datasets. We can see that the first example in Figure 4(a) is a latent attribute covering both people's bags and clothes. The second example in Figure 4(b) is a higher-level latent attribute that is selective for females, as well as textured clothes and bag-carrying. Importantly, these units have become selective for the same latent factors across datasets, although the target dataset has no supervision (i.e., unsupervised DLSTL). Conclusion We studied a challenging transfer learning setting, DLSTL, where the source and target label spaces are disjoint and the target dataset has few or no labels. In order to transfer the discriminative cues from the labelled source to the target, we proposed a simple yet effective model which uses an unsupervised factorisation loss to discover a common set of discriminative latent factors between source and target datasets. To improve feature learning for subsequent tasks such as retrieval, a novel graph-based loss was further proposed. Our method is both the first solution to unsupervised DLSTL, and also uniquely provides a single framework that is effective at both unsupervised and semi-supervised DLSTL as well as standard UDA.
5,054
1812.02293
2903322478
Clustering is a fundamental machine learning task and can be used in many applications. With the development of deep neural networks (DNNs), combining techniques from DNNs with clustering has become a new research direction and achieved some success. However, few studies have focused on the imbalanced-data problem, which commonly occurs in real-world applications. In this paper, we propose a clustering method, regularized deep embedding clustering (RDEC), that integrates virtual adversarial training (VAT), a network regularization technique, with a clustering method called deep embedding clustering (DEC). DEC optimizes cluster assignments by pushing data more densely around centroids in latent space, but it is sometimes sensitive to the initial location of centroids, especially in the case of imbalanced data, where the minor class has less chance to be assigned a good centroid. RDEC introduces regularization using VAT to ensure the model's robustness to local perturbations of the data. VAT pushes data that are similar in the original space closer together in the latent space, bunching together data from minor classes and thereby facilitating cluster identification by RDEC. Combining the advantages of DEC and VAT, RDEC attains state-of-the-art performance on both balanced and imbalanced benchmark real-world datasets. For example, accuracies are as high as 98.41% on the MNIST dataset and 85.45% on a highly imbalanced dataset derived from MNIST, the latter being nearly 8% higher than the current best result.
In view of this problem, representation learning and clustering are performed simultaneously in recent works such as [ @cite_25 ], [ @cite_18 ], and [ @cite_1 ]. However, these works do not sufficiently address the imbalanced-data problem. DEC [ @cite_18 ] exhibits some degree of robustness to imbalanced datasets, but work is needed to further improve its robustness. Several methods based on generative models have also been proposed [ @cite_23 ], [ @cite_9 ]. VaDE [ @cite_23 ] models a data generation procedure based on the variational autoencoder, where the data distribution in the latent space is modeled by a GMM; representations are sampled from it and then mapped into the data space via the DNN. This approach is novel and can work well in some cases. However, because the class distribution is unknown in an imbalanced dataset, it is difficult to learn a good generative model, which may lead to low versatility and robustness.
{ "abstract": [ "Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms. Relatively little work has focused on learning representations for clustering. In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously learns feature representations and cluster assignments using deep neural networks. DEC learns a mapping from the data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.", "We study a variant of the variational autoencoder model (VAE) with a Gaussian mixture as a prior distribution, with the goal of performing unsupervised clustering through deep generative models. We observe that the known problem of over-regularisation that has been shown to arise in regular VAEs also manifests itself in our model and leads to cluster degeneracy. We show that a heuristic called minimum information constraint that has been shown to mitigate this effect in VAEs can also be applied to improve unsupervised clustering performance with our model. Furthermore we analyse the effect of this heuristic and provide an intuition of the various processes with the help of visualizations. Finally, we demonstrate the performance of our model on synthetic data, MNIST and SVHN, showing that the obtained clusters are distinct, interpretable and result in achieving competitive performance on unsupervised clustering to the state-of-the-art results.", "Most learning approaches treat dimensionality reduction (DR) and clustering separately (i.e., sequentially), but recent research has shown that optimizing the two tasks jointly can substantially improve the performance of both. The premise behind the latter genre is that the data samples are obtained via linear transformation of latent representations that are easy to cluster; but in practice, the transformation from the latent space to the data can be more complicated. In this work, we assume that this transformation is an unknown and possibly nonlinear function. To recover the clustering-friendly' latent representations and to better cluster the data, we propose a joint DR and K-means clustering approach in which DR is accomplished via learning a deep neural network (DNN). The motivation is to keep the advantages of jointly optimizing the two tasks, while exploiting the deep neural network's ability to approximate any nonlinear function. This way, the proposed approach can work well for a broad class of generative models. Towards this end, we carefully design the DNN structure and the associated joint optimization criterion, and propose an effective and scalable algorithm to handle the formulated optimization problem. Experiments using different real datasets are employed to showcase the effectiveness of the proposed approach.", "Clustering is among the most fundamental tasks in computer vision and machine learning. In this paper, we propose Variational Deep Embedding (VaDE), a novel unsupervised generative clustering approach within the framework of Variational Auto-Encoder (VAE). Specifically, VaDE models the data generative procedure with a Gaussian Mixture Model (GMM) and a deep neural network (DNN): 1) the GMM picks a cluster; 2) from which a latent embedding is generated; 3) then the DNN decodes the latent embedding into observables. 
Inference in VaDE is done in a variational way: a different DNN is used to encode observables to latent embeddings, so that the evidence lower bound (ELBO) can be optimized using Stochastic Gradient Variational Bayes (SGVB) estimator and the reparameterization trick. Quantitative comparisons with strong baselines are included in this paper, and experimental results show that VaDE significantly outperforms the state-of-the-art clustering methods on 4 benchmarks from various modalities. Moreover, by VaDE's generative nature, we show its capability of generating highly realistic samples for any specified cluster, without using supervised information during training. Lastly, VaDE is a flexible and extensible framework for unsupervised generative clustering, more general mixture models than GMM can be easily plugged in.", "" ], "cite_N": [ "@cite_18", "@cite_9", "@cite_1", "@cite_23", "@cite_25" ], "mid": [ "2173649752", "2556467266", "2950803263", "2952006246", "" ] }
RDEC: Integrating Regularization into Deep Embedded Clustering for Imbalanced Datasets
Clustering is a fundamental machine learning method that groups data into clusters according to some measure of similarity or distance. It is commonly used to understand or summarize data about which we have no prior knowledge. Clustering can be widely utilized in practical applications, such as customer segmentation [Ngai et al. (2009)], text categorization [Steinbach et al. (2000)], genome analysis [Sturn et al. (2002)], intrusion prevention, and outlier detection [Hodge and Austin (2004)]. There is thus a great need to study the clustering problem. Improving the performance of clustering has been approached in many ways [Estivill-Castro (2002)] [Berkhin (2006)], such as by creating or refining algorithms that directly perform the clustering task and by processing the data to make it more clustering-friendly. Traditional clustering algorithms, such as K-means, DBSCAN, and spectral clustering, have been widely used for clustering analysis [Estivill-Castro (2002)]. Dimensionality reduction and representation-learning techniques, such as principal component analysis (PCA) and nonnegative matrix factorization (NMF) [Lee and Seung (2001)], have also been used extensively alongside clustering. In real-world applications, due to the diversity of datasets, careful selection of clustering algorithms and data processing techniques is required [Liu and Yu (2005)]. With the development of deep learning techniques, the concept of deep clustering, which integrates deep neural networks (DNNs) with conventional clustering methods, has attracted considerable attention among researchers. For example, deep embedded clustering (DEC) [Xie et al. (2016)] defines an effective objective function as the KL divergence loss between the predicted distribution and an auxiliary target distribution of labels. Variational deep embedding (VaDE) [Jiang et al. (2016)] models a data generation procedure and picks clusters from Gaussian mixture models (GMM). Information maximizing self-augmented training (IMSAT) [Hu et al. (2017)] learns the distribution of labels by maximizing the information-theoretic dependency between data and their representations. Deep clustering using these and other methods has become increasingly widespread. However, few studies have focused on the imbalanced-data problem, which arises naturally in real-world applications. A dataset is said to be imbalanced when the numbers of data points belonging to different classes are significantly different, a common occurrence. Examples include rare diseases in medical diagnostics datasets and rare defective products in production inspection datasets. Imbalance typically appears as a significant reduction in the performance attainable by most methods, which assume a relatively even label distribution [Sun et al. (2007)]. This problem also exists in supervised and semi-supervised learning, where some studies have been performed. However, traditional approaches, such as re-sampling [Chawla et al. (2004)] and class-weighted cross-entropy [Ronneberger et al. (2015)], cannot be used in clustering because they require prior knowledge of the labels. There remains a need for an effective method that can improve performance for both balanced and imbalanced datasets. From a theoretical analysis and preliminary experiments on the latest deep clustering methods, we find DEC to be a promising method that is relatively robust to imbalanced data.
DEC optimizes clustering assignments by pushing data more densely around centroids in latent space. When each centroid is initialized at a location surrounded by similar data, DEC is expected to perform well regardless of whether the dataset is balanced or imbalanced. However, DEC still has room for improvement with regard to class imbalance. In this work, we focus on two properties of DEC. First, DEC is sensitive to the initial location of centroids, which are randomly determined with K-means. With imbalanced data, in particular, minor classes are less likely to be assigned good centroids, degrading DEC performance. We call this the initial centroid problem. Second, DEC tends to assign marginal data points (those far from all cluster centroids) to smaller clusters. Because it is unreasonable to give higher priority to smaller clusters without knowing the class distributions, this rule may significantly degrade performance when applied to imbalanced data. We call this the marginal data problem. In this work, we apply virtual adversarial training (VAT) to mitigate these problems. VAT is a data augmentation technique originally proposed for supervised and semi-supervised learning. It aims to minimize the difference between the label distributions of input data and augmented data, the latter of which is generated by adding a small perturbation to the input data. The essential task of clustering is to gather similar data together, so VAT is in good agreement with clustering on this point, since the augmented data can be interpreted as similar data. Our method thus uses VAT to augment the DEC loss function with a regularization term. We call our method regularized deep embedded clustering (RDEC). By integrating VAT, data points located near each other in the original space tend to be located together in the latent space, and data assembly considers not only centroids but also nearby data points. The contributions of this work are summarized as follows.
• We propose RDEC as a deep clustering method that improves conventional DEC in two ways: 1) by improving accuracy for the whole dataset even when centroids are not placed in particularly good locations during initialization, a common occurrence with imbalanced datasets, and 2) by improving accuracy for data near the margins of clusters.
• We conduct an extensive experimental evaluation of deep clustering for comparison with current methods and analyse why RDEC works well. Our experimental results show that RDEC outperforms conventional methods on most benchmark datasets, particularly on imbalanced datasets.
• We apply RDEC to a real-world application, namely clustering of wafer defect maps to find typical defective patterns in semiconductor manufacturing. Good performance on this dataset demonstrates RDEC's promising effectiveness for real-world applications.

Regularized deep embedded clustering

Notation

Consider a dataset X consisting of n data vectors with dimensionality d. Let $x_i \in \mathbb{R}^d$ denote item i in X (the index i can be omitted for simplicity when there is no need for specificity), and let K denote the number of clusters, which is assumed to come from prior knowledge. Clusters are indexed from 0 to K − 1, and each cluster is represented by a centroid $u_j$ (j = 0, 1, ..., K − 1). Our task is to assign each item x to one of the K clusters. Instead of clustering directly in the data space X, data points are represented in a latent space Z via a nonlinear mapping $f_\theta: X \to Z$, where θ is a set of learnable parameters. A DNN is used to parameterize $f_\theta$.
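For concreteness, the following minimal Java sketch (our own illustration, not the authors' code) shows what such a parameterized mapping $f_\theta$ can look like: a plain fully connected forward pass with ReLU activations on the hidden layers and a linear embedding layer, matching the d−500−500−2000−10 encoder used later in the experiments; the array layout is our assumption and the weights are assumed to be given.

// Illustrative dense encoder f_theta: forward pass through fully connected
// layers with ReLU activations and a linear output (embedding) layer.
final class EncoderSketch {
    private final double[][][] weights; // weights[layer][out][in]
    private final double[][] biases;    // biases[layer][out]

    EncoderSketch(double[][][] weights, double[][] biases) {
        this.weights = weights;
        this.biases = biases;
    }

    double[] embed(double[] x) {
        double[] a = x;
        for (int l = 0; l < weights.length; l++) {
            double[] out = new double[weights[l].length];
            boolean lastLayer = (l == weights.length - 1);
            for (int o = 0; o < out.length; o++) {
                double sum = biases[l][o];
                for (int in = 0; in < a.length; in++) {
                    sum += weights[l][o][in] * a[in];
                }
                out[o] = lastLayer ? sum : Math.max(0.0, sum); // ReLU on hidden layers
            }
            a = out;
        }
        return a; // latent embedding z = f_theta(x)
    }
}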
Figure 1 shows the RDEC network model, which comprises three sub-models. The sub-model connected by solid lines is the main clustering model, where data x are mapped into embedded representations z and then transformed into their predicted distributions via a clustering layer. The clustering layer is the same as that of DEC. The other two sub-models, connected by dotted lines, are the autoencoder and VAT models. The autoencoder model is used for initialization and the VAT model for regularization. Both support the clustering model. The learning process is divided into two phases: pretraining with the autoencoder model and fine-tuning with the clustering and VAT models. Because the decoder is not used after pretraining and the VAT output is not needed in our final results, they are grayed out in Figure 1. We describe the clustering model and the VAT model separately below. Their details can be found in [Xie et al. (2016)] and the original VAT publication, respectively.

Clustering model

Clustering is performed in the latent space Z. First, embedded data $z_i$ is assigned to cluster $u_j$ with probability $q_{ij}$ ($q_{ij} \in Q$). Each $q_{ij}$ is the similarity between $z_i$ and $u_j$ measured by Student's t-distribution [Maaten and Hinton (2008)] as

$$q_{ij} = \frac{(1 + \|z_i - u_j\|^2/\alpha)^{-\frac{\alpha+1}{2}}}{\sum_{j'} (1 + \|z_i - u_{j'}\|^2/\alpha)^{-\frac{\alpha+1}{2}}}, \quad (1)$$

where α is the degree of freedom, set to 1 in this model. Centroids $u_j$ (j = 0, 1, ..., K − 1) are initialized with K-means on the embedded data z. The elements $q_{ij}$ of Q are also called soft assignments, and Q is called the predicted distribution of labels. An auxiliary target distribution P corresponding to Q is defined, with each $p_{ij} \in P$ computed as

$$p_{ij} = \frac{q_{ij}^s / f_j}{\sum_{j'} q_{ij'}^s / f_{j'}}, \quad (2)$$

where $f_j = \sum_i q_{ij}$ are the soft cluster frequencies and s is a constant. The original DEC fixed s = 2, but we found that it works well in a larger range, e.g. s ≥ 1. In this definition, $q_{ij}$ is raised to the power of s. Compared with the predicted distribution Q, the bell-shaped curve of the target distribution P has a higher peak and lower tails. The numerator $q_{ij}^s$ aims to attract data toward centroids. When the initial centroids are placed unfavourably, the initial centroid problem described in Section 1 occurs. The denominator $f_j$ has a large effect on marginal data points that are far from all centroids. When $q_{ij}$ is nearly equal over j, the $p_{ij}$ corresponding to the lower frequency $f_j$ will become higher and marginal data will be assigned to smaller clusters. This dependency on $f_j$ is ill-suited to imbalanced datasets and leads to the marginal data problem described in Section 1. The clustering model is trained by matching soft assignments to the target distribution. The objective function is defined as a KL divergence loss between Q and P as

$$L_D = \mathrm{KL}[P \| Q] = \sum_i \sum_j p_{ij} \log \frac{p_{ij}}{q_{ij}}. \quad (3)$$

VAT model

VAT uses data augmentation to impose the intended invariance on the label distribution. It enables similar data in the original space to follow similar distributions in the latent space. The objective function is defined as a KL divergence loss between the predicted distributions of x and the augmented data $x + r_{adv}$ as

$$L_V = \mathrm{KL}[Q \| Q(x + r_{adv})], \quad (4)$$

where $r_{adv}$ is an adversarial perturbation computed as

$$r_{adv} = \arg\max_{r;\, \|r\| \le \epsilon} \mathrm{KL}[Q \| Q(x + r)], \quad (5)$$

where r is a perturbation that does not alter the meaning of the data point and ε is the perturbation size, a hyper-parameter. The impact of ε is discussed in Section 4.7.
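To make Eqs. (1)-(3) concrete, the following minimal Java sketch (our own illustration, not the authors' implementation) computes the soft assignments Q, the target distribution P, and the loss $L_D$; array shapes and names are our assumptions.

// Soft assignments (Eq. 1), target distribution (Eq. 2) and KL loss (Eq. 3).
// z: n x dz embedded points, u: K x dz centroids, alpha: degree of freedom.
static double[][] softAssignments(double[][] z, double[][] u, double alpha) {
    int n = z.length, K = u.length;
    double[][] q = new double[n][K];
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int j = 0; j < K; j++) {
            double dist2 = 0.0; // squared Euclidean distance ||z_i - u_j||^2
            for (int d = 0; d < z[i].length; d++) {
                double diff = z[i][d] - u[j][d];
                dist2 += diff * diff;
            }
            q[i][j] = Math.pow(1.0 + dist2 / alpha, -(alpha + 1.0) / 2.0);
            sum += q[i][j];
        }
        for (int j = 0; j < K; j++) q[i][j] /= sum; // normalize over clusters
    }
    return q;
}

static double[][] targetDistribution(double[][] q, double s) {
    int n = q.length, K = q[0].length;
    double[] f = new double[K]; // soft cluster frequencies f_j = sum_i q_ij
    for (double[] qi : q)
        for (int j = 0; j < K; j++) f[j] += qi[j];
    double[][] p = new double[n][K];
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int j = 0; j < K; j++) {
            p[i][j] = Math.pow(q[i][j], s) / f[j]; // q_ij^s / f_j
            sum += p[i][j];
        }
        for (int j = 0; j < K; j++) p[i][j] /= sum;
    }
    return p;
}

static double klLoss(double[][] p, double[][] q) {
    double loss = 0.0; // L_D = sum_ij p_ij * log(p_ij / q_ij)
    for (int i = 0; i < p.length; i++)
        for (int j = 0; j < p[i].length; j++)
            if (p[i][j] > 0) loss += p[i][j] * Math.log(p[i][j] / q[i][j]);
    return loss;
}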
Objective of RDEC

From the above, the objective function of RDEC can be written as

$$L = L_D + \gamma L_V \quad (6)$$
$$\;\; = \mathrm{KL}[P \| Q] + \gamma\, \mathrm{KL}[Q \| Q(x + r_{adv})], \quad (7)$$

where γ ≥ 0 is a weight that controls the balance of $L_D$ and $L_V$ in the loss function. Regularization with VAT reduces DEC's dependency on centroids and soft cluster frequencies by considering locally similar data. The effects of VAT and γ are discussed in Sections 4.5 and 4.6, respectively. The objective function in Eq. (7) is optimized using mini-batch stochastic gradient descent (SGD) and backpropagation. Latent representations $z_i$, cluster centroids $u_j$, and soft assignments $q_{ij}$ are updated at each iteration, while the target distribution P is updated at an interval of τ iterations. The learning procedure stops when the rate of change in assignments between two consecutive iterations falls below a threshold σ or the maximum number of iterations $Itr_{max}$ is reached. Because $L_V$ is learned in an adversarial way, RDEC takes roughly triple the computation of DEC to update the network parameters. However, updating the target distribution P accounts for the main computation in both RDEC and DEC, so the gap in computation times is not large. The per-iteration time complexities of DEC and RDEC can be respectively written as $O(\frac{1}{\tau} n) + O(b)$ and $O(\frac{1}{\tau} n) + O(b')$, where n is the number of samples, b is the mini-batch size, and b' = 3b.

Experiments on benchmark datasets

We quantitatively compared the performance of RDEC with the performance of several baseline methods on a set of benchmark datasets. The baseline methods were K-means, AE+K-means, AE+DBSCAN, DEC, IMSAT, VaDE, and DCN. AE+K-means and AE+DBSCAN are two-stage methods that execute K-means and DBSCAN after pretraining on latent features. Adjusted Rand index (ARI) [Yeung and Ruzzo (2001)] and unsupervised clustering accuracy (ACC) are used as performance metrics. Following [Xie et al. (2016)], $\mathrm{ACC} = \max_m \frac{\sum_{i=1}^{n} \mathbb{1}\{l_i = m(c_i)\}}{n}$, where $l_i$ is the ground-truth label, $c_i$ is the cluster assignment, and m ranges over all possible one-to-one mappings between clusters and labels.

Datasets

Three benchmark datasets, MNIST, STL, and Reuters, are used in our experiments. Each has a relatively balanced label distribution. Because we focus on the performance on imbalanced datasets, we generated imbalanced versions of these datasets for comparison. To describe the degree of imbalance, we use a metric named the minimum retention rate $r_{min}$, defined as the ratio between the numbers of samples from the minority and majority classes in the dataset.
• MNIST: A dataset consisting of 70,000 handwritten digits 0 to 9. All classes contain nearly equal numbers of samples [LeCun (1998)]. Each digit is size-normalized as a 28 × 28-px image, represented as a 784-dimensional vector.
• MNIST-Imb-0: A variant of MNIST generated by reducing the number of 0-labeled samples to 1/10 of its original value. The $r_{min}$ of MNIST-Imb-0 is thus 0.1.
• MNIST-Imb-all: Another variant of MNIST where the numbers of samples in the ten classes differ significantly. The respective numbers of samples from class 0 to 9 are 10, 30, 50, 1,000, 200, 500, 300, 6,000, 4,000, and 800, so the $r_{min}$ is 1/600.
• STL: A set of 96 × 96-px color images [Coates et al. (2011)]. There are ten classes labeled as airplane, bird, car, and so on. Each class contains 1,300 images and each image is represented as a 27,648-dimensional vector.
• STL-VGG: 2,048-dimensional feature vectors of STL, extracted using the VGG-16 model with weights pretrained on ImageNet [Simonyan and Zisserman (2014)].
• STL-VGG-Imb: A STL-VGG variant where the number of samples in one class is reduced to 1/10, so the $r_{min}$ of this dataset is 0.1.
• Reuters: A document dataset consisting of news stories labeled with a category tree [Lewis (2004)]. Following [Xie et al. (2016)], four root categories are used as labels and uniquely labeled documents are extracted. Each article is represented as a feature vector using the tf-idf method on the 2,000 most frequent words. The numbers of articles from the four classes are 40,635, 25,457, 22,356, and 8,502, respectively.
• Reuters-Imb: A variant of Reuters, generated by reducing the number of samples of the minority class to meet $r_{min}$ = 0.1.
• Reuters-10K: A random subset of 10,000 documents sampled from Reuters. The numbers of articles from the four classes are 4,022, 2,703, 2,380, and 895, respectively.

Parameter settings

Following [Xie et al. (2016)], we used fully connected networks with dimensions d−500−500−2000−10 for the encoder and 10−2000−500−500−d for the decoder, where d is the input data dimension. Except for the input, output, and embedding layers, all internal layers are activated using the ReLU nonlinearity function. All datasets used the same network settings. Rather than greedy layer-wise training as in DEC, in the pretraining phase we directly trained the autoencoder. Our preliminary experiments showed that direct training works well at a lower cost. The SGD optimizer with learning rate lr = 1 and momentum β = 0.9 was used for MNIST and Reuters, and the Adam optimizer [Kingma and Ba (2014)] with learning rate λ = 0.001, $\beta_1$ = 0.9, $\beta_2$ = 0.999 was used for STL. Pretraining epochs for MNIST, Reuters, and STL were set to 300, 50, and 10, respectively. In the fine-tuning phase, the clustering network was initialized with the trained encoder. For all datasets, the SGD optimizer with learning rate lr = 0.01 and momentum β = 0.9 was used, the maximum number of iterations $Itr_{max}$ was set to 20,000, and the convergence threshold σ was set to 0.01. The update intervals τ for MNIST, Reuters, and STL were set to 140, 30, and 30, respectively. Unless specifically stated otherwise, in these experiments the parameters were γ = 5 and s = 2. For VAT-related parameters, the perturbation size ε = 1, mesh size ξ = 10, and power iterations ip = 1. In both the pretraining and fine-tuning phases, the mini-batch size was 256. All parameter settings for the variant datasets were the same as for their original datasets. All experiments were repeated five times and the means of ARI and ACC were used for comparison; the standard deviations were also recorded for reference. Note that because two parameters of DBSCAN must be tuned depending on the dataset, we used a grid search method to choose appropriate values for each dataset.

Results

Table 1 shows the clustering results of the baseline methods compared with RDEC. From these results, we can see that RDEC outperforms the other methods for most datasets according to both metrics, ACC and ARI. In particular, RDEC yielded a high accuracy of 98.41% on MNIST with a small standard deviation of 0.01, which is comparable with supervised and semi-supervised methods. IMSAT also performed well on MNIST, with small gaps in ACC and ARI between it and RDEC. However, for the imbalanced MNIST variants, the gaps exceeded 20% in the case of MNIST-Imb-all.
The results also validated our analysis of IMSAT in Section 2. Note that because the metric ACC is in good agreement with ARI in our results, following previous works such as DEC, we report only ACC in the following comparisons. RDEC clearly performed better than DEC. For example, RDEC improved ACC on MNIST and MNIST-Imb-all by nearly 5% and 25%, respectively. These results reflect the effect of VAT. Table 1 also shows that all methods yielded poor performance on STL. For this difficult dataset, other techniques need to be combined with clustering. The results on STL-VGG demonstrate the feasibility of this idea. For example, VGG-16 features significantly increased the accuracy on STL-VGG. Since the weights in VGG-16 are trained on ImageNet, from which STL was acquired, it is reasonable that STL-VGG yielded good performance. These results suggest that pretrained weights obtained from a dataset similar to the provided dataset, but with easier-to-obtain labels, can be used for the current dataset.

Performance on imbalanced datasets

To further examine the effect of our method on imbalanced datasets, following the experimental method of DEC [Xie et al. (2016)], we sampled subsets of MNIST with various retention rates. For minimum retention rate $r_{min}$, class-0 data points are kept with a probability of $r_{min}$ and those of class 9 with a probability of 1, with the other classes retained according to linear interpolation in between. The results shown in Table 2 again demonstrate the superiority of RDEC. Even for the most imbalanced subset with $r_{min}$ = 0.1, the ACC obtained by RDEC is higher than 85%. To investigate the performance on the minority class, we also calculated the recall, precision, and F-measure values of class 0, shown in Table 3. It is evident that the results as evaluated by recall, precision, and F-measure are in good agreement with those above as evaluated by ACC. RDEC indeed improves clustering performance on imbalanced datasets. There are two reasons why RDEC performs well on imbalanced datasets. First, DEC inherently emphasizes cluster purity, improving robustness against imbalanced data to some degree. In addition, VAT makes up for the weaknesses of DEC, allowing RDEC to significantly outperform DEC. In DEC, all assignments are performed on the basis of distances between embedded points and centroids, which are independent of cluster size. From the definition of the target distribution P in Eq. (2), we can see that soft assignments q are raised to the power of s and then divided by the cluster frequency f, so data close to a centroid will move more closely toward that centroid, and data far from all cluster centroids will move toward the one with fewer samples during the learning process. The whole learning process ensures that data points assemble around the centroids. DEC performs well if the location of the centroids is appropriate for the dataset and the soft assignments have a relatively high level of confidence. RDEC alleviates the initial centroid and marginal data problems. By jointly optimizing $L_D$ and $L_V$, data points are not only drawn toward their centroids but also pulled along by other nearby points. This reduces the dependency on centroids and makes marginal data select a centroid following the data near them. This hypothesis is confirmed in Section 4.5 below.
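The described subsampling scheme can be stated compactly in code. Below is a minimal Java sketch (ours, for illustration only) that keeps class-0 samples with probability $r_{min}$, class-9 samples with probability 1, and interpolates linearly in between; all names are our own.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative subsampling of imbalanced MNIST variants: class 0 is kept
// with probability rMin, class 9 with probability 1.0, and classes in
// between follow linear interpolation.
final class ImbalancedSampler {
    static List<Integer> sampleIndices(int[] labels, double rMin, long seed) {
        int numClasses = 10;
        double[] keepProb = new double[numClasses];
        for (int c = 0; c < numClasses; c++) {
            keepProb[c] = rMin + (1.0 - rMin) * c / (numClasses - 1);
        }
        Random rnd = new Random(seed);
        List<Integer> kept = new ArrayList<>();
        for (int i = 0; i < labels.length; i++) {
            if (rnd.nextDouble() < keepProb[labels[i]]) {
                kept.add(i); // keep this sample in the imbalanced subset
            }
        }
        return kept;
    }
}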
Effect of VAT

We investigated the effects of VAT on balanced and imbalanced datasets using two subsets of MNIST: one consisting of all 0-labeled and 6-labeled samples, and another consisting of 1/10 of the 0-labeled and all 6-labeled samples. To visualize the learning process, the dimension of the embedded layer was set to 2. DEC and RDEC were compared in both experiments, with the seed value fixed to guarantee that the two methods learned under the same conditions. Figure 2(a) illustrates the learning processes on the balanced MNIST subset. The first column displays the initial state, where data points are embedded by the autoencoder and centroids are initialized by K-means. Centroids and 0-labeled and 6-labeled data points are colored red, blue, and green, respectively. To improve visibility, only a small number of points (about 280 0-labeled and 220 6-labeled points around one centroid) were selected to be tracked in the following learning processes. The leftmost panel in the first row displays the whole dataset, and the panel in the second row displays the tracked data. From the second column on, the first row displays the DEC learning process and the second row displays that of RDEC, visualizing the processes at intervals of 140 iterations. Both the DEC and RDEC learning processes show clear trends in which data points assemble around the centroids. This reflects the fact that both methods optimize $L_D$ in Eq. (3) in the same way. More importantly, the data movements in RDEC are significantly more clustering-friendly, with similar data joined and attracted toward the centroid as a whole. This phenomenon validates the hypothesis in Section 4.4. This property of VAT is very useful, especially for data near the margins, where centroid selection has a low level of confidence. The accuracies of DEC and RDEC in this case are 84.82% and 98.29%, respectively. Figure 2(b) illustrates the learning processes on the imbalanced MNIST subset. The effect of VAT is more significant in the imbalanced scenario. The performance of DEC was not very good in this case. Although the data points assemble around the centroids just as well as in the example shown in Figure 2(a), the less favourable centroid locations divided data points from the same class by a perpendicular bisector line between two centroids, resulting in impure clusters. The RDEC learning process, in contrast, demonstrated strong robustness to these centroid locations. Data movements caused by VAT are more significant than those caused by centroids. The accuracies of DEC and RDEC in this case are 61.57% and 99.37%, respectively.

Effect of parameter γ

Parameter γ acts as a lever balancing the two terms of the objective function in Eq. (7). When γ is small (resp., large), the effect of $L_V$ is small (resp., large) and the effect of $L_D$ is large (resp., small). This effect was investigated by executing RDEC on MNIST and on imbalanced MNIST with $r_{min}$ = 0.1 under different settings. All experiments were executed five times; the means and standard deviations of the accuracies are shown in Figure 3. Overall, RDEC yields stable performance with high accuracies and low deviations on MNIST when 2 ≤ γ ≤ 14. Accuracies on imbalanced MNIST, by contrast, increase with γ when 2 ≤ γ ≤ 16, though the performance becomes unstable when γ ≥ 7. Performance on both datasets suffered obvious degradations at extreme γ settings, showing that both the $L_D$ and $L_V$ terms are important for RDEC. We recommend setting γ between 2 and 6.

Figure 3: Effect of parameter γ.
Figure 4: Impact of parameter ε.
Figure 5: Convergence of RDEC.

Impact of perturbation size ε

Perturbation size ε specifies the range of nearby points requiring consideration during learning. It plays an important role and has been investigated in the original VAT work. For simplicity, we fixed ε = 1 based on our experiments on MNIST (Figure 4). In other specific cases, alternatives such as the relative distance from each data point to its i-th nearest neighbor could be worth considering [Hu et al. (2017)].

Convergence of RDEC

According to the objective function in Eq. (7), RDEC will converge when both of its terms are minimized. To intuitively demonstrate this convergence, we examined loss and ACC values during the RDEC clustering process. Figure 5 illustrates an example MNIST experiment. In this experiment, σ was set as low as 0.0001, and as a result clustering stopped after about 56,000 iterations. From this figure we can see a gradual decline in loss throughout the learning process and high, steady ACC after a sharp increase over the first 10,000 iterations. The loss decline indicates the effectiveness of our optimization process, and the steady ACC illustrates that it is reasonable to stop the clustering procedure when the rate of change in assignments becomes sufficiently small.

Application to defective wafer map detection

We apply RDEC to the detection of defect patterns in wafer maps produced in semiconductor manufacturing. Wafer defect maps show spatial patterns of defective chips on wafers. Characteristic patterns in defect maps found in manufacturing test results suggest crucial trouble occurring somewhere in the fabrication processes. Early detection of such patterns is thus essential for yield improvement. To reduce the time and labor costs of visually checking all wafer maps, previous works [Liao et al. (2014)] [Nakata et al. (2017)] have investigated automated classification with clustering. However, these works do not address the imbalanced-data problem, which is inevitable because there are usually far fewer maps showing defect patterns than not. In this work, real-world data collected in semiconductor fabrication are used to evaluate the performance of RDEC.

Dataset and experimental settings

A wafer defect map is represented as a two-dimensional binarized image. Figure 6 shows some example defect wafer maps. Each circle corresponds to one wafer, and the colors indicate whether a manufactured chip is defective (purple) or non-defective (gray). As the figure shows, defective chips can appear in characteristic patterns, such as large central circles and linear scratches. In this work, each wafer defect map is converted into a binary vector with dimension equal to the number of chips manufactured on the wafer (a minimal sketch of this conversion follows below). In this experiment, defective maps are selected from 11 classes. For evaluation, we visually checked maps and prepared 11 labels based on their defect patterns, such as CENTER DOT, TOP EDGE, BOTTOM EDGE, UPPER LEFT SCRATCH, or BOTTOM MIDDLE. We labeled about 13,000 wafers sampled from the results of quality testing in semiconductor fabrication. We evaluated the performance of the clustering methods with ACC as in Section 4. Although the number of classes is not known in advance in practice, we set the number of clusters to the optimal value of 11 in this experiment. As the number of similar maps varies depending on the magnitude of the trouble, label distributions in the wafer dataset are highly imbalanced.
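The conversion of a wafer map into an input vector is a simple flattening step. The following minimal Java sketch is our own illustration; the representation of the map as per-position defect and chip-presence flags is an assumption.

// Illustrative conversion of a 2D wafer defect map into a binary input
// vector; positions without a chip (outside the wafer circle) are skipped,
// so the vector dimension equals the number of chips on the wafer.
static double[] waferMapToVector(boolean[][] defectMap, boolean[][] hasChip) {
    int count = 0;
    for (boolean[] row : hasChip)
        for (boolean chip : row) if (chip) count++;
    double[] vec = new double[count];
    int k = 0;
    for (int y = 0; y < defectMap.length; y++)
        for (int x = 0; x < defectMap[y].length; x++)
            if (hasChip[y][x]) vec[k++] = defectMap[y][x] ? 1.0 : 0.0;
    return vec;
}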
Table 4 shows the ratio of each label in the raw wafer dataset. The majority class covers 68.57% of the whole dataset, while five minor classes cover less than 1.0% of the wafers. The $r_{min}$ of 0.0006 is extremely low, but such values are not unusual because failures are generally rare in real-world manufacturing. The network and parameter settings are the same as those for MNIST, except that the update interval τ was set to 20, the maximum number of iterations $Itr_{max}$ was set to 500, and a convergence constraint was added so that learning stops when the lowest $L_D$ loss has not been updated in 50 consecutive iterations. This experiment compared RDEC with three methods: K-means, AE+K-means, and DEC. For pretraining, we utilized a hand-selected dataset consisting of wafer defect maps found by on-site engineers during fabrication. In the STL experiment in Section 4, feature extraction with a network derived from VGG-16 largely improved the performance of clustering, and this result suggests the importance of appropriate feature extraction. We expect that pretraining with the hand-selected dataset likewise allows the networks to learn appropriate features for the wafer dataset to be clustered.

Results

Table 5 shows the clustering results for the wafer dataset. RDEC significantly outperformed the other methods; compared to DEC, RDEC improves accuracy by 37.76%. While DEC, K-means, and AE+K-means divide wafers from the largest class 1 into multiple clusters, RDEC correctly gathers most of them into a single cluster, resulting in high accuracy. It is also remarkable that RDEC collects maps from small classes, such as classes 10 and 11, into one cluster each. Since finding small but distinctive clusters is essential to the early detection of patterns in wafer defect maps, RDEC has a competitive advantage in this application.

Conclusions

This paper proposed RDEC, a method that jointly performs deep clustering and network regularization. The data augmentation technique VAT is combined with DEC in the RDEC clustering method. The effect of this combination was evaluated through analyses and experiments. By combining VAT with DEC, RDEC alleviates the initial centroid and marginal data problems and yields higher performance than current methods on most datasets. For example, the RDEC accuracy on MNIST was 98.41%, which is comparable to results of supervised and semi-supervised learning. In particular, RDEC significantly outperformed other methods on imbalanced datasets. For example, it yielded a high accuracy of 85.45% on a highly imbalanced dataset sampled from MNIST, which is nearly 40% and 8% higher than the accuracies of K-means and DEC, respectively.
5,004
1812.01963
2902850193
In this report, we describe the design and implementation of Ibdxnet, a low-latency and high-throughput transport providing the benefits of InfiniBand networks to Java applications. Ibdxnet is part of the Java-based DXNet library, a highly concurrent and simple to use messaging stack with transparent serialization of messaging objects and a focus on very small messages (< 64 bytes). Ibdxnet implements the transport interface of DXNet in Java and a custom C++ library in native space using JNI. Several optimizations in both spaces minimize the context switching overhead between Java and C++ without burdening message latency or throughput. Communication is implemented using the messaging verbs of the ibverbs library, complemented by automatic connection management in the native library. We compared DXNet with the Ibdxnet transport to the MPI implementations FastMPJ and MVAPICH2. For small messages up to 64 bytes using multiple threads, DXNet with the Ibdxnet transport achieves a bi-directional message rate of 10 million messages per second and surpasses FastMPJ by a factor of 4 and MVAPICH2 by a factor of 2. Furthermore, DXNet scales well on a high-load all-to-all communication with up to 8 nodes, achieving a total aggregated message rate of 43.4 million messages per second for small messages and throughput saturation of 33.6 GB/s with only 2 kb message size.
The message passing interface @cite_38 defines a standard for high level networking primitives to send and receive data between local and remote processes, typically used for HPC applications.
{ "abstract": [ "The Message Passing Interface Forum (MPIF), with participation from over 40 organizations, has been meeting since November 1992 to discuss and define a set of library standards for message passing. MPIF is not sanctioned or supported by any official standards organization. The goal of the Message Passing Interface, simply stated, is to develop a widely used standard for writing message-passing programs. As such the interface should establish a practical, portable, efficient and flexible standard for message passing. , This is the final report, Version 1.0, of the Message Passing Interface Forum. This document contains all the technical features proposed for the interface. This copy of the draft was processed by LATEX on April 21, 1994. , Please send comments on MPI to [email protected]. Your comment will be forwarded to MPIF committee members who will attempt to respond." ], "cite_N": [ "@cite_38" ], "mid": [ "1575350781" ] }
Ibdxnet: Leveraging InfiniBand in Highly Concurrent Java Applications
Today's big data applications generate hundreds or even thousands of terabytes of data. Commonly, Java-based applications are used for further analysis. A single commodity machine, for example in a data center or typical cloud environment, cannot store and process these vast amounts of data, making distribution mandatory. Thus, the machines have to use interconnects to exchange data and coordinate data analysis. However, commodity interconnects used in such environments, e.g. Gigabit Ethernet, cannot provide the high throughput and low latency of alternatives like InfiniBand to speed up data analysis for the target applications.

Introduction

Interactive applications, especially on the web [6,28], simulations [34] or online data analysis [14,41,43] have to process terabytes of data, often consisting of small objects. For example, social networks store graphs with trillions of edges, resulting in a per-object size of less than 64 bytes for the majority of objects [10]. Other graph examples are brain simulations with billions of neurons and thousands of connections each [31] or search engines for billions of indexed web pages [20]. To provide high interactivity to the user, low latency is a must in many of these application domains. Furthermore, it is also important in the domain of mobile networks moving state management into the cloud [23]. Big data applications process vast amounts of data which require either an expensive supercomputer or distributed platforms, like clusters or cloud environments [21]. High performance interconnects, such as InfiniBand, play a key role in keeping processing and response times low, especially for highly interactive and always-online applications. Today, many cloud providers, e.g. Microsoft, Amazon or Google, offer instances equipped with InfiniBand. InfiniBand offers messaging verbs and RDMA, both providing one-way single-digit microsecond latencies. It depends on the application requirements whether messaging verbs or RDMA is the better choice to ensure optimal performance [38].
In this report, we focus on Java-based parallel and distributed applications, especially big data applications, which commonly communicate with remote nodes using asynchronous and synchronous messages [10,16,13,42]. Unfortunately, accessing InfiniBand verbs from Java is not a built-in feature of the commonly used JVMs. There are several external libraries, wrappers, and JVMs with built-in support available, but all trade performance for transparency or require proprietary environments ( §3.1). To use InfiniBand from Java, one can rely on available (Java) MPI implementations. But these do not provide features such as serialization of messaging objects or automatic connection management ( §3.2). We developed the network subsystem DXNet ( §2), which provides transparent and simple to use sending and event-based receiving of synchronous and asynchronous messages with transparent serialization of messaging objects [8]. It is optimized for high concurrency on all operations by implementing lock-free synchronization. DXNet is implemented in Java, is open source, and is available at Github [1]. In this report, we propose Ibdxnet, a transport for the DXNet network subsystem. The transport uses reliable messaging verbs to implement InfiniBand support for DXNet and provides low-latency and high-throughput messaging for Java. Ibdxnet implements scalable and automatic connection and queue pair management, the msgrc transport engine, which uses InfiniBand messaging verbs, and a JNI interface. We present best practices for ensuring scalability across multiple threads and nodes when working with InfiniBand verbs by elaborating on the implementation details of Ibdxnet. We carefully designed an efficient and low-latency JNI layer to connect the native Ibdxnet subsystem to the Java-based IB transport in DXNet. The IB transport uses the JNI layer to interface with Ibdxnet, extends DXNet's outgoing ring buffer for InfiniBand usage, and implements scalable scheduling of outgoing data for many simultaneous connections. We evaluated DXNet with the IB transport and Ibdxnet, and compared them to two MPI implementations supporting InfiniBand: the well-known MVAPICH2 and the Java-based FastMPJ. Though MPI is discussed in related work ( §3.2) and two implementations are evaluated and compared to DXNet ( §9), neither DXNet, the IB transport, nor Ibdxnet implements the MPI standard. The term messaging is used by DXNet simply to refer to exchanging data in the form of messages (i.e. additional metadata identifies the message on receive). DXNet does not implement any of the MPI primitives defined by the standard. Various low-level libraries for using InfiniBand in Java are not compared in this report, but in a separate one. The report is structured in the following way: In Section 2, we present a summary of DXNet and the aspects important to this report. In Section 3, we discuss related work, which includes a brief summary of available libraries and middleware for interfacing InfiniBand in Java applications. MPI and selected implementations supporting InfiniBand are presented as available middleware solutions and compared to DXNet. Lastly, we discuss target applications in the field of big data which benefit from InfiniBand usage. Section 4 covers the InfiniBand basics which are of concern for this report. Section 5 discusses JNI usage and presents best practices for low-latency interfacing with native code from Java using JNI. Section 6 gives a brief overview of DXNet's multi-layered stack when using InfiniBand.
Implementation details of the native part Ibdxnet are given in Section 7, and the IB transport in Java is presented in Section 8. Section 9 presents and compares the evaluation results.

DXNet

DXNet is a network library for Java targeting, but not limited to, highly concurrent big data applications. DXNet implements an asynchronous, event-driven messaging approach with a simple and easy to use application interface. Messaging describes transparent sending and receiving of complex (even nested) data structures with implicit serialization and de-serialization. Furthermore, DXNet provides a built-in primitive for transparent request-response communication. DXNet is optimized for highly multi-threaded sending and receiving of small messages by using lock-free data structures, fast concurrent serialization, zero copy and zero allocation. The core of DXNet provides automatic connection and buffer management, serialization of message objects, and an interface for implementing different transports. Currently, an Ethernet transport using Java NIO sockets and an InfiniBand transport using ibverbs ( §7) are implemented. The following subsections describe the most important aspects of DXNet and its core, which are depicted in Figure 1 and relevant for further sections of this report. A more detailed insight is given in a dedicated paper [8]. The source code is available at Github [1].

Automatic Connection Management

To relieve the programmer from explicit connection creation, handling and cleanup, DXNet implements automatic and transparent connection creation, handling and cleanup. Nodes are addressed using an abstract and unique 16-bit nodeID. Address mappings must be registered to allow associating the nodeID of each remote node with a corresponding implementation-dependent endpoint (e.g. socket, queue pair). To provide scalability with up to hundreds of simultaneous connections, our event-driven system does not create one thread per connection. A new connection is created automatically once the first message is either sent to a destination or received from one. Connections are closed once a configurable connection limit is reached, using a least-recently-used strategy. Faulty connections (e.g. remote node not reachable anymore) are handled and cleaned up by the manager. Error handling on connection errors or timeouts is propagated to the application using exceptions.

Sending of Messages

Messages are serialized Java objects and sent asynchronously without waiting for a completion. A message can be targeted towards one or multiple receivers. A message of the type Request is sent to a single receiver, only. When sending a request, the sender waits until receiving a corresponding response message (transparently handled by DXNet) or skips waiting and collects the response later. We expect applications calling DXNet concurrently with multiple threads to send messages. Every message is automatically and concurrently serialized into the Outgoing Ring Buffer (ORB), a natively allocated and lock-free ring buffer. Messages are automatically aggregated, which increases send throughput. The ORB, one per connection, is allocated in native memory to allow direct and zero-copy access by the low-level transport. A transport runs a decoupled, dedicated thread which removes the serialized and ready-to-send data from the ORB and forwards it to the hardware.

Receiving of Messages

The network transport handles incoming data by writing it to pooled native buffers to avoid burdening the Java garbage collection.
Depending on how a transport writes and reads data, the buffers might contain fully serialized messages or just fragments. Every received buffer is pushed to the ring-buffer-based Incoming Buffer Queue (IBQ). Both the buffer pool and the IBQ are shared among all connections. Dedicated handler threads pull buffers from the IBQ and process them asynchronously by de-serializing them and creating Java message objects. The messages are passed to pre-registered callback methods of the application.

Flow Control

DXNet implements its own flow control (FC) mechanism to avoid flooding a remote node with many (very small) messages. This would result in an increased overall latency and lower throughput if the receiving node cannot keep up with processing incoming messages. On sending a message, the per-connection dedicated FC checks if a configurable threshold is exceeded. This threshold describes the number of bytes sent by the current node but not yet fully processed by the receiving node. Once the configurable threshold is exceeded, the receiving node slices the number of bytes received into equally sized windows (window size configurable) and sends the number of confirmed windows back to the source node. Once the sender receives this confirmation, the number of bytes sent but not processed is reduced by the number of received windows multiplied by the configured window size. If an application send thread was previously blocked due to exceeding this threshold, it can now continue with processing (see the sketch at the end of this section).

Transport Interface

DXNet provides a transport interface allowing implementations of different transport types. On initialization of DXNet, one of the implemented transports can be selected. Afterwards, when using DXNet, the transport is transparent for the application. The following tasks must be handled by every transport implementation:
• Connection: create, close and cleanup
• Get ready-to-send data from the ORB and send it (the ORB triggers a callback once data is available)
• Handle received data by pushing it to the IBQ
• Manage flow control when sending/receiving data
Every other task that is not exposed directly by one of the following methods must be handled internally by the transport. The core of DXNet relies on the following methods of abstract Java classes/interfaces, which must be implemented by every transport:
• Connection: open, close, dataPosted
• ConnectionManager: createConnection, closeConnection
• FlowControl: sendFlowControlData, getAndResetFlowControlData
We elaborate on further details about the transport interface in Section 8, where we describe the transport implementation for Ibdxnet.

Java and InfiniBand

Before developing Ibdxnet and the InfiniBand transport for DXNet, we evaluated available (low-level) solutions for leveraging InfiniBand hardware in Java applications. This includes using NIO sockets with IP over InfiniBand (IPoIB) [25], jVerbs [37], JSOR [40], libvma [2] and native C-verbs with ibverbs. Extensive experiments analyzing throughput and latency of both messaging verbs and RDMA were conducted to determine a suitable candidate for using InfiniBand with Java applications and are published in a separate report. Summarized, the results show that transparent solutions like IPoIB, libvma or JSOR, which allow existing socket-based applications to send and receive data transparently over InfiniBand hardware, are not able to deliver an overall adequate throughput and latency.
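Returning to DXNet's flow control described above: the window accounting can be sketched in a few lines of Java. The code below is our own illustration, not DXNet's actual implementation; only the method names sendFlowControlData and getAndResetFlowControlData stem from the transport interface, everything else is an assumption.

// Illustrative sketch of DXNet-style flow control window accounting.
class FlowControlSketch {
    private final long thresholdBytes;       // max. bytes sent but not yet confirmed
    private final long windowSizeBytes;      // confirmation granularity
    private long unconfirmedBytes;           // sender-side accounting
    private long unconfirmedReceivedBytes;   // receiver-side accounting

    FlowControlSketch(long thresholdBytes, long windowSizeBytes) {
        this.thresholdBytes = thresholdBytes;
        this.windowSizeBytes = windowSizeBytes;
    }

    // Sender: called when posting a message; blocks the application send
    // thread while the configurable threshold is exceeded.
    synchronized void dataToSend(long bytes) throws InterruptedException {
        while (unconfirmedBytes + bytes > thresholdBytes) {
            wait(); // resumed once confirmations arrive
        }
        unconfirmedBytes += bytes;
    }

    // Sender: the remote confirmed this many fully processed windows.
    synchronized void handleFlowControlData(int windows) {
        unconfirmedBytes -= (long) windows * windowSizeBytes;
        notifyAll(); // unblock waiting application send threads
    }

    // Receiver: account processed bytes and return the number of full windows
    // to confirm back (cf. getAndResetFlowControlData followed by
    // sendFlowControlData in the transport interface).
    synchronized int dataProcessed(long bytes) {
        unconfirmedReceivedBytes += bytes;
        int windows = (int) (unconfirmedReceivedBytes / windowSizeBytes);
        unconfirmedReceivedBytes -= (long) windows * windowSizeBytes;
        return windows;
    }
}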
For the verbs-based libraries, jVerbs gets close to the native ibverbs performance but, like JSOR, requires a proprietary JVM to run. Overall, none of the analyzed solutions other than ibverbs delivers adequate performance. Furthermore, we want DXNet to stay independent of the JVM when using InfiniBand hardware. Thus, we decided to use the native ibverbs library with the Java Native Interface to avoid the known performance issues of the evaluated solutions.

MPI

The message passing interface [19] defines a standard for high-level networking primitives to send and receive data between local and remote processes, typically used for HPC applications. Using MPI, an application can send and receive primitive data types, arrays, derived types or vectors of primitive data types, and indexed data types. The synchronous primitives MPI_Send and MPI_Recv perform these operations in blocking mode. The asynchronous operations MPI_Isend and MPI_Irecv allow non-blocking communication. A status handle is returned with each started asynchronous operation. This can be used to check the completion of the operation or to actively wait for one or multiple completions using MPI_Wait or MPI_Waitall. Furthermore, there are various collective primitives which implement more advanced operations such as scatter, gather or reduce. Sending and receiving data with MPI requires the application to issue a receive for every send, with a target buffer that can hold at least the amount of data sent by the remote. DXNet relieves the application of this responsibility. Application threads can send messages with variable size, and DXNet manages the buffers used for sending and receiving. The application does not have to issue any receive operations or actively wait for data to arrive. Incoming messages are dispatched to pre-registered callback handlers by dedicated handler threads of DXNet. DXNet supports transparent serialization and de-serialization of complex (even nested) data types (Java objects) for messages. The MPI primitives for sending and receiving data require the application to use one of the available supported data types and do not offer serialization for more complex data types such as objects. However, an MPI implementation can benefit from the lack of serialization by avoiding any copying of data entirely. Due to the nature of serialization, DXNet has to create a (serialized) "copy" of each message when serializing it into the ORB. Analogously, data is copied when a message is created from incoming data during de-serialization. Messages in DXNet are sent asynchronously, while requests offer active waiting or probing for the corresponding response. These communication patterns can also be applied by applications using MPI. The communication primitives currently provided by DXNet are limited to messages and request-response. Nevertheless, using these two primitives, other MPI primitives, such as scatter, gather or reduce, can be implemented by the application if required. DXNet does not implement multiple protocols for different buffer sizes like MPI does with eager and rendezvous. A transport for DXNet might implement such a protocol, but our current implementations for Ethernet and InfiniBand do not. The aggregated data available in the ORB is either sent as a whole or sliced and sent as multiple buffers. The transport on the receiving side passes the stream of buffers to DXNet and puts them into the IBQ.
Afterwards, the buffers are reconnected to a stream of data by the MCC before the messages are extracted and processed. An instance using DXNet runs within one process of a big data application with one or multiple application threads. Typically, one DXNet instance runs per cluster node. This allows the application to dynamically scale the number of threads up or down within the same DXNet instance as needed. Furthermore, fast communication between multiple threads within the same process is possible, too. Commonly, an MPI application runs a single thread per process. Multiple processes are spawned according to the number of cores per node, with IPC fully based on MPI. MPI does offer different thread modes, which include issuing MPI calls from different threads in a process. Typically, this mode is used in combination with OpenMP [4]. However, it is not supported by all MPI implementations which also offer InfiniBand support ( §3.3). Furthermore, DXNet supports dynamic up- and down-scaling of instances. MPI implementations support up-scaling (for non-singletons), but down-scaling is considered an issue for many implementations. Processes cannot be removed entirely and might cause other processes to get stuck or crash. Connection management and identifying remote nodes are similar in DXNet and MPI. However, DXNet does not come with deployment tools such as mpirun, which assigns the ids/ranks to identify the instances. This intentional design decision allows existing applications to integrate DXNet without restrictions on the bootstrapping process of the application. Furthermore, DXNet supports dynamically adding and removing instances. With MPI, an application must be created by using the MPI environment. MPI applications must be run using a special coordinator such as mpirun. If executed without a coordinator, an MPI world is limited to the process it is created in, which does not allow communication with any other instances. Separate MPI worlds can be connected, but the implementation must support this feature. To our knowledge, there is no implementation (with InfiniBand support) that currently supports this.

MPI Implementations Supporting InfiniBand

This section only considers MPI implementations supporting InfiniBand directly. Naturally, IPoIB can be used to run any MPI implementation supporting Ethernet networks over InfiniBand. But, as previously discussed ( §3.1), the network performance is very limited when using IPoIB. MVAPICH2 is an MPI library [32] supporting various network interconnects, such as Ethernet, iWARP, Omni-Path, RoCE and InfiniBand. MVAPICH2 includes features like RDMA fast path or RDMA operations for small message transfers and is widely used on many clusters around the world. Open MPI [3] is an open source implementation of the MPI standard (currently with full 3.1 conformance) supporting a variety of interconnects, such as Ethernet using TCP sockets, RoCE, iWARP and InfiniBand. mpiJava [7] implements the MPI standard with a collection of wrapper classes that call native MPI implementations, such as MVAPICH2 or OpenMPI, through JNI. The wrapper-based approach provides efficient communication relying on native libraries. However, it is not thread-safe and, thus, is not able to take advantage of multi-core systems using multithreading. FastMPJ [17] uses Java Fast Sockets [39] and ibvdev to provide an MPI implementation for parallel systems using Java.
Initially, ibvdev [18] was implemented as a low-level communication device for MPJ Express [35], a Java MPI implementation of the mpiJava 1.2 API specification. ibvdev implements InfiniBand support using the low-level verbs API and can be integrated into any parallel and distributed Java application. FastMPJ optimizes MPJ Express collective primitives and provides efficient non-blocking communication. Currently, FastMPJ supports issuing MPI calls from a single thread, only.

Other Middleware

UCX [36] is a network stack designed for next generation systems and applications with a highly multi-threaded environment. It provides three independent layers: UCS is a service layer with different cross-platform utilities, such as atomic operations, thread safety, memory management and data structures. The transport layer UCT abstracts different hardware architectures and their low-level APIs, and provides an API to implement communication primitives. UCP implements high-level protocols such as MPI or PGAS programming models using UCT. UCX aims to be a common computing platform for multithreaded applications. DXNet does not have this aim and, thus, does not include its own atomic operations, thread safety utilities or memory management for data structures. Instead, it relies on the multi-threading utilities provided by the Java environment. DXNet does abstract different hardware like UCX, but only network interconnects and not GPUs or other coprocessors. Furthermore, DXNet is a simple networking library for Java applications and does not implement MPI or PGAS models. Instead, it provides simple asynchronous messaging and synchronous request-response communication, only.

Target Applications using InfiniBand

Providing high throughput and low latency, InfiniBand is a technology which is widely used in various big-data applications. Apache Hadoop [22] is a well-known Java big-data processing framework for large-scale data processing using the MapReduce programming model. It uses the Hadoop Distributed File System for storing and accessing application data, which supports InfiniBand interconnects using RDMA. Also implemented in Java, Apache Spark is a framework for big-data processing offering the domain-specific language Spark SQL, a stream processing and machine learning extension, and the graph processing framework GraphX. It supports InfiniBand hardware using an additional RDMA plugin [5]. Numerous key-value storages for big-data applications have been proposed that use InfiniBand and RDMA to provide low-latency data access for highly interactive applications. RAMCloud [33] is a distributed key-value storage optimized for low-latency data access using InfiniBand with messaging verbs. Multiple transports are implemented for network communication, e.g. using reliable and unreliable connections with InfiniBand and Ethernet with unreliable connections. FaRM [15] implements a key-value and graph storage using a shared memory architecture with RDMA. It performs well, with a throughput of 167 million key-value lookups per second at 31 µs latency using 20 machines. Pilaf [30] also implements a key-value storage, using RDMA for get operations and messaging verbs for put operations. MICA [27] implements a key-value storage with a focus on NUMA architectures. It maps each CPU core to a partition of data and communicates using a request-response approach over unreliable connections.
HERD [24] borrows the design of MICA and implements networking using RDMA writes for the request to the server and messaging verbs for the response back to the client.

InfiniBand and ibverbs Basics

This section covers the most important aspects of the InfiniBand hardware and the native ibverbs library which are relevant for this report. Abbreviations introduced here (most of them commonly used in the InfiniBand context) are used throughout the report from this point on. The host channel adapter (HCA) connected to the PCI bus of the host system is the network device for communicating with other nodes. The offloading engine of the HCA processes outgoing and incoming data asynchronously and is connected to other nodes using copper or optical cables via one or multiple switches. The ibverbs API provides the interface to communicate with the HCA, either by exchanging data using Remote Direct Memory Access (RDMA) or messaging verbs. A queue pair (QP) identifies a physical connection to a remote node when using reliable connected (RC) communication. Using non-connected unreliable datagram (UD) communication, a single QP is sufficient to send data to multiple remotes. A QP consists of one send queue (SQ) and one receive queue (RQ). In RC communication, a QP's SQ and RQ are always cross-connected with a target's QP, e.g. node 0's SQ connects to node 1's RQ and node 0's RQ to node 1's SQ. If an application wants to send data, it posts a work request (WR), containing a pointer to the buffer to send and its length, to the SQ. A corresponding WR must be posted to the RQ of the connected QP on the target node to receive the data. This WR also contains a pointer to a buffer and its size to receive any incoming data to. Once the data is sent, a work completion (WC) is generated and added to a completion queue (CQ) associated with the SQ. A WC is also generated for the corresponding CQ of the remote's RQ receiving the data, once the data has arrived. The WC of the send task tells the application that the data was successfully sent to the remote (or provides error information otherwise). On the remote receiving the data, the WC indicates that the buffer attached to the previously posted WR is now filled with the remote's data. When serving multiple connections, not every single SQ and RQ needs a dedicated CQ. A single CQ can be used as a shared completion queue (SCQ) for multiple SQs or RQs. Furthermore, when receiving data from multiple sources, instead of managing many RQs to provide buffers for incoming data, a shared receive queue (SRQ) can be used by multiple QPs instead of individual RQs. When attaching a buffer to a WR, it is attached as a scatter-gather element (SGE) of a scatter-gather list (SGL). For sending, the SGL allows the offloading engine to gather the data from many scattered buffers and send it as one WR. For receiving, the received data is scattered to one or multiple buffers by the offloading engine.

Low Latency Data Exchange Between Java and C

In this section, we describe our experiences with and best practices for the Java Native Interface (JNI) to avoid performance penalties in latency-sensitive applications. These are applied to various implementation aspects of the IB transport which are further explained in their dedicated sections. Using JNI is mandatory if the Java space has to interface with native code, e.g. for IO operations or when using native libraries.
Low Latency Data Exchange Between Java and C

In this section, we describe our experiences with and best practices for the Java Native Interface (JNI) to avoid performance penalties for latency sensitive applications. These are applied to various implementation aspects of the IB transport which are further explained in their dedicated sections. Using JNI is mandatory if the Java space has to interface with native code, e.g. for IO operations or when using native libraries. As we decided to use the low-level ibverbs library to benefit from full control, high flexibility and low latency ( §3.1), we had to ensure that interfacing with native code from Java does not introduce too much overhead compared to the already existing and evaluated solutions. The Java Native Interface (JNI) allows Java programmers to call native code from C/C++ libraries. It is a well known method to interface with native libraries that are not available in Java, or to access IO using system calls or other native libraries. When calling code of a native library, the library has to expose and implement a predefined interface which allows the JVM to connect the native functions to natively declared Java methods in a Java class. With every call from Java to the native space and vice versa, a context switch is executed by the JVM environment. This involves tasks related to thread and cache management, adding latency to every native call. This increases the duration of such a call, which is critical considering the low latency of IB. Exchanging data with a native library without adding considerable overhead is challenging. For single primitive values, passing parameters to functions is convenient and does not add any considerable overhead. However, access to Java classes or arrays from native space requires synchronization with the JVM (and its garbage collector) which is very expensive and must be avoided. Alternatively, one can use ByteBuffers allocated as DirectByteBuffers, whose backing memory is allocated in native memory. Java can access the memory through the ByteBuffer, and the native library can get the native address and the size of the array with the functions GetDirectBufferAddress and GetDirectBufferCapacity. However, these two calls increase the latency by tens to even hundreds of microseconds (with high variation). This problem can be solved by allocating a buffer in the native space, passing its address and size to the Java space and accessing it using the Unsafe API, or by wrapping it as a newly allocated (Direct)ByteBuffer. The latter requires reflection to access the constructor of the DirectByteBuffer and set the address and size fields. We decided to use the Unsafe API because we map native structs and do not require any of the additional features the ByteBuffer provides. The native address is cached which allows fast exchange of data from Java to native and vice versa. To improve convenience when accessing fields of a data structure, a helper class with getter and setter wrapper methods is created to access the fields of the native struct.
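A hedged C++ sketch of this buffer exchange scheme (all class, function and variable names are illustrative, not DXNet's actual API): the buffer is allocated natively and its address is handed to Java once; afterwards, only primitive values cross the JNI boundary on the hot path.

#include <jni.h>
#include <cstdlib>
#include <cstdint>

static uint8_t* g_buffer = nullptr; // shared native buffer, address cached once

// Called once on initialization; the Java side wraps the returned address
// with the Unsafe API (or a DirectByteBuffer created via reflection).
extern "C" JNIEXPORT jlong JNICALL
Java_IbNative_allocBuffer(JNIEnv*, jclass, jlong size) {
    g_buffer = static_cast<uint8_t*>(std::calloc(1, static_cast<size_t>(size)));
    return reinterpret_cast<jlong>(g_buffer);
}

// Hot path: only primitive arguments cross the JNI boundary; the payload
// is read and written through the shared buffer by both sides.
extern "C" JNIEXPORT jint JNICALL
Java_IbNative_process(JNIEnv*, jclass, jint offset, jint length) {
    uint8_t* data = g_buffer + offset;
    (void) data; // ... interpret as a native struct and process 'length' bytes ...
    return length;
}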
We evaluated different means of passing data from Java to native and vice versa as well as the function/method call overhead. Figure 2 shows the results of the microbenchmarks used to evaluate JNI call overhead as well as the overhead of different memory access methods. The results displayed are the averages of three runs of each benchmark, executing the operation 100,000,000 times. A warm-up of 1,000 operations precedes each benchmark run. For JNI context switching, we measured the latency introduced by Java to native (jtn), native to Java (ntj), native to Java with exception checking (ntjexc) and native to Java with thread detaching (ntjdet) calls. For exchanging data between Java and native, we measured the latency introduced by accessing a 64 byte buffer in both spaces for a primitive Java byte array (ba), a Java DirectByteBuffer (dbb) and Unsafe (u). The benchmarks were executed on a machine with an Intel Core i7-5820K CPU and a Java 1.8 runtime. The results show that the average cost of a single context switch is negligible, with an average switching time of only up to 0.1 µs. We exchange data using primitive function arguments only. Data structures are mapped and accessed as C structs in the native space. In Java, we access the native C structs using a helper class which utilizes the Unsafe library [29], as this is the fastest method in both spaces. These results influenced the important design decision to run native threads, attached once as daemon threads to the JVM, which call into Java, instead of Java threads calling native methods ( §7.2.3, §7.2.4). Furthermore, we avoid using any of the JNI provided helper functions where possible [26]. For example, attaching a thread to the JVM involves expensive operations like creating a new Java thread object and various state changes to the JVM environment. Avoiding them on every context switch is crucial for latency and performance. Lastly, we minimized the number of calls into the Java space by combining multiple tasks into a single cross-space call instead of issuing multiple calls. For inter-space communication, we rely heavily on buffers mapped to structs in native space and wrapper classes in Java (see above). This is highly application dependent and not always possible, but if applicable, it can improve the overall performance. We applied this technique of combining multiple tasks into a single cross-space call to sending and receiving of data to minimize latency and context switching overhead. The native send and receive threads implement the most latency critical logic in the native space, which is more than simply wrapping ibverbs functions to be exposed to Java ( §7.2.3 and §7.2.4). The counterpart to the native logic is implemented in Java ( §8). In the end, we are able to reduce sending and receiving of data to a single context switching call.
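The following C++ sketch illustrates the resulting pattern of a native thread attached once as a daemon which repeatedly calls into Java; the class, method and signature names are assumptions for illustration, not the actual Ibdxnet interface.

#include <jni.h>

// 'callbackObj' is assumed to be a JNI global reference to the Java-side
// transport object implementing the callback.
void sendThreadLoop(JavaVM* vm, jobject callbackObj) {
    JNIEnv* env = nullptr;
    // Attach exactly once; detaching and re-attaching per call is expensive.
    vm->AttachCurrentThreadAsDaemon(reinterpret_cast<void**>(&env), nullptr);

    // Cache class and method id once; repeated lookups are costly.
    jclass cls = env->GetObjectClass(callbackObj);
    jmethodID midGetNextData = env->GetMethodID(cls, "getNextDataToSend", "(JJ)I");

    for (;;) {
        // One context switch per iteration: the addresses of the native
        // structs (prevWorkResults, completionList) are passed as primitives.
        jint ret = env->CallIntMethod(callbackObj, midGetNextData,
                                      (jlong) 0 /* prevWorkResults addr */,
                                      (jlong) 0 /* completionList addr */);
        if (ret < 0) {
            break; // hypothetical shutdown signal
        }
        // ... post WRs for the returned data, poll completions ...
    }
}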
Overview: Ibdxnet and Java InfiniBand Transport

This section gives a brief top-down introduction of the full transport implementation. Figure 3 depicts the different components and layers involved when using InfiniBand with DXNet. The Java InfiniBand transport (IB transport) implements DXNet's transport interface ( §2.5) and uses JNI to connect to the native counterpart. Ibdxnet uses the native ibverbs library to access the hardware and provides a separate subsystem for connection management, sending and receiving data. Furthermore, it implements a set of functions for the Java Native Interface to connect to the Java implementation.

Ibdxnet: Native InfiniBand Subsystem with Transport Engine

This section elaborates on the implementation details of our native InfiniBand subsystem Ibdxnet which is used by the IB transport implementation in DXNet to utilize InfiniBand hardware. Ibdxnet provides the following key features: a basic foundation with re-usable components for implementations using different means of communication (e.g. messaging verbs, RDMA) or protocols, automatic connection management, and transport engines using different communication primitives. Figure 4 shows an outline of the different components involved.

Figure 4: Simplified architecture of Ibdxnet with the msgrc transport engine ( §8)

Ibdxnet provides an automatic connection and QP manager ( §7.1) which can be used by every transport engine. An interface for the connection manager and a connection object allow implementations for different transport engines. The engine msgrc (see Figure 4) uses the provided connection management and is based on RC messaging verbs. The engine msgud using UD messaging verbs is already implemented and will be discussed and extensively evaluated in a separate publication. A transport engine implements its own protocol to send/receive data and exposes a low-level interface. It creates an abstraction layer to hide direct interaction with the ibverbs library. Through the low-level interface, a transport implementation ( §8) provides data to send and forwards received data for further processing. For example, the low-level interface of the msgrc engine does not provide concurrency control or serialization mechanisms for messages. It accepts a stream of data in one or multiple buffers for sending and provides buffers creating a stream of data on receive ( §7.2). This engine is connected to the Java transport counterpart via JNI and uses the existing infrastructure of DXNet ( §8). Furthermore, we implemented a loopback-like standalone transport for debugging and measuring performance of the native engine only. The loopback transport creates a continuous stream of data for sending to one or multiple nodes and discards any data received. This ensures that sending and receiving introduce no additional overhead and allows measuring the performance of different low-level aspects of our implementation. This was used to determine the maximum possible throughput with Ibdxnet ( §9.2.4). In the following sections, we explain the implementation details of Ibdxnet's connection manager ( §7.1) and the messaging engine msgrc ( §7.2). Additionally, we describe best practices for using the ibverbs API and optimizations for optimal hardware utilization. Furthermore, we elaborate on how Ibdxnet connects to the IB transport in Java using JNI and how we implemented low overhead data exchange between Java and native space.

Dynamic, Scalable and Concurrent Connection Management

Efficient connection management for many nodes is a challenging task. For example, hundreds of application threads want to send data to a node but the connection is not yet established. Who creates the connection and synchronizes access of other threads? How to avoid synchronization overhead or blocking of threads that want to get an already established connection? How to manage the lifetime of a connection? These challenges are addressed by a dedicated connection manager in Ibdxnet. The connection manager handles all tasks required to establish and manage connections and hides them from the higher level application. For our higher level Java transport ( §8.1), complexity and latency of connection setup are reduced by avoiding context switching. First, we explain how nodes are identified, the contents of a connection and how online/offline nodes are discovered and handled. Next, we describe how existing connections are accessed and non-existing connections are created on the fly during application runtime. We explain in detail how a connection creation job is handled by the internal job manager and how connection data is exchanged with the remote in order to create a QP. At last, we briefly describe our previous attempt which failed to address the above challenges properly. A node is identified by a unique 16-bit integer nodeID (NID).
The NID is assigned to a node on start of the connection manager and cannot be changed during runtime. A connection consists of the source NID (the current node) and the destination NID (the target remote node).

Figure 5: Connection manager: creating non-existing connections (send thread: node 1 to node 0) and re-using existing connections (recv thread: node 1 to node 5)

Figure 6: Automatic connection creation with QP data exchange (node 3 to node 0); the job CR0 is added to the back of the queue to initiate this process and the dedicated thread processes the queue by removing jobs from the front, dispatching them according to their type

Depending on the transport implementation, an existing connection holds one or multiple ibverbs QPs, buffers and other data necessary to send and receive data using that connection. The connection manager provides a connection interface for the transport engines which allows them to implement their own type of connection. The following example describes a connection with a single QP only. Before a connection to a remote node can be established, the remote node must be discovered and known as available. The job type node discovery (further details about the job system follow in the next paragraphs) detects online/offline nodes using UDP sockets over Ethernet. On startup, a list of node hostnames is provided to the connection manager. The list can be extended by adding/removing entries during runtime for dynamic scaling. The discovery job tries to contact all non-discovered nodes of that list in regular intervals. When a node is discovered, it is removed from the list and marked as discovered. A connection can only be established with an already discovered node. If a connection to the node was already created and is lost (e.g. node crash), the NID is added back to the list in order to re-discover the node on the next iteration of the job. Node discovery is mandatory for InfiniBand in order to exchange QP information on connection creation. Figure 5 shows how existing connections are accessed and new connections are created when two threads, e.g. a send and a receive thread, are accessing the connection manager. The send thread wants to send new data to node 0 and the receive thread has received some data (e.g. from a SRQ). It has to forward the data for further processing which requires information stored in each connection (e.g. a queue for the incoming data). If the connection is already established (the receive thread gets the connection to node 5), a connection handle (H5) is returned to the calling thread. If no connection has been established so far (the send thread wants to get the connection to node 0), a job to create the specific connection (CR0 = create to node 0) is added to the internal job queue. The calling thread has to wait until the job is dispatched and the connection is created before being able to send the data. Figure 6 shows how connection creation is handled by the internal job thread. The job CR0 (yielded by the send thread from the previous example in figure 5) is pushed to the back of the job queue. The job queue might contain jobs which affect different connections, i.e. there is no dedicated per-connection queue. The dedicated connection manager thread processes the queue by removing a job from the front and dispatching it by type. There are three types of jobs: creating a connection to a node with a given NID, discovering other connection managers, and closing an existing connection to a node.
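A condensed C++ sketch of the get-or-create path and job queue described above (all types are simplified for illustration; the actual table and queue implementations are more elaborate):

#include <array>
#include <atomic>
#include <cstdint>
#include <deque>
#include <mutex>
#include <thread>

struct Connection;                  // QP(s), buffers, per-connection state

struct JobQueue {                   // simplified; a mutex stands in for the real queue
    std::mutex m;
    std::deque<uint16_t> q;
    void push(uint16_t nid) { std::lock_guard<std::mutex> l(m); q.push_back(nid); }
};

class ConnectionManager {
    std::array<std::atomic<Connection*>, 65536> m_table{}; // one slot per 16-bit NID
    JobQueue m_jobs;                                       // drained by the dedicated job thread
public:
    Connection* get(uint16_t nid) {
        Connection* con = m_table[nid].load(std::memory_order_acquire);
        if (con) {
            return con;             // fast path: connection already established
        }
        m_jobs.push(nid);           // enqueue a CR job for the job thread
        // The calling thread waits until the job thread created the connection.
        while (!(con = m_table[nid].load(std::memory_order_acquire))) {
            std::this_thread::yield();
        }
        return con;
    }
};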
To create a new connection with a remote node, the current node has to create an ibverbs QP with a SQ and RQ. Both queues are cross-connected with a remote QP (send with recv, recv with send), which requires data exchange using another communication channel (sockets over Ethernet). For the job CR0, the thread creates a new QP on the current node (3) and exchanges its QP data with the remote it wants to connect to (0) using UDP sockets. The remote (0) also creates a QP and uses the received connection information (of 3). It replies with its own QP data (0 to 3) to complete QP creation. The newly established connection is added to the connection table and is now accessible (by the send and receive thread). At last, we briefly describe the lessons learned from our first attempt at an automatic connection manager. It relied on active connection creation: the first thread calling the connection manager to acquire a connection creates it on the fly, if it does not exist. The calling thread executes the connection exchange, waits for the remote data and finishes connection creation. This requires coordination of all threads accessing the connection manager, either to create a new connection or to get an existing one. It introduced a very complex architecture with high synchronization overhead and latency, especially when many threads access the connection manager concurrently. Furthermore, it was error prone and difficult to debug. We encountered severe performance issues when creating connections with one hundred nodes in a very short time range (e.g. all-to-all communication). This resulted in connection creation times of up to half a minute. Even with a small setup of 4 to 8 nodes, creating a connection could take up to a few seconds if multiple threads tried to create the same or different connections simultaneously.
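For illustration, a reduced C++ sketch of the QP data exchanged over the side channel and of the subsequent state transitions; the attribute values are placeholders, only the mandatory RC attributes are shown, and the QP is assumed to have been transitioned to the INIT state already.

#include <infiniband/verbs.h>
#include <cstdint>

struct QPInfo {          // exchanged with the remote via UDP sockets
    uint16_t lid;        // local id of the HCA port
    uint32_t qpn;        // physical QP number
};

bool connect_qp(ibv_qp* qp, const QPInfo& remote, uint8_t port) {
    ibv_qp_attr attr{};
    attr.qp_state = IBV_QPS_RTR;        // ready to receive
    attr.path_mtu = IBV_MTU_4096;
    attr.dest_qp_num = remote.qpn;      // cross-connect with the remote QP
    attr.rq_psn = 0;
    attr.max_dest_rd_atomic = 1;
    attr.min_rnr_timer = 12;
    attr.ah_attr.dlid = remote.lid;
    attr.ah_attr.port_num = port;
    if (ibv_modify_qp(qp, &attr, IBV_QP_STATE | IBV_QP_AV | IBV_QP_PATH_MTU |
            IBV_QP_DEST_QPN | IBV_QP_RQ_PSN | IBV_QP_MAX_DEST_RD_ATOMIC |
            IBV_QP_MIN_RNR_TIMER)) {
        return false;
    }

    attr = {};
    attr.qp_state = IBV_QPS_RTS;        // ready to send
    attr.timeout = 14;
    attr.retry_cnt = 7;
    attr.rnr_retry = 7;
    attr.sq_psn = 0;
    attr.max_rd_atomic = 1;
    return ibv_modify_qp(qp, &attr, IBV_QP_STATE | IBV_QP_TIMEOUT |
            IBV_QP_RETRY_CNT | IBV_QP_RNR_RETRY | IBV_QP_SQ_PSN |
            IBV_QP_MAX_QP_RD_ATOMIC) == 0;
}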
msgrc: Transport Engine for Messaging using RC QPs

This section describes the msgrc transport engine. It uses reliable QPs to implement messaging using a dedicated send and a dedicated receive thread. The engine's interface allows a transport to provide a stream of data (to send) in the form of variable sized buffers and provides a stream of (received) data to a registered callback handler. This interface is rather low-level and the backend does not implement any means of serialization/deserialization for sending/receiving complex data structures. In combination with DXNet ( §2), the logic for these tasks resides in the Java space with DXNet and is shared with other transports such as the NIO Ethernet transport [9]. However, there are no restrictions on implementing these higher level components natively for the msgrc engine, if required. Further details on how the msgrc engine is connected with its Java transport counterpart are given in Section 8. The following subsections explain the general architecture and interface of the transport, sending and receiving of data using dedicated threads, and how various features of InfiniBand were used for optimal hardware utilization.

Architecture

This section explains the basic architecture as well as the low-level interface of the engine. Figure 4 includes the msgrc transport and can be referred to for an abstract representation of the most important components. The engine relies on our dedicated connection manager ( §7.1) for connection handling. We decided to use one dedicated thread for sending ( §7.2.3) and one for receiving ( §7.2.4) to benefit from the following advantages: a clear separation of responsibilities resulting in a less complex architecture, no scheduling of send/receive jobs as with a single thread for both, and higher concurrency because both threads can run on different CPU cores concurrently. The architecture allows us to create decoupled pipeline stages using lock-free queues and ring buffers. Thereby, we avoid complex and slow synchronization between the two threads and with hundreds of threads concurrently accessing shared resources. The low-level interface gives the transport fine-grained control over the engine. The interface for sending data is depicted in Listing 1 and the receive interface in Listing 2. Both interfaces create an abstraction hiding connection and QP management as well as how the hardware is driven with the ibverbs library. For sending data, the interface provides the callback GetNextDataToSend. This function is called by the send thread to pull new data to send from the transport (e.g. from the ORB, see §8.2). When called, an instance of each of the two structures PrevWorkPackageResults and CompletedWorkList is passed to the implementation of the callback as parameters: the first contains information about the previous call to the function and how much data was actually sent. If the SQ is full, no further data can be sent. Instead of introducing an additional callback, we combine getting the next data with returning information about the previous send call to reduce call overhead (important for JNI access). The second parameter contains data about completed work requests, i.e. data sent for the transport. This must be used by the transport to mark data as processed (e.g. moving the pointers of the ORB).

uint32_t Received(IncomingRingBuffer* ringBuffer);
void ReturnBuffer(IbMemReg* buffer);

Listing 2: Structure and callbacks of the msgrc engine's receive interface

If data is received, the receive thread calls the callback function Received with an instance of the IncomingRingBuffer structure as its parameter. This parameter holds a list of received buffers with their source NIDs. The transport can iterate this list and forward the buffers for further processing such as de-serialization. The transport has to return the number of elements processed and, thus, is able to control the amount of buffers it consumes. Once the received buffers are processed by the transport, they must be returned to the RecvBufferPool by calling ReturnRecvBuffer to allow re-using them for further receives.

Sending of Data

This section explains the data and control flow of the dedicated send thread which asynchronously drives the engine for sending data. Listing 3 depicts a simplified version of the contents of its main loop with the relevant aspects for this section. Details of the functions involved in the main flow are explained further below. The loop starts with getting a workPackage, the next data to send (line 1), using the engine's low-level interface ( §7.2.2). The instance prevWorkResults contains information about posted and non-posted data from the previous loop iteration. The instance completionList holds data about completed sends. Both instances are reset (lines 2-3) for re-use in the current iteration.
If the workPackage is valid (line 5), i.e. data to send is available, the nodeId from that package is used to get the connection to the send target from the connection manager (line 6). The connection and workPackage are passed to the SendData function (line 7). It processes the workPackage and returns how much data was processed, i.e. posted to the SQ of the connection, and how much data could not be processed. The latter happens if the SQ is full and must be kept track of to not lose any data. Afterwards, the thread returns the connection to the connection manager (line 8). At the end of a loop iteration, the thread polls the SCQ to remove any available WCs. We share the completion queue among all SQs/connections to avoid iterating over many connections for this task. The loop iteration ends and the thread starts from the beginning by calling GetNextDataToSend, providing the work results of the previous iteration. Data about WCs polled from the SCQ is stored in the completionList and forwarded via the interface (to the transport). If no data is available (line 5), lines 6-8 are skipped and the thread executes a completion poll only. This is important to ensure that any outstanding WCs are processed and passed to the transport (via the completionList when calling GetNextDataToSend). Otherwise, if no data is sent for a while, the transport will not receive any information about previously processed data. This leads to false assumptions about the available buffer space for sending data, e.g. assuming that data fits into the buffer when it actually does not, because the processed buffer space is not freed yet. In the following paragraphs, we further explain how the functions SendData and PollCompletions make optimal use of the ibverbs library and how this cooperates with the interleaved control flow of the main thread loop explained above. The SendData function is responsible for preparing and posting FC data and normal data (payload). FC data, which determines the number of flow control windows to confirm, is a small number (< 128) and, thus, does not require much space. We post it as part of the immediate data field, which can hold up to 4 bytes, with the WR instead of using a separate side channel, e.g. another QP. This avoids the overhead of posting to and polling another QP, which benefits overall performance, especially with many simultaneous connections. With FC data using 1 byte of the immediate data field, we use another 2 bytes to include the NID of the source node. This allows us to identify the source of an incoming WC on the remote. Otherwise, identifying the source would be very inconvenient: the only information provided with the incoming WC is the sender's unique physical QP id. This id would have to be mapped to the corresponding NID of the sender, introducing an indirection every time a package arrives, which hurts performance. For sending normal data (payload), the provided workPackage holds two pointers, front and back, which enclose a memory area of data to send. This memory area belongs to a buffer (e.g. the ORB) which was registered with the protection domain on start to allow access by the HCA. Figure 7 depicts an example with three (aggregated) ready to send messages in the ORB. We create a WR for the data to send and provide a single SGE which takes the pointers of the enclosed memory area. The HCA will directly read from that area without further copying of the data (zero copy).
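A small C++ sketch of the immediate data usage described above (FC data plus source NID packed into the 4-byte immediate field); the exact bit layout and byte order are assumptions for illustration.

#include <infiniband/verbs.h>
#include <arpa/inet.h>
#include <cstdint>

// Sender: encode source NID and FC windows into the 32-bit immediate field.
inline void set_imm(ibv_send_wr& wr, uint16_t sourceNid, uint8_t fcWindows) {
    wr.opcode = IBV_WR_SEND_WITH_IMM;
    // bits 31..16: source NID, bits 7..0: FC windows to confirm (assumed layout)
    wr.imm_data = htonl((uint32_t(sourceNid) << 16) | fcWindows);
}

// Receiver: recover NID and FC data from the work completion.
inline void get_imm(const ibv_wc& wc, uint16_t& nid, uint8_t& fc) {
    uint32_t imm = ntohl(wc.imm_data);
    nid = uint16_t(imm >> 16);
    fc = uint8_t(imm & 0xff);
}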
For buffer wrap-arounds, two SGEs are created and attached to one WR: one SGE for the data from the front pointer to the end of the buffer, another SGE for the data from the start of the buffer to the back pointer. If the size of the area to send (the sum of all SGEs) exceeds the maximum configurable receive size, the data to send must be sliced into multiple WRs. Multiple WRs are chained to a linked list to minimize call overhead by posting them to the SQ with a single call to ibv_post_send. This greatly increases performance compared to posting multiple standalone WRs with single calls. The number of SGEs of a WR can be 0 if no normal data is available to send but FC data is. To send FC data only, we write it to the immediate data field of a WR along with our source NID and post the WR without any SGEs attached, which results in a zero length data WR. The PollCompletions function calls ibv_poll_cq once to poll for any completions available on the SCQ. A SCQ is used instead of per-connection CQs to avoid iterating the CQs of all connections, which impacts performance. The send thread keeps track of the number of posted WRs and, thus, knows how many WCs are outstanding and expected to arrive on the SCQ. If none are expected, polling is skipped. ibv_poll_cq is called only once per PollCompletions call, and every call tries to poll WCs in batches to keep the call overhead minimal. Experiments have shown that most calls to ibv_poll_cq, even under high load, return empty, i.e. no WRs have completed. Thus, polling the SCQ until at least one completion is received is the wrong approach and greatly impacts overall performance: if the SQ of another connection is not full and there is data available to send, this method wastes CPU resources on busy polling instead of processing further data to send. The performance impact (resulting in low throughput) increases with the number of simultaneous connections being served. Furthermore, this increases the chance of SQs running empty because time is wasted on waiting for completions instead of keeping all SQs filled. Full SQs ensure that the HCA is kept busy, which is the key to optimal performance.
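The wrap-around case can be sketched as follows (C++, illustration only, not Ibdxnet code): depending on the pointer positions, one or two SGEs are attached to the WR; slicing into multiple WRs chained via the next field before a single ibv_post_send call follows the same pattern.

#include <infiniband/verbs.h>
#include <cstdint>

// Post the ORB area enclosed by front/back as one WR with one or two SGEs.
int post_wrapped(ibv_qp* qp, ibv_mr* mr, uint8_t* orb, uint32_t orbSize,
                 uint32_t front, uint32_t back) {
    ibv_sge sges[2];
    int numSge = 0;
    if (front <= back) {        // contiguous area
        sges[numSge++] = { (uint64_t)(uintptr_t)(orb + front), back - front, mr->lkey };
    } else {                    // wrap-around: end of buffer, then start of buffer
        sges[numSge++] = { (uint64_t)(uintptr_t)(orb + front), orbSize - front, mr->lkey };
        sges[numSge++] = { (uint64_t)(uintptr_t) orb, back, mr->lkey };
    }

    ibv_send_wr wr{};
    ibv_send_wr* bad = nullptr;
    wr.sg_list = sges;
    wr.num_sge = numSge;
    wr.opcode = IBV_WR_SEND_WITH_IMM;
    wr.send_flags = IBV_SEND_SIGNALED;
    // If the area exceeded the max. receive size, further WRs would be
    // chained here via wr.next before the single post call.
    return ibv_post_send(qp, &wr, &bad);
}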
Receiving of Data

Data is received using a SRQ and a SCQ instead of multiple receive and completion queues. This avoids iterating over all open connections and checking for data availability, which introduces overhead with an increasing number of simultaneous connections. Equally sized buffers for receiving data (configurable size and amount) are pooled and returned for re-use by the transport once processed ( §7.2.2). The receive loop starts by calling PollCompletions (line 1) to poll the SCQ for WCs. Before processing the returned WCs, the SRQ is refilled by calling Refill (line 4), if the SRQ is not filled yet. Next, if any WCs were polled previously, they are processed by calling ProcessCompletions (line 8). This step pushes them to the Incoming Ring Buffer (IRB), a temporary ring buffer, before dispatching them. Finally, if the IRB is not empty (line 11), the thread tries to forward the contents of the IRB by calling DispatchReceived via the interface to the transport ( §7.2.2). The following paragraphs elaborate on how PollCompletions, Refill, ProcessCompletions and DispatchReceived make optimal use of the ibverbs library and how this cooperates with the interleaved control flow of the main thread loop explained above. The PollCompletions function is very similar to the one explained in Section 7.2.3 already. WCs are polled in batches of at most the currently available IRB space and buffered before being processed. The Refill function adds new receive WRs to the SRQ if the SRQ is not completely filled and receive buffers from the receive buffer pool are available. Every WR consists of a configurable number of SGEs which make up the maximum receive size. This is also the limit for the size the send thread can post with a single WR (sum of the sizes of the SGE list). Using this method, the receive thread does not have to take care of any software slicing of received data because the HCA scatters one big chunk of sent data transparently to multiple (smaller) receive buffers on the receiver side. At last, Refill chains the WRs to a linked list which is posted with a single call to ibv_post_srq_recv for minimal overhead. If WCs are buffered from the previous call to PollCompletions, the ProcessReceived function iterates this list of WCs. For each WC of the list, it gets the source NID and FC data from the immediate data field. If the receive length of a WC is non-zero, the attached SGEs contain the received data scattered to the receive buffers of the SGE list. As the receive thread does not know or have any means of determining the size of the next incoming data, the challenge is optimal receive buffer usage with minimal internal fragmentation. Here, fragmentation describes the amount of receive buffers provided with a WR as SGEs in relation to the amount of received data written to that block of buffers: the less data written to the buffers, the higher the fragmentation. In the example shown in figure 7, the three aggregated and serialized messages are received in five buffers but the last buffer is not completely used. This fragmentation cannot be avoided but must be handled to avoid negative effects like empty buffer pools or low per-buffer utilization. Receive buffers/SGEs of a WR that do not contain any received data, because the amount of received data is less than the total size of the buffers of the SGE list, are pushed back to the buffer pool. All receive buffers of the SGE list that contain valid received data are pushed to the IRB (in the order they were received). Depending on the target application, the fragmentation degree can be lowered by configuring the receive buffer and pool sizes accordingly. Applications typically sending small messages perform well with small receive buffer sizes. However, throughput might decrease slightly for applications mainly sending big messages on small receive buffer sizes, requiring more WRs per send (data sliced into multiple WRs). If the IRB contains any elements, the DispatchReceived function tries to forward them to the transport via the Received callback ( §7.2.2). The callback returns the number of elements it consumed from the IRB and, thus, is allowed to consume none or up to what is available. The consumed buffers are returned asynchronously to the receive buffer pool by the transport, once it finished processing them.
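A minimal C++ sketch of the chained SRQ refill described above; buffer pool handling is omitted and the SGE lists are assumed to be prepared from pooled receive buffers.

#include <infiniband/verbs.h>
#include <cstdint>
#include <vector>

// Build 'numWrs' receive WRs, each with 'sgesPerWr' SGEs, chain them and
// post the whole list with a single ibv_post_srq_recv call.
bool refill_srq(ibv_srq* srq, std::vector<ibv_sge>& sgeLists,
                uint32_t sgesPerWr, uint32_t numWrs) {
    std::vector<ibv_recv_wr> wrs(numWrs);
    for (uint32_t i = 0; i < numWrs; i++) {
        wrs[i] = {};
        wrs[i].wr_id = i;                              // identifies the buffer block
        wrs[i].sg_list = &sgeLists[i * sgesPerWr];     // buffers from the pool
        wrs[i].num_sge = int(sgesPerWr);
        wrs[i].next = (i + 1 < numWrs) ? &wrs[i + 1] : nullptr; // chain the WRs
    }
    ibv_recv_wr* bad = nullptr;
    return ibv_post_srq_recv(srq, &wrs[0], &bad) == 0;
}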
Load Adaptive Thread Parking

The send and receive threads must be kept busy running their loops to send and receive data as fast as possible to ensure low latency. However, pure busy polling without any sleeping or yielding introduces high CPU load, permanently occupying two cores of the CPU. This is unnecessary during periods when the network is not used frequently. We do not want the send and receive threads to waste CPU resources and, thereby, decrease overall node performance. Experiments have shown that simply adding sleep or yield operations highly impacts network latency and throughput and introduces high fluctuations [8]. To solve this, we use a simple but efficient wait pattern we call load adaptive thread parking. After a defined amount of time (e.g. 100 ms) of polling with no data available, the thread enters a yield phase and calls yield on every loop iteration if no data is available. After another timeframe has passed (e.g. 1 sec), the thread enters a parking phase, calling sleep/park with a minimum value of 1 ns on every loop iteration, reducing CPU load significantly. The lowest value possible (1 ns) ensures that the scheduler of the operating system sends the thread sleeping for the shortest period of time possible. Once data is available, the current phase is interrupted and the timer is reset. This ensures busy looping for the next iterations, keeping latency low for successive messages and on high loads. For further details including evaluation results refer to our DXNet publication [8].
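A minimal C++ sketch of load adaptive thread parking, assuming the phase thresholds mentioned in the text (100 ms until yielding, 1 s until parking):

#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;

void adaptive_loop(bool (*workAvailable)()) {
    auto idleSince = Clock::now();
    for (;;) {
        if (workAvailable()) {
            idleSince = Clock::now();   // reset timer: keep busy polling
            // ... process data ...
            continue;
        }
        auto idle = Clock::now() - idleSince;
        if (idle > std::chrono::seconds(1)) {
            // parking phase: shortest sleep possible, the OS scheduler decides
            std::this_thread::sleep_for(std::chrono::nanoseconds(1));
        } else if (idle > std::chrono::milliseconds(100)) {
            std::this_thread::yield();  // yield phase
        }
        // otherwise: busy poll phase
    }
}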
IB Transport Implementation in DXNet (Java)

This section describes the transport implementation for DXNet in Java which utilizes the low-level transport engines, e.g. msgrc ( §7.2), provided by Ibdxnet ( §7). We describe the native interface which implements the low-level interface exposed by the engine ( §7.2.2) and how it is used in the DXNet IB transport for higher level connection management ( §8.1), sending serialized data from the ORB ( §8.2) and handling incoming receive buffers from remote nodes ( §8.3). Figure 8 depicts the involved components with the main aspects of their data and control flow, which are referred to in the following subsections. If an application wants to send one or multiple messages, it calls DXNet which serializes them into the ORB and signals the WriteInterestManager (WIM) about available data ( §2.2). The native send thread checks the WIM for data to send periodically and, if available, gets it from the ORB. Depending on the size, the data to send might be sliced into multiple elements which are posted to the SQ as one or multiple work requests ( §7.2.3). Received data on the recv queue is written to one or multiple buffers (depending on the amount of data) from a native buffer pool ( §7.2.4). Without further processing, the buffers are forwarded to the Java space and pushed to the IncomingBufferQueue (IBQ). DXNet's de-serialization processes the buffers in order and creates messages (Java objects) which are dispatched to pre-registered callbacks using dedicated message handler threads ( §2.3).

Connection Handling

To implement new transports, DXNet provides an interface to create transport-specific connection types. The DXNet core, which is shared across all transport implementations, manages the connections for the target application by automatically creating new connections on demand or closing connections if a configurable threshold is exceeded ( §2.1). For the IB transport implementation, the derived connection does not have to store further data or implement functionality. This is already stored and handled by the connection manager of Ibdxnet. It reduces overall architectural complexity by avoiding functionality split between Java and native space. Furthermore, it avoids context switching between Java and native code. Only the NID of either the target node to send to or the source node of the received data is exchanged between the Java and native space and vice versa. Thus, connection setup in the transport implementation in Java is limited to creating the Java connection object for DXNet's connection manager. Connection close and cleanup is similar, with an additional callback to the native library to signal a closed connection to Ibdxnet's connection management.

Dispatch of Ready-to-send Data

The engine msgrc runs dedicated threads for sending data. The send thread pulls new data from the transport via the GetNextDataToSend function of the low-level interface ( §7.2.2, §7.2.3). In order to make this and other callbacks (for connection management and receiving data) available to the IB transport, a lightweight JNI binding applying the aspects explained in Section 5 was created. The transport implements the GetNextDataToSend function exposed by the JNI binding. To get new data to send, the send thread calls the JNI binding which is implemented by the IB transport in Java. Next, we elaborate on the implementation of GetNextDataToSend in the IB transport, how the send thread gets data to send, and how the different states of the data (posted, not posted, send completed) are handled in combination with the existing ORB data structure. Application threads using DXNet and sending messages serialize them concurrently into the ORB ( §2.2). Once serialization completes, the thread signals the transport that there is ready to send (RTS) data in the ORB. For the IB transport, this signal adds a write interest to the dedicated Write Interest Manager (WIM). The WIM manages interest tokens using a lock-free list (based on a ring buffer) and a per-connection atomic counter for both RTS normal data from the ORB and FC data. Each type has a separate atomic counter but, if not explicitly stated, we refer to them as one for ease of comprehension. The list contains the nodeIDs of the connections that have RTS data in the order they were added. The atomic counter is used to keep track of the number of interests signaled, i.e. the number of times the callback was triggered for the selected NID. Figure 9 depicts this situation with two threads (T1 and T2) which finished serializing data to the ORBs of two independent connections (3 and 2). The table with atomic counters keeps track of the number of signaled interests for RTS data/messages per connection. By calling GetNextDataToSend, the send thread from Ibdxnet checks the lock-free list which contains the nodeIDs of the connections with at least one write interest available. The nodeIDs are added to the list in order, but only if not already contained. This is detected by checking if the atomic counter returned 0 after a fetch-and-add operation. This mechanism ensures that data from many connections is processed in a round-robin fashion. Furthermore, avoiding duplicates in the queue sets an upper bound for the memory requirement of sizeof(nodeID) * maxNumConnections. Otherwise, the queue could grow depending on the load and number of active connections. If the queue of the WIM is empty, the send thread aborts and returns to the native space. The send thread uses the NID it removed from the queue to get and reset the number of interests of the corresponding atomic counter.
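The interest bookkeeping can be sketched as follows (C++, simplified; the actual list is a lock-free ring buffer in Java): a fetch-and-add returning 0 means the connection is not queued yet, so its NID is added exactly once, and the send thread later swaps the counter back to 0 when consuming the interests.

#include <atomic>
#include <cstdint>
#include <deque>
#include <mutex>

struct WriteInterestManager {
    std::atomic<uint32_t> interests[65536] = {}; // one counter per NID
    std::mutex m;                                // stands in for the lock-free list
    std::deque<uint16_t> queue;                  // NIDs with pending interests

    // Called by application threads after serializing a message.
    void addInterest(uint16_t nid) {
        if (interests[nid].fetch_add(1) == 0) {
            std::lock_guard<std::mutex> l(m);
            queue.push_back(nid);                // first interest: enqueue once
        }
    }

    // Called by the send thread: consume all interests of the given NID.
    uint32_t consume(uint16_t nid) {
        return interests[nid].exchange(0);
    }
};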
If there are any interests available for FC data, the send thread processes them by getting the FC from the connection and reading, but not yet removing, the stored FC data. For interests concerning normal data, the send thread gets the ORB from the connection and reads the current front and back pointers. The pointers of the ORB are not modified, only read (details below). With this data, along with the NID of the connection, the send thread returns to the native space for processing ( §7.2.3). Every time the send thread returns to the Java space to get more data to send, it carries the parameters prevWorkResults, which contains data about the previous send operation, and completionList, which contains data about completed WRs, i.e. data send confirmations ( §7.2.3). For performance reasons, this data resides in native memory as structs and is mapped and accessed using DirectByteBuffers ( §5). The asynchronous workflow used to send and receive data by posting WRs and polling WCs must be adopted by updating the ORB and FC accordingly. Depending on the fill level of the SQ, the send thread might not be able to post all normal data or FC data it retrieved in the previous iteration. The prevWorkResults parameter contains this information about how much normal and FC data was and was not processed. This information must be preserved for the next send operation to avoid sending data multiple times. For the ORB however, we cannot simply move the front pointer because this would free up memory which is not yet confirmed to be sent. Thus, we introduce a second front pointer, front posted, which is only known to and modified by the send thread and allows it to keep track of already posted data.

Figure 10: Extended outgoing ring buffer used by the IB transport

Figure 10 depicts the most important aspects of the enhanced ORB used for the IB transport. In total, this creates three virtual areas of memory designated to the following states:

• Data posted but not confirmed: front to front posted
• Data RTS and not posted: front posted to back
• Free memory for send threads to serialize to: back to front

Using the parameter prevWorkResults, the front posted pointer is moved by the amount of data posted. Any non-processed data remains unprocessed (front posted is not moved to cover the entire area of RTS data). For data provided with the parameter completionList, the front pointer is updated according to the number of bytes now confirmed to be sent. A similar but less complex approach is applied to updating FC.
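A reduced C++ sketch of the three-pointer ORB logic described above; a power-of-two buffer size is assumed for cheap wrapping, and concurrency control is omitted.

#include <cstdint>

struct OutgoingRingBuffer {
    uint32_t front = 0;       // start of posted-but-unconfirmed data
    uint32_t frontPosted = 0; // end of posted data / start of RTS data
    uint32_t back = 0;        // end of RTS data / start of free space
    uint32_t size;            // capacity, assumed to be a power of two

    // prevWorkResults: amount actually posted in the previous iteration.
    // Only the send thread moves this pointer.
    void onPosted(uint32_t bytesPosted) {
        frontPosted = (frontPosted + bytesPosted) & (size - 1);
    }

    // completionList: bytes now confirmed sent; moving front frees buffer
    // space so application threads can serialize further messages.
    void onConfirmed(uint32_t bytesConfirmed) {
        front = (front + bytesConfirmed) & (size - 1);
    }
};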
Process Incoming Buffers

The dedicated receive thread of msgrc pushes received data to the low-level interface. Analogous to how RTS data is pulled from the IB transport via the JNI binding, the receive thread uses a received function provided by the binding to push the received buffers to the IB transport in Java space. All received buffers are stored as a batch in the recvPackage data structure ( §7.2.2) to minimize context switching overhead. For performance reasons, this data resides in native memory as structs and is mapped and accessed using DirectByteBuffers ( §5). The receive thread iterates the package in Java space, dispatches received FC data to each connection and pushes the received buffers (including the connection of the source node) to the IBQ ( §2.3). The buffers are handled and processed asynchronously by the MessageCreationCoordinator and one or multiple MessageHandlers of the DXNet core (all of them Java threads). Once the buffers are processed (de-serializing their contents), the Java threads return them asynchronously to the receive buffer pool of the transport engine ( §7.2.4).

Evaluation

For better readability, we refer to DXNet with the IB transport, Ibdxnet and the msgrc engine simply as DXNet from here onwards. We implemented commonly used microbenchmarks to compare DXNet to two MPI implementations supporting InfiniBand: MVAPICH2 and FastMPJ. We decided to compare against two MPI implementations for the following reasons: to the best of our knowledge, there is no other system available that offers all features of DXNet, and big data applications implementing their dedicated network stack do not offer it as a separate application/library like DXNet does. MPI can be used to partially cover some features of DXNet but not all ( §3). We are aware that MPI targets a different application domain, mainly HPC, whereas DXNet targets big data. However, MPI has already been used in big data applications as well, and several aspects related to the network stack and the technologies overlap in both application domains. Bandwidth with two nodes is compared using typical uni- and bi-directional benchmarks. We also compared scalability using an all-to-all benchmark (worst-case scenario) with up to 8 nodes. Latency is compared by measuring the RTT with a request-response communication pattern. These benchmarks are executed single threaded to compare all three systems. Furthermore, we compared how DXNet and MVAPICH2 perform in a multi-threaded environment, which is typical for big data but not HPC applications. However, we can only compare three benchmarks here: multi-threaded latency is not possible since it would require MVAPICH2 to implement additional infrastructure to store and map requests to responses and to dynamically dispatch callbacks for incoming data to multiple receive threads (similar to DXNet). MVAPICH2 does not provide such a processing pipeline. FastMPJ cannot be compared at all here because it only supports single threaded environments. Table 1 summarizes the systems and benchmarks executed. All benchmarks were executed on up to 8 nodes of our private cluster, each with a single socket Intel Xeon E5-1650 v3 CPU with 6 cores running at 3.50 GHz clock speed and 64 GB RAM. The nodes run Ubuntu 16.04 with kernel version 4.4.0-57. All nodes are equipped with a Mellanox MT27500 HCA, connected with 56 Gbps links to a single Mellanox SX6015 18 port switch. For Java applications, we used the Oracle JVM version 1.8.0_151.

Benchmarks

The osu benchmarks included with MVAPICH2 implement typical micro benchmarks to measure uni- and bi-directional bandwidth and uni-directional latency, reflecting basic usage of any network stack for point-to-point communication. osu_latency is used as a foundation and extended with recording of all RTTs to determine the 95th, 99th and 99.9th percentiles after execution. The latency measured is the full RTT from when the source sends a request to the destination up to when the corresponding response is received by the source. For evaluating throughput, the benchmarks osu_bw and osu_bibw were combined into a single benchmark and extended to enable all-to-all bi-directional execution with more than two nodes. We consider this a relevant benchmark to show whether a system is capable of handling multiple connections under high load. This is a common situation in big data applications as well as backend storage systems [11].
In the all-to-all benchmark, every node receives from all other nodes and sends messages to all other nodes in a round-robin fashion. The bi-directional and all-to-all results presented are the aggregated send throughputs of all participating nodes. We added options to support multi-threaded sending and receiving using a configurable number of send and receive threads. As the per-processor core count increases, the multi-threading aspect becomes more and more important. Furthermore, our target application domain big data relies heavily on multi-threaded environments. For the evaluation of FastMPJ, we ported the osu benchmarks to Java. The benchmarks for evaluating a multi-threaded MPI process were omitted because FastMPJ does not support multi-threaded processes. DXNet comes with its own benchmarks already implemented which are comparable to the osu benchmarks. The osu benchmarks use a configurable parameter window_size (WS) which denotes the number of messages sent in a single batch. Since MPI does not support implicit message aggregation like DXNet, we executed all MPI experiments with increasing WS to determine bandwidth peaks and saturation under optimal conditions and to ensure a fair comparison to DXNet's built-in aggregation. No MPI collectives are required for the benchmarks and, thus, none are evaluated. All benchmarks are executed three times and their variance is displayed using error bars. Throughputs are specified in GB/s, latencies/RTTs in µs and message rates in mmps (million messages per second). All throughput benchmarks send 100 million messages and all latency benchmarks 10 million messages. The total number of messages is incrementally halved starting at 4 KB message size to avoid unnecessarily long benchmark runs. All throughputs measured are based on the total amount of sent payload bytes. This does not include any overhead like message headers or envelopes that are required by the systems for message identification or routing. Furthermore, we included the results of the InfiniBand perf tools ib_write_bw and ib_write_lat as baselines for all end-to-end type benchmarks. These simple perf tools cannot be compared directly to the complex systems evaluated, but these baselines show the best possible network performance (without any overhead of the evaluated system) and allow rough comparisons of the systems across multiple plots. We chose parameters that reflect the configuration values of DXNet as closely as possible (but still allow comparisons to FastMPJ and MVAPICH2): receive queue size 2000 and send queue size 20 for both bandwidth and latency measurements; 100,000,000 messages for bandwidth and 10,000,000 for latency.

DXNet with Ibdxnet Transport

We configured DXNet using the parameters depicted in Table 2. The configuration values were determined with various debugging statistics and experiments, and are currently considered optimal. For comparing single threaded performance, the number of application threads and message handlers (referred to as MH) is limited to one each to allow comparison to FastMPJ and MVAPICH2. DXNet's multi-threaded architecture does not allow combining the logic of the application send thread and a message handler into a single thread. Thus, DXNet's "single threaded" benchmarks are always executed with one dedicated send and one dedicated receive thread. The following subsections present the results of the various benchmarks.
First, we present the results of all single threaded benchmarks with one send thread: uni- and bi-directional throughput, uni-directional latency and all-to-all with increasing node count. Afterwards, the results of the same four benchmarks are presented with multiple send threads.

Uni-directional Throughput

The results of the uni-directional benchmark are depicted in figure 11. Considering one MH, DXNet's throughput peaks at 5.9 GB/s at a message size of 16 KB. For larger messages (32 KB to 1 MB), one MH is not sufficient to de-serialize and dispatch all incoming messages fast enough and throughput drops to a peak bandwidth of 5.4 GB/s. However, this can be resolved by simply using two MHs. Then, DXNet's throughput peaks and saturates at 5.9 GB/s with a message size of just 4 KB and stays saturated up to 1 MB. Message sizes smaller than 4 KB also benefit significantly from the shorter receive processing times when utilizing two MHs. Further MHs can still improve performance, but only slightly for a few message sizes. For small messages up to 64 bytes, DXNet achieves peak message rates. Compared to the baseline performance of ib_send_bw, DXNet's peak performance is approx. 0.5 to 1.0 mmps less. With increasing message size, this gap closes and DXNet even surpasses the baseline for 1 KB to 32 KB message sizes when using multiple threads. DXNet peaks close to the baseline's peak performance of 6.0 GB/s. The results for small message sizes fluctuate independently of the number of MHs. This can be observed on all other benchmarks with DXNet measuring message/payload throughput as well. It is a common issue which can also be observed when running high load throughput benchmarks using the bare ibverbs API. This benchmark shows that DXNet is capable of handling a vast amount of small messages efficiently. The application send thread and, thus, the user does not have to bother with aggregating messages explicitly because DXNet handles this transparently and efficiently. The overall performance benefits from multiple message handlers increasing receive throughput. Large messages do impact performance with one MH because the de-serialization of data consumes most of the processing time during receive. However, simply adding at least one more MH solves this issue and further increases performance. The peak aggregated message rate for small messages up to 64 bytes varies from approx. 6 to 6.9 mmps with one MH. Using more MHs cannot improve performance significantly for this benchmark. Due to the multi-threaded and highly pipelined architecture of DXNet, these variations cannot be avoided, especially when exclusively handling many small messages.

Bi-directional Throughput

Compared to the baseline performance of ib_send_bw, there is still room for improvement in DXNet's performance for small message sizes (up to 2.5 mmps difference). For medium message sizes, ib_send_bw yields slightly higher throughput up to 1 KB message size, but DXNet surpasses ib_send_bw for 1 KB to 16 KB message sizes. DXNet's peak performance is approx. 1.1 GB/s less than ib_send_bw's (11.5 GB/s). Overall, this benchmark shows that DXNet can deliver great performance especially for small messages, similar to the uni-directional benchmark ( §9.2.1).

Uni-directional Latency

Figure 13: DXNet: 2 nodes, uni-directional RTT and message rate with one application send thread, increasing message size

Figure 13 depicts the average RTTs as well as the 95th, 99th and 99.9th percentiles of the uni-directional latency benchmark with one send thread and one MH.
For message sizes up to 512 bytes, DXNet achieves an avg. RTT of 7.8 to 8.3 µs, a 95th percentile of 8.5 to 8.9 µs, a 99th percentile of 8.9 to 9.2 µs and a 99.9th percentile of 11.8 to 12.7 µs. This results in a message rate of approx. 0.1 mmps. As expected, starting with 1 KB message size, latency increases with increasing message size. The RTT can be broken down into three parts: DXNet, Ibdxnet and hardware processing. Taking the lowest avg. of 7.8 µs, DXNet requires approx. 3.5 µs of the total RTT (the full breakdown is published in our other publication [8]) and the hardware approx. 2.0 µs (assuming an avg. one way latency of 1 µs for the used hardware). Message de- and serialization as well as message object creation and dispatching are part of DXNet. For Ibdxnet, this results in approx. 2.3 µs processing time which includes JNI context switching as well as several pipeline stages explained in the earlier sections. Compared to the baseline performance of ib_send_lat, DXNet's latency is significantly higher. Obviously, additional latency cannot be avoided with such a long and complex processing pipeline. Considering the breakdown mentioned above, the native part Ibdxnet, which calls ibverbs to send and receive data, is to some degree comparable to the minimal perf tool ib_send_lat. With a total of 2.3 µs (of the full pipeline's 7.8 µs), this share of the RTT is just slightly higher than ib_send_lat's 1.8 µs. But Ibdxnet already includes various data structures for state handling and buffer scheduling ( §7.2.3, §7.2.4) which ib_send_lat does not. Buffers for sending data are re-used instantly and received data is returned to the buffer pool once processed.

All-to-all Throughput with up to 8 Nodes

With two nodes, the aggregated throughput reaches [...]4 GB/s. Incrementally adding two nodes, throughput increases by 8.5 GB/s (for 2 to 4 nodes), by 7.1 GB/s (for 4 to 6 nodes) and by 6.4 GB/s (for 6 to 8 nodes). One would expect approx. equally large throughput increments but the gain noticeably decreases with every two nodes added. We tried different configuration parameters for DXNet and ibverbs like different MTU sizes, SGE counts, receive buffer sizes, WRs per SQ/SRQ or CQ sizes. No combination of settings allowed us to improve this situation. We assume that the all-to-all communication pattern puts high stress on the HCA which, at some point, cannot keep up with processing outstanding requests. To rule out software issues with DXNet first, we implemented a low-level "loopback" like test which uses the native part of Ibdxnet only. The loopback test does not involve any dynamic message posting when sending data or data processing when receiving. Instead, a buffer equal to the size of the ORB is processed by Ibdxnet's send thread on every iteration and posted to every participating SQ. This ensures that all SQs are filled and are quickly refilled once at least one WR was processed. When receiving data on the SRQ, all buffers received are directly put back into the pool without processing, and the SRQ is refilled. This ensures that no additional processing overhead is added for sending and receiving data. Thus, Ibdxnet's loopback test comes close to a perftool-like benchmark. We executed the benchmark with 2, 4, 6 and 8 nodes which yielded aggregated throughputs of 11.7 GB/s, 21.7 GB/s, 28.3 GB/s and 34.0 GB/s. These results are very close to the performance of the full DXNet stack but do not rule out all software related issues yet.
The overall aggregated bandwidth could still somehow be limited by Ibdxnet. Thus, we executed another benchmark which first executes all-to-all communication with up to 8 nodes and then, once bandwidth is saturated, switches to a ring formation for communication without restarting the benchmark (every node sends to its successor determined by NID only). Once the nodes switch the communication pattern during execution, the per node aggregated bandwidth increases very quickly and reaches a maximum aggregated bandwidth of approx. (11.7/2 × num_nodes) GB/s, independent of the number of nodes used. This rules out total bandwidth limitations of software and hardware. Furthermore, we can now rule out any performance issues in DXNet or even ibverbs with connection management (e.g. too many QPs allocated). This leads to the assumption that the HCA cannot keep up with processing outstanding WRs when SQs are under high load (always filled with WRs). With more than 3 SQs per node, the total bandwidth drops noticeably. Similar results with other systems further support this assumption ( §9.3.4 and §9.4.4).

Uni-directional Throughput Multi-threaded

Figure 15 shows the uni-directional benchmark executed with 4 MHs and 1 to 16 send threads. For 1 to 4 send threads, throughput saturates at 5.9 GB/s at either 4 KB or 8 KB messages. For 256 byte to 8 KB messages, using one thread yields better throughput than two or sometimes four threads. However, running the benchmark with 8 and 16 send threads increases overall throughput significantly for all messages greater than 32 bytes, with saturation starting at 2 KB message size. DXNet's pipeline benefits from the many threads posting messages to the ORB concurrently. This results in greater aggregation of multiple messages and allows higher buffer utilization for the underlying transport. DXNet also increases message throughput for small message sizes up to 512 bytes, from approx. 4.0 mmps up to 6.7 mmps for 16 send threads. Again, performance is slightly worse with two and four threads compared to a single thread. Furthermore, DXNet even surpasses the baseline performance of ib_send_bw when using multiple send threads. However, the peak performance cannot be improved further, which shows the current limit of DXNet for this benchmark and the hardware used.

Bi-directional Throughput Multi-threaded

Figure 16 shows the bi-directional benchmark executed with 4 MHs and 1 to 16 send threads. With more than one send thread, the aggregated throughput peaks at approx. 10.4 and 10.7 GB/s with message sizes of 2 and 4 KB. DXNet delivers higher throughputs for all medium and small messages with increasing send thread count. The baseline performance of ib_send_bw is reached for small message sizes and even surpassed for medium sized messages up to 16 KB. The peak throughput is not reached, showing DXNet's current limit with the used hardware. The overall performance with 8 and 16 send threads does not differ noticeably, which indicates saturation of DXNet's processing pipeline. For small messages (less than 512 bytes), the message rates also increase with increasing send thread count. Again, saturation starts with 8 send threads at a message rate of approx. 8.6 to 10.2 mmps.
Uni-directional Throughput Multi-threaded

Figure 15: DXNet: 2 nodes, uni-directional throughput and message rate with multiple application send threads, increasing message size and 4 message handlers

Figure 15 shows the uni-directional benchmark executed with 4 MHs and 1 to 16 send threads. For 1 to 4 send threads, throughput saturates at 5.9 GB/s at either 4 kb or 8 kb messages. For 256 byte to 8 kb messages, using one thread yields better throughput than two or sometimes four threads. However, running the benchmark with 8 and 16 send threads increases overall throughput for all messages greater than 32 byte significantly, with saturation starting at 2 kb message size. DXNet's pipeline benefits from the many threads posting messages to the ORB concurrently. This results in greater aggregation of multiple messages and allows higher buffer utilization for the underlying transport. DXNet also increases message throughput on small message sizes up to 512 byte, from approx. 4.0 mmps up to 6.7 mmps for 16 send threads. Again, performance is slightly worse with two and four threads compared to a single thread. Furthermore, DXNet even surpasses the baseline performance of ib_send_bw when using multiple send threads. However, the peak performance cannot be improved further which shows the current limit of DXNet for this benchmark and the hardware used.

Bi-directional Throughput Multi-threaded

Figure 16: DXNet: 2 nodes, bi-directional throughput and message rate with multiple application send threads, increasing message size and 4 message handlers

Figure 16 shows the bi-directional benchmark executed with 4 MHs and 1 to 16 send threads. With more than one send thread, the aggregated throughput peaks at approx. 10.4 and 10.7 GB/s with message sizes of 2 and 4 kb. DXNet delivers higher throughputs for all medium and small messages with increasing send thread count. The baseline performance of ib_send_bw is reached on small message sizes and even surpassed with medium sized messages up to 16 kb. The peak throughput is not reached, showing DXNet's current limit with the hardware used. The overall performance with 8 and 16 send threads doesn't differ noticeably which indicates saturation of DXNet's processing pipeline. For small messages (less than 512 byte), the message rates also increase with increasing send thread count. Again, saturation starts with 8 send threads with a message rate of approx. 8.6 to 10.2 mmps.

DXNet is capable of handling a multi-threaded environment under high load with CPU over-provisioning and still delivers high throughput. Especially for small messages, DXNet's pipeline even benefits from the highly concurrent activity by aggregating many messages. This results in higher buffer utilization and, for the user, higher overall throughput. When DXNet's internal threads (MHs) and the application's send threads exceed the core count of the CPU, DXNet switches to different parking strategies for the different thread types which slightly increase latency but greatly reduce overall CPU load (§7.2.5).

Uni-directional Latency Multi-threaded

Figure 18: DXNet: 2 nodes, uni-directional 95th, 99th and 99.9th percentile RTT and message rate with multiple application send threads, increasing message size and 4 message handlers

The message rate can be increased up to 0.33 mmps with up to 4 send threads as, practically, every send thread can use a free MH out of the 4 available. With 8 and 16 send threads, the MHs on the remote must be shared and DXNet's over-provisioning is active which reduces the overall throughput. The percentiles shown in figure 18 reflect this situation very well and increase noticeably. With a single thread, as already discussed in §9.2.3, the difference between the avg. (7.8 to 8.3 µs) and the 99.9th percentile (11.8 to 12.7 µs) RTT for message sizes less than 1 kb is approx. 4 to 5 µs. When doubling the send thread count, the 99.9th percentiles roughly double as well. When over-provisioning the CPU, we cannot avoid the higher than usual RTTs caused by the increasing amount of messages getting posted.

9.2.8 All-to-all Throughput with up to 8 Nodes Multi-threaded

Figure 19 shows the results of the all-to-all benchmark with up to 8 nodes, 16 [...] These results show that DXNet delivers high throughputs and message rates under high loads with increasing node and thread count. Small messages profit significantly through better aggregation and buffer utilization.

Summary Results

This section briefly summarizes the most important results and numbers of the previous benchmarks. All values are considered "up to" and show the possible peak performance in the given benchmark. Single-threaded:

• Uni-directional throughput, one MH: saturation with 16 kb messages, peak throughput at 5.9 GB/s [...]

FastMPJ

Figure 20: FastMPJ: 2 nodes, uni-directional throughput and message rate with increasing message and window size

This section describes the results of the benchmarks executed with FastMPJ and compares them to the results of DXNet presented in the previous sections. We used FastMPJ 1.0_7 with the device ibvdev to run the benchmarks on InfiniBand hardware. The osu benchmarks of MVAPICH2 were ported to Java (§9.1) and used for all following experiments. Since FastMPJ does not support multithreading in a single process, all benchmarks were executed single-threaded and compared to the single-threaded results of DXNet, only.

Uni-directional Throughput

Figure 20 shows the results of executing the uni-directional benchmark with two nodes with increasing message size. Furthermore, the benchmark was executed with increasing WS to ensure bandwidth saturation. As expected, throughput increases with increasing message size and bandwidth saturation starts at a medium message size of 64 kb with approx. 5.7 GB/s. The actual peak throughput is reached with large 512 kb messages for a WS of 64 with 5.9 GB/s. For small message sizes up to 512 byte and independent of the WS, FastMPJ achieves a message rate of approx. 1.0 mmps.
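The window size (WS) used by these osu-style benchmarks denotes the number of non-blocking sends posted back-to-back before waiting for their completion, which is what "explicit aggregation" refers to here. A sketch of the pattern (an illustration, not the original benchmark code):

```cpp
#include <mpi.h>

#include <vector>

// Post WS non-blocking sends back-to-back, then complete them together.
void send_window(char* buf, int msg_size, int window_size, int dest) {
    std::vector<MPI_Request> reqs(window_size);
    for (int i = 0; i < window_size; i++) {
        MPI_Isend(buf, msg_size, MPI_BYTE, dest, /*tag=*/0,
                  MPI_COMM_WORLD, &reqs[i]);
    }
    MPI_Waitall(window_size, reqs.data(), MPI_STATUSES_IGNORE);
}
```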
Furthermore, the results show that the WS doesn't matter for message sizes up to 64 kb. For 128 kb to 1 MB, FastMPJ profits from explicit aggregation with increasing WS. This indicates that ibvdev might include some message aggregation mechanism. Compared to the baseline performance of ib_send_bw, FastMPJ's performance is always inferior, with a peak performance of 5.9 GB/s close to ib_send_bw's 6.0 GB/s. Compared to the results of DXNet (§9.2.1), DXNet's throughput saturates and peaks earlier at a message size of 16 kb with 5.9 GB/s. However, if using one MH, DXNet's throughput drops for larger messages down to 5.4 GB/s due to increased message processing time (de-serialization). However, such a drop can be resolved by using two MHs.

Bi-directional Throughput

The results of the bi-directional benchmark are depicted in figure 21. Again, throughput increases with increasing message size, peaking at 10.8 GB/s with WS 2 and large 512 kb messages. However, when handling messages of 128 kb and greater, throughput peaks at approx. 10.2 GB/s for the WSs 4 to 32 and saturation varies depending on the WS. For WSs 4 to 32, throughput is saturated with 64 kb messages, for WSs 1 and 2 at 512 kb. Starting at 128 kb message size, WSs of 1 and 2 achieve slightly better results than the greater WSs. Especially WS 64 drops significantly with message sizes of 128 kb and greater. However, for message sizes of 64 kb to 512 kb, FastMPJ profits from explicit aggregation. Compared to the uni-directional results (§9.3.1), FastMPJ does profit to some degree from explicit aggregation for small messages with 1 to 128 bytes. WSs 1 to 16 allow higher message throughputs with WS 16 as an optimal value, peaking at approx. 2.4 mmps for 1 to 128 byte messages. Greater WSs degrade message throughput significantly. However, this does not apply to message sizes of 256 bytes where greater explicit aggregation does always increase message throughput. Compared to the baseline performance of ib_send_bw, FastMPJ's performance is again always inferior, with a difference in peak performance of 0.7 GB/s (10.8 GB/s to 11.5 GB/s). When comparing to DXNet's results (§9.2.2), the throughputs are nearly equal with 10.7 GB/s, also at 512 kb message size.

Uni-directional Latency

The results of the latency benchmark are depicted in figure 22. Compared to the baseline performance of ib_send_lat, FastMPJ's average RTT comes close to its 1.8 µs and closes that gap slightly further starting with 256 byte message size. Comparing the avg. RTT and 95th percentile to DXNet's results (§9.2.3), FastMPJ outperforms DXNet with an up to four times lower RTT. This is also reflected by the message rate of 0.41 mmps for FastMPJ and 0.1 mmps for DXNet. The breakdown given in Section 9.2.3 explains the rather high RTTs and the amount of processing time spent by DXNet on major sections of the pipeline. However, even though DXNet's avg. RTT for message sizes up to 512 byte is higher than FastMPJ's, DXNet achieves lower 99th (8.9 to 9.2 µs) and 99.9th percentiles (11.8 to 12.7 µs) than FastMPJ.

Summary Results

This section briefly summarizes the most important results and key numbers of the previous benchmarks. All values are considered "up to", show the possible peak performance in the given benchmark and are single-threaded, only. All results benefit from explicit aggregation using the WS.
• Uni-directional throughput: saturation at 64 kb message size with 5.7 GB/s; peak throughput at 512 kb message size with 5.9 GB/s

Compared to FastMPJ's single-threaded results, DXNet outperforms FastMPJ on small messages with an up to 4 times higher message rate on both uni- and bi-directional benchmarks. However, FastMPJ achieves a lower average and 95th percentile latency on the uni-directional latency benchmark. But, even with a more complicated and dynamic pipeline, DXNet achieves lower 99th and 99.9th percentiles than FastMPJ, demonstrating high stability. On all-to-all communication with up to 8 nodes, DXNet reaches similar throughputs to FastMPJ's for large messages but outperforms FastMPJ's message rate by up to three times for small messages. DXNet is always better for small messages.

MVAPICH2

This section describes the results of the benchmarks executed with MVAPICH2 and compares them to the results of DXNet. All osu benchmarks (§9.1) were executed with MVAPICH2-2.3. Since MVAPICH2 supports MPI calls with multiple threads of the same process, some benchmarks were executed single- and multi-threaded. We set the following environment variables for optimal performance and comparability:

• MV2_DEFAULT_MAX_SEND_WQE=128
• MV2_DEFAULT_MAX_RECV_WQE=128
• MV2_SRQ_SIZE=1024
• MV2_USE_SRQ=1
• MV2_ENABLE_AFFINITY=1

Additionally, for the multi-threaded benchmarks, the following environment variables were set:

• MV2_CPU_BINDING_POLICY=hybrid
• MV2_THREADS_PER_PROCESS=X (where X equals the number of threads we used when executing the benchmark)
• MV2_HYBRID_BINDING_POLICY=linear

Uni-directional Throughput

The results of the uni-directional single-threaded benchmark are depicted in figure 26. Compared to the baseline performance of ib_send_bw, MVAPICH2's peak performance is approx. 1.0 mmps less for small messages. With increasing message size, on a WS of 64, the performance comes close to the baseline and even exceeds it for 2 kb to 8 kb messages. MVAPICH2 peaks very close to the baseline's peak performance of 6.0 GB/s. DXNet achieves very similar results (§9.2.1) compared to MVAPICH2 but without relying on explicit aggregation. DXNet's throughput saturates and peaks earlier at a message size of 16 kb with 5.9 GB/s. However, if using one MH, throughput drops for larger messages down to 5.4 GB/s due to increased message processing time (de-serialization). As already explained in Section 9.3.1, this can be resolved by using two MHs. For small messages of up to 64 bytes, DXNet achieves an equal to slightly higher message rate of 4.0 to 4.5 mmps.

Bi-directional Throughput

Compared to the baseline performance of ib_send_bw, MVAPICH2's peak performance for small messages is approx. half of ib_send_bw's 9.5 mmps. With increasing message size, the throughput of MVAPICH2 comes close to ib_send_bw's with WS 64 and 32 for 4 and 8 kb messages, only. Peak throughput for large messages comes close to ib_send_bw's 11.5 GB/s. Compared to DXNet's results (§9.2.2), the aggregated throughput is slightly higher than DXNet's (10.7 GB/s). However, DXNet outperforms MVAPICH2 for medium sized messages by reaching a peak throughput of 10.4 GB/s compared to 9.5 GB/s (on WS 64) for just 8 kb messages. Furthermore, DXNet offers a higher message rate of 6 to 7.2 mmps on small messages up to 64 bytes. DXNet achieves overall higher performance without relying on explicit message aggregation.

Uni-directional Latency

Figure 28 shows the results of the uni-directional single-threaded latency benchmark.
MVAPICH2 achieves a very low average RTT of 2.1 to 2.4 µs for up to 64 byte messages and up to 3.9 µs for up to 512 byte messages. The 95th, 99th and 99.9th percentiles are just slightly higher than the average. Compared to DXNet's results (§9.2.3), MVAPICH2 achieves an overall lower latency. DXNet's average with 7.8 to 8.3 µs is nearly four times higher. The 95th (8.5 to 8.9 µs), 99th (8.9 to 9.2 µs) and 99.9th percentiles (11.8 to 12.7 µs) are also at least two to three times higher. MVAPICH2 implements only a very thin layer of abstraction. Application threads issuing MPI calls are pinned to cores and directly call ibverbs functions after passing through these few layers of abstraction. DXNet, however, implements multiple pipeline stages with de-/serialization and multiple (JNI) context/thread switches. Naturally, data passing through such a long pipeline takes longer to process which impacts overall latency. However, DXNet traded latency for multi-threading support and performance as well as efficient handling of small messages.

All-to-all Throughput with up to 8 Nodes

Executed with 4 nodes, MVAPICH2 achieves a peak throughput of 19.5 GB/s with 128 kb messages on WSs 16, 32 and 64, with saturation starting at approx. 32 kb message size. WS 8 gets close to the peak throughput as well but the remaining WSs peak lower. With WS 2, a message rate of 8.4 to 8.8 mmps for up to 64 byte messages is achieved and 6.6 to 8.8 mmps for up to 512 byte.

Running the benchmark with 6 nodes, MVAPICH2 hits a peak throughput of 27.3 GB/s with 512 kb messages on WSs 16, 32 and 64. Saturation starts with a message size of approx. 64 to 128 kb depending on the WS. For 1 kb to 32 kb messages, the fluctuations increased compared to executing the benchmark with 4 nodes. Again, the message rate is degraded when using large WSs for small messages. An optimal message rate of 11.9 to 13.1 mmps is achieved with WS 2 for up to 64 byte messages.

With 8 nodes, the benchmark peaks at 33.3 GB/s with 64 kb messages on a WS of 64. Again, WS does matter for large messages as well, with WSs 16, 32 and 64 reaching the peak throughput and saturation starting at approx. 128 kb message size. The remaining WSs peak significantly lower. The fluctuations for mid range message sizes of 1 kb to 64 kb increased further compared to 6 nodes. Most notably, the performance with 4 kb messages and WS 4 is nearly 10 GB/s better than 4 kb with WS 64. With up to 64 byte messages, a message rate of 16.5 to 17.8 mmps is achieved. For up to 512 byte messages, the message rate varies with 13.5 to 17.8 mmps. As with the previous node counts, a smaller WS increases the message rate significantly while larger WSs degrade performance by a factor of two.

MVAPICH2 has the same "scalability issues" as DXNet (§9.2.4) and FastMPJ (§9.3.4). The maximum achievable bandwidth matches what was determined with the other systems. With the same results on three different systems, it is very unlikely that this is some kind of software issue like a bug or bad implementation, but most likely a hardware limitation. So far, we haven't seen this issue discussed in any other publication and think it is noteworthy to know what the hardware is currently capable of. Compared to DXNet (§9.2.4), MVAPICH2 reaches slightly higher peak throughputs for large messages.
However, this peak as well as saturation is reached later at 32 to 512 kb messages compared to DXNet with approx. 16 kb. The fluctuations for mid range size messages cannot be compared as DXNet does not rely on explicit aggregation. For small messages up to 64 byte, DXNet achieves significantly higher message rates than MVAPICH2, with peaks at 7.0 mmps, 15.0 mmps, 21.1 mmps and 27.3 mmps for 2 to 8 nodes.

Bi-directional Throughput Multi-threaded

Figure 32: MVAPICH2: 2 nodes, bi-directional throughput and message rate, multi-threaded with one send and one recv thread with increasing message and window size

Figure 32 shows the results of the bi-directional multi-threaded benchmark with two threads (on each node): one dedicated thread each for sending and receiving. In our case, this is the simplest multi-threading configuration to utilize more than one thread for MPI calls. The plot shows highly fluctuating results over the three runs executed as well as overall low throughput compared to the single-threaded results (§9.4.2). Throughput peaks at 8.8 GB/s with a message size of 512 kb for WS 16. A message rate of 0.78 to 1.19 mmps is reached for up to 64 byte messages for WS 32.

We tried varying the configuration values (e.g. queue sizes, buffer sizes, buffer counts) but could not find configuration parameters that yielded significantly better, especially less fluctuating, results. Furthermore, the benchmarks could not be finished with sending 100,000,000 messages. When using MPI_THREAD_MULTIPLE, the memory consumption increases continuously and exhausts the total memory available on our machine (64 GB). We reduced the number of messages to 1,000,000 which still consumes approx. 20% of the total main memory but at least executes and finishes within a reasonable time. This does not happen with the widely used MPI_THREAD_SINGLE mode.

MVAPICH2 implements multi-threading support using a single global lock for various MPI calls which includes MPI_Isend and MPI_Irecv used in the benchmark. This fulfills the requirements described in the MPI standard and avoids a complex architecture with lock-free data structures. However, a single global lock reduces concurrency significantly and does not scale well with increasing thread count [12]. This effect impacts performance less on applications with short bursts and low thread count. However, for multi-threaded applications under high load, a single-threaded approach with one dedicated thread driving the network decoupled from the application threads might be a better solution. Data between application threads and the network thread can be exchanged using data structures such as buffers, queues or pools as provided by DXNet. MVAPICH2's implementation of multi-threading does not allow improving performance by increasing the send or receive thread counts. Thus, further multi-threaded experiments using MVAPICH2 are not reasonable.

Summary Results

This section briefly summarizes the most important results and numbers of the previous benchmarks. All values are considered "up to" and show the possible peak performance in the given benchmark. Single-threaded:

• Uni-directional throughput: saturation with 64 kb to 128 kb message size, peak at 5.[...]

Compared to DXNet, the uni-directional results are similar but DXNet does not require explicit message aggregation to deliver high throughput. On bi-directional communication, MVAPICH2 achieves a slightly higher aggregated peak throughput than DXNet but DXNet performs better by approx. 0.9 GB/s on medium sized messages. DXNet outperforms MVAPICH2 on small messages with an up to 1.8 times higher message rate.
But MVAPICH2 clearly outperforms DXNet on the uni-directional latency benchmark with an overall lower average, 95th, 99th and 99.9th percentile latency. On all-to-all communication with up to 8 nodes, MVAPICH2 reaches slightly higher peak throughputs for large messages but DXNet reaches its saturation earlier and performs significantly better on small message sizes up to 64 bytes.

The low multi-threading performance of MVAPICH2 cannot be compared to DXNet's due to the following reasons: First, MVAPICH2 implements synchronization using a global lock which is the simplest but very often least performant method to ensure thread safety. Second, MVAPICH2, like many other MPI implementations, typically creates multiple processes (one process per core) to enable concurrency on a single processor socket. However, as already discussed in related work (§3), this programming model is not suitable for all application domains, especially in big data applications. DXNet is better for small messages and multi-threaded access as required in big data applications.

Conclusions

We presented Ibdxnet, a transport for the Java messaging library DXNet which allows multi-threaded Java applications to benefit from low latency and high throughput using InfiniBand hardware. DXNet provides transparent connection management, concurrency handling, message serialization and hides the transport which allows the application to switch from Ethernet to InfiniBand hardware transparently, if the hardware is available. Ibdxnet's native subsystem provides dynamic, scalable, concurrent and automatic connection management and the msgrc messaging engine implementation. The msgrc engine uses dedicated send and receive threads to drive RC QPs asynchronously which ensures scalability with many nodes. Load adaptive parking avoids high CPU load when idle but ensures low latency when busy. SGEs are used to simplify buffer handling and increase buffer utilization when sending data provided by the higher level DXNet core. A carefully crafted architecture minimizes context switching between Java and the native space and exchanges data efficiently using shared memory buffers. The evaluation shows that DXNet with the Ibdxnet transport can keep up with FastMPJ and MVAPICH2 in single-threaded applications and even exceeds them in multi-threaded, high-load applications. DXNet with Ibdxnet is capable of handling concurrent connections and data streams with up to 8 nodes. Furthermore, multi-threaded applications benefit significantly from the multi-threading aware architecture. The following topics are of interest for future research with DXNet and Ibdxnet:

• Experiments with more than 100 nodes on our university's cluster
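The load adaptive parking mentioned in the conclusions can be illustrated with a common spin-then-yield-then-sleep backoff. This is a sketch of the general pattern, not Ibdxnet's actual implementation:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Spin for lowest latency while busy, then yield, then sleep: idle threads
// stop burning CPU at the cost of a slightly higher wake-up latency.
template <typename HasWork>
void poll_with_backoff(std::atomic<bool>& running, HasWork has_work) {
    unsigned idle_rounds = 0;
    while (running.load()) {
        if (has_work()) {
            idle_rounds = 0;  // busy: keep spinning for low latency
            continue;
        }
        if (++idle_rounds < 1000) {
            // spin
        } else if (idle_rounds < 2000) {
            std::this_thread::yield();  // give up the time slice
        } else {
            // park: a bounded sleep greatly reduces idle CPU load
            std::this_thread::sleep_for(std::chrono::microseconds(100));
        }
    }
}
```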
16,870
1812.01963
2902850193
In this report, we describe the design and implementation of Ibdxnet, a low-latency and high-throughput transport providing the benefits of InfiniBand networks to Java applications. Ibdxnet is part of the Java-based DXNet library, a highly concurrent and simple to use messaging stack with transparent serialization of messaging objects and focus on very small messages (< 64 bytes). Ibdxnet implements the transport interface of DXNet in Java and a custom C++ library in native space using JNI. Several optimizations in both spaces minimize context switching overhead between Java and C++ without burdening message latency or throughput. Communication is implemented using the messaging verbs of the ibverbs library complemented by an automatic connection management in the native library. We compared DXNet with the Ibdxnet transport to the MPI implementations FastMPJ and MVAPICH2. For small messages up to 64 bytes using multiple threads, DXNet with the Ibdxnet transport achieves a bi-directional message rate of 10 million messages per second and surpasses FastMPJ by a factor of 4 and MVAPICH2 by a factor of 2. Furthermore, DXNet scales well on a high load all-to-all communication with up to 8 nodes achieving a total aggregated message rate of 43.4 million messages per second for small messages and a throughput saturation of 33.6 GB/s with only 2 kb message size.
UCX @cite_32 is a network stack designed for next generation systems and applications with a highly multi-threaded environment. It provides three independent layers: UCS is a service layer with different cross-platform utilities, such as atomic operations, thread safety, memory management and data structures. The transport layer UCT abstracts different hardware architectures and their low-level APIs, and provides an API to implement communication primitives. UCP implements high level protocols such as MPI or PGAS programming models by using UCT.
{ "abstract": [ "This paper presents Unified Communication X (UCX), a set of network APIs and their implementations for high throughput computing. UCX comes from the combined effort of national laboratories, industry, and academia to design and implement a high-performing and highly-scalable network stack for next generation applications and systems. UCX design provides the ability to tailor its APIs and network functionality to suit a wide variety of application domains and hardware. We envision these APIs to satisfy the networking needs of many programming models such as Message Passing Interface (MPI), OpenSHMEM, Partitioned Global Address Space (PGAS) languages, task-based paradigms and I/O bound applications. To evaluate the design we implement the APIs and protocols, and measure the performance of overhead-critical network primitives fundamental for implementing many parallel programming models and system libraries. Our results show that the latency, bandwidth, and message rate achieved by the portable UCX prototype is very close to that of the underlying driver. With UCX, we achieved a message exchange latency of 0.89 µs, a bandwidth of 6138.5 MB/s, and a message rate of 14 million messages per second. As far as we know, this is the highest bandwidth and message rate achieved by any network stack (publicly known) on this hardware." ], "cite_N": [ "@cite_32" ], "mid": [ "1962931680" ] }
Ibdxnet: Leveraging InfiniBand in Highly Concurrent Java Applications
Today's big data applications generate hundreds or even thousands of terabytes of data. Commonly, Java-based applications are used for further analysis. A single commodity machine, for example in a data center or typical cloud environment, cannot store and process the vast amounts of data, making distribution mandatory. Thus, the machines have to use interconnects to exchange data or coordinate data analysis. However, commodity interconnects used in such environments, e.g. Gigabit Ethernet, cannot provide the high throughput and low latency of alternatives like InfiniBand to speed up data analysis of the target applications. In this report, we describe the design and implementation of Ibdxnet, a low-latency and high-throughput transport providing the benefits of InfiniBand networks to Java applications. Ibdxnet is part of the Java-based DXNet library, a highly concurrent and simple to use messaging stack with transparent serialization of messaging objects and focus on very small messages (< 64 bytes). Ibdxnet implements the transport interface of DXNet in Java and a custom C++ library in native space using JNI. Several optimizations in both spaces minimize context switching overhead between Java and C++ without burdening message latency or throughput. Communication is implemented using the messaging verbs of the ibverbs library complemented by an automatic connection management in the native library. We compared DXNet with the Ibdxnet transport to the MPI implementations FastMPJ and MVAPICH2. For small messages up to 64 bytes using multiple threads, DXNet with the Ibdxnet transport achieves a bi-directional message rate of 10 million messages per second and surpasses FastMPJ by a factor of 4 and MVAPICH2 by a factor of 2. Furthermore, DXNet scales well on a high load all-to-all communication with up to 8 nodes achieving a total aggregated message rate of 43.4 million messages per second for small messages and a throughput saturation of 33.6 GB/s with only 2 kb message size.

Introduction

Interactive applications, especially on the web [6,28], simulations [34] or online data analysis [14,41,43] have to process terabytes of data often consisting of small objects. For example, social networks are storing graphs with trillions of edges resulting in a per-object size of less than 64 bytes for the majority of objects [10]. Other graph examples are brain simulations with billions of neurons and thousands of connections each [31] or search engines for billions of indexed web pages [20]. To provide high interactivity to the user, low latency is a must in many of these application domains. Furthermore, it is also important in the domain of mobile networks moving state management into the cloud [23]. Big data applications are processing vast amounts of data which require either an expensive supercomputer or distributed platforms, like clusters or cloud environments [21]. High performance interconnects, such as InfiniBand, are playing a key role in keeping processing and response times low, especially for highly interactive and always-online applications. Today, many cloud providers, e.g. Microsoft, Amazon or Google, offer instances equipped with InfiniBand. InfiniBand offers messaging verbs and RDMA, both providing one-way single digit microsecond latencies. It depends on the application requirements whether messaging verbs or RDMA is the better choice to ensure optimal performance [38].
In this report, we focus on Java-based parallel and distributed applications, especially big data applications, which commonly communicate with remote nodes using asynchronous and synchronous messages [10,16,13,42]. Unfortunately, accessing InfiniBand verbs from Java is not a built-in feature of the commonly used JVMs. There are several external libraries, wrappers or JVMs with built-in support available, but all trade performance for transparency or require proprietary environments (§3.1). To use InfiniBand from Java, one can rely on available (Java) MPI implementations. But these do not provide features such as serialization of message objects or automatic connection management (§3.2). We developed the network subsystem DXNet (§2) which provides transparent and simple to use sending and event based receiving of synchronous and asynchronous messages with transparent serialization of messaging objects [8]. It is optimized for high concurrency on all operations by implementing lock-free synchronization. DXNet is implemented in Java, open source and available at Github [1].

In this report, we propose Ibdxnet, a transport for the DXNet network subsystem. The transport uses reliable messaging verbs to implement InfiniBand support for DXNet and provides low latency and high throughput messaging for Java. Ibdxnet implements scalable and automatic connection and queue pair management, the msgrc transport engine, which uses InfiniBand messaging verbs, and a JNI interface. We present best practices applied to ensure scalability across multiple threads and nodes when working with InfiniBand verbs by elaborating on the implementation details of Ibdxnet. We carefully designed an efficient and low latency JNI layer to connect the native Ibdxnet subsystem to the Java based IB transport in DXNet. The IB transport uses the JNI layer to interface with Ibdxnet, extends DXNet's outgoing ring buffer for InfiniBand usage and implements scalable scheduling of outgoing data for many simultaneous connections. We evaluated DXNet with the IB transport and Ibdxnet, and compared them to two MPI implementations supporting InfiniBand: the well known MVAPICH2 and the Java based FastMPJ implementations. Though MPI is discussed in related work (§3.2) and two implementations are evaluated and compared to DXNet (§9), neither DXNet, the IB transport, nor Ibdxnet implement the MPI standard. The term messaging is used by DXNet to simply refer to exchanging data in the form of messages (i.e. additional metadata identifies messages on receive). DXNet does not implement any MPI primitives defined by the standard. Various low-level libraries to use InfiniBand in Java are not compared in this report, but in a separate one.

The report is structured in the following way: In Section 2, we present a summary of DXNet and its aspects important to this report. In Section 3, we discuss related work which includes a brief summary of available libraries and middleware for interfacing InfiniBand in Java applications. MPI and selected implementations supporting InfiniBand are presented as available middleware solutions and compared to DXNet. Lastly, we discuss target applications in the field of big data which benefit from InfiniBand usage. Section 4 covers InfiniBand basics which are of concern for this report. Section 5 discusses JNI usage and presents best practices for low latency interfacing with native code from Java using JNI. Section 6 gives a brief overview of DXNet's multi layered stack when using InfiniBand.
Implementation details of the native part Ibdxnet are given in Section 7 and the IB transport in Java is presented in Section 8. Section 9 presents and compares the evaluation results.

DXNet

DXNet is a network library for Java targeting, but not limited to, highly concurrent big data applications. DXNet implements an asynchronous event driven messaging approach with a simple and easy to use application interface. Messaging describes transparent sending and receiving of complex (even nested) data structures with implicit serialization and de-serialization. Furthermore, DXNet provides a built-in primitive for transparent request-response communication. DXNet is optimized for highly multi-threaded sending and receiving of small messages by using lock-free data structures, fast concurrent serialization, zero copy and zero allocation. The core of DXNet provides automatic connection and buffer management, serialization of message objects and an interface for implementing different transports. Currently, an Ethernet transport using Java NIO sockets and an InfiniBand transport using ibverbs (§7) are implemented. The following subsections describe the most important aspects of DXNet and its core which are depicted in Figure 1 and relevant for further sections of this report. A more detailed insight is given in a dedicated paper [8]. The source code is available at Github [1].

Automatic Connection Management

To relieve the programmer from explicit connection creation, handling and cleanup, DXNet implements automatic and transparent connection creation, handling and cleanup. Nodes are addressed using an abstract and unique 16-bit nodeID. Address mappings must be registered to allow associating the nodeID of each remote node with a corresponding implementation dependent endpoint (e.g. socket, queue pair). To provide scalability with up to hundreds of simultaneous connections, our event driven system does not create one thread per connection. A new connection is created automatically once the first message is either sent to a destination or received from one. Connections are closed once a configurable connection limit is reached, using a least recently used strategy. Faulty connections (e.g. remote node not reachable anymore) are handled and cleaned up by the manager. Errors on connection failures or timeouts are propagated to the application using exceptions.

Sending of Messages

Messages are serialized Java objects and sent asynchronously without waiting for a completion. A message can be targeted towards one or multiple receivers. Using the message type Request, it is sent to one receiver, only. When sending a request, the sender waits until receiving a corresponding response message (transparently handled by DXNet) or skips waiting and collects the response later. We expect applications to call DXNet concurrently with multiple threads to send messages. Every message is automatically and concurrently serialized into the Outgoing Ring Buffer (ORB), a natively allocated and lock-free ring buffer. Messages are automatically aggregated which increases send throughput. The ORB, one per connection, is allocated in native memory to allow direct and zero-copy access by the low-level transport. A transport runs a decoupled dedicated thread which removes the serialized and ready to send data from the ORB and forwards it to the hardware.
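The concurrent serialization into the ORB relies on lock-free space reservation. The following sketch shows the basic idea of claiming a region via CAS before serializing into it; wrap-around handling and the consumer side are omitted, and this is an illustration rather than DXNet's actual code:

```cpp
#include <atomic>
#include <cstdint>

// Senders atomically claim a region of the ring buffer and can then
// serialize into it without holding a lock.
struct RingBuffer {
    std::atomic<uint64_t> back{0};   // next free position (monotonic)
    std::atomic<uint64_t> front{0};  // advanced by the transport's send thread
    uint64_t size = 0;

    // Returns the claimed offset, or -1 if there is currently no space.
    int64_t reserve(uint32_t bytes) {
        uint64_t pos = back.load(std::memory_order_relaxed);
        do {
            if (pos + bytes - front.load(std::memory_order_acquire) > size) {
                return -1;  // full: the caller retries or blocks
            }
        } while (!back.compare_exchange_weak(pos, pos + bytes,
                                             std::memory_order_acq_rel));
        return static_cast<int64_t>(pos % size);
    }
};
```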
Receiving of Messages

The network transport handles incoming data by writing it to pooled native buffers to avoid burdening the Java garbage collector. Depending on how a transport writes and reads data, the buffers might contain fully serialized messages or just fragments. Every received buffer is pushed to the ring buffer based Incoming Buffer Queue (IBQ). Both the buffer pool and the IBQ are shared among all connections. Dedicated handler threads pull buffers from the IBQ and process them asynchronously by de-serializing them and creating Java message objects. The messages are passed to pre-registered callback methods of the application.

Flow Control

DXNet implements its own flow control (FC) mechanism to avoid flooding a remote node with many (very small) messages. This would result in an increased overall latency and lower throughput if the receiving node cannot keep up with processing incoming messages. On sending a message, the per connection dedicated FC checks if a configurable threshold is exceeded. This threshold describes the number of bytes sent by the current node but not fully processed by the receiving node. Once the configurable threshold is exceeded, the receiving node slices the number of bytes received into equally sized windows (window size configurable) and sends the number of confirmed windows back to the source node. Once the sender receives this confirmation, the number of bytes sent but not processed is reduced by the number of received windows multiplied by the configured window size. If an application send thread was previously blocked due to exceeding this threshold, it can now continue with processing.
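The flow control accounting described above boils down to a few counters. A sketch of the arithmetic (an illustration, not DXNet's code):

```cpp
#include <atomic>
#include <cstdint>

// The sender tracks bytes sent but not yet confirmed; the receiver confirms
// full windows, and the sender subtracts windows * window_size.
struct FlowControl {
    uint64_t threshold = 0;    // max. unconfirmed bytes before blocking
    uint64_t window_size = 0;  // granularity of confirmations
    std::atomic<uint64_t> unconfirmed{0};

    bool may_send(uint64_t bytes) const {
        return unconfirmed.load() + bytes <= threshold;
    }
    void on_send(uint64_t bytes) { unconfirmed += bytes; }

    // Receiver side: number of full windows that can be confirmed.
    static uint32_t windows_to_confirm(uint64_t bytes_received,
                                       uint64_t window_size) {
        return static_cast<uint32_t>(bytes_received / window_size);
    }
    // Sender side: apply a confirmation received from the remote node.
    void on_confirmation(uint32_t windows) {
        unconfirmed -= static_cast<uint64_t>(windows) * window_size;
    }
};
```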
Transport Interface

DXNet provides a transport interface allowing implementations of different transport types. On initialization of DXNet, one of the implemented transports can be selected. Afterwards, the transport is transparent to the application. The following tasks must be handled by every transport implementation:

• Connection: create, close and cleanup
• Get ready to send data from the ORB and send it (the ORB triggers a callback once data is available)
• Handle received data by pushing it to the IBQ
• Manage flow control when sending/receiving data

Every other task that is not exposed directly by one of the following methods must be handled internally by the transport. The core of DXNet relies on the following methods of abstract Java classes/interfaces which must be implemented by every transport:

• Connection: open, close, dataPosted
• ConnectionManager: createConnection, closeConnection
• FlowControl: sendFlowControlData, getAndResetFlowControlData

We elaborate on further details about the transport interface in Section 8 where we describe the transport implementation for Ibdxnet.

Java and InfiniBand

Before developing Ibdxnet and the InfiniBand transport for DXNet, we evaluated available (low-level) solutions for leveraging InfiniBand hardware in Java applications. This includes using NIO sockets with IP over InfiniBand (IPoIB) [25], jVerbs [37], JSOR [40], libvma [2] and native c-verbs with ibverbs. Extensive experiments analyzing throughput and latency of both messaging verbs and RDMA were conducted to determine a suitable candidate for using InfiniBand with Java applications and are published in a separate report. Summarized, the results show that transparent solutions like IPoIB, libvma or JSOR, which allow existing socket-based applications to send and receive data transparently over InfiniBand hardware, are not able to deliver an overall adequate throughput and latency. For the verbs-based libraries, jVerbs gets close to the native ibverbs performance but, like JSOR, requires a proprietary JVM to run. Overall, none of the analyzed solutions, other than ibverbs, delivers an adequate performance. Furthermore, we want DXNet to stay independent of the JVM when using InfiniBand hardware. Thus, we decided to use the native ibverbs library with the Java Native Interface to avoid the known performance issues of the evaluated solutions.

MPI

The message passing interface [19] defines a standard for high level networking primitives to send and receive data between local and remote processes, typically used for HPC applications. An application can send and receive primitive data types, arrays or vectors of primitive data types, and derived or indexed data types using MPI. The synchronous primitives MPI_Send and MPI_Recv perform these operations in blocking mode. The asynchronous operations MPI_Isend and MPI_Irecv allow non blocking communication. A status handle is returned with each started asynchronous operation. This can be used to check the completion of the operation or to actively wait for one or multiple completions using MPI_Wait or MPI_Waitall. Furthermore, there are various collective primitives which implement more advanced operations such as scatter, gather or reduce.

Sending and receiving of data with MPI requires the application to issue a receive for every send with a target buffer that can hold at least the amount of data sent by the remote. DXNet relieves the application of this responsibility. Application threads can send messages with variable size, and DXNet manages the buffers used for sending and receiving. The application does not have to issue any receive operations and wait for data to arrive actively. Incoming messages are dispatched to pre-registered callback methods of the application by dedicated handler threads of DXNet. DXNet supports transparent serialization and de-serialization of complex (even nested) data types (Java objects) for messages. MPI primitives for sending and receiving data require the application to use one of the supported data types and don't offer serialization for more complex data types such as objects. However, the MPI implementation can benefit from the lack of serialization by avoiding any copying of data entirely. Due to the nature of serialization, DXNet has to create a (serialized) "copy" of the message when serializing it into the ORB. Analogously, data is copied when a message is created from incoming data during de-serialization. Messages in DXNet are sent asynchronously while requests offer active waiting or probing for the corresponding response. These communication patterns can also be applied by applications using MPI. The communication primitives currently provided by DXNet are limited to messages and request-response. Nevertheless, using these two primitives, other MPI primitives, such as scatter, gather or reduce, can be implemented by the application if required. DXNet does not implement multiple protocols for different buffer sizes like MPI with eager and rendezvous. A transport for DXNet might implement such a protocol but our current implementations for Ethernet and InfiniBand do not. The aggregated data available in the ORB is either sent as a whole or sliced and sent as multiple buffers. The transport on the receiving side passes the stream of buffers to DXNet and puts them into the IBQ.
Afterwards, the buffers are reconnected to a stream of data by the MCC before extracting and processing the messages. An instance using DXNet runs within one process of a big data application with one or multiple application threads. Typically, one DXNet instance runs per cluster node. This allows the application to dynamically scale the number of threads up or down within the same DXNet instance as needed. Furthermore, fast communication between multiple threads within the same process is possible, too. Commonly, an MPI application runs a single thread per process. Multiple processes are spawned according to the number of cores per node with IPC fully based on MPI. MPI does offer different thread modes which include issuing MPI calls using different threads in a process. Typically, this mode is used in combination with OpenMP [4]. However, it is not supported by all MPI implementations which also offer InfiniBand support (§3.3). Furthermore, DXNet supports dynamic up and down scaling of instances. MPI implementations support up-scaling (for non-singletons) but down scaling is considered an issue for many implementations. Processes cannot be removed entirely and might cause other processes to get stuck or crash.

Connection management and identifying remote nodes are similar with DXNet and MPI. However, DXNet does not come with deployment tools such as mpirun which assigns the ids/ranks to identify the instances. This intentional design decision allows existing applications to integrate DXNet without restrictions to the bootstrapping process of the application. Furthermore, DXNet supports dynamically adding and removing instances. With MPI, an application must be created by using the MPI environment. MPI applications must be run using a special coordinator such as mpirun. If executed without a coordinator, an MPI world is limited to the current process it is created in, which doesn't allow communication with any other instances. Separate MPI worlds can be connected but the implementation must support this feature. To our knowledge, there is no implementation (with InfiniBand support) that currently supports this.

MPI Implementations Supporting InfiniBand

This section only considers MPI implementations supporting InfiniBand directly. Naturally, IPoIB can be used to run any MPI implementation supporting Ethernet networks over InfiniBand. But, as previously discussed (§3.1), the network performance is very limited when using IPoIB. MVAPICH2 is an MPI library [32] supporting various network interconnects, such as Ethernet, iWARP, Omni-Path, RoCE and InfiniBand. MVAPICH2 includes features like RDMA fast path or RDMA operations for small message transfers and is widely used on many clusters around the world. Open MPI [3] is an open source implementation of the MPI standard (currently full 3.1 conformance) supporting a variety of interconnects, such as Ethernet using TCP sockets, RoCE, iWARP and InfiniBand. mpiJava [7] implements the MPI standard by a collection of wrapper classes that call native MPI implementations, such as MVAPICH2 or OpenMPI, through JNI. The wrapper based approach provides efficient communication relying on native libraries. However, it is not thread-safe and, thus, is not able to take advantage of multi-core systems using multithreading. FastMPJ [17] uses Java Fast Sockets [39] and ibvdev to provide an MPI implementation for parallel systems using Java.
Initially, ibvdev [18] was implemented as a low-level communication device for MPJ Express [35], a Java MPI implementation of the mpiJava 1.2 API specification. ibvdev implements InfiniBand support using the low-level verbs API and can be integrated into any parallel and distributed Java application. FastMPJ optimizes MPJ Express collective primitives and provides efficient non-blocking communication. Currently, FastMPJ supports issuing MPI calls using a single thread, only.

Other Middleware

UCX [36] is a network stack designed for next generation systems and applications with a highly multi-threaded environment. It provides three independent layers: UCS is a service layer with different cross-platform utilities, such as atomic operations, thread safety, memory management and data structures. The transport layer UCT abstracts different hardware architectures and their low-level APIs, and provides an API to implement communication primitives. UCP implements high level protocols such as MPI or PGAS programming models by using UCT. UCX aims to be a common computing platform for multithreaded applications. DXNet, however, does not aim to be such a platform and, thus, does not include its own atomic operations, thread safety or memory management for data structures. Instead, it relies on the multi-threading utilities provided by the Java environment. DXNet does abstract different hardware like UCX, but only network interconnects and not GPUs or other coprocessors. Furthermore, DXNet is a simple networking library for Java applications and does not implement MPI or PGAS models. Instead, it provides simple asynchronous messaging and synchronous request-response communication, only.

Target Applications using InfiniBand

Providing high throughput and low latency, InfiniBand is a technology which is widely used in various big data applications. Apache Hadoop [22] is a well known Java big data processing framework for large scale data processing using the MapReduce programming model. It uses the Hadoop Distributed File System for storing and accessing application data which supports InfiniBand interconnects using RDMA. Also implemented in Java, Apache Spark is a framework for big data processing offering the domain-specific language Spark SQL, a stream processing and machine learning extension and the graph processing framework GraphX. It supports InfiniBand hardware using an additional RDMA plugin [5]. Numerous key-value storages for big data applications have been proposed that use InfiniBand and RDMA to provide low latency data access for highly interactive applications. RAMCloud [33] is a distributed key-value storage optimized for low latency data access using InfiniBand with messaging verbs. Multiple transports are implemented for network communication, e.g. using reliable and unreliable connections with InfiniBand and Ethernet with unreliable connections. FaRM [15] implements a key-value and graph storage using a shared memory architecture with RDMA. It performs well with a throughput of 167 million key-value lookups per second and 31 µs latency using 20 machines. Pilaf [30] also implements a key-value storage using RDMA for get operations and messaging verbs for put operations. MICA [27] implements a key-value storage with a focus on NUMA architectures. It maps each CPU core to a partition of data and communicates using a request-response approach over unreliable connections.
HERD [24] borrows the design of MICA and implements networking using RDMA writes for the request to the server and messaging verbs for the response back to the client.

InfiniBand and ibverbs Basics

This section covers the most important aspects of the InfiniBand hardware and the native ibverbs library which are relevant for this report. Abbreviations introduced here (most of them commonly used in the InfiniBand context) are used throughout the report from this point on. The host channel adapter (HCA) connected to the PCI bus of the host system is the network device for communicating with other nodes. The offloading engine of the HCA processes outgoing and incoming data asynchronously and is connected to other nodes using copper or optical cables via one or multiple switches. The ibverbs API provides the interface to communicate with the HCA either by exchanging data using Remote Direct Memory Access (RDMA) or messaging verbs.

A queue pair (QP) identifies a physical connection to a remote node when using reliable connected (RC) communication. Using non connected unreliable datagram (UD) communication, a single QP is sufficient to send data to multiple remotes. A QP consists of one send queue (SQ) and one receive queue (RQ). On RC communication, a QP's SQ and RQ are always cross connected with a target's QP, e.g. node 0 SQ connects to node 1 RQ and node 0 RQ to node 1 SQ. If an application wants to send data, it posts a work request (WR) containing a pointer to the buffer to send and the length to the SQ. A corresponding WR must be posted on the RQ of the connected QP on the target node to receive the data. This WR also contains a pointer to a buffer and its size, to which any incoming data is written. Once the data is sent, a work completion (WC) is generated and added to a completion queue (CQ) associated with the SQ. A WC is also generated for the corresponding CQ of the remote's RQ receiving the data, once the data arrived. The WC of the send task tells the application that the data was successfully sent to the remote (or provides error information otherwise). On the remote receiving the data, the WC indicates that the buffer attached to the previously posted WR is now filled with the remote's data.

When serving multiple connections, not every single SQ and RQ needs a dedicated CQ. A single CQ can be used as a shared completion queue (SCQ) with multiple SQs or RQs. Furthermore, when receiving data from multiple sources, a shared receive queue (SRQ) can be used on multiple QPs instead of managing many single RQs to provide buffers for incoming data. When attaching a buffer to a WR, it is attached as a scatter gather element (SGE) of a scatter gather list (SGL). For sending, the SGL allows the offloading engine to gather the data from many scattered buffers and send it as one WR. For receiving, the received data is scattered to one or multiple buffers by the offloading engine.
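The verbs flow just described can be summarized in code. The sketch below posts a receive buffer as a single SGE to an SRQ and drains completions from a CQ; setup of the memory region, SRQ and CQ is assumed to exist and error handling is reduced to a status check:

```cpp
#include <infiniband/verbs.h>

#include <cstdint>

// Attach a buffer as one SGE to a receive WR and post it to the SRQ.
void post_recv_buffer(ibv_srq* srq, void* buf, uint32_t len, uint32_t lkey,
                      uint64_t wr_id) {
    ibv_sge sge = {};
    sge.addr   = reinterpret_cast<uint64_t>(buf);
    sge.length = len;
    sge.lkey   = lkey;

    ibv_recv_wr wr = {};
    ibv_recv_wr* bad = nullptr;
    wr.wr_id   = wr_id;  // identifies the buffer when the WC arrives
    wr.sg_list = &sge;
    wr.num_sge = 1;
    ibv_post_srq_recv(srq, &wr, &bad);
}

// Drain up to max WCs from the CQ; returns the number of completions.
int drain_completions(ibv_cq* cq, ibv_wc* wcs, int max) {
    int n = ibv_poll_cq(cq, max, wcs);  // non-blocking
    for (int i = 0; i < n; i++) {
        if (wcs[i].status != IBV_WC_SUCCESS) {
            // error: the WC carries the failed wr_id and a status code
        }
    }
    return n;
}
```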
Low Latency Data Exchange Between Java and C

In this section, we describe our experiences with and best practices for the Java Native Interface (JNI) to avoid performance penalties for latency sensitive applications. These are applied to various implementation aspects of the IB transport which are further explained in their dedicated sections. Using JNI is mandatory if the Java space has to interface with native code, e.g. for IO operations or when using native libraries. As we decided to use the low-level ibverbs library to benefit from full control, high flexibility and low latency (§3.1), we had to ensure that interfacing with native code from Java does not introduce too much overhead compared to the already existing and evaluated solutions.

The Java Native Interface (JNI) allows Java programmers to call native code from C/C++ libraries. It is a well known method to interface with native libraries that are not available in Java or to access IO using system calls or other native libraries. When calling code of a native library, the library has to expose and implement a predefined interface which allows the JVM to connect the native functions to natively declared Java methods in a Java class. With every call from Java to the native space and vice versa, a context switch is executed by the JVM environment. This involves tasks related to thread and cache management, adding latency to every native call. This increases the duration of such a call and is crucial, especially regarding the low latency of InfiniBand.

Exchanging data with a native library without adding considerable overhead is challenging. For single primitive values, passing parameters to functions is convenient and does not add any considerable overhead. However, access to Java classes or arrays from native space requires synchronization with the JVM (and its garbage collector) which is very expensive and must be avoided. Alternatively, one can use ByteBuffers allocated as DirectByteBuffers, which allocate memory natively. Java can access the memory through the ByteBuffer and the native library can get the native address of the array and the size with the functions GetDirectBufferAddress and GetDirectBufferCapacity. However, these two calls increase the latency by tens to even hundreds of microseconds (with high variation). This problem can be solved by allocating a buffer in the native space, passing its address and size to the Java space and accessing it using the Unsafe API or wrapping it as a newly allocated (Direct)ByteBuffer. The latter requires reflection to access the constructor of the DirectByteBuffer and set the address and size fields. We decided to use the Unsafe API because we map native structs and don't require any of the additional features the ByteBuffer provides. The native address is cached which allows fast exchange of data from Java to native and vice versa. To improve convenience when accessing fields of a data structure, a helper class with getter and setter wrapper methods is created to access the fields of the native struct.
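The pattern of allocating natively and handing only the address and size to Java as primitives could look like the following; class and method names are hypothetical, and the Java side would access the address via Unsafe as described above:

```cpp
#include <jni.h>

#include <cstdlib>

// Allocate a buffer natively and return its address as a primitive long;
// no expensive JNI helper functions are involved on the hot path afterwards.
extern "C" JNIEXPORT jlong JNICALL
Java_de_example_NativeBuffer_allocate(JNIEnv*, jclass, jlong size) {
    void* buf = std::malloc(static_cast<size_t>(size));
    return reinterpret_cast<jlong>(buf);
}

extern "C" JNIEXPORT void JNICALL
Java_de_example_NativeBuffer_release(JNIEnv*, jclass, jlong addr) {
    std::free(reinterpret_cast<void*>(addr));
}
```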
We evaluated different means of passing data from Java to native and vice versa as well as the function/method call overhead.

Figure 2: Microbenchmarks to evaluate JNI call overhead and data exchange overhead using different types of memory access

Figure 2 shows the results of the microbenchmarks used to evaluate the JNI call overhead as well as the overhead of different memory access methods. The results displayed are the averages of three runs of each benchmark executing the operation 100,000,000 times. A warm-up of 1,000 operations precedes each benchmark run. For JNI context switching, we measured the latency introduced by Java to native (jtn), native to Java (ntj), native to Java with exception checking (ntjexc) and native to Java with thread detaching (ntjdet) transitions. For exchanging data between Java and native, we measured the latency introduced by accessing a 64 byte buffer in both spaces for a primitive Java byte array (ba), a Java DirectByteBuffer (dbb) and Unsafe (u). The benchmarks were executed on a machine with an Intel Core i7-5820K CPU and a Java 1.8 runtime.

The results show that the average single costs for context switching are negligible with an average switching time of only up to 0.1 µs. We exchange data using primitive function arguments, only. Data structures are mapped and accessed as C-structs in the native space. In Java, we access the native C-structs using a helper class which utilizes the Unsafe library [29] as this is the fastest method in both spaces. These results influenced the important design decision to run native threads, attached once as daemon threads to the JVM, which call into Java, instead of Java threads calling native methods (§7.2.3, §7.2.4). Furthermore, we avoid using any of the JNI provided helper functions where possible [26]. For example: attaching a thread to the JVM involves expensive operations like creating a new Java thread object and various state changes to the JVM environment. Avoiding them on every context switch is crucial for latency and performance. Lastly, we minimized the number of calls to the Java space by combining multiple tasks into a single cross-space call instead of yielding multiple calls. For inter-space communication, we highly rely on communication via buffers mapped to structs in native space and wrapper classes in Java (see above). This is highly application dependent and not always possible. But if possible and applied, this can improve the overall performance. We applied this technique of combining multiple tasks into a single cross-space call to sending and receiving of data to minimize latency and context switching overhead. The native send and receive threads implement the most latency critical logic in the native space, which is not simply wrapping ibverbs functions to be exposed to Java (§7.2.3 and §7.2.4). The counterpart to the native logic is implemented in Java (§8). In the end, we are able to reduce sending and receiving of data to a single context switching call.
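The design decision to attach native threads once as daemon threads and call into Java could be sketched as follows; the callback class, method name and signature are hypothetical, and callback_obj is assumed to be a JNI global reference:

```cpp
#include <jni.h>

// A native receive thread attaches itself to the JVM once and afterwards
// performs plain upcalls via a cached method id.
void receive_thread_main(JavaVM* jvm, jobject callback_obj) {
    JNIEnv* env = nullptr;
    // one-time attach; avoids the per-call attach overhead described above
    jvm->AttachCurrentThreadAsDaemon(reinterpret_cast<void**>(&env), nullptr);

    jclass cls = env->GetObjectClass(callback_obj);
    // cache once; "(JJ)V" = two longs (buffer address and length), no result
    jmethodID mid = env->GetMethodID(cls, "received", "(JJ)V");

    for (;;) {
        // ... poll the completion queue, obtain buffer address and length ...
        jlong addr = 0, len = 0;  // placeholders for the received buffer
        env->CallVoidMethod(callback_obj, mid, addr, len);  // single upcall
    }
}
```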
Ibdxnet provides an automatic connection and QP manager ( §7.1) which can be used by every transport engine. An interface for the connection manager and a connection object allows implementations for different transport engines. The engine msgrc (see Figure 4) uses the provided connection management and is based on RC messaging verbs. The engine msgud using UD messaging verbs is already implemented and will be discussed and extensively evaluated in a separate publication. A transport engine implements its own protocol to send/receive data and exposes a low-level interface. It creates an abstraction layer to hide direct interaction with the ibverbs library. Through the low-level interface, a transport implementation ( §8) provides data to send and forwards received data for further processing. For example, the low-level interface of the msgrc engine does not provide concurrency control or serialization mechanisms for messages. It accepts a stream of data in one or multiple buffers for sending and provides buffers forming a stream of data on receive ( §7.2). This engine is connected to the Java transport counterpart via JNI and uses the existing infrastructure of DXNet ( §8). Furthermore, we implemented a loopback-like standalone transport for debugging and for measuring the performance of the native engine only. The loopback transport creates a continuous stream of data for sending to one or multiple nodes and discards any data received. This ensures that sending and receiving introduce no additional overhead and allows measuring the performance of different low-level aspects of our implementation. This was used to determine the maximum possible throughput with Ibdxnet ( §9.2.4). In the following sections, we explain the implementation details of Ibdxnet's connection manager ( §7.1) and the messaging engine msgrc ( §7.2). Additionally, we describe best practices for using the ibverbs API and optimizations for optimal hardware utilization. Furthermore, we elaborate on how Ibdxnet connects to the IB transport in Java using JNI and how we implemented low overhead data exchange between Java and native space.

Dynamic, Scalable and Concurrent Connection Management

Efficient connection management for many nodes is a challenging task. For example, hundreds of application threads want to send data to a node but the connection is not yet established. Who creates the connection and synchronizes the access of other threads? How to avoid synchronization overhead or blocking of threads that want to get an already established connection? How to manage the lifetime of a connection? These challenges are addressed by a dedicated connection manager in Ibdxnet. The connection manager handles all tasks required to establish and manage connections and hides them from the higher level application. For our higher level Java transport ( §8.1), the complexity and latency of connection setup are reduced by avoiding context switching. First, we explain how nodes are identified, the contents of a connection, and how online/offline nodes are discovered and handled. Next, we describe how existing connections are accessed and non-existing connections are created on the fly during application runtime. We explain in detail how a connection creation job is handled by the internal job manager and how connection data is exchanged with the remote node in order to create a QP. At last, we briefly describe our previous attempt which failed to address the above challenges properly.

A node is identified by a unique 16-bit integer nodeID (NID).
The NID is assigned to a node on start of the connection manager and cannot be changed during runtime. A connection consists of the source NID (the current node) and the destination NID (the target remote node).

Figure 5: Connection manager: creating non-existing connections (send thread: node 1 to node 0) and re-using existing connections (recv thread: node 1 to node 5)

Figure 6: Automatic connection creation with QP data exchange (node 3 to node 0). The job CR0 is added to the back of the queue to initiate this process. The dedicated thread processes the queue by removing jobs from the front and processing them according to their type.

Depending on the transport implementation, an existing connection holds one or multiple ibverbs QPs, buffers and other data necessary to send and receive data using that connection. The connection manager provides a connection interface for the transport engines which allows them to implement their own type of connection. The following example describes a connection with a single QP only. Before a connection to a remote node can be established, the remote node must be discovered and known as available. The job type node discovery (further details about the job system follow in the next paragraphs) detects online/offline nodes using UDP sockets over Ethernet. On startup, a list of node hostnames is provided to the connection manager. The list can be extended by adding/removing entries during runtime for dynamic scaling. The discovery job tries to contact all non-discovered nodes of that list in regular intervals. When a node is discovered, it is removed from the list and marked as discovered. A connection can only be established with an already discovered node. If a connection to a node was already created and is lost (e.g. node crash), the NID is added back to the list in order to re-discover the node on the next iteration of the job. Node discovery is mandatory for InfiniBand in order to exchange QP information on connection creation.

Figure 5 shows how existing connections are accessed and new connections are created when two threads, e.g. a send and a receive thread, are accessing the connection manager. The send thread wants to send new data to node 0 and the receive thread has received some data (e.g. from a SRQ). It has to forward it for further processing which requires information stored in each connection (e.g. a queue for the incoming data). If the connection is already established (the receive thread gets the connection to node 5), a connection handle (H5) is returned to the calling thread. If no connection has been established so far (the send thread wants to get the connection to node 0), a job to create the specific connection (CR0 = create to node 0) is added to the internal job queue. The calling thread has to wait until the job is dispatched and the connection is created before being able to send the data. Figure 6 shows how connection creation is handled by the internal job thread. The job CR0 (yielded by the send thread from the previous example in Figure 5) is pushed to the back of the job queue. The job queue might contain jobs which affect different connections, i.e. there is no dedicated per-connection queue. The dedicated connection manager thread processes the queue by removing a job from the front and dispatching it by type. There are three types of jobs: create a connection to a node with a given NID, discover other connection managers, and close an existing connection to a node.
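The access path of Figure 5 boils down to a few lines. The following C++ sketch uses hypothetical types and simplified synchronization (the real implementation avoids busy waiting); it shows the fast path for established connections and the job queue detour for missing ones:

#include <atomic>
#include <cstdint>
#include <thread>

struct Connection;                        // QP(s), buffers, FC state, ...

enum class JobType : uint8_t { Create, Discover, Close };
struct Job { JobType type; uint16_t nodeId; };

class ConnectionManager {
    std::atomic<Connection*> m_table[65536] {};
public:
    void PushJob(const Job& job);         // enqueue to the internal job queue

    Connection* Get(uint16_t nid) {
        Connection* con = m_table[nid].load(std::memory_order_acquire);
        if (con)
            return con;                    // fast path: connection established

        PushJob({JobType::Create, nid});   // job thread creates QP, exchanges data
        while (!(con = m_table[nid].load(std::memory_order_acquire)))
            std::this_thread::yield();     // caller waits until the job completed
        return con;
    }
};

Funneling all creation through one dedicated job thread serializes QP setup without requiring the hundreds of calling threads to coordinate with each other.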
To create a new connection with a remote node, the current node has to create an ibverbs QP with a SQ and RQ. Both queues are cross-connected to a remote QP (send with recv, recv with send) which requires data exchange using another communication channel (sockets over Ethernet). For the job CR0, the thread creates a new QP on the current node (3) and exchanges its QP data with the remote node it wants to connect to (0) using UDP sockets. The remote node (0) also creates a QP and uses the received connection information (of 3). It replies with its own QP data (0 to 3) to complete QP creation. The newly established connection is added to the connection table and is now accessible (by the send and receive thread).

At last, we briefly describe the lessons learned from our first attempt at an automatic connection manager. It relied on active connection creation: the first thread calling the connection manager to acquire a connection creates it on the fly if it does not exist. The calling thread executes the connection data exchange, waits for the remote data and finishes connection creation. This requires coordination of all threads accessing the connection manager, either to create a new connection or to get an existing one. It introduced a very complex architecture with high synchronization overhead and latency, especially when many threads access the connection manager concurrently. Furthermore, it was error prone and difficult to debug. We encountered severe performance issues when creating connections to one hundred nodes in a very short time range (e.g. all-to-all communication). This resulted in connection creation times of up to half a minute. Even with a small setup of 4 to 8 nodes, creating a connection could take up to a few seconds if multiple threads tried to create the same or different connections simultaneously.

msgrc: Transport Engine for Messaging using RC QPs

This section describes the msgrc transport engine. It uses reliable QPs to implement messaging using a dedicated send and a dedicated receive thread. The engine's interface allows a transport to provide a stream of data (to send) in the form of variable sized buffers and provides a stream of (received) data to a registered callback handler. This interface is rather low-level and the backend does not implement any means of serialization/deserialization for sending/receiving complex data structures. In combination with DXNet ( §2), the logic for these tasks resides in the Java space with DXNet and is shared with other transports such as the NIO Ethernet transport [9]. However, there are no restrictions on implementing these higher level components natively for the msgrc engine, if required. Further details on how the msgrc engine is connected with the Java transport counterpart are given in Section 8. The following subsections explain the general architecture and interface of the transport, sending and receiving of data using dedicated threads, and how various features of InfiniBand were used for optimal hardware utilization.

Architecture

This section explains the basic architecture as well as the low-level interface of the engine. Figure 4 includes the msgrc transport and can be referred to for an abstract representation of the most important components. The engine relies on our dedicated connection manager ( §7.1) for connection handling.
We decided to use one dedicated thread for sending ( §7.2.3) and one for receiving ( §7.2.4) to benefit from the following advantages: a clear separation of responsibilities resulting in a less complex architecture, no scheduling of send/receive jobs as required when using a single thread for both, and higher concurrency because the two threads can run on different CPU cores concurrently. The architecture allows us to create decoupled pipeline stages using lock-free queues and ring buffers. Thereby, we avoid complex and slow synchronization between the two threads and with hundreds of threads concurrently accessing shared resources. The low-level interface gives the target transport fine-grained control over the engine. The interface for sending data is depicted in Listing 1 and the one for receiving in Listing 2. Both interfaces create an abstraction hiding connection and QP management as well as how the hardware is driven with the ibverbs library.

For sending data, the interface provides the callback GetNextDataToSend. This function is called by the send thread to pull new data to send from the transport (e.g. from the ORB, see §8.2). When called, an instance of each of the two structures PrevWorkPackageResults and CompletedWorkList is passed to the implementation of the callback as a parameter: the first contains information about the previous call to the function and how much data was actually sent. If the SQ is full, no further data can be sent. Instead of introducing an additional callback, we combine getting the next data with returning information about the previous send call to reduce call overhead (important for JNI access). The second parameter contains data about completed work requests, i.e. data sent for the transport. This must be used in the transport to mark data as processed (e.g. moving the pointers of the ORB).

uint32_t Received(IncomingRingBuffer* ringBuffer);

void ReturnBuffer(IbMemReg* buffer);

Listing 2: Structure and callbacks of the msgrc engine's receive interface

If data is received, the receive thread calls the callback function Received with an instance of the IncomingRingBuffer structure as its parameter. This parameter holds a list of received buffers with their source NIDs. The transport can iterate this list and forward the buffers for further processing such as de-serialization. The transport has to return the number of elements processed and, thus, is able to control the amount of buffers it processes. Once the received buffers are processed by the transport, they must be returned to the RecvBufferPool by calling ReturnRecvBuffer to allow re-using them for further receives.
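Listing 1 does not survive in this text; based on the description above, the send side of the interface plausibly looks like the following sketch (the structure names follow the text, the field layouts and the constant are assumptions):

#include <cstdint>

constexpr uint32_t MAX_CONNECTIONS = 128;   // assumption for this sketch

struct PrevWorkPackageResults {             // results of the previous call
    uint16_t nodeId;
    uint32_t numBytesPosted;                // posted to the SQ
    uint32_t numBytesNotPosted;             // left over because the SQ was full
};

struct CompletedWorkList {                  // data confirmed as sent
    uint16_t numNodes;
    uint32_t bytesWritten[MAX_CONNECTIONS]; // per connection
};

struct NextWorkPackage {                    // next data to send
    uint16_t nodeId;                        // send target
    uint32_t posFrontRel;                   // enclosed ORB area to send
    uint32_t posBackRel;
    uint8_t  flowControlData;               // FC windows to confirm
};

// Called by the native send thread to pull new data from the transport.
void GetNextDataToSend(NextWorkPackage* workPackage,
                       PrevWorkPackageResults* prevResults,
                       CompletedWorkList* completionList);

Combining pull, previous results and completions in one callback is what keeps the number of JNI crossings per send iteration at one.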
Sending of Data

This section explains the data and control flow of the dedicated send thread which asynchronously drives the engine for sending data. Listing 3 depicts a simplified version of the contents of its main loop with the relevant aspects for this section. Details of the functions involved in the main flow are explained further below. The loop starts with getting a workPackage, the next data to send (line 1), using the engine's low-level interface ( §7.2.2). The instance prevWorkResults contains information about posted and non-posted data from the previous loop iteration. The instance completionList holds data about completed sends. Both instances are reset (lines 2-3) for re-use in the current iteration. If the workPackage is valid (line 5), i.e. data to send is available, the nodeId from that package is used to get the connection to the send target from the connection manager (line 6). The connection and workPackage are passed to the SendData function (line 7). It processes the workPackage and returns how much data was processed, i.e. posted to the SQ of the connection, and how much data could not be processed. The latter happens if the SQ is full and must be kept track of to not lose any data. Afterwards, the thread returns the connection to the connection manager (line 8). At the end of a loop iteration, the thread polls the SCQ to remove any available WCs. We share the completion queue among all SQs/connections to avoid iterating over many connections for this task. The loop iteration ends and the thread starts from the beginning by calling GetNextDataToSend, providing the work results of the previous iteration. Data about the WCs polled from the SCQ is stored in the completionList and forwarded via the interface (to the transport). If no data is available (line 5), lines 6-8 are skipped and the thread executes a completion poll only. This is important to ensure that any outstanding WCs are processed and passed to the transport (via the completionList when calling GetNextDataToSend). Otherwise, if no data is sent for a while, the transport will not receive any information about previously processed data. This leads to false assumptions about the available buffer space for sending data, e.g. assuming that data fits into the buffer although it does not because the processed buffer space is not freed yet.

In the following paragraphs, we further explain how the functions SendData and PollCompletions make optimal use of the ibverbs library and how this cooperates with the interleaved control flow of the main thread loop explained above. The SendData function is responsible for preparing and posting FC data and normal data (payload). FC data, which determines the number of flow control windows to confirm, is a small number (< 128) and, thus, does not require a lot of space. We post it as part of the immediate data, which can hold up to 4 bytes, with the WR instead of using a separate side channel, e.g. another QP. This avoids the overhead of posting to and polling another QP which benefits overall performance, especially with many simultaneous connections. With FC data using 1 byte of the immediate data field, we use another 2 bytes to include the NID of the source node. This allows us to identify the source of an incoming WC on the remote node. Otherwise, identifying the source would be very inconvenient: the only information provided with an incoming WC is the sender's unique physical QP id. In our case, this id would have to be mapped to the corresponding NID of the sender. However, this introduces an indirection every time a package arrives which hurts performance. For sending normal data (payload), the provided workPackage holds two pointers, front and back, which enclose a memory area of data to send. This memory area belongs to a buffer (e.g. the ORB) which was registered with the protection domain on start to allow access by the HCA. Figure 7 depicts an example with three (aggregated) ready-to-send messages in the ORB. We create a WR for the data to send and provide a single SGE which takes the pointers of the enclosed memory area. The HCA will directly read from that area without further copying of the data (zero copy).
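The following sketch condenses this into ibverbs calls: one zero-copy SGE over the enclosed ORB area and the FC data plus source NID encoded in the 4-byte immediate field (the exact bit layout and the function/parameter names are assumptions, not DXNet's real identifiers):

#include <arpa/inet.h>
#include <cstdint>
#include <infiniband/verbs.h>

// Post one send WR for the ORB area [front, back) without wrap-around.
// orbAddr/lkey come from the buffer registered with the protection domain.
bool PostSend(ibv_qp* qp, uint64_t orbAddr, uint32_t lkey,
              uint32_t front, uint32_t back,
              uint8_t fcData, uint16_t sourceNid)
{
    ibv_sge sge {};
    sge.addr   = orbAddr + front;           // HCA reads directly, zero copy
    sge.length = back - front;
    sge.lkey   = lkey;

    ibv_send_wr wr {};
    wr.opcode     = IBV_WR_SEND_WITH_IMM;
    wr.send_flags = IBV_SEND_SIGNALED;
    wr.sg_list    = &sge;
    wr.num_sge    = sge.length > 0 ? 1 : 0; // 0-length WR carries FC only
    // 2 bytes source NID + 1 byte FC data in the immediate field (assumed layout)
    wr.imm_data   = htonl((uint32_t) sourceNid << 8 | fcData);

    ibv_send_wr* bad = nullptr;
    return ibv_post_send(qp, &wr, &bad) == 0;
}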
For buffer wrap-arounds, two SGEs are created and attached to one WR: one SGE for the data from the front pointer to the end of the buffer, another SGE for the data from the start of the buffer to the back pointer. If the size of the area to send (the sum of all SGEs) exceeds the maximum configurable receive size, the data to send must be sliced into multiple WRs. Multiple WRs are chained to a linked list to minimize call overhead when posting them to the SQ using ibv_post_send. This greatly increases performance compared to posting multiple standalone WRs with single calls. The number of SGEs of a WR can be 0 if no normal data is available to send but FC data is. To send FC data only, we write it to the immediate data field of a WR along with our source NID and post the WR without any SGEs attached, which results in a 0-length data WR.

The PollCompletions function calls ibv_poll_cq once to poll for any completions available on the SCQ. A SCQ is used instead of per-connection CQs to avoid iterating the CQs of all connections which impacts performance. The send thread keeps track of the number of posted WRs and, thus, knows how many WCs are outstanding and expected to arrive on the SCQ. If none are expected, polling is skipped. ibv_poll_cq is called only once per PollCompletions call, and every call tries to poll WCs in batches to keep the call overhead minimal. Experiments have shown that most calls to ibv_poll_cq, even on high loads, return empty, i.e. no WRs have completed. Thus, polling the SCQ until at least one completion is received is the wrong approach and greatly impacts overall performance. If the SQ of another connection is not full and there is data available to send, this method wastes CPU resources on busy polling instead of processing further data to send. The performance impact (resulting in low throughput) increases with the number of simultaneous connections being served. Furthermore, this increases the chance of SQs running empty because time is wasted on waiting for completions instead of keeping all SQs filled. Full SQs ensure that the HCA is kept busy which is the key to optimal performance.

Receiving of Data

Data is received using a SRQ and SCQ instead of multiple receive and completion queues. This avoids iterating over all open connections and checking for data availability which introduces overhead with an increasing number of simultaneous connections. Equally sized buffers for receiving data (configurable size and amount) are pooled and returned for re-use by the transport, once processed ( §7.2.2). The loop starts by calling PollCompletions (line 1) to poll the SCQ for WCs. Before processing the returned WCs, the SRQ is refilled by calling Refill (line 4) if the SRQ is not filled yet. Next, if any WCs were polled previously, they are processed by calling ProcessCompletions (line 8). This step pushes them to the Incoming Ring Buffer (IRB), a temporary ring buffer, before dispatching them. Finally, if the IRB is not empty (line 11), the thread tries to forward the contents of the IRB by calling DispatchReceived via the interface to the transport ( §7.2.2). The following paragraphs further elaborate on how PollCompletions, Refill, ProcessCompletions and DispatchReceived make optimal use of the ibverbs library and how this cooperates with the interleaved control flow of the main thread loop explained above. The PollCompletions function is very similar to the one explained in Section 7.2.3: WCs are polled in batches of at most the currently available IRB space and buffered before being processed.
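A minimal sketch of this polling discipline, applying to both the send and receive path, could look as follows (simplified, without the bookkeeping of the real threads):

#include <infiniband/verbs.h>

// Poll the shared completion queue once, in a batch of up to 'max' WCs.
// Never spins until a completion arrives; returns the number of WCs polled.
int PollCompletionsOnce(ibv_cq* scq, ibv_wc* wcs, int max, int outstanding)
{
    if (outstanding == 0)
        return 0;                      // nothing posted, skip the call entirely

    int polled = ibv_poll_cq(scq, max, wcs);
    return polled < 0 ? 0 : polled;    // a negative return indicates an error
}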
The Refill function adds new receive WRs to the SRQ if the SRQ is not completely filled and receive buffers from the receive buffer pool are available. Every WR consists of a configurable number of SGEs which make up the maximum receive size. This is also the limit for the size the send thread can post with a single WR (the sum of the sizes of the SGE list). Using this method, the receive thread does not have to take care of any software slicing of received data because the HCA transparently scatters one big chunk of sent data to multiple (smaller) receive buffers on the receiver side. At last, Refill chains the WRs to a linked list which is posted with a single call to ibv_post_srq_recv for minimal overhead.

If WCs are buffered from the previous call to PollCompletions, the ProcessCompletions function iterates this list of WCs. For each WC of the list, it gets the source NID and FC data from the immediate data field. If the recv length of this WC is non-zero, the attached SGEs contain the received data scattered to the receive buffers of the SGE list. As the receive thread does not know or have any means of determining the size of the next incoming data, the challenge is optimal receive buffer usage with minimal internal fragmentation. Here, fragmentation describes the amount of receive buffers provided with a WR as SGEs in relation to the amount of received data written to that block of buffers. The less data written to the buffers, the higher the fragmentation. In the example shown in Figure 7, the three aggregated and serialized messages are received in five buffers but the last buffer is not completely used. This fragmentation cannot be avoided but it can be handled to avoid negative effects like empty buffer pools or low per-buffer utilization. Receive buffers/SGEs of a WR that do not contain any received data, because the amount of received data is less than the total size of the buffers of the SGE list, are pushed back to the buffer pool. All receive buffers of the SGE list that contain valid received data are pushed to the IRB (in the order they were received). Depending on the target application, the fragmentation degree can be lowered by configuring the receive buffer and pool sizes accordingly. Applications typically sending small messages perform well with small receive buffer sizes. However, throughput might decrease slightly for applications mainly sending big messages on small receive buffer sizes, requiring more WRs per send (data sliced into multiple WRs).

If the IRB contains any elements, the DispatchReceived function tries to forward them to the transport via the Received callback ( §7.2.2). The callback returns the number of elements it consumed from the IRB and, thus, is allowed to consume none or up to what's available. The consumed buffers are returned asynchronously to the receive buffer pool by the transport, once it has finished processing them.

Load Adaptive Thread Parking

The send and receive threads must be kept busy running their loops to send and receive data as fast as possible to ensure low latency. However, pure busy polling without any sleeping or yielding introduces high CPU load, permanently occupying two cores of the CPU. This is unnecessary during periods when the network is not used frequently. We do not want the send and receive threads to waste CPU resources and, thereby, decrease the overall node performance. Experiments have shown that simply adding sleep or yield operations highly impacts network latency and throughput and introduces high fluctuations [8]. To solve this, we use a simple but efficient wait pattern we call load adaptive thread parking. After a defined amount of time (e.g. 100 ms) of polling without data being available, the thread enters a yield phase and calls yield on every loop iteration if no data is available. After another timeframe has passed (e.g. 1 sec), the thread enters a parking phase calling sleep/park with a minimum value of 1 ns on every loop iteration, reducing CPU load significantly. The lowest value possible (1 ns) ensures that the scheduler of the operating system sends the thread to sleep for the shortest period of time possible. Once data is available, the current phase is interrupted and the timer is reset. This ensures busy looping for the next iterations, keeping the latency for successive messages and on high loads low. For further details including evaluation results refer to our DXNet publication [8].
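A compact sketch of this wait pattern (the thresholds are the examples from the text; the real implementation's constants and structure may differ):

#include <chrono>
#include <thread>

class LoadAdaptivePark {
    using clock = std::chrono::steady_clock;
    clock::time_point m_lastWork = clock::now();
public:
    void Reset() { m_lastWork = clock::now(); }  // data was available

    void Park() {                                // call on idle loop iterations
        auto idle = clock::now() - m_lastWork;
        if (idle < std::chrono::milliseconds(100))
            return;                              // phase 1: keep busy polling
        if (idle < std::chrono::seconds(1))
            std::this_thread::yield();           // phase 2: yield
        else                                     // phase 3: shortest sleep the
            std::this_thread::sleep_for(std::chrono::nanoseconds(1)); // OS allows
    }
};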
DXNet IB Transport Implementation in DXNet (Java)

This section describes the transport implementation for DXNet in Java which utilizes the low-level transport engines, e.g. msgrc ( §7.2), provided by Ibdxnet ( §7). We describe the native interface which implements the low-level interface exposed by the engine ( §7.2.2) and how it is used in the DXNet IB transport for higher level connection management ( §8.1), sending serialized data from the ORB ( §8.2) and handling incoming receive buffers from remote nodes ( §8.3). Figure 8 depicts the involved components with the main aspects of their data and control flow which are referred to in the following subsections. If an application wants to send one or multiple messages, it calls DXNet which serializes them into the ORB and signals the WriteInterestManager (WIM) about available data ( §2.2). The native send thread periodically checks the WIM for data to send and, if available, gets it from the ORB. Depending on the size, the data to send might be sliced into multiple elements which are posted to the SQ as one or multiple work requests ( §7.2.3). Received data on the recv queue is written to one or multiple buffers (depending on the amount of data) from a native buffer pool ( §7.2.4). Without further processing, the buffers are forwarded to the Java space and pushed to the IncomingBufferQueue (IBQ). DXNet's de-serialization processes the buffers in order and creates messages (Java objects) which are dispatched to pre-registered callbacks using dedicated message handler threads ( §2.3).

Connection Handling

To allow implementing new transports, DXNet provides an interface to create transport-specific connection types. The DXNet core, which is shared across all transport implementations, manages the connections for the target application by automatically creating new connections on demand or closing connections if a configurable threshold is exceeded ( §2.1). For the IB transport implementation, the derived connection does not have to store further data or implement functionality. This is already stored and handled by the connection manager of Ibdxnet. It reduces the overall architectural complexity by avoiding functionality split between Java and native space. Furthermore, it avoids context switching between Java and native code. Only the NID of either the target node to send to or the source node of the received data is exchanged between the Java and native space and vice versa.
Thus, connection setup in the transport implementation in Java is limited to creating the Java connection object for DXNet's connection manager. Connection close and cleanup is similar, with an additional callback to the native library to signal a closed connection to Ibdxnet's connection management.

Dispatch of Ready-to-send Data

The engine msgrc runs a dedicated thread for sending data. The send thread pulls new data from the transport via the GetNextDataToSend function of the low-level interface ( §7.2.2, §7.2.3). In order to make this and other callbacks (for connection management and receiving data) available to the IB transport, a lightweight JNI binding with the aspects explained in Section 5 was created. The transport implements the GetNextDataToSend function exposed by the JNI binding. To get new data to send, the send thread calls the JNI binding which is implemented in the IB transport in Java. Next, we elaborate on the implementation of GetNextDataToSend in the IB transport, how the send thread gets data to send and how the different states of the data (posted, not posted, send completed) are handled in combination with the existing ORB data structure.

Application threads using DXNet and sending messages serialize them concurrently into the ORB ( §2.2). Once serialization completes, the thread signals the transport that there is ready-to-send (RTS) data in the ORB. For the IB transport, this signal adds a write interest to the dedicated WriteInterestManager (WIM). The WIM manages interest tokens using a lock-free list (based on a ring buffer) and a per-connection atomic counter for both RTS normal data from the ORB and FC data. Each type has a separate atomic counter but, if not explicitly stated, we refer to them as one for ease of comprehension. The list contains the nodeIDs of the connections that have RTS data in the order they were added. The atomic counter is used to keep track of the number of interests signalled, i.e. the number of times the callback was triggered for the selected NID. Figure 9 depicts this situation with two threads (T1 and T2) which finished serializing data to the ORBs of two independent connections (3 and 2). The table with atomic counters keeps track of the number of signaled interests for RTS data/messages per connection. By calling GetNextDataToSend, the send thread from Ibdxnet checks a lock-free list which contains the nodeIDs of the connections with at least one write interest available. The nodeIDs are added to the list in order, but only if not already in the list. This is detected by checking if the atomic counter returned 0 on a fetch-and-add operation. This mechanism ensures that data from many connections is processed in a round-robin fashion. Furthermore, avoiding duplicates in the queue sets an upper bound for the memory requirement which is sizeof(nodeID) * maxNumConnections. Otherwise, the queue could grow depending on the load and number of active connections. If the queue of the WIM is empty, the send thread aborts and returns to the native space. The send thread uses the NID it removed from the queue to get and reset the number of interests of the corresponding atomic counter. If there are any interests available for FC data, the send thread processes them by getting the FC from the connection and reading, but not yet removing, the stored FC data. For interests concerning normal data, the send thread gets the ORB from the connection and reads the current front and back pointers.
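The interest token mechanism just described can be summarized in a few lines. DXNet implements it in Java; the following C++ sketch (with a simplified, lock-based queue standing in for the real lock-free one) shows the core idea of the 0-to-1 counter transition bounding the queue:

#include <atomic>
#include <cstdint>
#include <mutex>
#include <queue>

class WriteInterestManager {
    static constexpr int MAX_NODES = 65536;
    std::atomic<uint32_t> m_interests[MAX_NODES] {};
    std::queue<uint16_t> m_queue;      // stand-in; the real queue is lock-free
    std::mutex m_lock;
public:
    // App threads, after serializing a message to the ORB of connection 'nid'.
    void AddInterest(uint16_t nid) {
        if (m_interests[nid].fetch_add(1, std::memory_order_release) == 0) {
            std::lock_guard<std::mutex> g(m_lock);
            m_queue.push(nid);         // only the first interest enqueues the NID
        }
    }

    // Send thread: next connection with pending interests, round robin order.
    bool NextInterest(uint16_t* nid, uint32_t* count) {
        {
            std::lock_guard<std::mutex> g(m_lock);
            if (m_queue.empty())
                return false;          // send thread returns to native space
            *nid = m_queue.front();
            m_queue.pop();
        }
        *count = m_interests[*nid].exchange(0, std::memory_order_acquire);
        return true;
    }
};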
The pointers of the ORB are not modified, only read (details below).

Figure 10: Extended outgoing ring buffer used by the IB transport

With this data, along with the NID of the connection, the send thread returns to the native space for processing ( §7.2.3). Every time the send thread returns to the Java space to get more data to send, it carries the parameter prevWorkResults, which contains data about the previous send operation, and completionList, which contains data about completed WRs, i.e. data send confirmations ( §7.2.3). For performance reasons, this data resides in native memory as structs and is mapped and accessed using DirectByteBuffers ( §5). The asynchronous workflow used to send and receive data by posting WRs and polling WCs must be adopted by updating the ORB and FC accordingly. Depending on the fill level of the SQ, the send thread might not be able to post all normal or FC data it retrieved in the previous iteration. The prevWorkResults parameter contains this information about how much normal and FC data was and was not processed. This information must be preserved for the next send operation to avoid sending data multiple times. For the ORB, however, we cannot simply move the front pointer because this frees up memory which is not yet confirmed as sent. Thus, we introduce a second front pointer, front posted, which is only known to and modified by the send thread and allows it to keep track of already posted data. Figure 10 depicts the most important aspects of the enhanced ORB which is used for the IB transport. In total, this creates three virtual areas of memory designated to the following states:

• Data posted but not confirmed: front to front posted
• Data RTS and not posted: front posted to back
• Free memory for send threads to serialize to: back to front

Using the parameter prevWorkResults, the front posted pointer is moved by the amount of data posted. Any non-processed data remains unprocessed (front posted is not moved to cover the entire area of RTS data). For data provided with the parameter completionList, the front pointer is updated according to the number of bytes now confirmed as sent. A similar but less complex approach is applied to updating FC.
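The pointer bookkeeping can be condensed as follows (DXNet's ORB is a Java data structure; this C++ sketch is simplified and uses plain modulo arithmetic for the wrap-around):

#include <cstdint>

// Three-region ring buffer: [front, frontPosted) posted but unconfirmed,
// [frontPosted, back) ready to send, [back, front) free for serialization.
class OutgoingRingBuffer {
    uint32_t m_size;
    uint32_t m_front = 0;        // start of posted-but-unconfirmed data
    uint32_t m_frontPosted = 0;  // end of posted data, start of RTS data
    uint32_t m_back = 0;         // end of RTS data, start of free memory
public:
    explicit OutgoingRingBuffer(uint32_t size) : m_size(size) {}

    // Send thread, prevWorkResults path: bytes were posted to the SQ.
    void Posted(uint32_t bytes)    { m_frontPosted = (m_frontPosted + bytes) % m_size; }

    // Send thread, completionList path: bytes confirmed as sent;
    // only now the memory becomes free for serialization again.
    void Confirmed(uint32_t bytes) { m_front = (m_front + bytes) % m_size; }

    // App threads: serialized 'bytes' of new messages behind back.
    void Produced(uint32_t bytes)  { m_back = (m_back + bytes) % m_size; }
};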
Process Incoming Buffers

The dedicated receive thread of msgrc pushes received data to the low-level interface. Analogous to how RTS data is pulled from the IB transport via the JNI binding, the receive thread uses a received function provided by the binding to push the received buffers to the IB transport into Java space. All received buffers are stored as a batch in the recvPackage data structure ( §7.2.2) to minimize context switching overhead. For performance reasons, this data resides in native memory as structs and is mapped and accessed using DirectByteBuffers ( §5). The receive thread iterates the package in Java space, dispatches received FC data to each connection and pushes the received buffers (including the connection of the source node) to the IBQ ( §2.3). The buffers are handled and processed asynchronously by the MessageCreationCoordinator and one or multiple MessageHandlers of the DXNet core (all of them Java threads). Once the buffers are processed (de-serializing their contents), the Java threads return them asynchronously to the transport engine's receive buffer pool ( §7.2.4).

Evaluation

For better readability, we refer to DXNet with the IB transport, Ibdxnet and the msgrc engine simply as DXNet from here onwards. We implemented commonly used microbenchmarks to compare DXNet to two MPI implementations supporting InfiniBand: MVAPICH2 and FastMPJ. We decided to compare against two MPI implementations for the following reasons: to the best of our knowledge, there is no other system available that offers all features of DXNet, and big data applications implementing their own dedicated network stack do not offer it as a separate application/library like DXNet does. MPI can be used to partially cover some features of DXNet but not all ( §3). We are aware that MPI targets a different application domain, mainly HPC, whereas DXNet targets big data. However, MPI has already been used in big data applications as well, and several aspects related to the network stack and the technologies overlap in both application domains. Bandwidth with two nodes is compared using typical uni- and bi-directional benchmarks. We also compared scalability using an all-to-all benchmark (worst-case scenario) with up to 8 nodes. Latency is compared by measuring the RTT with a request-response communication pattern. These benchmarks are executed single threaded to compare all three systems. Furthermore, we compared how DXNet and MVAPICH2 perform in a multi-threaded environment which is typical for big data but not HPC applications. However, we can only compare them using three benchmarks. Multi-threaded latency is not possible since it would require MVAPICH2 to implement additional infrastructure to store and map requests with responses and to dynamically dispatch callbacks for incoming data to multiple receive threads (similar to DXNet). MVAPICH2 does not provide such a processing pipeline. FastMPJ cannot be compared at all here because it only supports single threaded environments. Table 1 summarizes the systems and benchmarks executed.

All benchmarks were executed on up to 8 nodes of our private cluster, each with a single-socket Intel Xeon E5-1650 v3 CPU (6 cores running at 3.50 GHz) and 64 GB RAM. The nodes run Ubuntu 16.04 with kernel version 4.4.0-57. All nodes are equipped with a Mellanox MT27500 HCA, connected with 56 Gbps links to a single Mellanox SX6015 18 port switch. For Java applications, we used the Oracle JVM version 1.8.0_151.

Benchmarks

The osu benchmarks included with MVAPICH2 implement typical microbenchmarks to measure uni- and bi-directional bandwidth and uni-directional latency which reflect basic usage of any network stack for point-to-point communication. osu_latency is used as a foundation and extended with recording of all RTTs to determine the 95th, 99th and 99.9th percentile after execution. The latency measured is the full RTT from when the source sends a request to the destination up to when the corresponding response is received by the source. For evaluating throughput, the benchmarks osu_bw and osu_bibw were combined into a single benchmark and extended to enable all-to-all bi-directional execution with more than two nodes. We consider this a relevant benchmark to show whether the system is capable of handling multiple connections under high load. This is a common situation found in big data applications as well as backend storages [11].
In the all-to-all benchmark, every node receives from all other nodes and sends messages to all other nodes in a round-robin fashion. The bi-directional and all-to-all results presented are the aggregated send throughputs of all participating nodes. We added options to support multi-threaded sending and receiving using a configurable number of send and receive threads. As per-processor core counts increase, the multi-threading aspect becomes more and more important. Furthermore, our target application domain big data relies heavily on multi-threaded environments. For the evaluation of FastMPJ, we ported the osu benchmarks to Java. The benchmarks for evaluating a multi-threaded MPI process were omitted because FastMPJ does not support multi-threaded processes. DXNet comes with its own benchmarks already implemented which are comparable to the osu benchmarks.

The osu benchmarks use a configurable parameter window_size (WS) which denotes the number of messages sent in a single batch. Since MPI does not support implicit message aggregation like DXNet, we executed all MPI experiments with increasing WS to determine bandwidth peaks and saturation under optimal conditions and to ensure a fair comparison to DXNet's built-in aggregation. No MPI collectives are required for the benchmarks and, thus, none are evaluated. All benchmarks are executed three times and their variance is displayed using error bars. Throughputs are specified in GB/s, latencies/RTTs in µs and message rates in mmps (million messages per second). All throughput benchmarks send 100 million messages and all latency benchmarks 10 million messages. The total number of messages is incrementally halved starting with 4 kb message size to avoid unnecessarily long benchmark runs. All throughputs measured are based on the total amount of sent payload bytes. This does not include any overhead like message headers or envelopes that are required by the systems for message identification or routing. Furthermore, we included the results of the ib perf tools ib_write_bw and ib_write_lat as baselines for all end-to-end type benchmarks. These simple perf tools cannot be compared directly to the complex systems evaluated. But these baselines show the best possible network performance (without any overhead of the evaluated system) and allow rough comparisons of the systems across multiple plots. We chose parameters that reflect the configuration values of DXNet as closely as possible (but still allow comparisons to FastMPJ and MVAPICH2 as well): receive queue size 2000 and send queue size 20 for both bandwidth and latency measurements; 100,000,000 messages for bandwidth and 10,000,000 for latency.

DXNet with Ibdxnet Transport

We configured DXNet using the parameters depicted in Table 2. The configuration values were determined with various debugging statistics and experiments, and are currently considered optimal configuration parameters. For comparing single threaded performance, the number of application threads and message handlers (referred to as MH) is limited to one each to allow comparison with FastMPJ and MVAPICH2. DXNet's multi-threaded architecture does not allow combining the logic of the application send thread and a message handler into a single thread. Thus, DXNet's "single threaded" benchmarks are always executed with one dedicated send and one dedicated receive thread. The following subsections present the results of the various benchmarks.
First, we present the results of all single threaded benchmarks with one send thread: uni- and bi-directional throughput, uni-directional latency and all-to-all with increasing node count. Afterwards, the results of the same four benchmarks are presented with multiple send threads.

Uni-directional Throughput

The results of the uni-directional benchmark are depicted in figure 11. Considering one MH, DXNet's throughput peaks at 5.9 GB/s at a message size of 16 kb. For larger messages (32 kb to 1 MB), one MH is not sufficient to de-serialize and dispatch all incoming messages fast enough and throughput drops to a peak bandwidth of 5.4 GB/s. However, this can be resolved by simply using two MHs. Then, DXNet's throughput peaks and saturates at 5.9 GB/s with a message size of just 4 kb and stays saturated up to 1 MB. Message sizes smaller than 4 kb also benefit significantly from the shorter receive processing times when utilizing two MHs. Further MHs can still improve performance but only slightly for a few message sizes. For small messages up to 64 bytes, DXNet achieves peak message rates of approx. 4.0 to 4.5 mmps. Compared to the baseline performance of ib_send_bw, DXNet's peak performance is approx. 0.5 to 1.0 mmps less. With increasing message size, this gap closes and DXNet even surpasses the baseline for 1 kb to 32 kb message sizes when using multiple threads. DXNet peaks close to the baseline's peak performance of 6.0 GB/s. The results for small message sizes fluctuate independently of the number of MHs. This can be observed in all other benchmarks with DXNet measuring message/payload throughput as well. It is a common issue which can also be observed when running high load throughput benchmarks using the bare ibverbs API. This benchmark shows that DXNet is capable of handling a vast amount of small messages efficiently. The application send thread and, thus, the user does not have to bother with aggregating messages explicitly because DXNet handles this transparently and efficiently. The overall performance benefits from multiple message handlers increasing receive throughput. Large messages do impact performance with one MH because the de-serialization of data consumes most of the processing time during receive. However, simply adding at least one more MH solves this issue and further increases performance. The peak aggregated message rate for small messages up to 64 bytes varies from approx. 6 to 6.9 mmps with one MH. Using more MHs cannot improve performance significantly for this benchmark. Due to the multi-threaded and highly pipelined architecture of DXNet, these variations cannot be avoided, especially when exclusively handling many small messages.

Bi-directional Throughput

Compared to the baseline performance of ib_send_bw, there is still room for improvement in DXNet's performance for small message sizes (up to 2.5 mmps difference). For medium message sizes, ib_send_bw yields slightly higher throughput for up to 1 kb message size. But DXNet surpasses ib_send_bw for 1 kb to 16 kb message sizes. DXNet's peak performance is approx. 1.1 GB/s less than ib_send_bw's (11.5 GB/s). Overall, this benchmark shows that DXNet can deliver great performance especially for small messages, similar to the uni-directional benchmark ( §9.2.1).

Uni-directional Latency

Figure 13: DXNet: 2 nodes, uni-directional RTT and message rate with one application send thread, increasing message size

Figure 13 depicts the average RTTs as well as the 95th, 99th and 99.9th percentile of the uni-directional latency benchmark with one send thread and one MH.
For message sizes up to 512 bytes, DXNet achieves an avg. RTT of 7.8 to 8.3 µs, a 95th percentile of 8.5 to 8.9 µs, a 99th percentile of 8.9 to 9.2 µs and a 99.9th percentile of 11.8 to 12.7 µs. This results in a message rate of approx. 0.1 mmps. As expected, starting with 1 kb message size, latency increases with increasing message size. The RTT can be broken down into three parts: DXNet, Ibdxnet and hardware processing. Taking the lowest avg. of 7.8 µs, DXNet requires approx. 3.5 µs of the total RTT (the full breakdown is published in our other publication [8]) and the hardware approx. 2.0 µs (assuming an avg. one-way latency of 1 µs for the used hardware). Message de- and serialization as well as message object creation and dispatching are part of DXNet. For Ibdxnet, this results in approx. 2.3 µs processing time which includes JNI context switching as well as the several pipeline stages explained in the earlier sections. Compared to the baseline performance of ib_send_lat, DXNet's latency is significantly higher. Obviously, additional latency cannot be avoided with such a long and complex processing pipeline. Considering the breakdown mentioned above, the native part Ibdxnet, which calls ibverbs to send and receive data, is to some degree comparable to the minimal perf tool ib_send_lat. With a total of 2.3 µs (of the full pipeline's 7.8 µs), the total RTT is just slightly higher than ib_send_lat's 1.8 µs. But Ibdxnet already includes various data structures for state handling and buffer scheduling ( §7.2.3, §7.2.4) which ib_send_lat doesn't. Buffers for sending data are re-used instantly.

All-to-all Throughput with up to 8 Nodes

Incrementally adding two nodes, throughput increases by 8.5 GB/s (from 2 to 4 nodes), by 7.1 GB/s (from 4 to 6 nodes) and by 6.4 GB/s (from 6 to 8 nodes). One would expect approximately equally large throughput increments but the gain noticeably decreases with every two nodes added. We tried different configuration parameters for DXNet and ibverbs like different MTU sizes, SGE counts, receive buffer sizes, WRs per SQ/SRQ or CQ sizes. No combination of settings allowed us to improve this situation. We assume that the all-to-all communication pattern puts high stress on the HCA which, at some point, cannot keep up with processing outstanding requests. To rule out any software issues with DXNet first, we implemented a low-level "loopback" like test which uses the native part of Ibdxnet only. The loopback test does not involve any dynamic message posting when sending data or data processing when receiving. Instead, a buffer equal to the size of the ORB is processed by Ibdxnet's send thread on every iteration and posted to every participating SQ. This ensures that all SQs are filled and are quickly refilled once at least one WR was processed. When receiving data on the SRQ, all buffers received are directly put back into the pool without processing and the SRQ is refilled. This ensures that no additional processing overhead is added for sending and receiving data. Thus, Ibdxnet's loopback test comes close to a perf-tool like benchmark. We executed the benchmark with 2, 4, 6 and 8 nodes which yielded aggregated throughputs of 11.7 GB/s, 21.7 GB/s, 28.3 GB/s and 34.0 GB/s. These results are very close to the performance of the full DXNet stack but don't rule out all software related issues yet.
The overall aggregated bandwidth could still somehow be limited by Ibdxnet. Thus, we executed another benchmark which first executes all-to-all communication with up to 8 nodes and then, once bandwidth is saturated, switches to a ring formation for communication without restarting the benchmark (every node only sends to its successor determined by NID). Once the nodes switch the communication pattern during execution, the per-node aggregated bandwidth increases very quickly and reaches a maximum aggregated bandwidth of approx. (11.7/2 × num_nodes) GB/s, independent of the number of nodes used. This rules out total bandwidth limitations in software and hardware. Furthermore, we can now rule out any performance issues in DXNet or even ibverbs with connection management (e.g. too many QPs allocated). This leads to the assumption that the HCA cannot keep up with processing outstanding WRQs when SQs are under high load (always filled with WRQs). With more than 3 SQs per node, the total bandwidth drops noticeably. Similar results with other systems further support this assumption ( §9.3.4 and §9.4.4).

Uni-directional Throughput Multi-threaded

Figure 15: DXNet: 2 nodes, uni-directional throughput and message rate with multiple application send threads, increasing message size and 4 message handlers

Figure 15 shows the uni-directional benchmark executed with 4 MHs and 1 to 16 send threads. For 1 to 4 send threads, throughput saturates at 5.9 GB/s at either 4 kb or 8 kb messages. For 256 byte to 8 kb, using one thread yields better throughput than two or sometimes four threads. However, running the benchmark with 8 and 16 send threads increases overall throughput significantly for all messages greater than 32 byte, with saturation starting at 2 kb message size. DXNet's pipeline benefits from the many threads posting messages to the ORB concurrently. This results in greater aggregation of multiple messages and allows higher buffer utilization for the underlying transport. DXNet also increases message throughput for small message sizes up to 512 byte, from approx. 4.0 mmps up to 6.7 mmps for 16 send threads. Again, performance is slightly worse with two and four threads compared to a single thread. Furthermore, DXNet even surpasses the baseline performance of ib_send_bw when using multiple send threads. However, the peak performance cannot be improved further which shows the current limit of DXNet for this benchmark and the hardware used.

Bi-directional Throughput Multi-threaded

Figure 16: DXNet: 2 nodes, bi-directional throughput and message rate with multiple application send threads, increasing message size and 4 message handlers

Figure 16 shows the bi-directional benchmark executed with 4 MHs and 1 to 16 send threads. With more than one send thread, the aggregated throughput peaks at approx. 10.4 and 10.7 GB/s with message sizes of 2 and 4 kb. DXNet delivers higher throughputs for all medium and small messages with increasing send thread count. The baseline performance of ib_send_bw is reached for small message sizes and even surpassed for medium sized messages up to 16 kb. The peak throughput is not reached, showing DXNet's current limit with the used hardware. The overall performance with 8 and 16 send threads doesn't differ noticeably which indicates saturation of DXNet's processing pipeline. For small messages (less than 512 byte), the message rates also increase with increasing send thread count. Again, saturation starts with 8 send threads at a message rate of approx. 8.6 to 10.2 mmps.
Figure 18: DXNet: 2 nodes, uni-directional 95th, 99th and 99.9th percentile RTT and message rate with multiple application send threads, increasing message size and 4 message handlers

DXNet is capable of handling a multi-threaded environment under high load with CPU over-provisioning and still delivers high throughput. Especially for small messages, DXNet's pipeline even benefits from the highly concurrent activity by aggregating many messages. This results in higher buffer utilization and, for the user, higher overall throughput. When DXNet's internal threads, MHs and send threads, exceed the core count of the CPU, DXNet switches to different parking strategies for the different thread types which slightly increases latency but greatly reduces overall CPU load ( §7.2.5).

Uni-directional Latency Multi-threaded

The message rate can be increased up to 0.33 mmps with up to 4 send threads as, practically, every send thread can use a free MH out of the 4 available. With 8 and 16 send threads, the MHs on the remote node must be shared and DXNet's over-provisioning is active which reduces the overall throughput. The percentiles shown in figure 18 reflect this situation very well and increase noticeably. With a single thread, as already discussed in Section 9.2.3, the difference between the avg. (7.8 to 8.3 µs) and the 99.9th percentile (11.8 to 12.7 µs) RTT for message sizes less than 1 kb is approx. 4 to 5 µs. When doubling the send thread count, the 99.9th percentiles roughly double as well. When over-provisioning the CPU, we cannot avoid the higher than usual RTT caused by the increasing amount of messages getting posted.

9.2.8 All-to-all Throughput with up to 8 Nodes Multithreaded

Figure 19 shows the results of the all-to-all benchmark with up to 8 nodes and 16 send threads. These results show that DXNet delivers high throughputs and message rates under high loads with increasing node and thread count. Small messages profit significantly through better aggregation and buffer utilization.

Summary Results

This section briefly summarizes the most important results and numbers of the previous benchmarks. All values are considered "up to" and show the possible peak performance in the given benchmark. Single-threaded:

• Uni-directional throughput: one MH: saturation with 16 kb messages, peak throughput at 5.9 GB/s

FastMPJ

This section describes the results of the benchmarks executed with FastMPJ and compares them to the results of DXNet presented in the previous sections. We used FastMPJ 1.0_7 with the device ibvdev to run the benchmarks on InfiniBand hardware. The osu benchmarks of MVAPICH2 were ported to Java ( §9.1) and used for all following experiments. Since FastMPJ does not support multi-threading in a single process, all benchmarks were executed single threaded and compared to the single threaded results of DXNet only.

Uni-directional Throughput

Figure 20: FastMPJ: 2 nodes, uni-directional throughput and message rate with increasing message and window size

Figure 20 shows the results of executing the uni-directional benchmark with two nodes with increasing message size. Furthermore, the benchmark was executed with increasing WS to ensure bandwidth saturation. As expected, throughput increases with increasing message size and bandwidth saturation starts at a medium message size of 64 kb with approx. 5.7 GB/s. The actual peak throughput is reached with large 512 kb messages and a WS of 64 at 5.9 GB/s. For small message sizes up to 512 byte and independent of the WS, FastMPJ achieves a message rate of approx. 1.0 mmps.
Furthermore, the results show that the WS doesn't matter for message sizes up to 64 kb. For 128 kb to 1 MB, FastMPJ profits from explicit aggregation with increasing WS. This indicates that ibvdev might include some message aggregation mechanism. Compared to the baseline performance of ib_send_bw, FastMPJ's performance is always inferior, with a peak performance of 5.9 GB/s close to ib_send_bw's 6.0 GB/s. Compared to the results of DXNet ( §9.2.1), DXNet's throughput saturates and peaks earlier at a message size of 16 kb with 5.9 GB/s. However, if using one MH, throughput drops for larger messages down to 5.4 GB/s due to increased message processing time (de-serialization); such a drop can be resolved by simply using two MHs.

Bi-directional Throughput

The results of the bi-directional benchmark are depicted in figure 21. Again, throughput increases with increasing message size, peaking at 10.8 GB/s with WS 2 and large 512 kb messages. However, when handling messages of 128 kb and greater, throughput peaks at approx. 10.2 GB/s for WSs 4 to 32 and saturation varies depending on the WS. For WSs 4 to 32, throughput is saturated with 64 kb messages, for WSs 1 and 2 at 512 kb. Starting at 128 kb message size, WSs of 1 and 2 achieve slightly better results than the greater WSs. Especially WS 64 drops significantly for message sizes of 128 kb and greater. However, for message sizes of 64 kb to 512 kb, FastMPJ profits from explicit aggregation. Compared to the uni-directional results ( §9.3.1), FastMPJ does profit to some degree from explicit aggregation for small messages of 1 to 128 bytes. WSs 1 to 16 allow higher message throughputs with WS 16 as an optimal value, peaking at approx. 2.4 mmps for 1 to 128 byte messages. Greater WSs degrade message throughput significantly. However, this does not apply to message sizes of 256 bytes where greater explicit aggregation does always increase message throughput. Compared to the baseline performance of ib_send_bw, FastMPJ's performance is again always inferior with a difference in peak performance of 0.7 GB/s (10.8 GB/s to 11.5 GB/s). When comparing to DXNet's results ( §9.2.2), the throughputs are nearly equal with 10.7 GB/s, also at 512 kb message size.

Uni-directional Latency

The results of the latency benchmark are depicted in figure 22. Compared to the baseline performance of ib_send_lat, FastMPJ's average RTT comes close to its 1.8 µs and closes that gap slightly further starting with 256 byte message size. Comparing the avg. RTT and 95th percentile to DXNet's results ( §9.2.3), FastMPJ outperforms DXNet with an up to four times lower RTT. This is also reflected by the message rate of 0.41 mmps for FastMPJ and 0.1 mmps for DXNet. The breakdown given in Section 9.2.3 explains the rather high RTTs and the amount of processing time spent by DXNet in major sections of the pipeline. However, even though DXNet's avg. RTT for message sizes up to 512 byte is higher than FastMPJ's, DXNet achieves lower 99th (8.9 to 9.2 µs) and 99.9th percentiles (11.8 to 12.7 µs) than FastMPJ.

Summary Results

This section briefly summarizes the most important results and key numbers of the previous benchmarks. All values are considered "up to", show the possible peak performance in the given benchmark and are single-threaded only. All results benefit from explicit aggregation using the WS.
• Uni-directional throughput Saturation at 64 kb message size with 5.7 GB/s; Peak throughput at 512 kb message size with 5.9 GB/s;

Compared to FastMPJ, DXNet's single-threaded results show an up to 4 times higher message rate for small messages on both uni- and bi-directional benchmarks. However, FastMPJ achieves a lower average and 95th percentile latency on the uni-directional latency benchmark. But, even with a more complicated and dynamic pipeline, DXNet achieves lower 99th and 99.9th percentiles than FastMPJ, demonstrating high stability. On all-to-all communication with up to 8 nodes, DXNet reaches throughputs similar to FastMPJ's for large messages but outperforms FastMPJ's message rate by up to three times for small messages; DXNet is always better for small messages.

MVAPICH2

This section describes the results of the benchmarks executed with MVAPICH2 and compares them to the results of DXNet. All osu benchmarks ( §9.1) were executed with MVAPICH2-2.3. Since MVAPICH2 supports MPI calls with multiple threads of the same process, some benchmarks were executed single- and multi-threaded. We set the following environment variables for optimal performance and comparability:
• MV2_DEFAULT_MAX_SEND_WQE=128
• MV2_DEFAULT_MAX_RECV_WQE=128
• MV2_SRQ_SIZE=1024
• MV2_USE_SRQ=1
• MV2_ENABLE_AFFINITY=1
Additionally, for the multi-threaded benchmarks, the following environment variables were set:
• MV2_CPU_BINDING_POLICY=hybrid
• MV2_THREADS_PER_PROCESS=X (where X equals the number of threads we used when executing the benchmark)
• MV2_HYBRID_BINDING_POLICY=linear

Uni-directional Throughput

The results of the uni-directional single-threaded benchmark are depicted in figure 26. Compared to the baseline performance of ib_send_bw, MVAPICH2's peak performance is approx. 1.0 mmps less for small messages. With increasing message size, on a WS of 64, the performance comes close to the baseline and even exceeds it for 2 kb to 8 kb messages. MVAPICH2 peaks very close to the baseline's peak performance of 6.0 GB/s. DXNet achieves very similar results ( §9.2.1) compared to MVAPICH2 but without relying on explicit aggregation. DXNet's throughput saturates and peaks earlier at a message size of 16 kb with 5.9 GB/s. However, if using one MH, throughput drops for larger messages down to 5.4 GB/s due to increased message processing time (de-serialization). As already explained in Section 9.3.1, this can be resolved by using two MHs. For small messages of up to 64 bytes, DXNet achieves an equal to slightly higher message rate of 4.0 to 4.5 mmps.

Bi-directional Throughput

Compared to the baseline performance of ib_send_bw, MVAPICH2's peak performance for small messages is approx. half of ib_send_bw's 9.5 mmps. With increasing message size, the throughput of MVAPICH2 comes close to ib_send_bw's with WS 64 and 32 for 4 and 8 kb messages, only. Peak throughput for large messages comes close to ib_send_bw's 11.5 GB/s. Compared to DXNet's results ( §9.2.2), the aggregated throughput is slightly higher than DXNet's (10.7 GB/s). However, DXNet outperforms MVAPICH2 for medium sized messages by reaching a peak throughput of 10.4 GB/s compared to 9.5 GB/s (on WS 64) for just 8 kb messages. Furthermore, DXNet offers a higher message rate of 6 to 7.2 mmps on small messages up to 64 bytes. DXNet achieves overall higher performance without relying on explicit message aggregation.

Uni-directional Latency

Figure 28 shows the results of the uni-directional single-threaded latency benchmark.
MVAPICH2 achieves a very low average RTT of 2.1 to 2.4 µs for up to 64 byte messages and up to 3.9 µs for up to 512 byte messages. The 95th, 99th and 99.9th percentiles are just slightly higher than the average. Compared to DXNet's results ( §9.2.3), MVAPICH2 achieves an overall lower latency. DXNet's average with 7.8 to 8.3 µs is nearly four times higher. The 95th (8.5 to 8.9 µs), 99th (8.9 to 9.2 µs) and 99.9th percentile (11.8 to 12.7 µs) are also at least two to three times higher. MVAPICH2 implements only a very thin layer of abstraction. Application threads issuing MPI calls are pinned to cores and directly call ibverbs functions after passing through these few layers of abstraction. DXNet, however, implements multiple pipeline stages with de-/serialization and multiple (JNI) context/thread switches. Naturally, data passing through such a long pipeline takes longer to process which impacts overall latency. However, DXNet traded latency for multi-threading support and performance as well as efficient handling of small messages.

All-to-all Throughput with up to 8 Nodes

With 4 nodes, MVAPICH2 achieves a peak throughput of 19.5 GB/s with 128 kb messages on WSs 16, 32 and 64, with saturation starting at approx. 32 kb message size. WS 8 gets close to the peak throughput as well but the remaining WSs peak lower. With WS 2, a message rate of 8.4 to 8.8 mmps is achieved for up to 64 byte messages and 6.6 to 8.8 mmps for up to 512 byte.

Running the benchmark with 6 nodes, MVAPICH2 hits a peak throughput of 27.3 GB/s with 512 kb messages on WSs 16, 32 and 64. Saturation starts with a message size of approx. 64 to 128 kb depending on the WS. For 1 kb to 32 kb messages, the fluctuations increased compared to executing the benchmark with 4 nodes. Again, the message rate is degraded when using large WSs for small messages. An optimal message rate of 11.9 to 13.1 mmps is achieved with WS 2 for up to 64 byte messages.

With 8 nodes, the benchmark peaks at 33.3 GB/s with 64 kb messages on a WS of 64. Again, the WS does matter for large messages as well, with WSs 16, 32 and 64 reaching the peak throughput and saturation starting at approx. 128 kb message size. The remaining WSs peak significantly lower.

Figure 32: MVAPICH2: 2 nodes, bi-directional throughput and message rate, multi-threaded with one send and one recv thread with increasing message and window size

The fluctuations for mid-range message sizes of 1 kb to 64 kb increased further compared to 6 nodes. Most notably, the performance with 4 kb messages and WS 4 is nearly 10 GB/s better than 4 kb with WS 64. With up to 64 byte messages, a message rate of 16.5 to 17.8 mmps is achieved. For up to 512 byte messages, the message rate varies with 13.5 to 17.8 mmps. As with the previous node counts, a smaller WS increases the message rate significantly while larger WSs degrade performance by a factor of two. MVAPICH2 has the same "scalability issues" as DXNet ( §9.2.4) and FastMPJ ( §9.3.4). The maximum achievable bandwidth matches what was determined with the other systems. With the same results on three different systems, it is very unlikely that this is some kind of software issue like a bug or bad implementation but most likely a hardware limitation. So far, we haven't seen this issue discussed in any other publication and think it is noteworthy to know what the hardware is currently capable of. Compared to DXNet ( §9.2.4), MVAPICH2 reaches slightly higher peak throughputs for large messages.
However, this peak as well as saturation is reached later, at 32 to 512 kb messages, compared to DXNet with approx. 16 kb. The fluctuations for mid-range size messages cannot be compared as DXNet does not rely on explicit aggregation. For small messages up to 64 byte, DXNet achieves significantly higher message rates than MVAPICH2, with peaks at 7.0 mmps, 15.0 mmps, 21.1 mmps and 27.3 mmps for 2 to 8 nodes.

Bi-directional Throughput Multi-threaded

Figure 32 shows the results of the bi-directional multi-threaded benchmark with two threads (on each node): one dedicated thread each for sending and receiving. In our case, this is the simplest multi-threading configuration to utilize more than one thread for MPI calls. The plot shows highly fluctuating results of the three runs executed as well as overall low throughput compared to the single-threaded results ( §9.4.2). Throughput peaks at 8.8 GB/s with a message size of 512 kb for WS 16. A message rate of 0.78 to 1.19 mmps is reached for up to 64 byte messages for WS 32.

We tried varying the configuration values (e.g. queue sizes, buffer sizes, buffer counts) but could not find configuration parameters that yielded significantly better, especially less fluctuating, results. Furthermore, the benchmarks could not be finished with sending 100,000,000 messages. When using MPI_THREAD_MULTIPLE, the memory consumption increases continuously and exhausts the total memory available on our machine (64 GB). We reduced the number of messages to 1,000,000 which still consumes approx. 20% of the total main memory but at least executes and finishes within a reasonable time. This does not happen with the widely used MPI_THREAD_SINGLE mode.

MVAPICH2 implements multi-threading support using a single global lock for various MPI calls which includes MPI_Isend and MPI_Irecv used in the benchmark. This fulfils the requirements described in the MPI standard and avoids a complex architecture with lock-free data structures. However, a single global lock reduces concurrency significantly and does not scale well with increasing thread count [12]. This effect impacts performance less on applications with short bursts and low thread count. However, for multi-threaded applications under high load, a single-threaded approach with one dedicated thread driving the network, decoupled from the application threads, might be a better solution (see the sketch below). Data between application threads and the network thread can be exchanged using data structures such as buffers, queues or pools like provided by DXNet. MVAPICH2's implementation of multi-threading does not allow improving performance by increasing the send or receive thread counts. Thus, further multi-threaded experiments using MVAPICH2 are not reasonable.
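A minimal sketch of this dedicated network thread pattern, assuming a simple blocking queue as the handoff structure (DXNet's actual structures are lock-free ring buffers and buffer pools):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Application threads enqueue payloads; a single network thread drives
    // the NIC, so no global lock around network calls is required.
    final class DedicatedNetworkThreadSketch {
        private final BlockingQueue<byte[]> sendQueue = new ArrayBlockingQueue<>(1024);

        // called concurrently by application threads
        void send(byte[] payload) throws InterruptedException {
            sendQueue.put(payload);
        }

        // run by the single, dedicated network thread
        void networkLoop() throws InterruptedException {
            while (!Thread.currentThread().isInterrupted()) {
                byte[] payload = sendQueue.take();
                // ... post payload to the network, e.g. as a work request ...
            }
        }
    }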
Summary Results

This section briefly summarizes the most important results and numbers of the previous benchmarks. All values are considered "up to" and show the possible peak performance in the given benchmark. Single-threaded:
• Uni-directional throughput Saturation with 64 kb to 128 kb message size, peak close to the baseline's 6.0 GB/s;

Compared to DXNet, the uni-directional results are similar but DXNet does not require explicit message aggregation to deliver high throughput. On bi-directional communication, MVAPICH2 achieves a slightly higher aggregated peak throughput than DXNet but DXNet performs better by approx. 0.9 GB/s on medium sized messages. DXNet outperforms MVAPICH2 on small messages with an up to 1.8 times higher message rate. But, MVAPICH2 clearly outperforms DXNet on the uni-directional latency benchmark with an overall lower average, 95th, 99th and 99.9th percentile latency. On all-to-all communication with up to 8 nodes, MVAPICH2 reaches slightly higher peak throughputs for large messages but DXNet reaches its saturation earlier and performs significantly better on small message sizes up to 64 bytes.

The low multi-threading performance of MVAPICH2 cannot be compared to DXNet's due to the following reasons: First, MVAPICH2 implements synchronization using a global lock which is the simplest but very often least performant method to ensure thread safety. Second, MVAPICH2, like many other MPI implementations, typically creates multiple processes (one process per core) to enable concurrency on a single processor socket. However, as already discussed in related work ( §3), this programming model is not suitable for all application domains, especially in big data applications. DXNet performs better for small messages and multi-threaded access, as required in big data applications.

Conclusions

We presented Ibdxnet, a transport for the Java messaging library DXNet which allows multi-threaded Java applications to benefit from low latency and high throughput using InfiniBand hardware. DXNet provides transparent connection management, concurrency handling, message serialization and hides the transport, which allows the application to switch from Ethernet to InfiniBand hardware transparently, if the hardware is available. Ibdxnet's native subsystem provides dynamic, scalable, concurrent and automatic connection management and the msgrc messaging engine implementation. The msgrc engine uses dedicated send and receive threads to drive RC QPs asynchronously, which ensures scalability with many nodes. Load adaptive parking avoids high CPU load when idle but ensures low latency when busy. SGEs are used to simplify buffer handling and increase buffer utilization when sending data provided by the higher level DXNet core. A carefully crafted architecture minimizes context switching between Java and the native space and exchanges data between the two spaces using shared memory buffers. The evaluation shows that DXNet with the Ibdxnet transport can keep up with FastMPJ and MVAPICH2 in single-threaded applications and even exceeds them in multi-threaded applications under high load. DXNet with Ibdxnet is capable of handling concurrent connections and data streams with up to 8 nodes. Furthermore, multi-threaded applications benefit significantly from the multi-threading aware architecture. The following topics are of interest for future research with DXNet and Ibdxnet:
• Experiments with more than 100 nodes on our university's cluster
16,870
1812.01963
2902850193
In this report, we describe the design and implementation of Ibdxnet, a low-latency and high-throughput transport providing the benefits of InfiniBand networks to Java applications. Ibdxnet is part of the Java-based DXNet library, a highly concurrent and simple to use messaging stack with transparent serialization of messaging objects and focus on very small messages (< 64 bytes). Ibdxnet implements the transport interface of DXNet in Java and a custom C++ library in native space using JNI. Several optimizations in both spaces minimize context switching overhead between Java and C++ and are not burdening message latency or throughput. Communication is implemented using the messaging verbs of the ibverbs library complemented by an automatic connection management in the native library. We compared DXNet with the Ibdxnet transport to the MPI implementations FastMPJ and MVAPICH2. For small messages up to 64 bytes using multiple threads, DXNet with the Ibdxnet transport achieves a bi-directional message rate of 10 million messages per second and surpasses FastMPJ by a factor of 4 and MVAPICH by a factor of 2. Furthermore, DXNet scales well on a high load all-to-all communication with up to 8 nodes achieving a total aggregated message rate of 43.4 million messages per second for small messages and a throughput saturation of 33.6 GB/s with only 2 kb message size.
@cite_11 is a distributed key-value storage optimized for low latency data access using InfiniBand with messaging verbs. Multiple transports are implemented for network communication, e.g. using reliable and unreliable connections with InfiniBand and Ethernet with unreliable connections. @cite_26 implements a key-value and graph storage using a shared memory architecture with RDMA. It performs well with a throughput of 167 million key-value lookups per second and 31 µs latency using 20 machines. @cite_17 also implements a key-value storage using RDMA for get operations and messaging verbs for put operations. @cite_25 implements a key-value storage with a focus on NUMA architectures. It maps each CPU core to a partition of data and communicates using a request-response approach using unreliable connections. @cite_30 borrows the design of MICA and implements networking using RDMA writes for the request to the server and messaging verbs for the response back to the client.
{ "abstract": [ "", "We describe the design and implementation of FaRM, a new main memory distributed computing platform that exploits RDMA to improve both latency and throughput by an order of magnitude relative to state of the art main memory systems that use TCP IP. FaRM exposes the memory of machines in the cluster as a shared address space. Applications can use transactions to allocate, read, write, and free objects in the address space with location transparency. We expect this simple programming model to be sufficient for most application code. FaRM provides two mechanisms to improve performance where required: lock-free reads over RDMA, and support for collocating objects and function shipping to enable the use of efficient single machine transactions. FaRM uses RDMA both to directly access data in the shared address space and for fast messaging and is carefully tuned for the best RDMA performance. We used FaRM to build a key-value store and a graph store similar to Facebook's. They both perform well, for example, a 20-machine cluster can perform 167 million key-value lookups per second with a latency of 31µs.", "RAMCloud is a storage system that provides low-latency access to large-scale datasets. To achieve low latency, RAMCloud stores all data in DRAM at all times. To support large capacities (1PB or more), it aggregates the memories of thousands of servers into a single coherent key-value store. RAMCloud ensures the durability of DRAM-based data by keeping backup copies on secondary storage. It uses a uniform log-structured mechanism to manage both DRAM and secondary storage, which results in high performance and efficient memory usage. RAMCloud uses a polling-based approach to communication, bypassing the kernel to communicate directly with NICs; with this approach, client applications can read small objects from any RAMCloud storage server in less than 5μs, durable writes of small objects take about 13.5μs. RAMCloud does not keep multiple copies of data online; instead, it provides high availability by recovering from crashes very quickly (1 to 2 seconds). RAMCloud’s crash recovery mechanism harnesses the resources of the entire cluster working concurrently so that recovery performance scales with cluster size.", "MICA is a scalable in-memory key-value store that handles 65.6 to 76.9 million key-value operations per second using a single general-purpose multi-core system. MICA is over 4-13.5x faster than current state-of-the-art systems, while providing consistently high throughput over a variety of mixed read and write workloads. MICA takes a holistic approach that encompasses all aspects of request handling, including parallel data access, network request handling, and data structure design, but makes unconventional choices in each of the three domains. First, MICA optimizes for multi-core architectures by enabling parallel access to partitioned data. Second, for efficient parallel data access, MICA maps client requests directly to specific CPU cores at the server NIC level by using client-supplied information and adopts a light-weight networking stack that bypasses the kernel. Finally, MICA's new data structures--circular logs, lossy concurrent hash indexes, and bulk chaining--handle both read-and write-intensive workloads at low overhead.", "Recent technological trends indicate that future datacenter networks will incorporate High Performance Computing network features, such as ultra-low latency and CPU bypassing. 
How can these features be exploited in datacenter-scale systems infrastructure? In this paper, we explore the design of a distributed in-memory key-value store called Pilaf that takes advantage of Remote Direct Memory Access to achieve high performance with low CPU overhead. In Pilaf, clients directly read from the server's memory via RDMA to perform gets, which commonly dominate key-value store workloads. By contrast, put operations are serviced by the server to simplify the task of synchronizing memory accesses. To detect inconsistent RDMA reads with concurrent CPU memory modifications, we introduce the notion of self-verifying data structures that can detect read-write races without client-server coordination. Our experiments show that Pilaf achieves low latency and high throughput while consuming few CPU resources. Specifically, Pilaf can surpass 1.3 million ops/sec (90% gets) using a single CPU core compared with 55K for Memcached and 59K for Redis." ], "cite_N": [ "@cite_30", "@cite_26", "@cite_11", "@cite_25", "@cite_17" ], "mid": [ "", "1532546444", "2074881976", "982826035", "2129554014" ] }
Ibdxnet: Leveraging InfiniBand in Highly Concurrent Java Applications
Today's big data applications generate hundreds or even thousands of terabytes of data. Commonly, Java-based applications are used for further analysis. A single commodity machine, for example in a data center or typical cloud environment, cannot store and process the vast amounts of data, making distribution mandatory. Thus, the machines have to use interconnects to exchange data or coordinate data analysis. However, commodity interconnects used in such environments, e.g. Gigabit Ethernet, cannot provide the high throughput and low latency of alternatives like InfiniBand, which is needed to speed up data analysis in the target applications.

Introduction

Interactive applications, especially on the web [6,28], simulations [34] or online data analysis [14,41,43] have to process terabytes of data, often consisting of small objects. For example, social networks are storing graphs with trillions of edges, resulting in a per-object size of less than 64 bytes for the majority of objects [10]. Other graph examples are brain simulations with billions of neurons and thousands of connections each [31] or search engines for billions of indexed web pages [20]. To provide high interactivity to the user, low latency is a must in many of these application domains. Furthermore, it is also important in the domain of mobile networks moving state management into the cloud [23]. Big data applications are processing vast amounts of data which requires either an expensive supercomputer or distributed platforms, like clusters or cloud environments [21]. High performance interconnects, such as InfiniBand, are playing a key role in keeping processing and response times low, especially for highly interactive and always online applications. Today, many cloud providers, e.g. Microsoft, Amazon or Google, offer instances equipped with InfiniBand. InfiniBand offers messaging verbs and RDMA, both providing one-way single-digit microsecond latencies. It depends on the application requirements whether messaging verbs or RDMA is the better choice to ensure optimal performance [38].
In this report, we focus on Java-based parallel and distributed applications, especially big data applications, which commonly communicate with remote nodes using asynchronous and synchronous messages [10,16,13,42]. Unfortunately, accessing InfiniBand verbs from Java is not a built-in feature of the commonly used JVMs. There are several external libraries, wrappers or JVMs with built-in support available, but all trade performance for transparency or require proprietary environments ( §3.1). To use InfiniBand from Java, one can rely on available (Java) MPI implementations. But these do not provide features such as serialization of messaging objects or automatic connection management ( §3.2). We developed the network subsystem DXNet ( §2) which provides transparent and simple to use sending and event based receiving of synchronous and asynchronous messages with transparent serialization of messaging objects [8]. It is optimized for high concurrency on all operations by implementing lock-free synchronization. DXNet is implemented in Java, open source and available at Github [1]. In this report, we propose Ibdxnet, a transport for the DXNet network subsystem. The transport uses reliable messaging verbs to implement InfiniBand support for DXNet and provides low latency and high throughput messaging for Java. Ibdxnet implements scalable and automatic connection and queue pair management, the msgrc transport engine, which uses InfiniBand messaging verbs, and a JNI interface. We present best practices applied to ensure scalability across multiple threads and nodes when working with InfiniBand verbs by elaborating on the implementation details of Ibdxnet. We carefully designed an efficient and low latency JNI layer to connect the native Ibdxnet subsystem to the Java based IB transport in DXNet. The IB transport uses the JNI layer to interface with Ibdxnet, extends DXNet's outgoing ring buffer for InfiniBand usage and implements scalable scheduling of outgoing data for many simultaneous connections. We evaluated DXNet with the IB transport and Ibdxnet, and compared them to two MPI implementations supporting InfiniBand: the well known MVAPICH2 and the Java based FastMPJ implementations. Though MPI is discussed in related work ( §3.2) and two implementations are evaluated and compared to DXNet ( §9), neither DXNet, the IB transport nor Ibdxnet implements the MPI standard. The term messaging is used by DXNet to simply refer to exchanging data in the form of messages (i.e. additional metadata identifies messages on receive). DXNet does not implement any of the MPI primitives defined by the standard. Various low-level libraries to use InfiniBand in Java are not compared in this report, but in a separate one. The report is structured in the following way: In Section 2, we present a summary of DXNet and its aspects important to this report. In Section 3, we discuss related work which includes a brief summary of available libraries and middleware for interfacing InfiniBand in Java applications. MPI and selected implementations supporting InfiniBand are presented as available middleware solutions and compared to DXNet. Lastly, we discuss target applications in the field of big data which benefit from InfiniBand usage. Section 4 covers InfiniBand basics which are of concern for this report. Section 5 discusses JNI usage and presents best practices for low latency interfacing with native code from Java using JNI. Section 6 gives a brief overview of DXNet's multi-layered stack when using InfiniBand.
Implementation details of the native part Ibdxnet are given in Section 7 and the IB transport in Java is presented in Section 8. Section 9 presents and compares the evaluation results.

DXNet

DXNet is a network library for Java targeting, but not limited to, highly concurrent big data applications. DXNet implements an asynchronous event driven messaging approach with a simple and easy to use application interface. Messaging describes transparent sending and receiving of complex (even nested) data structures with implicit serialization and de-serialization. Furthermore, DXNet provides a built-in primitive for transparent request-response communication. DXNet is optimized for highly multi-threaded sending and receiving of small messages by using lock-free data structures, fast concurrent serialization, zero copy and zero allocation. The core of DXNet provides automatic connection and buffer management, serialization of message objects and an interface for implementing different transports. Currently, an Ethernet transport using Java NIO sockets and an InfiniBand transport using ibverbs ( §7) are implemented. The following subsections describe the most important aspects of DXNet and its core which are depicted in Figure 1 and relevant for further sections of this report. A more detailed insight is given in a dedicated paper [8]. The source code is available at Github [1].

Automatic Connection Management

To relieve the programmer from explicit connection creation, handling and cleanup, DXNet implements automatic and transparent connection management. Nodes are addressed using an abstract and unique 16-bit nodeID. Address mappings must be registered to allow associating the nodeIDs of each remote node with a corresponding implementation dependent endpoint (e.g. socket, queue pair). To provide scalability with up to hundreds of simultaneous connections, our event driven system does not create one thread per connection. A new connection is created automatically once the first message is either sent to a destination or received from one. Connections are closed once a configurable connection limit is reached, using a least recently used strategy. Faulty connections (e.g. remote node not reachable anymore) are handled and cleaned up by the manager. Connection errors and timeouts are propagated to the application using exceptions.

Sending of Messages

Messages are serialized Java objects and sent asynchronously without waiting for a completion. A message can be targeted towards one or multiple receivers. A message of type Request is sent to exactly one receiver. When sending a request, the sender waits until receiving a corresponding response message (transparently handled by DXNet) or skips waiting and collects the response later. We expect applications calling DXNet concurrently with multiple threads to send messages. Every message is automatically and concurrently serialized into the Outgoing Ring Buffer (ORB), a natively allocated and lock-free ring buffer. Messages are automatically aggregated, which increases send throughput; a simplified sketch of this mechanism is given below. The ORB, one per connection, is allocated in native memory to allow direct and zero-copy access by the low-level transport. A transport runs a decoupled dedicated thread which removes the serialized and ready to send data from the ORB and forwards it to the hardware.
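The following is a minimal, simplified sketch of the ORB idea: send threads copy ("serialize") messages into a shared ring buffer and the transport's dedicated thread consumes all currently available bytes at once, which aggregates many small messages. The sketch uses a coarse lock on the producer side and a plain Java byte array; DXNet's actual ORB is lock-free and natively allocated.

    // Simplified ORB sketch: indices grow monotonically and are wrapped with
    // modulo on access; a real implementation wraps them explicitly.
    final class OutgoingRingBufferSketch {
        private final byte[] buffer;
        private volatile int back;   // producer position, published after copying
        private volatile int front;  // consumer position

        OutgoingRingBufferSketch(int capacity) { buffer = new byte[capacity]; }

        // called by application send threads (serialized here by a lock;
        // DXNet uses a lock-free reservation scheme instead)
        synchronized void write(byte[] serializedMessage) {
            while (back + serializedMessage.length - front > buffer.length) {
                Thread.yield();  // buffer full: wait for the consumer
            }
            int pos = back;
            for (int i = 0; i < serializedMessage.length; i++) {
                buffer[(pos + i) % buffer.length] = serializedMessage[i];
            }
            back = pos + serializedMessage.length;  // publish the new data
        }

        // called by the transport's dedicated send thread: consume everything
        // available, i.e. many small messages are aggregated into one send
        int consume(byte[] dst) {
            int pos = front, end = back, n = 0;
            while (pos < end && n < dst.length) {
                dst[n++] = buffer[pos++ % buffer.length];
            }
            front = pos;
            return n;  // number of aggregated bytes handed to the hardware
        }
    }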
Receiving of Messages

The network transport handles incoming data by writing it to pooled native buffers to avoid burdening the Java garbage collection. Depending on how a transport writes and reads data, the buffers might contain fully serialized messages or just fragments. Every received buffer is pushed to the ring buffer based Incoming Buffer Queue (IBQ). Both the buffer pool and the IBQ are shared among all connections. Dedicated handler threads pull buffers from the IBQ and process them asynchronously by de-serializing them and creating Java message objects. The messages are passed to pre-registered callback methods of the application.

Flow Control

DXNet implements its own flow control (FC) mechanism to avoid flooding a remote node with many (very small) messages. This would result in an increased overall latency and lower throughput if the receiving node cannot keep up with processing incoming messages. On sending a message, the per connection dedicated FC checks if a configurable threshold is exceeded. This threshold describes the number of bytes sent by the current node but not fully processed by the receiving node. Once the configured threshold is exceeded, the receiving node slices the number of bytes received into equally sized windows (window size configurable) and sends the number of confirmed windows back to the source node. Once the sender receives this confirmation, the number of bytes sent but not processed is reduced by the number of confirmed windows multiplied by the configured window size. If an application send thread was previously blocked due to exceeding this threshold, it can now continue with processing.

Transport Interface

DXNet provides a transport interface allowing implementations of different transport types. On initialization of DXNet, one of the implemented transports can be selected. Afterwards, when using DXNet, the transport is transparent for the application. The following tasks must be handled by every transport implementation:
• Connection: Create, close and cleanup
• Get ready to send data from the ORB and send it (the ORB triggers a callback once data is available)
• Handle received data by pushing it to the IBQ
• Manage flow control when sending/receiving data
Every other task that is not exposed directly by one of the following methods must be handled internally by the transport. The core of DXNet relies on the following methods of abstract Java classes/interfaces which must be implemented by every transport (see the sketch below):
• Connection: open, close, dataPosted
• ConnectionManager: createConnection, closeConnection
• FlowControl: sendFlowControlData, getAndResetFlowControlData
We elaborate on further details about the transport interface in Section 8 where we describe the transport implementation for Ibdxnet.
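A minimal Java sketch of these abstractions is given below; the method names are taken from the list above, while parameters and return types are illustrative and not necessarily identical to DXNet's actual declarations.

    // Transport-facing abstractions; signatures are assumptions for illustration.
    abstract class Connection {
        abstract void open() throws Exception;   // e.g. create socket or QP
        abstract void close();
        abstract void dataPosted();              // ORB signals available data
    }

    interface ConnectionManager {
        Connection createConnection(short destinationNodeId) throws Exception;
        void closeConnection(Connection connection);
    }

    abstract class FlowControl {
        // send the number of confirmed FC windows back to the remote
        abstract void sendFlowControlData(int confirmedWindows);
        // fetch and reset FC confirmations received from the remote
        abstract int getAndResetFlowControlData();
    }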
Java and InfiniBand

Before developing Ibdxnet and the InfiniBand transport for DXNet, we evaluated available (low-level) solutions for leveraging InfiniBand hardware in Java applications. This includes using NIO sockets with IP over InfiniBand (IPoIB) [25], jVerbs [37], JSOR [40], libvma [2] and native C verbs with ibverbs. Extensive experiments analyzing throughput and latency of both messaging verbs and RDMA were conducted to determine a suitable candidate for using InfiniBand with Java applications and are published in a separate report. In summary, the results show that transparent solutions like IPoIB, libvma or JSOR, which allow existing socket-based applications to send and receive data transparently over InfiniBand hardware, are not able to deliver an overall adequate throughput and latency. For the verbs-based libraries, jVerbs gets close to the native ibverbs performance but, like JSOR, requires a proprietary JVM to run. Overall, none of the analyzed solutions, other than ibverbs, delivers adequate performance. Furthermore, we want DXNet to stay independent of the JVM when using InfiniBand hardware. Thus, we decided to use the native ibverbs library with the Java Native Interface to avoid the known performance issues of the evaluated solutions.

MPI

The message passing interface [19] defines a standard for high level networking primitives to send and receive data between local and remote processes, typically used for HPC applications. An application can send and receive primitive data types, arrays, derived or vectors of primitive data types, and indexed data types using MPI. The synchronous primitives MPI_Send and MPI_Recv perform these operations in blocking mode. The asynchronous operations MPI_Isend and MPI_Irecv allow non-blocking communication. A status handle is returned with each started asynchronous operation. This can be used to check the completion of the operation or to actively wait for one or multiple completions using MPI_Wait or MPI_Waitall. Furthermore, there are various collective primitives which implement more advanced operations such as scatter, gather or reduce. Sending and receiving of data with MPI requires the application to issue a receive for every send with a target buffer that can hold at least the amount of data sent by the remote. DXNet relieves the application from this responsibility. Application threads can send messages with variable size and DXNet manages the buffers used for sending and receiving. The application does not have to issue any receive operations or actively wait for data to arrive. Incoming messages are dispatched to pre-registered callback handlers by dedicated handler threads of DXNet. DXNet supports transparent serialization and de-serialization of complex (even nested) data types (Java objects) for messages. MPI primitives for sending and receiving data require the application to use one of the supported data types and do not offer serialization for more complex data types such as objects. However, the MPI implementation can benefit from the lack of serialization by avoiding any copying of data, entirely. Due to the nature of serialization, DXNet has to create a (serialized) "copy" of the message when serializing it into the ORB. Analogously, data is copied when a message is created from incoming data during de-serialization. Messages in DXNet are sent asynchronously while requests offer active waiting or probing for the corresponding response. These communication patterns can also be applied by applications using MPI. The communication primitives currently provided by DXNet are limited to messages and request-response. Nevertheless, using these two primitives, other MPI primitives, such as scatter, gather or reduce, can be implemented by the application if required. DXNet does not implement multiple protocols for different buffer sizes like MPI with eager and rendezvous. A transport for DXNet might implement such a protocol but our current implementations for Ethernet and InfiniBand do not. The aggregated data available in the ORB is either sent as a whole or sliced and sent as multiple buffers. The transport on the receiving side passes the stream of buffers to DXNet and puts them into the IBQ.
Afterwards, the buffers are reconnected to a stream of data by the MCC before extracting and processing the messages. An instance using DXNet runs within one process of a big data application with one or multiple application threads. Typically, one DXNet instance runs per cluster node. This allows the application to dynamically scale the number of threads up or down within the same DXNet instance as needed. Furthermore, fast communication between multiple threads within the same process is possible, too. Commonly, an MPI application runs a single thread per process. Multiple processes are spawned according to the number of cores per node with IPC fully based on MPI. MPI does offer different thread modes which include issuing MPI calls using different threads in a process. Typically, this mode is used in combination with OpenMP [4]. However, it is not supported by all MPI implementations which also offer InfiniBand support ( §3.3). Furthermore, DXNet supports dynamic up- and down-scaling of instances. MPI implementations support up-scaling (for non-singletons) but down-scaling is considered an issue for many implementations. Processes cannot be removed entirely and might cause other processes to get stuck or crash. Connection management and identifying remote nodes are similar with DXNet and MPI. However, DXNet does not come with deployment tools such as mpirun which assigns the ids/ranks to identify the instances. This intentional design decision allows existing applications to integrate DXNet without restrictions to the bootstrapping process of the application. Furthermore, DXNet supports dynamically adding and removing instances. With MPI, an application must be created by using the MPI environment. MPI applications must be run using a special coordinator such as mpirun. If executed without a coordinator, an MPI world is limited to the current process it is created in, which doesn't allow communication with any other instances. Separate MPI worlds can be connected but the implementation must support this feature. To our knowledge, there is no implementation (with InfiniBand support) that currently supports this.

MPI Implementations Supporting InfiniBand

This section only considers MPI implementations supporting InfiniBand directly. Naturally, IPoIB can be used to run any MPI implementation supporting Ethernet networks over InfiniBand. But, as previously discussed ( §3.1), the network performance is very limited when using IPoIB. MVAPICH2 is an MPI library [32] supporting various network interconnects, such as Ethernet, iWARP, Omni-Path, RoCE and InfiniBand. MVAPICH2 includes features like RDMA fast path or RDMA operations for small message transfers and is widely used on many clusters around the world. Open MPI [3] is an open source implementation of the MPI standard (currently full 3.1 conformance) supporting a variety of interconnects, such as Ethernet using TCP sockets, RoCE, iWARP and InfiniBand. mpiJava [7] implements the MPI standard by a collection of wrapper classes that call native MPI implementations, such as MVAPICH2 or OpenMPI, through JNI. The wrapper based approach provides efficient communication relying on native libraries. However, it is not thread-safe and, thus, is not able to take advantage of multi-core systems using multi-threading. FastMPJ [17] uses Java Fast Sockets [39] and ibvdev to provide an MPI implementation for parallel systems using Java.
Initially, ibvdev [18] was implemented as a low-level communication device for MPJ Express [35], a Java MPI implementation of the mpiJava 1.2 API specification. ibvdev implements InfiniBand support using the low-level verbs API and can be integrated into any parallel and distributed Java application. FastMPJ optimizes MPJ Express collective primitives and provides efficient non-blocking communication. Currently, FastMPJ supports issuing MPI calls using a single thread, only.

Other Middleware

UCX [36] is a network stack designed for next generation systems for applications with a highly multi-threaded environment. It provides three independent layers: UCS is a service layer with different cross platform utilities, such as atomic operations, thread safety, memory management and data structures. The transport layer UCT abstracts different hardware architectures and their low-level APIs, and provides an API to implement communication primitives. UCP implements high level protocols such as MPI or PGAS programming models by using UCT. UCX aims to be a common computing platform for multi-threaded applications. DXNet does not aim to be such a platform and, thus, does not include its own atomic operations, thread safety or memory management for data structures. Instead, it relies on the multi-threading utilities provided by the Java environment. DXNet does abstract different hardware like UCX, but only network interconnects and not GPUs or other coprocessors. Furthermore, DXNet is a simple networking library for Java applications and does not implement MPI or PGAS models. Instead, it provides simple asynchronous messaging and synchronous request-response communication, only.

Target Applications using InfiniBand

Providing high throughput and low latency, InfiniBand is a technology which is widely used in various big data applications. Apache Hadoop [22] is a well known Java big data processing framework for large scale data processing using the MapReduce programming model. It uses the Hadoop Distributed File System for storing and accessing application data which supports InfiniBand interconnects using RDMA. Also implemented in Java, Apache Spark is a framework for big data processing offering the domain-specific language Spark SQL, a stream processing and machine learning extension and the graph processing framework GraphX. It supports InfiniBand hardware using an additional RDMA plugin [5]. Numerous key-value storages for big data applications have been proposed that use InfiniBand and RDMA to provide low latency data access for highly interactive applications. RAMCloud [33] is a distributed key-value storage optimized for low latency data access using InfiniBand with messaging verbs. Multiple transports are implemented for network communication, e.g. using reliable and unreliable connections with InfiniBand and Ethernet with unreliable connections. FaRM [15] implements a key-value and graph storage using a shared memory architecture with RDMA. It performs well with a throughput of 167 million key-value lookups per second and 31 µs latency using 20 machines. Pilaf [30] also implements a key-value storage using RDMA for get operations and messaging verbs for put operations. MICA [27] implements a key-value storage with a focus on NUMA architectures. It maps each CPU core to a partition of data and communicates using a request-response approach using unreliable connections.
HERD [24] borrows the design of MICA and implements networking using RDMA writes for the request to the server and messaging verbs for the response back to the client.

InfiniBand and ibverbs Basics

This section covers the most important aspects of the InfiniBand hardware and the native ibverbs library which are relevant for this report. Abbreviations introduced here (most of them commonly used in the InfiniBand context) are used throughout the report from this point on. The host channel adapter (HCA) connected to the PCI bus of the host system is the network device for communicating with other nodes. The offloading engine of the HCA processes outgoing and incoming data asynchronously and is connected to other nodes using copper or optical cables via one or multiple switches. The ibverbs API provides the interface to communicate with the HCA, either by exchanging data using Remote Direct Memory Access (RDMA) or messaging verbs. A queue pair (QP) identifies a physical connection to a remote node when using reliable connected (RC) communication. Using non-connected unreliable datagram (UD) communication, a single QP is sufficient to send data to multiple remotes. A QP consists of one send queue (SQ) and one receive queue (RQ). On RC communication, a QP's SQ and RQ are always cross connected with a target's QP, e.g. node 0 SQ connects to node 1 RQ and node 0 RQ to node 1 SQ. If an application wants to send data, it posts a work request (WR), containing a pointer to the buffer to send and the length, to the SQ. A corresponding WR must be posted on the RQ of the connected QP on the target node to receive the data. This WR also contains a pointer to a buffer and its size to receive any incoming data to. Once the data is sent, a work completion (WC) is generated and added to a completion queue (CQ) associated with the SQ. A WC is also generated for the CQ of the remote's RQ receiving the data, once the data arrived. The WC of the send task tells the application that the data was successfully sent to the remote (or provides error information otherwise). On the remote receiving the data, the WC indicates that the buffer attached to the previously posted WR is now filled with the remote's data. When serving multiple connections, not every single SQ and RQ needs a dedicated CQ. A single CQ can be used as a shared completion queue (SCQ) with multiple SQs or RQs. Furthermore, when receiving data from multiple sources, a shared receive queue (SRQ) can be used on multiple QPs instead of managing many single RQs to provide buffers for incoming data. When attaching a buffer to a WR, it is attached as a scatter gather element (SGE) of a scatter gather list (SGL). For sending, the SGL allows the offloading engine to gather the data from many scattered buffers and send it as one WR. For receiving, the received data is scattered to one or multiple buffers by the offloading engine.

Low Latency Data Exchange Between Java and C

In this section, we describe our experiences with and best practices for the Java Native Interface (JNI) to avoid performance penalties for latency sensitive applications. These are applied to various implementation aspects of the IB transport which are further explained in their dedicated sections. Using JNI is mandatory if the Java space has to interface with native code, e.g. for IO operations or when using native libraries.
As we decided to use the low-level ibverbs library to benefit from full control, high flexibility and low latency ( §3.1), we had to ensure that interfacing with native code from Java does not introduce too much overhead compared to the already existing and evaluated solutions. The Java Native Interface (JNI) allows Java programmers to call native code from C/C++ libraries. It is a well known method to interface with native libraries that are not available in Java or to access IO using system calls or other native libraries. When calling code of a native library, the library has to expose and implement a predefined interface which allows the JVM to connect the native functions to natively declared Java methods in a Java class. With every call from Java to the native space and vice versa, a context switch has to be executed by the JVM environment. This involves tasks related to thread and cache management, adding latency to every native call. This increases the duration of such a call and is crucial, especially regarding the low latency of IB. Exchanging data with a native library without adding considerable overhead is challenging. For single primitive values, passing parameters to functions is convenient and does not add any considerable overhead. However, access to Java classes or arrays from native space requires synchronization with the JVM (and its garbage collector) which is very expensive and must be avoided. Alternatively, one can use ByteBuffers allocated as DirectByteBuffers which allocate memory natively. Java can access the memory through the ByteBuffer and the native library can get the native address of the array and the size with the functions GetDirectBufferAddress and GetDirectBufferCapacity. However, these two calls increase the latency by tens to even hundreds of microseconds (with high variation). This problem can be solved by allocating a buffer in the native space, passing its address and size to the Java space and accessing it using the Unsafe API or wrapping it as a newly allocated (Direct)ByteBuffer. The latter requires reflection to access the constructor of the DirectByteBuffer and set the address and size fields. We decided to use the Unsafe API because we map native structs and don't require any of the additional features the ByteBuffer provides. The native address is cached which allows fast exchange of data from Java to native and vice versa. To improve convenience when accessing fields of a data structure, a helper class with getter and setter wrapper methods is created to access the fields of the native struct, as sketched below.
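A sketch of such a helper class is shown below. The struct layout (two 32-bit fields) is purely illustrative; the pattern of caching the native address and reading/writing fields via Unsafe offsets is what matters.

    import java.lang.reflect.Field;
    import sun.misc.Unsafe;

    // Maps a native struct, e.g. struct { uint32_t posFront; uint32_t posBack; },
    // at a cached native address to typed Java getters/setters.
    final class NativeStructAccessorSketch {
        private static final Unsafe UNSAFE;
        static {
            try {
                Field f = Unsafe.class.getDeclaredField("theUnsafe");
                f.setAccessible(true);
                UNSAFE = (Unsafe) f.get(null);
            } catch (ReflectiveOperationException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        private static final long OFFSET_POS_FRONT = 0;  // illustrative layout
        private static final long OFFSET_POS_BACK = 4;

        private final long baseAddress;  // passed up once from native space via JNI

        NativeStructAccessorSketch(long baseAddress) { this.baseAddress = baseAddress; }

        int getPosFront() { return UNSAFE.getInt(baseAddress + OFFSET_POS_FRONT); }
        void setPosFront(int value) { UNSAFE.putInt(baseAddress + OFFSET_POS_FRONT, value); }
        int getPosBack() { return UNSAFE.getInt(baseAddress + OFFSET_POS_BACK); }
        void setPosBack(int value) { UNSAFE.putInt(baseAddress + OFFSET_POS_BACK, value); }
    }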
We evaluated different means of passing data from Java to native and vice versa as well as the function/method call overhead.

Figure 2: Microbenchmarks to evaluate JNI call overhead and data exchange overhead using different types of memory access

Figure 2 shows the results of the microbenchmarks used to evaluate JNI call overhead as well as the overhead of different memory access methods. The results displayed are the averages of three runs of each benchmark executing the operation 100,000,000 times. A warm-up of 1,000 operations precedes each benchmark run. For JNI context switching, we measured the latency introduced by switching from Java to native (jtn), native to Java (ntj), native to Java with exception checking (ntjexc) and native to Java with thread detaching (ntjdet). For exchanging data between Java and native, we measured the latency introduced by accessing a 64 byte buffer in both spaces for a primitive Java byte array (ba), a Java DirectByteBuffer (dbb) and Unsafe (u). The benchmarks were executed on a machine with an Intel Core i7-5820K CPU and a Java 1.8 runtime. The results show that the average single costs for context switching are negligible with an average switching time of only up to 0.1 µs. We exchange data using primitive function arguments, only. Data structures are mapped and accessed as C-structs in the native space. In Java, we access the native C-structs using a helper class which utilizes the Unsafe library [29] as this is the fastest method in both spaces. These results influenced the important design decision to run native threads, attached once as daemon threads to the JVM, which call to Java instead of Java threads calling native methods ( §7.2.3, §7.2.4). Furthermore, we avoid using any of the JNI provided helper functions where possible [26]. For example: attaching a thread to the JVM involves expensive operations like creating a new Java thread object and various state changes to the JVM environment. Avoiding them on every context switch is crucial to latency and performance on every call. Lastly, we minimized the number of calls to the Java space by combining multiple tasks into a single cross-space call instead of yielding multiple calls. For inter space communication, we highly rely on communication via buffers mapped to structs in native space and wrapper classes in Java (see above). This is highly application dependent and not always possible. But if possible and applied, this can improve the overall performance. We applied this technique of combining multiple tasks into a single cross-space call to sending and receiving of data to minimize latency and context switching overhead. The native send and receive threads implement the most latency critical logic in the native space, which is not simply wrapping ibverbs functions to be exposed to Java ( §7.2.3 and 7.2.4). The counterpart to the native logic is implemented in Java ( §8). In the end, we are able to reduce sending and receiving of data to a single context switching call, as illustrated below.
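The Java-side shape of this pattern can be sketched as follows; the class, method and library names are hypothetical and only illustrate passing primitives and native addresses in a single call per direction:

    // Hypothetical Java-side JNI bindings: one native downcall per send
    // interaction, one Java upcall per receive interaction, primitives only.
    final class IbTransportBindingsSketch {
        static { System.loadLibrary("IbdxnetSketch"); }  // hypothetical library name

        // single downcall: hands the next ready-to-send ORB region to native space
        static native void postNextData(long orbAddress, int posFront, int posBack);

        // single upcall, invoked by the native receive thread (attached once to
        // the JVM as a daemon thread): delivers buffer address, length and source
        static void received(long bufferAddress, int length, short sourceNodeId) {
            // push (bufferAddress, length, sourceNodeId) to the IBQ for
            // asynchronous processing by the handler threads
        }
    }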
Overview Ibdxnet and Java InfiniBand Transport

This section gives a brief top-down introduction of the full transport implementation. Figure 3 depicts the different components and layers involved when using InfiniBand with DXNet. The Java InfiniBand transport (IB transport) ( §8) implements DXNet's transport interface ( §2.5) and uses JNI to connect to the native counterpart. Ibdxnet uses the native ibverbs library to access the hardware and provides a separate subsystem for connection management, sending and receiving data. Furthermore, it implements a set of functions for the Java Native Interface to connect to the Java implementation.

Figure 4: Simplified architecture of Ibdxnet with the msgrc transport engine

Ibdxnet: Native InfiniBand Subsystem with Transport Engine

This section elaborates on the implementation details of our native InfiniBand subsystem Ibdxnet which is used by the IB transport implementation in DXNet to utilize InfiniBand hardware. Ibdxnet provides the following key features: a basic foundation with re-usable components for implementations using different means of communication (e.g. messaging verbs, RDMA) or protocols, automatic connection management and transport engines using different communication primitives. Figure 4 shows an outline of the different components involved. Ibdxnet provides an automatic connection and QP manager ( §7.1) which can be used by every transport engine. An interface for the connection manager and a connection object allows implementations for different transport engines. The engine msgrc (see Figure 4) uses the provided connection management and is based on RC messaging verbs. The engine msgud using UD messaging verbs is already implemented and will be discussed and extensively evaluated in a separate publication. A transport engine implements its own protocol to send/receive data and exposes a low-level interface. It creates an abstraction layer to hide direct interaction with the ibverbs library. Through the low-level interface, a transport implementation ( §8) provides data-to-send and forwards received data for further processing. For example: the low-level interface of the msgrc engine does not provide concurrency control or serialization mechanisms for messages. It accepts a stream of data in one or multiple buffers for sending and provides buffers creating a stream of data on receive ( §7.2). This engine is connected to the Java transport counterpart via JNI and uses the existing infrastructure of DXNet ( §8). Furthermore, we implemented a loopback-like standalone transport for debugging and measuring the performance of the native engine, only. The loopback transport creates a continuous stream of data for sending to one or multiple nodes and throws away any data received. This ensures that sending and receiving introduce no additional overhead and allows measuring the performance of different low-level aspects of our implementation. This was used to determine the maximum possible throughput with Ibdxnet ( §9.2.4). In the following sections, we explain the implementation details of Ibdxnet's connection manager ( §7.1) and the messaging engine msgrc ( §7.2). Additionally, we describe best practices for using the ibverbs API and optimizations for optimal hardware utilization. Furthermore, we elaborate on how Ibdxnet connects to the IB transport in Java using JNI and how we implemented low overhead data exchange between Java and native space.

Dynamic, Scalable and Concurrent Connection Management

Efficient connection management for many nodes is a challenging task. For example, hundreds of application threads want to send data to a node but the connection is not yet established. Who creates the connection and synchronizes access of other threads? How to avoid synchronization overhead or blocking of threads that want to get an already established connection? How to manage the lifetime of a connection? These challenges are addressed by a dedicated connection manager in Ibdxnet. The connection manager handles all tasks required to establish and manage connections and hides them from the higher level application. For our higher level Java transport ( §8.1), complexity and latency of connection setup are reduced by avoiding context switching. First, we explain how nodes are identified, the contents of a connection and how online/offline nodes are discovered and handled. Next, we describe how existing connections are accessed and non-existing connections are created on the fly during application runtime. We explain in detail how a connection creation job is handled by the internal job manager and how connection data is exchanged with the remote in order to create a QP. Lastly, we briefly describe our previous attempt which failed to address the above challenges properly. A node is identified by a unique 16-bit integer nodeID (NID).
The NID is assigned to a node on start of the connection manager and cannot be changed during runtime. A connection consists of the source NID (the current node) and the destination NID (the target remote node). Depending on the transport implementation, an existing connection holds one or multiple ibverbs QPs, buffers and other data necessary to send and receive data using that connection. The connection manager provides a connection interface for the transport engines which allows them to implement their own type of connection. The following example describes a connection with a single QP only.

Figure 5: Connection manager: creating non-existing connections (send thread: node 1 to node 0) and re-using existing connections (recv thread: node 1 to node 5)

Before a connection to a remote node can be established, the remote node must be discovered and known as available. The job type node discovery (further details about the job system follow in the next paragraphs) detects online/offline nodes using UDP sockets over Ethernet. On startup, a list of node hostnames is provided to the connection manager. The list can be extended by adding/removing entries during runtime for dynamic scaling. The discovery job tries to contact all non-discovered nodes of that list in regular intervals. Once a node is discovered, it is removed from the list and marked as discovered. A connection can only be established with an already discovered node. If a connection to the node was already created and is lost (e.g. node crash), the NID is added back to the list in order to re-discover the node on the next iteration of the job. Node discovery is mandatory for InfiniBand in order to exchange QP information on connection creation. Figure 5 shows how existing connections are accessed and new connections are created when two threads, e.g. a send and a receive thread, are accessing the connection manager. The send thread wants to send new data to node 0 and the receive thread has received some data (e.g. from an SRQ) which it has to forward for further processing; this requires information stored in each connection (e.g. a queue for the incoming data). If the connection is already established (the receive thread gets the connection to node 5), a connection handle (H5) is returned to the calling thread. If no connection has been established so far (the send thread wants to get the connection to node 0), a job to create the specific connection (CR0 = create connection to node 0) is added to the internal job queue. The calling thread has to wait until the job is dispatched and the connection is created before being able to send the data.

Figure 6: Automatic connection creation with QP data exchange (node 3 to node 0). The job CR0 is added to the back of the queue to initiate this process. The dedicated thread processes the queue by removing jobs from the front and processing them according to their type.

Figure 6 shows how connection creation is handled by the internal job thread. The job CR0 (yielded by the send thread from the previous example in Figure 5) is pushed to the back of the job queue. The job queue might contain jobs which affect different connections, i.e. there is no dedicated per-connection queue. The dedicated connection manager thread processes the queue by removing a job from the front and dispatching it by type. There are three types of jobs: create a connection to a node with a given NID, discover other connection managers, and close an existing connection to a node. A sketch of this dispatch loop is given below.
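The following is a minimal, self-contained sketch of such a dispatch loop. The type and function names (JobQueue, CreateConnection, DiscoverNodes, CloseConnection) are assumptions for illustration; the sketch shows the pattern, not Ibdxnet's actual code.

#include <cstdint>
#include <queue>
#include <mutex>
#include <condition_variable>

enum class JobType { Create, Discover, Close };
struct Job { JobType type; uint16_t nid; };

class JobQueue {
    std::queue<Job> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void PushBack(Job j) {
        { std::lock_guard<std::mutex> l(m_); q_.push(j); }
        cv_.notify_one();
    }
    Job PopFront() {  // blocks until a job is available
        std::unique_lock<std::mutex> l(m_);
        cv_.wait(l, [&] { return !q_.empty(); });
        Job j = q_.front(); q_.pop(); return j;
    }
};

// Stubs standing in for the logic described in the text.
void CreateConnection(uint16_t nid);  // create QP, exchange data via UDP
void DiscoverNodes();                 // contact non-discovered hostnames
void CloseConnection(uint16_t nid);

void JobThreadLoop(JobQueue& queue) {
    for (;;) {
        Job job = queue.PopFront();
        switch (job.type) {
        case JobType::Create:   CreateConnection(job.nid); break;
        case JobType::Discover: DiscoverNodes();           break;
        case JobType::Close:    CloseConnection(job.nid);  break;
        }
    }
}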
To create a new connection with a remote node, the current node has to create an ibverbs QP with a SQ and RQ. Both queues are cross-connected to a remote QP (send with recv, recv with send), which requires data exchange using another communication channel (sockets over Ethernet). For the job CR0, the thread creates a new QP on the current node (3) and exchanges its QP data with the remote node it wants to connect to (0) using UDP sockets. The remote node (0) also creates a QP and uses the received connection information (of 3). It replies with its own QP data (0 to 3) to complete QP creation. The newly established connection is added to the connection table and is now accessible (by the send and receive thread); a sketch of the exchanged QP data is given below. At last, we briefly describe the lessons learned from our first attempt at an automatic connection manager. It relied on active connection creation: the first thread calling the connection manager to acquire a connection creates it on the fly if it does not exist. The calling thread executes the connection exchange, waits for the remote data and finishes connection creation. This requires coordination of all threads accessing the connection manager, either to create a new connection or to get an existing one. It introduced a very complex architecture with high synchronization overhead and latency, especially when many threads are concurrently accessing the connection manager. Furthermore, it was error-prone and difficult to debug. We encountered severe performance issues when creating connections with one hundred nodes in a very short time range (e.g. all-to-all communication). This resulted in connection creation times of up to half a minute. Even with a small setup of 4 to 8 nodes, creating a connection could take up to a few seconds if multiple threads tried to create the same or different connections simultaneously.

msgrc: Transport Engine for Messaging using RC QPs

This section describes the msgrc transport engine. It uses reliable QPs to implement messaging using a dedicated send and a dedicated receive thread. The engine's interface allows a transport to provide a stream of data (to send) in the form of variable sized buffers and provides a stream of data (received) to a registered callback handler. This interface is rather low-level and the backend does not implement any means of serialization/deserialization for sending/receiving complex data structures. In combination with DXNet ( §2), the logic for these tasks resides in the Java space with DXNet and is shared with other transports such as the NIO Ethernet transport [9]. However, there are no restrictions on implementing these higher level components natively for the msgrc engine, if required. Further details on how the msgrc engine is connected with the Java transport counterpart are given in Section 8. The following subsections explain the general architecture and interface of the transport, sending and receiving of data using dedicated threads, and how various features of InfiniBand were used for optimal hardware utilization.

Architecture

This section explains the basic architecture as well as the low-level interface of the engine. Figure 4 includes the msgrc transport and can be referred to for an abstract representation of the most important components. The engine relies on our dedicated connection manager ( §7.1) for connection handling.
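Returning to the QP cross-connection described in §7.1: the following hedged sketch shows the minimal per-QP information two nodes typically exchange and how the remote side's data can be applied to bring an RC QP into the ready-to-receive state with ibverbs. The struct layout and helper name are assumptions, and the attribute values (MTU, timers) are illustrative defaults, not Ibdxnet's actual configuration.

#include <infiniband/verbs.h>
#include <cstdint>

struct QPConnectionInfo {
    uint16_t lid;   // local id of the port (from ibv_query_port)
    uint32_t qpn;   // queue pair number (qp->qp_num)
    uint32_t psn;   // initial packet sequence number
};

// Bring an RC QP to ready-to-receive using the remote node's info.
// A subsequent transition to IBV_QPS_RTS (with sq_psn etc.) enables sending.
int ConnectToRemote(ibv_qp* qp, const QPConnectionInfo& remote, uint8_t port) {
    ibv_qp_attr attr = {};
    attr.qp_state           = IBV_QPS_RTR;
    attr.path_mtu           = IBV_MTU_4096;
    attr.dest_qp_num        = remote.qpn;
    attr.rq_psn             = remote.psn;
    attr.max_dest_rd_atomic = 1;
    attr.min_rnr_timer      = 12;
    attr.ah_attr.dlid       = remote.lid;
    attr.ah_attr.port_num   = port;
    return ibv_modify_qp(qp, &attr,
        IBV_QP_STATE | IBV_QP_AV | IBV_QP_PATH_MTU | IBV_QP_DEST_QPN |
        IBV_QP_RQ_PSN | IBV_QP_MAX_DEST_RD_ATOMIC | IBV_QP_MIN_RNR_TIMER);
}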
We decided to use one dedicated thread for sending ( §7.2.3) and one for receiving ( §7.2.4) to benefit from the following advantages: a clear separation of responsibilities resulting in a less complex architecture, no scheduling of send/receive jobs as with a single thread serving both, and higher concurrency because the two threads can run on different CPU cores concurrently. The architecture allows us to create decoupled pipeline stages using lock-free queues and ring buffers. Thereby, we avoid complex and slow synchronization between the two threads and with hundreds of threads concurrently accessing shared resources. The low-level interface allows fine-grained control for the target transport over the engine. The interface for sending data is depicted in Listing 1 and the one for receiving in Listing 2. Both interfaces create an abstraction hiding connection and QP management as well as how the hardware is driven with the ibverbs library. For sending data, the interface provides the callback GetNextDataToSend. This function is called by the send thread to pull new data to send from the transport (e.g. from the ORB, see §8.2). When called, an instance of each of the two structures PrevWorkPackageResults and CompletedWorkList is passed to the implementation of the callback as parameters: the first contains information about the previous call to the function and how much data was actually sent. If the SQ is full, no further data can be sent. Instead of introducing an additional callback, we combine getting the next data with returning information about the previous send call to reduce call overhead (important for JNI access). The second parameter contains data about completed work requests, i.e. data sent for the transport. This must be used in the transport to mark data as processed (e.g. moving the pointers of the ORB).

uint32_t Received(IncomingRingBuffer* ringBuffer);

void ReturnBuffer(IbMemReg* buffer);

Listing 2: Structure and callbacks of the msgrc engine's receive interface

If data is received, the receive thread calls the callback function Received with an instance of the IncomingRingBuffer structure as its parameter. This parameter holds a list of received buffers with their source NIDs. The transport can iterate this list and forward the buffers for further processing such as de-serialization. The transport has to return the number of elements processed and, thus, is able to control the amount of buffers it processes. Once the received buffers are processed by the transport, they must be returned to the receive buffer pool by calling ReturnBuffer to allow re-using them for further receives.

Sending of Data

This section explains the data and control flow of the dedicated send thread which asynchronously drives the engine for sending data. Listing 3 depicts a simplified version of the contents of its main loop with the relevant aspects for this section. Details of the functions involved in the main flow are explained further below. The loop starts with getting a workPackage, the next data to send (line 1), using the engine's low-level interface ( §7.2.2). The instance prevWorkResults contains information about posted and non-posted data from the previous loop iteration. The instance completionList holds data about completed sends. Both instances are reset/nulled (lines 2-3) for re-use in the current iteration. If the workPackage is valid (line 5), i.e. data to send is available, the nodeId from that package is used to get the connection to the send target from the connection manager (line 6). The connection and workPackage are passed to the SendData function (line 7). It processes the workPackage and returns how much data was processed, i.e. posted to the SQ of the connection, and how much data could not be processed. The latter happens if the SQ is full and must be kept track of to not lose any data. Afterwards, the thread returns the connection to the connection manager (line 8). At the end of a loop iteration, the thread polls the SCQ to remove any available WCs. We share the completion queue among all SQs/connections to avoid iterating over many connections for this task. Then the loop iteration ends and the thread starts from the beginning by calling GetNextDataToSend, providing the work results of the previous iteration. Data about WCs polled from the SCQ is stored in the completionList and forwarded via the interface (to the transport). If no data is available (line 5), lines 6-8 are skipped and the thread executes a completion poll only. This is important to ensure that any outstanding WCs are processed and passed to the transport (via the completionList when calling GetNextDataToSend). Otherwise, if no data is sent for a while, the transport will not receive any information about previously processed data. This leads to false assumptions about the available buffer space for sending data, e.g. assuming that data fits into the buffer when it actually does not because the processed buffer space has not been freed yet.
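Since the description above walks through Listing 3 line by line, the following is a hedged reconstruction of that loop. The identifiers and signatures are assumptions derived from the text; the surrounding types (WorkPackage, Connection, PrevWorkPackageResults, CompletedWorkList) are those of the low-level interface ( §7.2.2).

// Reconstruction (C++) of the send thread's simplified main loop (cf. Listing 3).
PrevWorkPackageResults prevWorkResults;
CompletedWorkList completionList;

for (;;) {
    // line 1: pull the next data to send; this call also hands the
    // results of the previous iteration back to the transport
    WorkPackage workPackage = GetNextDataToSend(&prevWorkResults, &completionList);
    prevWorkResults.Reset();   // lines 2-3: reset both instances for re-use
    completionList.Reset();

    if (workPackage.IsValid()) {                                      // line 5
        Connection* con = conMgr->GetConnection(workPackage.nodeId);  // line 6
        SendData(con, workPackage, &prevWorkResults);                 // line 7
        conMgr->ReturnConnection(con);                                // line 8
    }

    // Shared SCQ: always poll so outstanding WCs reach the transport,
    // even when no new data is available.
    PollCompletions(&completionList);
}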
In the following paragraphs, we further explain how the functions SendData and PollCompletions make optimal use of the ibverbs library and how this cooperates with the interleaved control flow of the main thread loop explained above. The SendData function is responsible for preparing and posting FC data and normal data (payload). FC data, which determines the number of flow control windows to confirm, is a small number (< 128) and, thus, does not require a lot of space. We post it as part of the immediate data, which can hold up to 4 bytes, with the WR instead of using a separate side channel, e.g. another QP. This avoids the overhead of posting to and polling another QP, which benefits overall performance, especially with many simultaneous connections. With FC data using 1 byte of the immediate data field, we use a further 2 bytes to include the NID of the source node. This allows us to identify the source of an incoming WC on the remote node. Otherwise, identifying the source would be very inconvenient: the only information provided with an incoming WC is the sender's unique physical QP id. In our case, this id would have to be mapped to the corresponding NID of the sender. However, this introduces an indirection every time a package arrives, which hurts performance. For sending normal data (payload), the provided workPackage holds two pointers, front and back, which enclose a memory area of data to send. This memory area belongs to a buffer (e.g. the ORB) which was registered with the protection domain on start to allow access by the HCA. Figure 7 depicts an example with three (aggregated) ready-to-send messages in the ORB. We create a WR for the data to send and provide a single SGE which takes the pointers of the enclosed memory area. The HCA will directly read from that area without further copying of the data (zero copy).
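A hedged sketch of this posting path with ibverbs follows. The packing of FC data and source NID into the 4 byte immediate field is an assumption for illustration, as is the function name; the wrap-around case with two SGEs is discussed next.

#include <infiniband/verbs.h>
#include <arpa/inet.h>

// Post one send WR whose SGE list points directly into the registered
// ORB memory (zero copy). num_sge may be 0 for an FC-only, zero-length
// WR; for a buffer wrap-around, two SGEs are passed.
int PostSend(ibv_qp* qp, ibv_sge* sges, int numSges,
             uint8_t fcWindows, uint16_t sourceNid) {
    ibv_send_wr wr = {};
    ibv_send_wr* bad = nullptr;

    wr.next       = nullptr;  // chain further WRs here to amortize the call
    wr.sg_list    = sges;
    wr.num_sge    = numSges;
    wr.opcode     = IBV_WR_SEND_WITH_IMM;
    wr.send_flags = IBV_SEND_SIGNALED;
    // 1 byte FC windows plus 2 bytes source NID in the immediate field.
    wr.imm_data   = htonl(((uint32_t) fcWindows << 16) | sourceNid);

    return ibv_post_send(qp, &wr, &bad);
}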
For buffer wrap-arounds, two SGEs are created and attached to one WR: one SGE for the data from the front pointer to the end of the buffer, another SGE for the data from the start of the buffer to the back pointer. If the size of the area to send (the sum of all SGEs) exceeds the maximum configurable receive size, the data to send must be sliced into multiple WRs. Multiple WRs are chained to a linked list to minimize call overhead when posting them to the SQ using ibv_post_send. This greatly increases performance compared to posting multiple standalone WRs with single calls. The number of SGEs of a WR can be 0 if no normal data is available to send but FC data is. To send FC data only, we write it to the immediate data field of a WR along with our source NID and post the WR without any SGEs attached, which results in a WR with zero-length data. The PollCompletions function calls ibv_poll_cq once to poll for any completions available on the SCQ. An SCQ is used instead of per-connection CQs to avoid iterating the CQs of all connections, which impacts performance. The send thread keeps track of the number of posted WRs and, thus, knows how many WCs are outstanding and expected to arrive on the SCQ. If none are expected, polling is skipped. ibv_poll_cq is called only once per PollCompletions call, and every call tries to poll WCs in batches to keep the call overhead minimal. Experiments have shown that most calls to ibv_poll_cq, even on high loads, return empty, i.e. no WRs have completed. Thus, polling the SCQ until at least one completion is received is the wrong approach and greatly impacts overall performance. If the SQ of another connection is not full and there is data available to send, this method wastes CPU resources on busy polling instead of processing further data to send. The performance impact (resulting in low throughput) increases with the number of simultaneous connections being served. Furthermore, this increases the chance of SQs running empty because time is wasted on waiting for completions instead of keeping all SQs filled. Full SQs ensure that the HCA is kept busy, which is the key to optimal performance.

Receiving of Data

Data is received using an SRQ and an SCQ instead of multiple receive and completion queues. This avoids iterating over all open connections and checking for data availability, which introduces overhead with an increasing number of simultaneous connections. Equally sized buffers for receiving data (configurable size and amount) are pooled and returned for re-use by the transport once processed ( §7.2.2). The loop starts by calling PollCompletions (line 1) to poll the SCQ for WCs. Before processing the returned WCs, the SRQ is refilled by calling Refill (line 4) if it is not filled yet. Next, if any WCs were polled previously, they are processed by calling ProcessCompletions (line 8). This step pushes them to the Incoming Ring Buffer (IRB), a temporary ring buffer, before dispatching them. Finally, if the IRB is not empty (line 11), the thread tries to forward the contents of the IRB by calling DispatchReceived via the interface to the transport ( §7.2.2). The following paragraphs further elaborate on how PollCompletions, Refill, ProcessCompletions and DispatchReceived make optimal use of the ibverbs library and how this cooperates with the interleaved control flow of the main thread loop explained above. The PollCompletions function is very similar to the one explained in Section 7.2.3: WCs are polled in batches of at most the currently available IRB space and buffered before being processed. The Refill function adds new receive WRs to the SRQ if the SRQ is not completely filled and receive buffers from the receive buffer pool are available. Every WR consists of a configurable number of SGEs which make up the maximum receive size. This is also the limiting size the send thread can post with a single WR (sum of the sizes of the SGE list). Using this method, the receive thread does not have to take care of any software slicing of received data because the HCA scatters one big chunk of sent data transparently to multiple (smaller) receive buffers on the receiver side. At last, Refill chains the WRs to a linked list which is posted with a single call to ibv_post_srq_recv for minimal overhead.
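A sketch of this refill pattern follows, assuming pre-prepared WRs whose SGE lists already point to pooled receive buffers; the names are illustrative.

#include <infiniband/verbs.h>

void RefillSrq(ibv_srq* srq, ibv_recv_wr* wrs, int count) {
    // Chain the prepared WRs (each wrs[i].sg_list filled with pooled
    // receive buffers) into one linked list.
    for (int i = 0; i < count; ++i) {
        wrs[i].next = (i + 1 < count) ? &wrs[i + 1] : nullptr;
    }
    ibv_recv_wr* bad = nullptr;
    if (ibv_post_srq_recv(srq, &wrs[0], &bad) != 0) {
        // On failure, 'bad' points to the first WR that was not posted;
        // its buffers must be returned to the pool.
    }
}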
If WCs are buffered from the previous call to PollCompletions, the ProcessCompletions function iterates this list of WCs. For each WC of the list, it gets the source NID and FC data from the immediate data field. If the receive length of this WC is non-zero, the attached SGEs contain the received data scattered to the receive buffers of the SGE list. As the receive thread does not know, or have any means of determining, the size of the next incoming data, the challenge is optimal receive buffer usage with minimal internal fragmentation. Here, fragmentation describes the amount of receive buffers provided with a WR as SGEs in relation to the amount of received data written to that block of buffers. The less data written to the buffers, the higher the fragmentation. In the example shown in Figure 7, the three aggregated and serialized messages are received in five buffers, but the last buffer is not completely used. This fragmentation cannot be avoided but must be handled to avoid negative effects like empty buffer pools or low per-buffer utilization. Receive buffers/SGEs of a WR that do not contain any received data, because the amount of received data is less than the total size of the buffers of the SGE list, are pushed back to the buffer pool. All receive buffers of the SGE list that contain valid received data are pushed to the IRB (in the order they were received). Depending on the target application, the degree of fragmentation can be lowered by configuring the receive buffer and pool sizes accordingly. Applications typically sending small messages perform well with small receive buffer sizes. However, throughput might decrease slightly for applications mainly sending big messages on small receive buffer sizes, as more WRs are required per data send (data sliced into multiple WRs). If the IRB contains any elements, the DispatchReceived function tries to forward them to the transport via the Received callback ( §7.2.2). The callback returns the number of elements it consumed from the IRB and, thus, is allowed to consume none or up to what is available. The consumed buffers are returned asynchronously to the receive buffer pool by the transport, once it has finished processing them.

Load Adaptive Thread Parking

The send and receive threads must be kept busy running their loops to send and receive data as fast as possible to ensure low latency. However, pure busy polling without any sleeping or yielding introduces high CPU load and permanently occupies two cores of the CPU. This is unnecessary during periods when the network is not used frequently. We do not want the send and receive threads to waste CPU resources and, thereby, decrease the overall node performance. Experiments have shown that simply adding sleep or yield operations highly impacts network latency and throughput and introduces high fluctuations [8]. To solve this, we use a simple but efficient wait pattern we call load adaptive thread parking. After a defined amount of time (e.g. 100 ms) of polling without data being available, the thread enters a yield phase and calls yield on every loop iteration if no data is available. After another timeframe has passed (e.g. 1 sec), the thread enters a parking phase calling sleep/park with a minimum value of 1 ns on every loop iteration, reducing CPU load significantly. The lowest value possible (1 ns) ensures that the scheduler of the operating system sends the thread sleeping for the shortest period of time possible. Once data is available, the current phase is interrupted and the timer is reset. This ensures busy looping for the next iterations, keeping latency low for successive messages and under high load. For further details including evaluation results, refer to our DXNet publication [8].
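A compact sketch of this parking pattern follows; the thresholds mirror the examples given above, and the polling callback is a stand-in for the thread's actual loop body.

#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;

void LoopWithParking(bool (*pollOnce)()) {
    auto lastData = Clock::now();
    for (;;) {
        if (pollOnce()) {            // data was available: stay busy
            lastData = Clock::now(); // interrupt phase, reset timer
            continue;
        }
        auto idle = Clock::now() - lastData;
        if (idle > std::chrono::seconds(1)) {
            // parking phase: shortest sleep the OS scheduler allows
            std::this_thread::sleep_for(std::chrono::nanoseconds(1));
        } else if (idle > std::chrono::milliseconds(100)) {
            std::this_thread::yield();   // yield phase
        }
        // else: busy-poll phase, no waiting at all
    }
}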
IB Transport Implementation in DXNet (Java)

This section describes the transport implementation for DXNet in Java which utilizes the low-level transport engines, e.g. msgrc ( §7.2), provided by Ibdxnet ( §7). We describe the native interface which implements the low-level interface exposed by the engine ( §7.2.2) and how it is used in the DXNet IB transport for higher level connection management ( §8.1), sending serialized data from the ORB ( §8.2) and handling incoming receive buffers from remote nodes ( §8.3). Figure 8 depicts the involved components with the main aspects of their data and control flow, which are referred to in the following subsections. If an application wants to send one or multiple messages, it calls DXNet, which serializes them into the ORB and signals the WriteInterestManager (WIM) about available data ( §2.2). The native send thread periodically checks the WIM for data to send and, if available, gets it from the ORB. Depending on the size, the data to send might be sliced into multiple elements which are posted to the SQ as one or multiple work requests ( §7.2.3). Received data on the recv queue is written to one or multiple buffers (depending on the amount of data) from a native buffer pool ( §7.2.4). Without further processing, the buffers are forwarded to the Java space and pushed to the IncomingBufferQueue (IBQ). DXNet's de-serialization processes the buffers in order and creates messages (Java objects) which are dispatched to pre-registered callbacks using dedicated message handler threads ( §2.3).

Connection Handling

To allow implementing new transports, DXNet provides an interface to create transport-specific connection types. The DXNet core, which is shared across all transport implementations, manages the connections for the target application by automatically creating new connections on demand or closing connections if a configurable threshold is exceeded ( §2.1). For the IB transport implementation, the derived connection does not have to store further data or implement functionality. This is already stored and handled by the connection manager of Ibdxnet. It reduces overall architectural complexity by avoiding functionality split between Java and native space. Furthermore, it avoids context switching between Java and native code. Only the NID of either the target node to send to or the source node of the received data is exchanged between the Java and native space and vice versa.
Thus, connection setup in the transport implementation in Java is limited to creating the Java connection object for DXNet's connection manager. Connection close and cleanup is similar, with an additional callback to the native library to signal a closed connection to Ibdxnet's connection management.

Dispatch of Ready-to-send Data

The engine msgrc runs dedicated threads for sending and receiving data. The send thread pulls new data from the transport via the GetNextDataToSend function of the low-level interface ( §7.2.2, §7.2.3). In order to make this and other callbacks (for connection management and receiving data) available to the IB transport, a lightweight JNI binding with the aspects explained in Section 5 was created. The transport implements the GetNextDataToSend function exposed by the JNI binding. To get new data to send, the send thread calls the JNI binding which is implemented in the IB transport in Java. Next, we elaborate on the implementation of GetNextDataToSend in the IB transport, how the send thread gets data to send and how the different states of the data (posted, not posted, send completed) are handled in combination with the existing ORB data structure. Application threads using DXNet and sending messages concurrently serialize them into the ORB ( §2.2). Once serialization completes, the thread signals the transport that there is ready-to-send (RTS) data in the ORB. For the IB transport, this signal adds a write interest to the dedicated Write Interest Manager (WIM). The WIM manages interest tokens using a lock-free list (based on a ring buffer) and a per-connection atomic counter for both RTS normal data from the ORB and FC data. Each type has a separate atomic counter but, if not explicitly stated, we refer to them as one for ease of comprehension. The list contains the nodeIDs of the connections that have RTS data, in the order they were added. The atomic counter is used to keep track of the number of interests signaled, i.e. the number of times the callback was triggered for the selected NID. Figure 9 depicts this situation with two threads (T1 and T2) which finished serializing data to the ORBs of two independent connections (3 and 2). The table with atomic counters keeps track of the number of signaled interests for RTS data/messages per connection. By calling GetNextDataToSend, the send thread from Ibdxnet checks the lock-free list which contains the nodeIDs of the connections with at least one write interest available. The nodeIDs are added to the list in order, but only if not already contained. This is detected by checking whether the atomic counter returned 0 on a fetch-and-add operation. This mechanism ensures that data from many connections is processed in a round-robin fashion. Furthermore, avoiding duplicates in the queue sets an upper bound for the memory requirement of sizeof(nodeID) * maxNumConnections. Otherwise, the queue could grow depending on the load and number of active connections. A sketch of this token mechanism follows.
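A minimal sketch of the interest token logic described above, with illustrative names and a declared stand-in for the lock-free, ring-buffer based list (the WIM itself is implemented in Java; the sketch shows the pattern):

#include <atomic>
#include <cstdint>

// Stand-in for the lock-free, ring-buffer based list of NIDs.
template <typename T> struct LockFreeList {
    void Push(T value);
    bool Pop(T* value);
};

static std::atomic<uint32_t> g_interests[65536]; // one counter per NID
static LockFreeList<uint16_t> g_readyNids;

// Called by application threads after serializing into the ORB.
void SignalWriteInterest(uint16_t nid) {
    // Only the 0 -> 1 transition enqueues the NID: no duplicates, which
    // bounds the list to sizeof(nodeID) * maxNumConnections.
    if (g_interests[nid].fetch_add(1) == 0) {
        g_readyNids.Push(nid);
    }
}

// Called on behalf of the send thread (via GetNextDataToSend).
bool NextInterest(uint16_t* nid, uint32_t* count) {
    if (!g_readyNids.Pop(nid)) {
        return false;             // queue empty: return to native space
    }
    *count = g_interests[*nid].exchange(0); // get and reset the counter
    return true;
}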
If the queue of the WIM is empty, the send thread aborts and returns to the native space. Otherwise, the send thread uses the NID it removed from the queue to get and reset the number of interests of the corresponding atomic counter. If there are any interests available for FC data, the send thread processes them by getting the FC from the connection and reading, but not yet removing, the stored FC data. For interests concerning normal data, the send thread gets the ORB from the connection and reads the current front and back pointers. The pointers of the ORB are not modified, only read (details below). With this data, along with the NID of the connection, the send thread returns to the native space for processing ( §7.2.3).

Figure 10: Extended outgoing ring buffer used by the IB transport

Every time the send thread returns to the Java space to get more data to send, it carries the parameters prevWorkResults, which contains data about the previous send operation, and completionList, which contains data about completed WRs, i.e. data send confirmations ( §7.2.3). For performance reasons, this data resides in native memory as structs and is mapped and accessed using DirectByteBuffers ( §5). The asynchronous workflow used to send and receive data by posting WRs and polling WCs must be accommodated by updating the ORB and FC accordingly. Depending on the fill level of the SQ, the send thread might not be able to post all normal data or FC it retrieved in the previous iteration. The prevWorkResults parameter contains this information about how much normal and FC data was and was not processed. This information must be preserved for the next send operation to avoid sending data multiple times. For the ORB, however, we cannot simply move the front pointer because this frees up memory that is not yet confirmed to be sent. Thus, we introduce a second front pointer, front posted, which is only known to and modified by the send thread and allows it to keep track of already posted data. Figure 10 depicts the most important aspects of the enhanced ORB used by the IB transport. In total, this creates three virtual areas of memory designated to the following states:

• Data posted but not confirmed: front to front posted
• Data RTS and not posted: front posted to back
• Free memory for send threads to serialize to: back to front

Using the parameter prevWorkResults, the front posted pointer is moved by the amount of data posted. Any non-processed data remains unprocessed (front posted is not moved to cover the entire area of RTS data). For data provided with the parameter completionList, the front pointer is updated according to the number of bytes now confirmed to be sent. A similar but less complex approach is applied to updating FC.
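The pointer updates described above can be sketched as follows; this is a simplified illustration under assumed names, not the actual ORB implementation.

#include <cstdint>

// Simplified sketch of the extended ORB: three pointers partition the
// ring into posted-but-unconfirmed, ready-to-send and free areas.
struct OutgoingRingBuffer {
    uint32_t front;        // start of posted but unconfirmed data
    uint32_t frontPosted;  // start of RTS but not yet posted data
    uint32_t back;         // end of RTS data / start of free space
    uint32_t size;

    // Send thread only: prevWorkResults reported 'bytes' as posted.
    void OnPosted(uint32_t bytes)    { frontPosted = (frontPosted + bytes) % size; }
    // completionList confirmed 'bytes' as sent: this memory becomes
    // free for application threads to serialize into again.
    void OnConfirmed(uint32_t bytes) { front = (front + bytes) % size; }
};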
Process Incoming Buffers

The dedicated receive thread of msgrc pushes received data to the low-level interface. Analogous to how RTS data is pulled from the IB transport via the JNI binding, the receive thread uses a received function provided by the binding to push the received buffers into the IB transport in Java space. All received buffers are stored as a batch in the recvPackage data structure ( §7.2.2) to minimize context switching overhead. For performance reasons, this data resides in native memory as structs and is mapped and accessed using DirectByteBuffers ( §5). The receive thread iterates the package in Java space, dispatches received FC data to each connection and pushes the received buffers (including the connection of the source node) to the IBQ ( §2.3). The buffers are handled and processed asynchronously by the MessageCreationCoordinator and one or multiple MessageHandlers of the DXNet core (all of them Java threads). Once the buffers are processed (by de-serializing their contents), the Java threads return them asynchronously to the transport engine's receive buffer pool ( §7.2.4).

Evaluation

For better readability, we refer to DXNet with the IB transport, Ibdxnet and the msgrc engine as DXNet from here onwards. We implemented commonly used microbenchmarks to compare DXNet to two MPI implementations supporting InfiniBand: MVAPICH2 and FastMPJ. We decided to compare against two MPI implementations for the following reasons: to the best of our knowledge, there is no other system available that offers all features of DXNet, and big data applications implementing their dedicated network stack do not offer it as a separate application/library like DXNet does. MPI can be used to partially cover some features of DXNet but not all ( §3). We are aware that MPI targets a different application domain, mainly HPC, whereas DXNet targets big data. However, MPI has already been used in big data applications as well, and several aspects related to the network stack and the technologies overlap in both application domains. Bandwidth with two nodes is compared using typical uni- and bi-directional benchmarks. We also compared scalability using an all-to-all benchmark (worst-case scenario) with up to 8 nodes. Latency is compared by measuring the RTT with a request-response communication pattern. These benchmarks are executed single threaded to compare all three systems. Furthermore, we compared how DXNet and MVAPICH2 perform in a multi-threaded environment, which is typical for big data but not for HPC applications. However, we can only compare the two on three benchmarks: multi-threaded latency is not possible since it would require MVAPICH2 to implement additional infrastructure to store and map requests with responses and to dynamically dispatch callbacks of incoming data to multiple receive threads (similar to DXNet). MVAPICH2 does not provide such a processing pipeline. FastMPJ cannot be compared at all here because it only supports single threaded environments. Table 1 summarizes the systems and benchmarks executed. All benchmarks were executed on up to 8 nodes of our private cluster, each with a single socket Intel Xeon E5-1650 v3 CPU (6 cores at 3.50 GHz) and 64 GB RAM. The nodes run Ubuntu 16.04 with kernel version 4.4.0-57. All nodes are equipped with a Mellanox MT27500 HCA, connected with 56 Gbps links to a single Mellanox SX6015 18 port switch. For Java applications, we used the Oracle JVM version 1.8.0_151.

Benchmarks

The osu benchmarks included with MVAPICH2 implement typical micro benchmarks to measure uni- and bi-directional bandwidth and uni-directional latency, which reflect basic usage of any network stack for point-to-point communication. osu_latency is used as a foundation and extended with recording of all RTTs to determine the 95th, 99th and 99.9th percentile after execution. The latency measured is the full RTT from when the source sends a request to the destination up to when the corresponding response is received by the source. For evaluating throughput, the benchmarks osu_bw and osu_bibw were combined into a single benchmark and extended to enable all-to-all bi-directional execution with more than two nodes. We consider this a relevant benchmark to show whether a system is capable of handling multiple connections under high load. This is a common situation found in big data applications as well as backend storages [11].
On all-to-all, every node receives from all other nodes and sends messages to all other nodes in a round-robin fashion. The bi-directional and all-to-all results presented are the aggregated send throughputs of all participating nodes. We added options to support multi-threaded sending and receiving using a configurable number of send and receive threads. As per-processor core counts increase, the multi-threading aspect becomes more and more important. Furthermore, our target application domain big data relies heavily on multi-threaded environments. For the evaluation of FastMPJ, we ported the osu benchmarks to Java. The benchmarks for evaluating a multi-threaded MPI process were omitted because FastMPJ does not support multi-threaded processes. DXNet comes with its own benchmarks already implemented, which are comparable to the osu benchmarks. The osu benchmarks use a configurable parameter window_size (WS) which denotes the number of messages sent in a single batch (a sketch of this pattern is given at the end of this subsection). Since MPI does not support implicit message aggregation like DXNet, we executed all MPI experiments with increasing WS to determine bandwidth peaks and saturation under optimal conditions and to ensure a fair comparison to DXNet's built-in aggregation. No MPI collectives are required for the benchmarks and, thus, none are evaluated. All benchmarks are executed three times and their variance is displayed using error bars. Throughputs are specified in GB/s, latencies/RTTs in µs and message rates in mmps (million messages per second). All throughput benchmarks send 100 million messages and all latency benchmarks 10 million messages. The total number of messages is incrementally halved starting with 4 kb message size to avoid unnecessarily long benchmark runs. All throughputs measured are based on the total amount of sent payload bytes. This does not include any overhead like message headers or envelopes that are required by the systems for message identification or routing. Furthermore, we included the results of the InfiniBand perf tools ib_send_bw and ib_send_lat as baselines for all end-to-end type benchmarks. These simple perf tools cannot be compared directly to the complex systems evaluated, but the baselines show the best possible network performance (without any overhead of the evaluated system) and allow rough comparisons of the systems across multiple plots. We chose parameters that reflect the configuration values of DXNet as closely as possible (but still allow comparisons to FastMPJ and MVAPICH2 as well): receive queue size 2000 and send queue size 20 for both bandwidth and latency measurements; 100,000,000 messages for bandwidth and 10,000,000 for latency.

DXNet with Ibdxnet Transport

We configured DXNet using the parameters depicted in Table 2. The configuration values were determined with various debugging statistics and experiments, and are currently considered optimal configuration parameters. For comparing single threaded performance, the number of application threads and message handlers (referred to as MHs) is limited to one each to allow comparison with FastMPJ and MVAPICH2. DXNet's multi-threaded architecture does not allow combining the logic of the application send thread and a message handler into a single thread. Thus, DXNet's "single threaded" benchmarks are always executed with one dedicated send and one dedicated receive thread. The following subsections present the results of the various benchmarks.
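For reference, the WS batching used by the MPI benchmarks boils down to the following pattern, sketched here with standard MPI calls. It is a simplification of the osu-style benchmarks, not their actual code: WS non-blocking sends are posted per batch and completed together, which is the explicit aggregation that DXNet does not need.

#include <mpi.h>

// Post WS non-blocking sends, then wait for the whole batch to
// complete before posting the next one.
void SendWindowed(const char* buf, int msgSize, long numMsgs, int ws, int dest) {
    MPI_Request reqs[64];                       // assumes ws <= 64
    for (long i = 0; i < numMsgs; i += ws) {
        for (int j = 0; j < ws; ++j) {
            MPI_Isend(buf, msgSize, MPI_CHAR, dest, 0, MPI_COMM_WORLD, &reqs[j]);
        }
        MPI_Waitall(ws, reqs, MPI_STATUSES_IGNORE);
    }
}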
First, we present the results of all single threaded benchmarks with one send thread: uni- and bi-directional throughput, uni-directional latency and all-to-all with increasing node count. Afterwards, the results of the same four benchmarks are presented with multiple send threads.

Uni-directional Throughput

The results of the uni-directional benchmark are depicted in Figure 11. Considering one MH, DXNet's throughput peaks at 5.9 GB/s at a message size of 16 kb. For larger messages (32 kb to 1 MB), one MH is not sufficient to de-serialize and dispatch all incoming messages fast enough and throughput drops to a peak bandwidth of 5.4 GB/s. However, this can be resolved by simply using two MHs. Then, DXNet's throughput peaks and saturates at 5.9 GB/s with a message size of just 4 kb and stays saturated up to 1 MB. Message sizes smaller than 4 kb also benefit significantly from the shorter receive processing times when utilizing two MHs. Further MHs can still improve performance, but only slightly for a few message sizes. For small messages up to 64 bytes, DXNet achieves a peak message rate of approx. 4.0 to 4.5 mmps. Compared to the baseline performance of ib_send_bw, DXNet's peak performance is approx. 0.5 to 1.0 mmps less. With increasing message size, this gap closes and DXNet even surpasses the baseline for 1 kb to 32 kb message sizes when using multiple threads. DXNet peaks close to the baseline's peak performance of 6.0 GB/s. The results with small message sizes fluctuate independent of the number of MHs. This can be observed on all other benchmarks with DXNet measuring message/payload throughput as well. It is a common issue which can also be observed when running high load throughput benchmarks using the bare ibverbs API. This benchmark shows that DXNet is capable of handling a vast amount of small messages efficiently. The application send thread and, thus, the user does not have to bother with aggregating messages explicitly because DXNet handles this transparently and efficiently. The overall performance benefits from multiple message handlers increasing receive throughput. Large messages do impact performance with one MH because the de-serialization of data consumes most of the processing time during receive. However, simply adding at least one more MH solves this issue and further increases performance.

Bi-directional Throughput

The peak aggregated message rate for small messages up to 64 bytes varies from approx. 6 to 6.9 mmps with one MH. Using more MHs cannot improve performance significantly for this benchmark. Due to the multi-threaded and highly pipelined architecture of DXNet, these variations cannot be avoided, especially when exclusively handling many small messages. Compared to the baseline performance of ib_send_bw, there is still room for improvement in DXNet's performance on small message sizes (up to 2.5 mmps difference). For medium message sizes, ib_send_bw yields slightly higher throughput for up to 1 kb message size. But DXNet surpasses ib_send_bw for 1 kb to 16 kb message sizes. DXNet's peak performance is approx. 1.1 GB/s less than ib_send_bw's (11.5 GB/s). Overall, this benchmark shows that DXNet delivers great performance especially for small messages, similar to the uni-directional benchmark ( §9.2.1).

Uni-directional Latency

Figure 13: DXNet: 2 nodes, uni-directional RTT and message rate with one application send thread, increasing message size

Figure 13 depicts the average RTTs as well as the 95th, 99th and 99.9th percentile of the uni-directional latency benchmark with one send thread and one MH.
For message sizes up to 512 bytes, DXNet achieves an avg. RTT of 7.8 to 8.3 µs, a 95th percentile of 8.5 to 8.9 µs, a 99th percentile of 8.9 to 9.2 µs and a 99.9th percentile of 11.8 to 12.7 µs. This results in a message rate of approx. 0.1 mmps. As expected, starting with 1 kb message size, latency increases with increasing message size. The RTT can be broken down into three parts: DXNet, Ibdxnet and hardware processing. Taking the lowest avg. of 7.8 µs, DXNet requires approx. 3.5 µs of the total RTT (the full breakdown is published in our other publication [8]) and the hardware approx. 2.0 µs (assuming an avg. one way latency of 1 µs for the hardware used). Message de- and serialization as well as message object creation and dispatching are part of DXNet. For Ibdxnet, this results in approx. 2.3 µs processing time, which includes JNI context switching as well as the several pipeline stages explained in the earlier sections. Compared to the baseline performance of ib_send_lat, DXNet's latency is significantly higher. Obviously, additional latency cannot be avoided with such a long and complex processing pipeline. Considering the breakdown mentioned above, the native part Ibdxnet, which calls ibverbs to send and receive data, is to some degree comparable to the minimal perf tool ib_send_lat. With a total of 2.3 µs (of the full pipeline's 7.8 µs), the total RTT is just slightly higher than ib_send_lat's 1.8 µs. But Ibdxnet already includes various data structures for state handling and buffer scheduling ( §7.2.3, §7.2.4) which ib_send_lat does not. Buffers for sending data are re-used instantly.

All-to-all Throughput with up to 8 Nodes

Incrementally adding two nodes, throughput is increased by 8.5 GB/s (for 2 to 4 nodes), by 7.1 GB/s (for 4 to 6 nodes) and by 6.4 GB/s (for 6 to 8 nodes). One would expect approximately equally large throughput increments, but the gain is noticeably lowered with every two nodes added. We tried different configuration parameters for DXNet and ibverbs like different MTU sizes, SGE counts, receive buffer sizes, WRs per SQ/SRQ or CQ sizes. No combination of settings allowed us to improve this situation. We assume that the all-to-all communication pattern puts high stress on the HCA which, at some point, cannot keep up with processing outstanding requests. To rule out any software issues with DXNet first, we implemented a low-level "loopback" like test which uses the native part of Ibdxnet only. The loopback test does not involve any dynamic message posting when sending data or data processing when receiving. Instead, a buffer equal to the size of the ORB is processed by Ibdxnet's send thread on every iteration and posted to every participating SQ. This ensures that all SQs are filled and are quickly refilled once at least one WR was processed. When receiving data on the SRQ, all buffers received are directly put back into the pool without processing, and the SRQ is refilled. This ensures that no additional processing overhead is added for sending and receiving data. Thus, Ibdxnet's loopback test comes close to a perf-tool like benchmark. We executed the benchmark with 2, 4, 6 and 8 nodes which yielded aggregated throughputs of 11.7 GB/s, 21.7 GB/s, 28.3 GB/s and 34.0 GB/s. These results are very close to the performance of the full DXNet stack but do not rule out all software related issues, yet.
The overall aggregated bandwidth could still be limited by Ibdxnet itself. Thus, we executed another benchmark which first executes all-to-all communication with up to 8 nodes and then, once bandwidth is saturated, switches to a ring formation for communication without restarting the benchmark (every node sends to its successor, determined by NID, only). Once the nodes switch the communication pattern during execution, the per node aggregated bandwidth increases very quickly and reaches a maximum aggregated bandwidth of approx. (11.7/2 × num_nodes) GB/s, independent of the number of nodes used. This rules out total bandwidth limitations in software and hardware. Furthermore, we can now rule out any performance issues in DXNet or even ibverbs with connection management (e.g. too many QPs allocated). This leads to the assumption that the HCA cannot keep up with processing outstanding WRs when the SQs are under high load (always filled with WRs). With more than 3 SQs per node, the total bandwidth drops noticeably. Similar results with other systems further support this assumption ( §9.3.4 and §9.4.4).

Uni-directional Throughput Multi-threaded

Figure 15: DXNet: 2 nodes, uni-directional throughput and message rate with multiple application send threads, increasing message size and 4 message handlers

Figure 15 shows the uni-directional benchmark executed with 4 MHs and 1 to 16 send threads. For 1 to 4 send threads, throughput saturates at 5.9 GB/s at either 4 kb or 8 kb messages. For 256 byte to 8 kb messages, using one thread yields better throughput than two or sometimes four threads. However, running the benchmark with 8 and 16 send threads increases overall throughput significantly for all messages greater than 32 bytes, with saturation starting at 2 kb message size. DXNet's pipeline benefits from the many threads posting messages to the ORB concurrently. This results in greater aggregation of multiple messages and allows higher buffer utilization for the underlying transport. DXNet also increases message throughput on small message sizes up to 512 bytes, from approx. 4.0 mmps up to 6.7 mmps for 16 send threads. Again, performance is slightly worse with two and four threads compared to a single thread. Furthermore, DXNet even surpasses the baseline performance of ib_send_bw when using multiple send threads. However, the peak performance cannot be improved further, which shows the current limit of DXNet for this benchmark and the hardware used.

Bi-directional Throughput Multi-threaded

Figure 16: DXNet: 2 nodes, bi-directional throughput and message rate with multiple application send threads, increasing message size and 4 message handlers

Figure 16 shows the bi-directional benchmark executed with 4 MHs and 1 to 16 send threads. With more than one send thread, the aggregated throughput peaks at approx. 10.4 and 10.7 GB/s with message sizes of 2 and 4 kb. DXNet delivers higher throughputs for all medium and small messages with increasing send thread count. The baseline performance of ib_send_bw is reached on small message sizes and even surpassed with medium sized messages up to 16 kb. The peak throughput is not reached, showing DXNet's current limit with the hardware used. The overall performance with 8 and 16 send threads does not differ noticeably, which indicates saturation of DXNet's processing pipeline. For small messages (less than 512 bytes), the message rates also increase with increasing send thread count. Again, saturation starts with 8 send threads with a message rate of approx. 8.6 to 10.2 mmps.
Figure 18: DXNet: 2 nodes, uni-directional 95th, 99th and 99.9th percentile RTT and message rate with multiple application send threads, increasing message size and 4 message handlers

DXNet is capable of handling a multi-threaded environment under high load with CPU over-provisioning and still delivers high throughput. Especially for small messages, DXNet's pipeline even benefits from the highly concurrent activity by aggregating many messages. This results in higher buffer utilization and, for the user, higher overall throughput. When DXNet's internal threads, MHs and send threads, exceed the core count of the CPU, DXNet switches to different parking strategies for the different thread types, which slightly increase latency but greatly reduce the overall CPU load ( §7.2.5).

Uni-directional Latency Multi-threaded

The message rate can be increased up to 0.33 mmps with up to 4 send threads as, practically, every send thread can use a free MH out of the 4 available. With 8 and 16 send threads, the MHs on the remote node must be shared and DXNet's over-provisioning is active, which reduces the overall throughput. The percentiles shown in Figure 18 reflect this situation very well and increase noticeably. With a single thread, as already discussed in §9.2.3, the difference between the avg. (7.8 to 8.3 µs) and the 99.9th percentile (11.8 to 12.7 µs) RTT for message sizes less than 1 kb is approx. 4 to 5 µs. When doubling the send thread count, the 99.9th percentiles roughly double as well. When over-provisioning the CPU, we cannot avoid the higher than usual RTT caused by the increasing amount of messages getting posted.

9.2.8 All-to-all Throughput with up to 8 Nodes Multi-threaded

Figure 19 shows the results of the all-to-all benchmark with up to 8 nodes and 16 send threads. These results show that DXNet delivers high throughputs and message rates under high loads with increasing node and thread count. Small messages profit significantly through better aggregation and buffer utilization.

Summary Results

This section briefly summarizes the most important results and numbers of the previous benchmarks. All values are considered "up to" and show the possible peak performance in the given benchmark. Single-threaded:

• Uni-directional throughput: one MH: saturation with 16 kb messages, peak throughput at 5.9 GB/s

Figure 20: FastMPJ: 2 nodes, uni-directional throughput and message rate with increasing message and window size

FastMPJ

This section describes the results of the benchmarks executed with FastMPJ and compares them to the results of DXNet presented in the previous sections. We used FastMPJ 1.0_7 with the device ibvdev to run the benchmarks on InfiniBand hardware. The osu benchmarks of MVAPICH2 were ported to Java ( §9.1) and used for all following experiments. Since FastMPJ does not support multi-threading in a single process, all benchmarks were executed single threaded and compared to the single threaded results of DXNet only.

Uni-directional Throughput

Figure 20 shows the results of executing the uni-directional benchmark with two nodes with increasing message size. Furthermore, the benchmark was executed with increasing WS to ensure bandwidth saturation. As expected, throughput increases with increasing message size and bandwidth saturation starts at a medium message size of 64 kb with approx. 5.7 GB/s. The actual peak throughput is reached with large 512 kb messages and a WS of 64 at 5.9 GB/s. For small message sizes up to 512 bytes and independent of the WS, FastMPJ achieves a message rate of approx. 1.0 mmps.
Furthermore, the results show that the WS does not matter for message sizes up to 64 kb. For 128 kb to 1 MB, FastMPJ profits from explicit aggregation with increasing WS. This indicates that ibvdev might include some message aggregation mechanism. Compared to the baseline performance of ib_send_bw, FastMPJ's performance is always inferior, with a peak performance of 5.9 GB/s close to ib_send_bw's 6.0 GB/s. Compared to the results of DXNet ( §9.2.1), DXNet's throughput saturates and peaks earlier, at a message size of 16 kb with 5.9 GB/s. However, when using one MH, throughput drops for larger messages down to 5.4 GB/s due to increased message processing time (de-serialization). However, such a drop can be resolved by using a second MH ( §9.2.1).

Bi-directional Throughput

The results of the bi-directional benchmark are depicted in Figure 21. Again, throughput increases with increasing message size, peaking at 10.8 GB/s with WS 2 and large 512 kb messages. However, when handling messages of 128 kb and greater, throughput peaks at approx. 10.2 GB/s for the WSs 4 to 32 and saturation varies depending on the WS. For WSs 4 to 32, throughput is saturated with 64 kb messages, for WSs 1 and 2 at 512 kb. Starting at 128 kb message size, WSs of 1 and 2 achieve slightly better results than the greater WSs. Especially WS 64 drops significantly with message sizes of 128 kb and greater. However, for message sizes of 64 kb to 512 kb, FastMPJ profits from explicit aggregation. Compared to the uni-directional results ( §9.3.1), FastMPJ does profit to some degree from explicit aggregation for small messages of 1 to 128 bytes. WSs 1 to 16 allow higher message throughputs, with WS 16 as an optimal value peaking at approx. 2.4 mmps for 1 to 128 byte messages. Greater WSs degrade message throughput significantly. However, this does not apply to message sizes of 256 bytes, where greater explicit aggregation does always increase message throughput. Compared to the baseline performance of ib_send_bw, FastMPJ's performance is again always inferior, with a difference in peak performance of 0.7 GB/s (10.8 GB/s to 11.5 GB/s). When comparing to DXNet's results ( §9.2.2), the throughputs are nearly equal with 10.7 GB/s, also at 512 kb message size.

Uni-directional Latency

The results of the latency benchmark are depicted in Figure 22. Compared to the baseline performance of ib_send_lat, FastMPJ's average RTT comes close to its 1.8 µs and closes that gap slightly further starting with 256 byte message size. Comparing the avg. RTT and 95th percentile to DXNet's results ( §9.2.3), FastMPJ outperforms DXNet with an up to four times lower RTT. This is also reflected by the message rate of 0.41 mmps for FastMPJ and 0.1 mmps for DXNet. The breakdown given in Section 9.2.3 explains the rather high RTTs and the amount of processing time spent by DXNet on major sections of the pipeline. However, even though DXNet's avg. RTT for message sizes up to 512 bytes is higher than FastMPJ's, DXNet achieves lower 99th (8.9 to 9.2 µs) and 99.9th percentiles (11.8 to 12.7 µs) than FastMPJ.

Summary Results

This section briefly summarizes the most important results and key numbers of the previous benchmarks. All values are considered "up to" and show the possible peak performance in the given benchmark and are single-threaded only. All results benefit from explicit aggregation using the WS.
• Uni-directional throughput: saturation at 64 kb message size with 5.7 GB/s; peak throughput at 512 kb message size with 5.9 GB/s

Compared to DXNet's single threaded results, DXNet outperforms FastMPJ on small messages with an up to 4 times higher message rate on both uni- and bi-directional benchmarks. However, FastMPJ achieves a lower average and 95th percentile latency on the uni-directional latency benchmark. But even with a more complicated and dynamic pipeline, DXNet achieves lower 99th and 99.9th percentiles than FastMPJ, demonstrating high stability. On all-to-all communication with up to 8 nodes, DXNet reaches throughputs similar to FastMPJ's for large messages but outperforms FastMPJ's message rate by up to three times for small messages. DXNet is always better for small messages.

MVAPICH2

This section describes the results of the benchmarks executed with MVAPICH2 and compares them to the results of DXNet. All osu benchmarks ( §9.1) were executed with MVAPICH2-2.3. Since MVAPICH2 supports MPI calls from multiple threads of the same process, some benchmarks were executed both single and multi-threaded. We set the following environment variables for optimal performance and comparability:

• MV2_DEFAULT_MAX_SEND_WQE=128
• MV2_DEFAULT_MAX_RECV_WQE=128
• MV2_SRQ_SIZE=1024
• MV2_USE_SRQ=1
• MV2_ENABLE_AFFINITY=1

Additionally, for the multi-threaded benchmarks, the following environment variables were set:

• MV2_CPU_BINDING_POLICY=hybrid
• MV2_THREADS_PER_PROCESS=X (where X equals the number of threads used when executing the benchmark)
• MV2_HYBRID_BINDING_POLICY=linear

Uni-directional Throughput

The results of the uni-directional single threaded benchmark are depicted in Figure 26. Compared to the baseline performance of ib_send_bw, MVAPICH2's peak performance is approx. 1.0 mmps less for small messages. With increasing message size, at a WS of 64, the performance comes close to the baseline and even exceeds it for 2 kb to 8 kb messages. MVAPICH2 peaks very close to the baseline's peak performance of 6.0 GB/s. DXNet achieves very similar results ( §9.2.1) compared to MVAPICH2, but without relying on explicit aggregation. DXNet's throughput saturates and peaks earlier, at a message size of 16 kb with 5.9 GB/s. However, when using one MH, throughput drops for larger messages down to 5.4 GB/s due to increased message processing time (de-serialization). As already explained in Section 9.2.1, this can be resolved by using two MHs. For small messages of up to 64 bytes, DXNet achieves an equal to slightly higher message rate of 4.0 to 4.5 mmps.

Bi-directional Throughput

Compared to the baseline performance of ib_send_bw, MVAPICH2's peak performance for small messages is approx. half of ib_send_bw's 9.5 mmps. With increasing message size, the throughput of MVAPICH2 comes close to ib_send_bw's with WS 64 and 32 for 4 and 8 kb messages only. Peak throughput for large messages comes close to ib_send_bw's 11.5 GB/s. Compared to DXNet's results ( §9.2.2), the aggregated throughput is slightly higher than DXNet's (10.7 GB/s). However, DXNet outperforms MVAPICH2 for medium sized messages by reaching a peak throughput of 10.4 GB/s with just 8 kb messages, compared to 9.5 GB/s (at WS 64). Furthermore, DXNet offers a higher message rate of 6 to 7.2 mmps on small messages up to 64 bytes. DXNet achieves overall higher performance without relying on explicit message aggregation.

Uni-directional Latency

Figure 28 shows the results of the uni-directional single threaded latency benchmark.
MVAPICH2 achieves a very low average RTT of 2.1 to 2.4 µs for up to 64 byte messages and up to 3.9 µs for up to 512 byte messages. The 95th, 99th and 99.9th percentiles are just slightly higher than the average. Compared to DXNet's results ( §9.2.3), MVAPICH2 achieves an overall lower latency. DXNet's average of 7.8 to 8.3 µs is nearly four times higher. The 95th (8.5 to 8.9 µs), 99th (8.9 to 9.2 µs) and 99.9th percentiles (11.8 to 12.7 µs) are also at least two to three times higher. MVAPICH2 implements only a very thin layer of abstraction: application threads issuing MPI calls are pinned to cores and directly call ibverbs functions after passing through these few layers. DXNet, however, implements multiple pipeline stages with de-/serialization and multiple (JNI) context/thread switches. Naturally, data passing through such a long pipeline takes longer to process, which impacts overall latency. However, DXNet traded latency for multi-threading support and performance as well as efficient handling of small messages.

For all-to-all communication with 4 nodes, MVAPICH2 achieves a peak throughput of 19.5 GB/s with 128 KB messages on WSs 16, 32 and 64, with saturation starting at approx. 32 KB message size. WS 8 gets close to the peak throughput as well, but the remaining WSs peak lower. With WS 2, a message rate of 8.4 to 8.8 mmps is achieved for up to 64 byte messages, and 6.6 to 8.8 mmps for up to 512 bytes. Running the benchmark with 6 nodes, MVAPICH2 hits a peak throughput of 27.3 GB/s with 512 KB messages on WSs 16, 32 and 64. Saturation starts with a message size of approx. 64 to 128 KB depending on the WS. For 1 KB to 32 KB messages, the fluctuations increased compared to executing the benchmark with 4 nodes. Again, the message rate is degraded when using large WSs for small messages. An optimal message rate of 11.9 to 13.1 mmps is achieved with WS 2 for up to 64 byte messages. With 8 nodes, the benchmark peaks at 33.3 GB/s with 64 KB messages on a WS of 64. Again, the WS does matter for large messages as well, with WSs 16, 32 and 64 reaching the peak throughput and starting saturation at approx. 128 KB message size. The remaining WSs peak significantly lower.

Figure 32: MVAPICH2: 2 nodes, bi-directional throughput and message rate, multi-threaded with one send and one recv thread, with increasing message and window size

The fluctuations for mid-range message sizes of 1 KB to 64 KB increased further compared to 6 nodes. Most notably, the performance with 4 KB messages and WS 4 is nearly 10 GB/s better than 4 KB with WS 64. With up to 64 byte messages, a message rate of 16.5 to 17.8 mmps is achieved. For up to 512 byte messages, the message rate varies between 13.5 and 17.8 mmps. As with the previous node counts, a smaller WS increases the message rate significantly, while larger WSs degrade performance by a factor of two. MVAPICH2 has the same "scalability issues" as DXNet ( §9.2.4) and FastMPJ ( §9.3.4). The maximum achievable bandwidth matches what was determined with the other systems. With the same results on three different systems, it is very unlikely that this is some kind of software issue like a bug or a bad implementation, but most likely a hardware limitation. So far, we have not seen this issue discussed in any other publication and think it is noteworthy to know what the hardware is currently capable of. Compared to DXNet ( §9.2.4), MVAPICH2 reaches slightly higher peak throughputs for large messages.
However, this peak as well as the saturation point are reached later, at 32 to 512 KB messages, compared to DXNet at approx. 16 KB. The fluctuations for mid-range message sizes cannot be compared, as DXNet does not rely on explicit aggregation. For small messages up to 64 bytes, DXNet achieves significantly higher message rates than MVAPICH2, with peaks at 7.0 mmps, 15.0 mmps, 21.1 mmps and 27.3 mmps for 2 to 8 nodes.

Bi-directional Throughput Multi-threaded
Figure 32 shows the results of the bi-directional multi-threaded benchmark with two threads (on each node): one dedicated thread for sending and one for receiving. In our case, this is the simplest multi-threading configuration to utilize more than one thread for MPI calls. The plot shows highly fluctuating results across the three runs executed as well as overall low throughput compared to the single-threaded results ( §9.4.2). Throughput peaks at 8.8 GB/s with a message size of 512 KB for WS 16. A message rate of 0.78 to 1.19 mmps is reached for up to 64 byte messages for WS 32. We tried varying the configuration values (e.g. queue sizes, buffer sizes, buffer counts) but could not find configuration parameters that yielded significantly better, especially less fluctuating, results. Furthermore, the benchmarks could not be finished when sending 100,000,000 messages. When using MPI_THREAD_MULTIPLE, the memory consumption increases continuously and exhausts the total memory available on our machine (64 GB). We reduced the number of messages to 1,000,000, which still consumes approx. 20% of the total main memory but at least executes and finishes within a reasonable time. This does not happen with the widely used MPI_THREAD_SINGLE mode. MVAPICH2 implements multi-threading support using a single global lock for various MPI calls, which includes MPI_Isend and MPI_Irecv used in the benchmark. This fulfils the requirements described in the MPI standard and avoids a complex architecture with lock-free data structures. However, a single global lock reduces concurrency significantly and does not scale well with increasing thread count [12]. This effect impacts performance less on applications with short bursts and low thread counts. However, for multi-threaded applications under high load, a single-threaded approach with one dedicated thread driving the network, decoupled from the application threads, might be a better solution. Data between application threads and the network thread can be exchanged using data structures such as buffers, queues or pools, as provided by DXNet. MVAPICH2's implementation of multi-threading does not allow improving performance by increasing the send or receive thread counts. Thus, further multi-threaded experiments using MVAPICH2 are not reasonable.
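The dedicated-network-thread design argued for above can be sketched as follows. This is a minimal illustration in Python, not DXNet's actual implementation; all names are hypothetical, and the actual ibverbs send path is stubbed out:

```python
import queue
import threading

class NetworkThread(threading.Thread):
    """One dedicated thread drives the network, decoupled from application
    threads by a bounded queue. Application threads never touch the network
    resource directly, so no global lock around send operations is needed."""
    def __init__(self):
        super().__init__(daemon=True)
        self.send_queue = queue.Queue(maxsize=1024)  # back-pressure for producers
        self._stop = threading.Event()

    def send(self, msg: bytes):
        self.send_queue.put(msg)  # blocks when the queue is full

    def run(self):
        while not self._stop.is_set():
            try:
                msg = self.send_queue.get(timeout=0.1)
            except queue.Empty:
                continue  # idle; a real engine would park adaptively here
            self._transmit(msg)

    def _transmit(self, msg: bytes):
        pass  # stand-in for the actual (ibverbs) send path

    def shutdown(self):
        self._stop.set()

net = NetworkThread()
net.start()
for i in range(4):  # any number of application threads may call send()
    threading.Thread(target=net.send, args=(f"payload-{i}".encode(),)).start()
```

The bounded queue replaces the global lock as the only synchronization point, so application threads only contend on enqueueing rather than on the entire send path.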
Summary Results
This section briefly summarizes the most important results and numbers of the previous benchmarks. All values are considered "up to" and show the possible peak performance in the given benchmark. Single-threaded:
• Uni-directional throughput: saturation with 64 KB to 128 KB message size, peak at 5.

Compared to DXNet, the uni-directional results are similar, but DXNet does not require explicit message aggregation to deliver high throughput. On bi-directional communication, MVAPICH2 achieves a slightly higher aggregated peak throughput than DXNet, but DXNet performs better by approx. 0.9 GB/s on medium-sized messages. DXNet outperforms MVAPICH2 on small messages with an up to 1.8 times higher message rate. But MVAPICH2 clearly outperforms DXNet on the uni-directional latency benchmark with an overall lower average, 95th, 99th and 99.9th percentile latency. On all-to-all communication with up to 8 nodes, MVAPICH2 reaches slightly higher peak throughputs for large messages, but DXNet reaches its saturation earlier and performs significantly better on small message sizes up to 64 bytes. The low multi-threading performance of MVAPICH2 cannot be compared to DXNet's for the following reasons: First, MVAPICH2 implements synchronization using a global lock, which is the simplest but often the least performant method to ensure thread safety. Second, MVAPICH2, like many other MPI implementations, typically creates multiple processes (one process per core) to enable concurrency on a single processor socket. However, as already discussed in related work ( §3), this programming model is not suitable for all application domains, especially big data applications. DXNet is better for small messages and multi-threaded access, as required in big data applications.

Conclusions
We presented Ibdxnet, a transport for the Java messaging library DXNet which allows multi-threaded Java applications to benefit from low latency and high throughput using InfiniBand hardware. DXNet provides transparent connection management, concurrency handling, and message serialization, and hides the transport, which allows the application to switch from Ethernet to InfiniBand hardware transparently if the hardware is available. Ibdxnet's native subsystem provides dynamic, scalable, concurrent and automatic connection management and the msgrc messaging engine implementation. The msgrc engine uses dedicated send and receive threads to drive RC QPs asynchronously, which ensures scalability with many nodes. Load-adaptive parking avoids high CPU load when idle but ensures low latency when busy. SGEs are used to simplify buffer handling and increase buffer utilization when sending data provided by the higher-level DXNet core. A carefully crafted architecture minimizes context switching between Java and the native space as well as the cost of exchanging data using shared memory buffers. The evaluation shows that DXNet with the Ibdxnet transport can keep up with FastMPJ and MVAPICH2 in single-threaded applications and even exceed them in multi-threaded, high-load applications. DXNet with Ibdxnet is capable of handling concurrent connections and data streams with up to 8 nodes. Furthermore, multi-threaded applications benefit significantly from the multi-threading-aware architecture. The following topics are of interest for future research with DXNet and Ibdxnet:
• Experiments with more than 100 nodes on our university's cluster
16,870
1812.02245
2952049456
We revisit the notion of deniability in quantum key exchange (QKE), a topic that remains largely unexplored. In the only work on this subject by Donald Beaver, it is argued that QKE is not necessarily deniable due to an eavesdropping attack that limits key equivocation. We provide more insight into the nature of this attack and how it extends to other constructions such as QKE obtained from uncloneable encryption. We then adopt the framework for quantum authenticated key exchange, developed by Mosca et al., and extend it to introduce the notion of coercer-deniable QKE, formalized in terms of the indistinguishability of real and fake coercer views. Next, we apply results from a recent work by Arrazola and Scarani on covert quantum communication to establish a connection between covert QKE and deniability. We propose DC-QKE, a simple deniable covert QKE protocol, and prove its deniability via a reduction to the security of covert QKE. Finally, we consider how entanglement distillation can be used to enable information-theoretically deniable protocols for QKE and tasks beyond key exchange.
In a framework based on the simulation paradigm, Dwork et al. introduced the notion of deniable authentication @cite_10 , followed by the work of Di Raimondo et al. on the formalization of deniable key exchange @cite_24 . Both works rely on the formalism of zero-knowledge (ZK) proofs, with definitions formalized in terms of a simulator that can produce a simulated view that is indistinguishable from the real one. In a subsequent work, Di Raimondo and Gennaro gave a formal definition of forward deniability @cite_31 , requiring that indistinguishability remain intact even when a (corrupted) party reveals real coins after a session. Among other things, they showed that statistical ZK protocols are forward deniable.
{ "abstract": [ "We extend the definitional work of Dwork,Naor and Sahai from deniable authentication to deniable key-exchange protocols. We then use these definitions to prove the deniability features of SKEME and SIGMA, two natural and efficient protocols which serve as basis for the Internet Key Exchange (IKE)protocol.SKEME is an encryption-based protocol for which we prove full deniability based on the plaintext awareness of the underlying encryption scheme. Interestingly SKEME's deniability is possibly the first \"natural\" application which essentially requires plaintext awareness (until now this notion has been mainly used as a tool for proving chosen-ciphertext security).SIGMA, on the other hand,uses non-repudiable signatures for authentication and hence cannot be proven to be fully deniable. Yet we are able to prove a weaker, but meaningful, \"partial deniability\" property: a party may not be able to deny that it was \"alive\" at some point in time but can fully deny the contents of its communications and the identity of its interlocutors.We remark that the deniability of SKEME and SIGMA holds in a concurrent setting and does not essentially rely on the random oracle model.", "Deniable Authentication protocols allow a Sender to authenticate a message for a Receiver, in a way that the Receiver cannot convince a third party that such authentication (or any authentication) ever took place. We present two new approaches to the problem of deniable authentication. The novelty of our schemes is that they do not require the use of CCA-secure encryption (all previous known solutions did), thus showing a different generic approach to the problem of deniable authentication. These new approaches are practically relevant as they lead to more efficient protocols. In the process we point out a subtle definitional issue for deniability. In particular, we propose the notion of forward deniability, which requires that the authentications remain deniable even if the Sender wants to later prove that she authenticated a message. We show that a simulation-based definition of deniability, where the simulation can be computationally indistinguishable from the real protocol does not imply forward deniability. Thus, for deniability one needs to restrict the simulation to be perfect (or statistically close). Our new protocols satisfy this stricter requirement.", "" ], "cite_N": [ "@cite_24", "@cite_31", "@cite_10" ], "mid": [ "2020071233", "1983765476", "" ] }
Revisiting Deniability in Quantum Key Exchange via Covert Communication and Entanglement Distillation
Deniability represents a fundamental privacy-related notion in cryptography. The ability to deny a message or an action is a desired property in many contexts such as off-the-record communication, anonymous reporting, whistle-blowing and coercion-resistant secure electronic voting. The concept of non-repudiation is closely related to deniability in that the former is aimed at associating specific actions with legitimate parties and thereby preventing them from denying that they have performed a certain task, whereas the latter achieves the opposite property by allowing legitimate parties to deny having performed a particular action. For this reason, deniability is sometimes referred to as repudiability.

The definitions and requirements for deniable exchange can vary depending on the cryptographic task in question, e.g., encryption, authentication or key exchange. Roughly speaking, the common underlying idea for a deniable scheme can be understood as the impossibility for an adversary to produce cryptographic proofs, using only algorithmic evidence, that would allow a third party, often referred to as a judge, to decide if a particular entity has either taken part in a given exchange or exchanged a certain message, which can be a secret key, a digital signature, or a plaintext message. In the context of key exchange, this can also be formulated in terms of a corrupt party (receiver) proving to a judge that a message can be traced back to the other party [16]. In the public-key setting, an immediate challenge for achieving deniability is posed by the need for remote authentication, as it typically gives rise to binding evidence, e.g., digital signatures; see [16,17].

The formal analysis of deniability in classical cryptography can be traced back to the original works of Canetti et al. and Dwork et al. on deniable encryption [11] and deniable authentication [18], respectively. These led to a series of papers on this topic covering a relatively wide array of applications. Deniable key exchange was first formalized by Di Raimondo et al. in [16] using a framework based on the simulation paradigm, which is closely related to that of zero-knowledge proofs.

Despite being a well-known and fundamental concept in classical cryptography, rather surprisingly, deniability has been largely ignored by the quantum cryptography community. To put things into perspective, with the exception of a single paper by Donald Beaver [3], and a footnote in [20] commenting on the former, there are no other works that directly tackle deniable QKE. In the adversarial setting described in [3], it is assumed that the honest parties are approached by the adversary after the termination of a QKE session and demanded to reveal their private randomness, i.e., the raw key bits encoded in their quantum states. It is then claimed that QKE schemes, despite having perfect and unconditional security, are not necessarily deniable due to an eavesdropping attack. In the case of the BB84 protocol, this attack introduces a binding between the parties' inputs and the final key, thus constraining the space of the final secret key such that key equivocation is no longer possible. Note that since Beaver's work [3] appeared a few years before a formal analysis of deniability for key exchange was published, its analysis is partly based on the adversarial model formulated earlier in [11] for deniable encryption.
For this reason, the setting corresponds more closely to scenarios wherein the honest parties try to deceive a coercer by presenting fake messages and randomness, e.g., deceiving a coercer who tries to verify a voter's claimed choice using an intercepted ciphertext of a ballot in the context of secure e-voting.

Contributions and Structure
In Section 3 we revisit the notion of deniability in QKE and provide more insight into the eavesdropping attack aimed at detecting attempts at denial described in [3]. Having shed light on the nature of this attack, we show that while coercer-deniability can be achieved by uncloneable encryption (UE) [19], QKE obtained from UE remains vulnerable to the same attack. We briefly elaborate on the differences between our model and simulation-based deniability [16]. To provide a firm foundation, we adopt the framework and security model for quantum authenticated key exchange (Q-AKE) developed by Mosca et al. [24] and extend them to introduce the notion of coercer-deniable QKE, which we formalize in terms of the indistinguishability of real and fake coercer views.

We establish a connection between the concept of covert communication and deniability in Section 4, which to the best of our knowledge has not been formally considered before. More precisely, we apply results from a recent work by Arrazola and Scarani on obtaining covert quantum communication and covert QKE via noise injection [1] to propose DC-QKE, a simple construction for coercer-deniable QKE. We prove the deniability of DC-QKE via a reduction to the security of covert QKE. Compared to the candidate PQECC protocol suggested in [3] that is claimed to be deniable, our construction does not require quantum computation and falls within the more practical realm of prepare-and-measure protocols. Finally, in Section 5 we consider how quantum entanglement distillation can be used not only to counter eavesdropping attacks, but also to achieve information-theoretic deniability. We conclude by presenting some open questions in Section 6. It is our hope that this work will rekindle interest, more broadly, in the notion of deniable communication in the quantum setting, a topic that has received very little attention from the quantum cryptography community.

Preliminaries in Quantum Information and QKE
We use the Dirac bra-ket notation and standard terminology from quantum computing. Here we limit ourselves to a description of the most relevant concepts in quantum information theory; more details can be found in standard textbooks [25,32]. For brevity, let A and B denote the honest parties, and E the adversary. Given an orthonormal basis formed by $|0\rangle$ and $|1\rangle$ in a two-dimensional complex Hilbert space $\mathcal{H}_2$, let $(+) \equiv \{|0\rangle, |1\rangle\}$ denote the computational basis and $(\times) \equiv \{\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle), \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\}$ the diagonal basis. If the state vector of a composite system cannot be expressed as a tensor product $|\psi_1\rangle \otimes |\psi_2\rangle$, the state of each subsystem cannot be described independently and we say the two qubits are entangled. This property is best exemplified by maximally entangled qubits (ebits), the so-called Bell states:
$$|\Phi^{\pm}\rangle_{AB} = \frac{1}{\sqrt{2}}(|00\rangle_{AB} \pm |11\rangle_{AB}), \qquad |\Psi^{\pm}\rangle_{AB} = \frac{1}{\sqrt{2}}(|01\rangle_{AB} \pm |10\rangle_{AB})$$
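To make the ebit notion concrete, the following toy numpy check (purely illustrative, not from the paper) constructs $|\Phi^{+}\rangle$ and verifies that tracing out one subsystem leaves the other in the maximally mixed state, a fact used later when discussing key equivocation; the partial trace used here is defined formally just below:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# |Phi+> = (|00> + |11>) / sqrt(2)
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_AB = np.outer(phi_plus, phi_plus.conj())  # pure joint state of A and B

# Partial trace over B: reshape to (i, k, j, l) and sum the diagonal k = l
rho_A = rho_AB.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(np.round(rho_A, 3))  # -> I/2, the maximally mixed state
```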
A noisy qubit that cannot be expressed as a linear superposition of pure states is said to be in a mixed state, a classical probability distribution of pure states: $\{p_X(x), |\psi_x\rangle\}_{x \in X}$. The density operator $\rho$, defined as a weighted sum of projectors, captures both pure and mixed states: $\rho \equiv \sum_{x \in X} p_X(x) |\psi_x\rangle\langle\psi_x|$. Given a density matrix $\rho_{AB}$ describing the joint state of a system held by A and B, the partial trace allows us to compute the local state of A (density operator $\rho_A$) if B's system is not accessible to A. To obtain $\rho_A$ from $\rho_{AB}$ (the reduced state of $\rho_{AB}$ on A), we trace out the system B: $\rho_A = \mathrm{Tr}_B(\rho_{AB})$. As a distance measure, we use the expected fidelity $F(|\psi\rangle, \rho)$ between a pure state $|\psi\rangle$ and a mixed state $\rho$, given by $F(|\psi\rangle, \rho) = \langle\psi|\rho|\psi\rangle$. A crucial distinction between quantum and classical information is captured by the well-known No-Cloning theorem [33], which states that an arbitrary unknown quantum state cannot be copied or cloned perfectly.

Quantum Key Exchange and Uncloneable Encryption
QKE allows two parties to establish a common secret key with information-theoretic security using an insecure quantum channel and a public authenticated classical channel. In Protocol 1 we describe the BB84 protocol, the most well-known QKE variant, due to Bennett and Brassard [5]. For consistency with related works, we use the well-established formalism based on error-correcting codes, developed by Shor and Preskill [28]. Let $C_1[n, k_1]$ and $C_2[n, k_2]$ be two classical linear binary codes encoding $k_1$ and $k_2$ bits in $n$ bits such that $\{0\} \subset C_2 \subset C_1 \subset \mathbb{F}_2^n$, where $\mathbb{F}_2^n$ is the binary vector space on $n$ bits. A mapping of vectors $v \in C_1$ to a set of basis states (codewords) for the Calderbank-Shor-Steane (CSS) [10,29] code subspace is given by $v \mapsto \frac{1}{\sqrt{|C_2|}} \sum_{w \in C_2} |v + w\rangle$. Due to the irrelevance of phase errors and their decoupling from bit flips in CSS codes, Alice can send $|v\rangle$ along with classical error-correction information $u + v$, where $u, v \in \mathbb{F}_2^n$ and $u \in C_1$, such that Bob can decode to a codeword in $C_1$ from $(v + \epsilon) - (u + v)$, where $\epsilon$ is an error codeword, with the final key being the coset leader of $u + C_2$.

Protocol 1 BB84 for an n-bit key with protection against $\delta n$ bit errors
1: Alice generates two random bit strings $a, b \in \{0,1\}^{(4+\delta)n}$, encodes $a_i$ into $|\psi_i\rangle$ in basis $(+)$ if $b_i = 0$ and in $(\times)$ otherwise, and $\forall i \in [1, |a|]$ sends $|\psi_i\rangle$ to Bob.
2: Bob generates a random bit string $b' \in \{0,1\}^{(4+\delta)n}$ and, upon receiving the qubits, measures $|\psi_i\rangle$ in $(+)$ or $(\times)$ according to $b'_i$ to obtain $a'_i$.
3: Alice announces $b$ and Bob discards $a'_i$ where $b_i \neq b'_i$, ending up with at least $2n$ bits with high probability.
4: Alice picks a set $p$ of $2n$ bits at random from $a$, and a set $q$ containing $n$ elements of $p$ chosen as check bits at random. Let $v = p \setminus q$.
5: Alice and Bob compare their check bits and abort if the error exceeds a predefined threshold.
6: Alice announces $u + v$, where $v$ is the string of the remaining non-check bits, and $u$ is a random codeword in $C_1$.
7: Bob subtracts $u + v$ from his code qubits, $v + \epsilon$, and corrects the result, $u + \epsilon$, to a codeword in $C_1$.
8: Alice and Bob use the coset of $u + C_2$ as their final secret key of length $n$.
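As a quick illustration of steps 1 to 3 of Protocol 1 (the sifting phase), here is a toy classical simulation, assuming a noiseless channel and no eavesdropper; the parameters are arbitrary:

```python
import random

random.seed(1)
n = 16
N = (4 + 1) * n  # delta = 1 for this toy run

# Step 1: Alice's random bits and bases (0 = rectilinear '+', 1 = diagonal 'x')
a = [random.randint(0, 1) for _ in range(N)]
b = [random.randint(0, 1) for _ in range(N)]

# Step 2: Bob's random bases; with no noise and no Eve, a matching basis
# returns Alice's bit, while a mismatched basis returns a coin flip.
b_prime = [random.randint(0, 1) for _ in range(N)]
a_prime = [a[i] if b[i] == b_prime[i] else random.randint(0, 1) for i in range(N)]

# Step 3: sifting -- keep only positions where the bases match
sifted_alice = [a[i] for i in range(N) if b[i] == b_prime[i]]
sifted_bob = [a_prime[i] for i in range(N) if b[i] == b_prime[i]]

assert sifted_alice == sifted_bob           # no noise, no Eve
print(len(sifted_alice), ">= 2n =", 2 * n)  # roughly N/2 bits survive on average
```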
Uncloneable encryption (UE) enables transmission of ciphertexts that cannot be perfectly copied and stored for later decoding, by encoding carefully prepared codewords into quantum states, thereby leveraging the No-Cloning theorem. We refer to Gottesman's original work [19] for a detailed explanation of the sketch in Protocol 2. Alice and Bob agree on a message length $n$, a Message Authentication Code (MAC) of length $s$, an error-correcting code $C_1$ having message length $K$ and codeword length $N$ with distance $2\delta N$ for average error rate $\delta$, and another error-correcting code $C_2$ (for privacy amplification) with message length $K'$ and codeword length $N$ and distance $2(\delta + \eta)N$ to correct more errors than $C_1$, satisfying $C_2^{\perp} \subset C_1$, where $C_2^{\perp}$ is the dual code containing all vectors orthogonal to $C_2$. The pre-shared key is broken down into four pieces, all chosen uniformly at random: an authentication key $k \in \{0,1\}^s$, a one-time pad $e \in \{0,1\}^{n+s}$, a syndrome $c_1 \in \{0,1\}^{N-K}$, and a basis sequence $b \in \{0,1\}^N$.

Protocol 2 Uncloneable Encryption for sending a message $m \in \{0,1\}^n$
1: Compute $\mathrm{MAC}_k(m) = \mu \in \{0,1\}^s$. Let $x = m||\mu \in \{0,1\}^{n+s}$.
2: Mask $x$ with the one-time pad $e$ to obtain $y = x \oplus e$.
3: From the coset of $C_1$ given by the syndrome $c_1$, pick a random codeword $z \in \{0,1\}^N$ that has syndrome bits $y$ w.r.t. $C_2^{\perp}$, where $C_2^{\perp} \subset C_1$.
4: For $i \in [1, N]$, encode ciphertext bit $z_i$ in the basis $(+)$ if $b_i = 0$ and in the basis $(\times)$ if $b_i = 1$. The resulting state $|\psi_i\rangle$ is sent to Bob.
To perform decryption:
1: For $i \in [1, N]$, measure $|\psi'_i\rangle$ according to $b_i$ to obtain $z'_i$, forming $z' \in \{0,1\}^N$.
2: Perform error correction on $z'$ using code $C_1$ and evaluate the parity checks of $C_2/C_1^{\perp}$ for privacy amplification to get an $(n+s)$-bit string $y'$.
3: Invert the OTP step to obtain $x' = y' \oplus e$.
4: Parse $x'$ as the concatenation $m'||\mu'$ and use $k$ to verify whether $\mathrm{MAC}_k(m') = \mu'$.
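The classical wrapping in steps 1 and 2 (and its inversion during decryption) can be sketched as follows. This is a hedged illustration: HMAC-SHA256 stands in for the MAC (Gottesman's construction would use an information-theoretically secure MAC), and the coset/codeword selection and quantum encoding (steps 3 and 4) are deliberately elided:

```python
import hashlib
import hmac
import os

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

def ue_classical_encrypt(m: bytes, k: bytes, e: bytes) -> bytes:
    """Steps 1-2 of Protocol 2: x = m || MAC_k(m), then y = x XOR e.
    Codeword selection and quantum encoding are elided."""
    mu = hmac.new(k, m, hashlib.sha256).digest()
    x = m + mu
    return xor(x, e)

def ue_classical_decrypt(y: bytes, k: bytes, e: bytes, n: int) -> bytes:
    """Decryption steps 3-4: invert the one-time pad, split m' || mu',
    verify the tag."""
    x = xor(y, e)
    m, mu = x[:n], x[n:]
    if not hmac.compare_digest(hmac.new(k, m, hashlib.sha256).digest(), mu):
        raise ValueError("MAC verification failed")
    return m

m = b"sixteen byte msg"
k = os.urandom(32)
e = os.urandom(len(m) + 32)  # one-time pad covering message plus tag
y = ue_classical_encrypt(m, k, e)
assert ue_classical_decrypt(y, k, e, len(m)) == m
```

The key point for deniability, developed below, is that only the padded string $y$ determines the quantum encoding, so the pad $e$ can later be reinterpreted.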
QKE from UE. It is known [19] that any quantum authentication (QA) scheme can be used as a secure UE scheme, which can in turn be used to obtain QKE, with less interaction and more efficient error detection. We give a brief description of how QKE can be obtained from UE in Protocol 3.

Protocol 3 Obtaining QKE from Uncloneable Encryption
1: Alice generates random strings $k$ and $x$, and sends $x$ to Bob via UE, keyed with $k$.
2: Bob announces that he has received the message, and then Alice announces $k$.
3: Bob decodes the classical message $x$, and upon MAC verification, if the message is valid, he announces this to Alice and they will use $x$ as their secret key.

Coercer-Deniable Quantum Key Exchange
Following the setting in [3], in which it is implicitly assumed that the adversary has established a binding between the participants' identities and a given QKE session, we introduce the notion of coercer-deniability for QKE. This makes it possible to consider an adversarial setting similar to that of deniable encryption [11] and expect that the parties might be coerced into revealing their private coins after the termination of a session, in which case they would have to produce fake randomness such that the resulting transcript and the claimed values remain consistent with the adversary's observations.

Beaver's analysis [3] is briefly addressed in a footnote in a paper by Ioannou and Mosca [20], where the issue is brushed aside based on the argument that the parties do not have to keep records of their raw key bits. It is argued that for deniability to be satisfied, it is sufficient that the adversary cannot provide binding evidence that attributes a particular key to the classical communication, as their measurements on the quantum channel do not constitute a publicly verifiable proof. However, counterarguments for this view were already raised in the motivations for deniable encryption [11], in terms of secure erasure being difficult and unreliable, and the fact that erasure cannot be externally verified. Moreover, it is also argued that if one were to make the physical security assumption that random choices made for encryption are physically unavailable, the deniability problem would disappear. We refer to [11] and references therein for more details.

Bindings, or lack thereof, lie at the core of deniability. Although we leave a formal comparison of our model with the one formulated in the simulation paradigm [16] as future work, a notable difference can be expressed in terms of the inputs presented to the adversary. In the simulation paradigm, deniability is modelled only according to the simulatability of the legal transcript that the adversary or a corrupt party produces naturally via a session with a party as evidence for the judge, whereas for coercer-deniability, the adversary additionally demands that the honest parties reveal their private randomness. Finally, note that viewing deniability in terms of "convincing" the adversary is bound to be problematic, and indeed a source of debate in the cryptographic research community, as the adversary may never be convinced given their knowledge of the existence of faking algorithms. Hence, deniability is formulated in terms of the indistinguishability of views (or their simulatability [16]), such that a judge would have no reason to believe that a given transcript provided by the adversary establishes a binding, as it could have been forged or simulated.

Defeating Deniability in QKE via Eavesdropping in a Nutshell
We briefly review the eavesdropping attack described in [3] and provide further insight. Suppose Alice sends qubit $|\psi_{m,b}\rangle$ to Bob, which encodes a single-bit message $m$ prepared in a basis determined by $b \in \{+, \times\}$. Let $\Phi(E, m)$ denote the state obtained after sending $|\psi_{m,b}\rangle$, relayed and possibly modified by an adversary E. Moreover, let $\rho(E, m)$ denote the view presented to the judge, obtained by tracing over inaccessible systems. Now, for a qubit measured correctly by Eve, if a party tries to deny by pretending to have sent $\sigma_1 = \rho(E, 1)$ instead of $\sigma_2 = \rho(E, 0)$, e.g., by using some local transformation $U_{neg}$ to simply negate a given qubit, then $F(\sigma_1, \sigma_2) = 0$, where $F$ denotes the fidelity between $\sigma_1$ and $\sigma_2$. Thus, the judge can successfully detect this attempt at denial.

This attack can be mounted successfully with non-negligible probability without causing the session to abort. Assume that $N$ qubits will be transmitted in a BB84 session and that the tolerable error rate is $\frac{\eta}{N}$, where clearly $\eta \sim N$. Eve measures each qubit with probability $\frac{\eta}{N}$ (choosing a basis at random) and passes on the remaining ones to Bob undisturbed, i.e., she plants a number of decoy states proportional to the tolerated error threshold. On average, $\frac{\eta}{2}$ measurements will come from matching bases, which can be used by Eve to detect attempts at denial if Alice claims to have measured a different encoding. After discarding half the qubits in the sifting phase, this ratio will remain unchanged. Now Alice and/or Bob must flip at least one bit in order to deny, without knowledge of where the decoy states lie in the transmitted sequence, thus getting caught with probability $\frac{\eta}{2N}$ upon flipping a bit at random.
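The $\frac{\eta}{2N}$ bound can be checked numerically. A small Monte Carlo sketch, with parameter values chosen arbitrarily for illustration:

```python
import random

def detection_probability(N=1024, eta=64, trials=200_000, seed=7):
    """Estimate the chance that flipping one random raw-key bit hits a
    position Eve both measured (probability eta/N) and measured in the
    matching basis (probability 1/2), i.e. the eta/(2N) bound above."""
    rng = random.Random(seed)
    caught = 0
    for _ in range(trials):
        measured = rng.random() < eta / N    # Eve sampled this position
        basis_match = rng.random() < 0.5     # and happened to pick the right basis
        if measured and basis_match:
            caught += 1
    return caught / trials

print(detection_probability(), "vs analytic", 64 / (2 * 1024))
```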
On the Coercer-Deniability of Uncloneable Encryption
The vulnerability described in Section 3.1 is made possible by an eavesdropping attack that induces a binding on the key coming from a BB84 session. Uncloneable encryption remains immune to this attack because the quantum encoding is done for an already one-time-padded classical input. More precisely, a binding established at the level of quantum states can still be perfectly denied because the actual raw information bits $m$ are not directly encoded into the sequence of qubits; instead, the concatenation of $m$ and the corresponding authentication tag $\mu = \mathrm{MAC}_k(m)$, i.e., $x = m||\mu$, is masked with a one-time pad $e$ to obtain $y = x \oplus e$, which is then mapped onto a codeword $z$ that is encoded into quantum states. For this reason, in the context of coercer-deniability, regardless of a binding established on $z$ by the adversary, Alice can still deny to another input message in that she can pick a different input $x' = m'||\mu'$ to compute a fake pad $e' = y \oplus x'$, so that upon revealing $e'$ to Eve, she will simply decode $y \oplus e' = x'$, as intended. However, note that a prepare-and-measure QKE obtained from UE still remains vulnerable to the same eavesdropping attack, because we can no longer make use of the deniability of the one-time pad in UE; the bindings induced by Eve constrain the choice of the underlying codewords.

Security Model
We adopt the framework for quantum AKEs developed by Mosca et al. [24]. Due to space constraints, we mainly focus on our proposed extensions. Parties, including the adversary, are modelled as pairs of classical and quantum Turing machines (TMs) that execute a series of interactive computations and exchange messages with each other through classical and quantum channels, collectively referred to as a protocol. An execution of a protocol is referred to as a session, identified with a unique session identifier. An ongoing session is called an active session, and upon completion, it either outputs an error term $\perp$ in case of an abort, or a tuple $(sk, pid, v, u)$ in case of a successful termination. The tuple consists of a session key $sk$, a party identifier $pid$ and two vectors $u$ and $v$ that model public values and secret terms, respectively.

We adopt an extended version of the adversarial model described in [24] to account for coercer-deniability. Let E be an efficient, i.e. (quantum) polynomial-time, adversary with classical and quantum runtime bounds $t_c(k)$ and $t_q(k)$, and quantum memory bound $m_q(k)$, where the bounds can be unlimited. Following standard assumptions, the adversary controls all communication between parties and carries the messages exchanged between them. We consider an authenticated classical channel and do not impose any special restrictions otherwise. Additionally, the adversary is allowed to approach either the sender or the receiver after the termination of a session and request access to a subset $r \subseteq v$ of the private randomness used by the parties for a given session, i.e., the set of values to be faked. Security notions can be formulated in terms of security experiments in which the adversary interacts with the parties via a set of well-defined queries.
These queries typically involve sending messages to an active session or initiating one, corrupting a party, learning their long-term secret key, revealing the ephemeral keys of an incomplete session, obtaining the computed session key for a given session, and a test-session(id) query capturing the winning condition of the game, which can be invoked only for a fresh session. Revealing secret values to the adversary is modeled via partnering. The notion of freshness captures the idea of excluding cases that would allow the adversary to trivially win the security experiment. This is done by imposing minimal restrictions on the set of queries the adversary can invoke for a given session, such that there exist protocols that can still satisfy the definition of session-key security. A session remains fresh as long as at least one element in $u$ and $v$ remains secret; see [24] for more details.

The transcript of a protocol consists of all publicly exchanged messages between the parties during a run or session of the protocol. The definition of "views" and "outputs" given in [3] coincides with that of transcripts in [16] in the sense that it allows us to model a transcript that can be obtained from observations made on the quantum channel. The view of a party P consists of their state in $\mathcal{H}_P$ along with any classical strings they produce or observe. More generally, for a two-party protocol, captured by the global density matrix $\rho_{AB}$ for the systems of A and B, the individual system A corresponds to a partial trace that yields a reduced density matrix, i.e., $\rho_A = \mathrm{Tr}_B(\rho_{AB})$, with a similar approach for any additional couplings.

Coercer-Deniable QKE via View Indistinguishability
We use the security model in Section 3.3 to introduce the notion of coercer-deniable QKE, formalized via the indistinguishability of real and fake views. Note that in this work we do not account for forward deniability and forward secrecy.

Coercer-Deniability Security Experiment. Let $\mathsf{CoercerDenQKE}^{\Pi}_{E,C}(\kappa)$ denote this experiment and $Q$ the same set of queries available to the adversary in a security game for session-key security, as described in Section 3.3 and [24]. Clearly, in addition to deniability, it is vital that the security of the session key remains intact as well. For this reason, we simply extend the requirements of the security game for a session-key-secure KE by having the challenger C provide an additional piece of information to the adversary E when the latter calls the test-session() query. This means that the definition of a fresh session remains the same as the one given in [24]. E invokes queries from $Q \setminus \{\text{test-session()}\}$ until E issues test-session() to a fresh session of their choice. C decides on a random bit $b$ and, if $b = 0$, C provides E with the real session key $k$ and the real vector of private randomness $r$; if $b = 1$, with a random (fake) key $k'$ and a random (fake) vector of private randomness $r'$. Finally, E guesses an output $b'$ and wins the game if $b = b'$. The experiment returns 1 if E succeeds, and 0 otherwise. Let $\mathrm{Adv}^{\Pi}_{E}(\kappa) = |\Pr[b = b'] - \frac{1}{2}|$ denote the winning advantage of E.
Definition 1 (Coercer-Deniable QKE). For adversary E, let there be an efficient distinguisher $D_E$ on security parameter $\kappa$. We say that $\Pi_r$ is a coercer-deniable QKE protocol if, for any adversary E, transcript $t$, and for any $k$, $k'$, and a vector of private random inputs $r = (r_1, \ldots, r_\ell)$, there exists a denial/faking program $F_{A,B}$ that, running on $(k, k', t, r)$, produces $r' = (r'_1, \ldots, r'_\ell)$ such that the following conditions hold:
- $\Pi$ is a secure QKE protocol.
- The adversary E cannot do better than making a random guess at winning the coercer-deniability security experiment, i.e., $\mathrm{Adv}^{\Pi}_{E}(\kappa) \leq \mathrm{negl}(\kappa)$, or equivalently $\Pr[\mathsf{CoercerDenQKE}^{\Pi}_{E,C}(\kappa) = 1] \leq \frac{1}{2} + \mathrm{negl}(\kappa)$.

Equivalently, we require that for all efficient distinguishers $D_E$
$$|\Pr[D_E(\mathsf{View}_{\mathrm{Real}}(k, t, r)) = 1] - \Pr[D_E(\mathsf{View}_{\mathrm{Fake}}(k', t, r')) = 1]| \leq \mathrm{negl}(\kappa),$$
where the transcript $t = (c, \rho_E(k))$ is a tuple consisting of a vector $c$, containing the classical message exchanges of a session, along with the local view of the adversary w.r.t. the quantum channel, obtained by tracing over inaccessible systems (see Section 3.3). A function $f : \mathbb{N} \to \mathbb{R}$ is negligible if for any constant $k$ there exists an $N_k$ such that $\forall N \geq N_k$ we have $f(N) < N^{-k}$; in other words, it approaches zero faster than any inverse polynomial in the asymptotic limit.

Remark 1. We introduced a vector of private random inputs $r$ to avoid being restricted to a specific set of "fake coins" in a coercer-deniable setting, such as the raw key bits in BB84 as used in Beaver's analysis. This allows us to include other private inputs as part of the transcript that need to be forged by the denying parties, without having to provide a new security model for each variant. Indeed, in [24], Mosca et al. consider the security of QKE in case various secret values are compromised before or after a session. This means that these values can, in principle, be included in the set of random coins that might have to be revealed to the adversary, and it should therefore be possible to generate fake alternatives using a faking algorithm.

Deniable QKE via Covert Quantum Communication
We establish a connection between covert communication and deniability by providing a simple construction for coercer-deniable QKE using covert QKE. We then show that deniability is reduced to the covertness property, meaning that deniable QKE can be performed as long as covert QKE is not broken by the adversary, formalized via the security reduction given in Theorem 2.

Covert communication becomes relevant when parties wish to keep the very act of communicating secret or hidden from a malicious warden. This can be motivated by various requirements, such as the need to hide one's communication with a particular entity when this act alone can be incriminating. While encryption can make it impossible for the adversary to access the contents of a message, it does not prevent them from detecting exchanges over a channel under their observation. Bash et al. [2,27] established a square-root law for covert communication in the presence of an unbounded quantum adversary, stating that $O(\sqrt{n})$ covert bits can be exchanged over $n$ channel uses. Recently, Arrazola and Scarani [1] extended covert communication to the quantum regime for transmitting qubits covertly. Covert quantum communication consists of two parties exchanging a sequence of qubits such that an adversary trying to detect this cannot do better than making a random guess, i.e., $P_d \leq \frac{1}{2} + \epsilon$ for sufficiently small $\epsilon > 0$, where $P_d$ denotes the probability of detection and $\epsilon$ the detection bias.

Covert Quantum Key Exchange
Since covert communication requires pre-shared secret randomness, a natural question to ask is whether QKE can be done covertly. This was also addressed in [1], where it was shown that covert QKE with unconditional security for the covertness property is impossible, because the amount of key consumed is greater than the amount produced.
However, a hybrid approach involving pseudo-random number generators (PRNGs) was proposed to achieve covert QKE with a positive key rate, such that the resulting secret key remains information-theoretically secure, while the covertness of QKE is shown to be at least as strong as the security of the PRNG. The PRNG is used to expand a truly random pre-shared key into an exponentially larger pseudo-random output, which is then used to determine the time-bins for sending signals in covert QKE.

Covert QKE Security Experiment. Let $\mathsf{CovertQKE}^{\Pi^{cov}}_{E,C}(\kappa)$ denote the security experiment. The main property of covert QKE, denoted by $\Pi^{cov}$, can be expressed as a game played by the adversary E against a challenger C, who decides on a random bit $b$: if $b = 0$, C runs $\Pi^{cov}$, otherwise (if $b = 1$), C does not run $\Pi^{cov}$. Finally, E guesses a random bit $b'$ and wins the game if $b = b'$. The experiment outputs 1 if E succeeds, and 0 otherwise. The winning advantage of E is given by $\mathrm{Adv}^{\Pi^{cov}}_{E}(\kappa) = |\Pr[b = b'] - \frac{1}{2}|$, and $\Pi^{cov}_G$ is required to satisfy the following conditions:
- $\Pi^{cov}_G$ is a secure QKE protocol.
- The probability that E guesses the bit $b$ correctly ($b' = b$), i.e., that E manages to distinguish between Alice and Bob running $\Pi^{cov}_G$ or not, is no more than $\frac{1}{2}$ plus a negligible function in the security parameter $\kappa$, i.e., $\mathrm{Adv}^{\Pi^{cov}}_{E}(\kappa) \leq \mathrm{negl}(\kappa)$, or equivalently $\Pr[\mathsf{CovertQKE}^{\Pi^{cov}}_{E,C}(\kappa) = 1] \leq \frac{1}{2} + \mathrm{negl}(\kappa)$.

Theorem 1 (sourced from [1]). The secret key obtained from the covert QKE protocol $\Pi^{cov}_G$ is information-theoretically secure, and the covertness of $\Pi^{cov}_G$ is as secure as the underlying PRNG.
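To make the hybrid idea concrete, here is a hedged sketch of deriving a covert transmission schedule from a short pre-shared seed. This is illustrative only and not the actual scheme of [1]; SHA-256 in counter mode stands in for a cryptographic PRNG, and all names and parameters are hypothetical:

```python
import hashlib

def covert_time_bins(seed: bytes, total_bins: int, signal_fraction: float):
    """Expand a short pre-shared seed into a pseudo-random selection of
    time bins. Both parties run this with the same seed and obtain the
    same schedule; a warden without the seed sees occupied bins that look
    uniformly random."""
    bins = []
    threshold = int(signal_fraction * 2**32)
    for t in range(total_bins):
        block = hashlib.sha256(seed + t.to_bytes(8, "big")).digest()
        if int.from_bytes(block[:4], "big") < threshold:
            bins.append(t)
    return bins

alice_bins = covert_time_bins(b"shared-seed", total_bins=1000, signal_fraction=0.02)
bob_bins = covert_time_bins(b"shared-seed", total_bins=1000, signal_fraction=0.02)
assert alice_bins == bob_bins  # identical schedules from the shared seed
print(f"{len(alice_bins)} signal bins out of 1000")
```

The covertness game above formalizes exactly the property this sketch relies on: without the seed, the occupied bins are computationally indistinguishable from a random selection.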
Deniable Covert Quantum Key Exchange (DC-QKE)
We are now in a position to describe DC-QKE, a simple construction shown in Protocol 4, which preserves unconditional security for the final secret key, while its deniability is as secure as the underlying PRNG used in $\Pi^{cov}_{r,G}$. In terms of the security experiment in Section 3.4, $\Pi^{cov}_{r,G}$ is run to establish a real key $k$, while non-covert QKE $\Pi_{r'}$ is used to produce a fake key $k'$ aimed at achieving deniability, where $r$ and $r'$ are the respective vectors of real and fake private inputs. Operationally, consider a setting wherein the parties suspect in advance that they might be coerced into revealing their private coins for a given run: their joint strategy consists of running both components in Protocol 4 and claiming to have employed $\Pi_{r'}$ to establish the fake key $k'$ using the fake private randomness $r'$ (e.g. raw key bits in BB84), providing these as input to the adversary upon termination of a session. Thus, for Eve to be able to produce a proof showing that the revealed values are fake, she would have to break the security of covert QKE to detect the presence of $\Pi^{cov}_{r,G}$, as shown in Theorem 2. Moreover, note that covert communication can be used for dynamically agreeing on a joint strategy for denial, further highlighting its relevance for deniability.

Remark 2. The original analysis in [3] describes an attack based solely on revealing fake raw key bits that may be inconsistent with the adversary's observations. An advantage of DC-QKE in this regard is that Alice's strategy for achieving coercer-deniability consists of revealing all the secret values of the non-covert QKE $\Pi_{r'}$ honestly. This allows her to cover the full range of private randomness that could be considered in different variants of deniability, as discussed in Remark 1. A potential drawback is the extra cost induced by $F_{A,B}$, which could, in principle, be mitigated using a less interactive solution such as QKE via UE.

Remark 3. If the classical channel is authenticated by an information-theoretically secure algorithm, the minimal entropy overhead in terms of pre-shared key (logarithmic in the input size) for $\Pi$ can be generated by $\Pi^{cov}_r$.

Example 1. In the case of encryption, A can send $c = m \oplus k$ over a covert channel to B, while for denying to $m'$, she can send $c' = m' \oplus k'$ over a non-covert channel. Alternatively, she can transmit a single ciphertext over a non-covert channel such that it can be opened to two different messages. To do so, given $c = m \oplus k$, Alice computes $k' = m' \oplus c = m' \oplus m \oplus k$, and she can then either encode $k'$ as a codeword, as described in Section 2.1, and run $\Pi_{r'}$ via uncloneable encryption, thus allowing her to reveal the entire transcript to Eve honestly, or she can agree with Bob on a suitable privacy amplification (PA) function (with PA being many-to-one) as part of their denying program in order to obtain $k'$.
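A minimal sketch of the equivocation in Example 1, assuming one-time-pad encryption with keys and messages of equal length (purely illustrative):

```python
import os

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

m = b"attack at dawn!!"
k = os.urandom(len(m))          # real key, e.g. established covertly
c = xor(m, k)                   # ciphertext possibly observed by Eve

m_fake = b"buy more turnips"    # any message of the same length
k_fake = xor(m_fake, c)         # fake key: k' = m' XOR c = m' XOR m XOR k

assert xor(c, k_fake) == m_fake  # revealing k' opens c to m'
assert xor(c, k) == m            # while the real key still yields m
```

Any same-length $m'$ yields a consistent-looking key, which is why a binding must come from elsewhere, e.g., Eve's decoy states, rather than from the ciphertext itself.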
Theorem 2. If $\Pi^{cov}_{r,G}$ is a covert QKE protocol, then DC-QKE, given in Protocol 4, is a coercer-deniable QKE protocol that satisfies Definition 1.

Proof. The main idea consists of showing that breaking the deniability property of DC-QKE amounts to breaking the security of covert QKE, such that coercer-deniability follows from the contrapositive of this implication, i.e., if there exists no efficient algorithm for compromising the security of covert QKE, then there exists no efficient algorithm for breaking the deniability of DC-QKE. We formalize this via a reduction, sketched as follows. Let $w' = \mathsf{View}_{\mathrm{Fake}}(k', t_E, r')$ and $w = \mathsf{View}_{\mathrm{Real}}(k, t_E, r)$ denote the two views. Flip a coin $b$ for an attempt at denial: if $b = 0$, then $t_E = (t', \emptyset)$, else ($b = 1$) $t_E = (t', t_{cov})$, where $t_{cov}$ and $t'$ denote the transcripts of covert and non-covert exchanges from $\Pi^{cov}_{r,G}$ and $\Pi_{r'}$. Now, if DC-QKE is constructed from $\Pi^{cov}$, then given an efficient adversary E that can distinguish $w$ from $w'$ with probability $p_1$, we can use E to construct an efficient distinguisher D that breaks the security of $\Pi^{cov}_{r,G}$.

Deniable QKE via Entanglement Distillation and Teleportation
We now argue why performing randomness distillation at the quantum level, thus requiring quantum computation, plays an important role w.r.t. deniability. The subtleties alluded to in [3] arise from the fact that randomness distillation is performed in the classical post-processing step. This allows Eve to leverage her tampering in that she can verify the parties' claims against her decoy states. However, this attack can be countered by removing Eve's knowledge before the classical exchanges begin. Most security proofs of QKE [22,28,23] are based on a reduction to an entanglement-based variant, such that the fidelity of Alice and Bob's final state with $|\Psi^{+}\rangle^{\otimes m}$ is shown to be exponentially close to 1. Moreover, secret key distillation techniques involving ED and quantum teleportation [7,14] can be used to faithfully transfer qubits from A to B by consuming ebits. To illustrate the relevance of distillation for deniability in QKE, consider the generalized template shown in Protocol 5, based on these well-known techniques.

Protocol 5 Template for deniable QKE via entanglement distillation and teleportation
1: A and B share $n$ noisy entangled pairs (assume i.i.d. states for simplicity).
2: They perform entanglement distillation to convert them into a state $\rho$ such that $F(|\Psi^{+}\rangle^{\otimes m}, \rho)$ is arbitrarily close to 1, where $m < n$.
3: They perform verification to make sure they share $m$ maximally entangled states $|\Psi^{+}\rangle^{\otimes m}$, and abort otherwise.
4: A prepares $m$ qubits (e.g. BB84 states) and performs quantum teleportation to send them to B, at the cost of consuming $m$ ebits and exchanging $2m$ classical bits.
5: A and B proceed with standard classical distillation techniques to agree on a key based on their measurements.

By performing ED, Alice and Bob make sure that the resulting state cannot be correlated with anything else due to the monogamy of entanglement (see e.g. [21,30]), thus factoring out Eve's system. The parties can open their records for steps (2) and (3) honestly, and open to arbitrary classical inputs for steps (3), (4) and (5): deniability follows from decoupling Eve's system, meaning that she is faced with a reduced density matrix of a pure bipartite maximally entangled state, i.e., a maximally mixed state $\rho_E = \mathbb{I}/2$, thus obtaining key equivocation. In terms of the hierarchy of entanglement-based constructions mentioned in [3], this approach mainly constitutes a generalization of such schemes. It should therefore be viewed more as a step towards a theoretical characterization of entanglement-based schemes for achieving information-theoretic deniability. Due to lack of space, we omit a discussion of how techniques from device-independent cryptography can deal with maliciously prepared initial states.

Going beyond QKE, note that quantum teleportation allows the transfer of an unknown quantum state, meaning that even the sender would be oblivious as to what state is sent. Moreover, ebits can enable uniquely quantum tasks such as traceless exchange in the context of quantum anonymous transmission [12], to achieve incoercible protocols that allow parties to deny to any random input. Studying the deniability of public-key authenticated QKE, both in our model and in the simulation paradigm, and the existence of an equivalence relation between our indistinguishability-based definition and a simulation-based one, would be a natural continuation of this work. Other lines of inquiry include forward deniability, deniable QKE in conjunction with forward secrecy, deniability using covert communication in stronger adversarial models, a further analysis of the relation between the impossibility of unconditional quantum bit commitment and deniability mentioned in [3], and deniable QKE via uncloneable encryption. Finally, gaining a better understanding of entanglement distillation w.r.t. potential pitfalls in various adversarial settings, and proposing concrete deniable protocols for QKE and other tasks beyond key exchange, represent further research avenues.
6,580
1812.02245
2952049456
Pass @cite_5 formally defines the notion of deniable zero-knowledge and presents positive and negative results in the common reference string and random oracle models. In @cite_0 , the authors establish a link between deniability and ideal authentication and further model a situation in which deniability should hold even when a corrupted party colludes with the adversary during the execution of a protocol. They show an impossibility result in the PKI model if adaptive corruptions are allowed. Cremers and Feltz introduced another variant for key exchange, referred to as peer-and-time deniability @cite_1 , while also capturing perfect forward secrecy. More recently, Unger and Goldberg studied deniable authenticated key exchange (DAKE) in the context of secure messaging @cite_4 .
{ "abstract": [ "", "We revisit the definitions of zero-knowledge in the Common Reference String (CRS) model and the Random Oracle (RO) model. We argue that even though these definitions syntactically mimic the standard zero-knowledge definition, they loose some of its spirit. In particular, we show that there exist a specific natural security property that is not captured by these definitions. This is the property of deniability. We formally define the notion of deniable zero-knowledge in these models and investigate the possibility of achieving it. Our results are different for the two models:", "Traditionally, secure one-round key exchange protocols in the PKI setting have either achieved perfect forward secrecy, or forms of deniability, but not both. On the one hand, achieving perfect forward secrecy against active attackers seems to require some form of authentication of the messages, as in signed Diffie-Hellman style protocols, that subsequently sacrifice deniability. On the other hand, using implicit authentication along the lines of MQV and descendants sacrifices perfect forward secrecy in one round and achieves only weak perfect forward secrecy instead. We show that by reintroducing signatures, it is possible to satisfy both a very strong key-exchange security notion, which we call eCK-PFS, as well as a strong form of deniability, in one-round key exchange protocols. Our security notion for key exchange is stronger than, e.g., the extended-CK model, and captures perfect forward secrecy. Our notion of deniability, which we call peer-and-time deniability, is stronger than that offered by, e.g., the SIGMA protocol. We propose a concrete protocol and prove that it satisfies our definition of key-exchange security in the random oracle model as well as peer-and-time deniability. The protocol combines a signed-Diffie-Hellman message exchange with an MQV-style key computation, and offers a remarkable combination of advanced security properties.", "In the wake of recent revelations of mass government surveillance, secure messaging protocols have come under renewed scrutiny. A widespread weakness of existing solutions is the lack of strong deniability properties that allow users to plausibly deny sending messages or participating in conversations if the security of their communications is later compromised. Deniable authenticated key exchanges (DAKEs), the cryptographic protocols responsible for providing deniability in secure messaging applications, cannot currently provide all desirable properties simultaneously. We introduce two new DAKEs with provable security and deniability properties in the Generalized Universal Composability framework. Our primary contribution is the introduction of Spawn, the first non-interactive DAKE that offers forward secrecy and achieves deniability against both offline and online judges; Spawn can be used to improve the deniability properties of the popular TextSecure secure messaging application. We also introduce an interactive dual-receiver cryptosystem that can improve the performance of the only existing interactive DAKE with competitive security properties. To encourage adoption, we implement and evaluate the performance of our schemes while relying solely on standard-model assumptions." ], "cite_N": [ "@cite_0", "@cite_5", "@cite_1", "@cite_4" ], "mid": [ "", "1508362310", "110317680", "2063578896" ] }
Deniability represents a fundamental privacy-related notion in cryptography. The ability to deny a message or an action is a desired property in many contexts such as off-the-record communication, anonymous reporting, whistle-blowing and coercionresistant secure electronic voting. The concept of non-repudiation is closely related to deniability in that the former is aimed at associating specific actions with legitimate parties and thereby preventing them from denying that they have performed a certain task, whereas the latter achieves the opposite property by allowing legitimate parties to deny having performed a particular action. For this reason, deniability is sometimes referred to as repudiability. The definitions and requirements for deniable exchange can vary depending on the cryptographic task in question, e.g., encryption, authentication or key exchange. Roughly speaking, the common underlying idea for a deniable scheme can be understood as the impossibility for an adversary to produce cryptographic proofs, using only algorithmic evidence, that would allow a third-party, often referred to as a judge, to decide if a particular entity has either taken part in a given exchange or exchanged a certain message, which can be a secret key, a digital signature, or a plaintext message. In the context of key exchange, this can be also formulated in terms of a corrupt party (receiver) proving to a judge that a message can be traced back to the other party [16]. In the public-key setting, an immediate challenge for achieving deniability is posed by the need for remote authentication as it typically gives rise to binding evidence, e.g., digital signatures, see [16,17]. The formal analysis of deniability in classical cryptography can be traced back to the original works of Canetti et al. and Dwork et al. on deniable encryption [11] and deniable authentication [18], respectively. These led to a series of papers on this topic covering a relatively wide array of applications. Deniable key exchange was first formalized by Di Raimondo et al. in [16] using a framework based on the simulation paradigm, which is closely related to that of zero-knowledge proofs. Despite being a well-known and fundamental concept in classical cryptography, rather surprisingly, deniability has been largely ignored by the quantum cryptography community. To put things into perspective, with the exception of a single paper by Donald Beaver [3], and a footnote in [20] commenting on the former, there are no other works that directly tackle deniable QKE. In the adversarial setting described in [3], it is assumed that the honest parties are approached by the adversary after the termination of a QKE session and demanded to reveal their private randomness, i.e., the raw key bits encoded in their quantum states. It is then claimed that QKE schemes, despite having perfect and unconditional security, are not necessarily deniable due to an eavesdropping attack. In the case of the BB84 protocol, this attack introduces a binding between the parties' inputs and the final key, thus constraining the space of the final secret key such that key equivocation is no longer possible. Note that since Beaver's work [3] appeared a few years before a formal analysis of deniability for key exchange was published, its analysis is partly based on the adversarial model formulated earlier in [11] for deniable encryption. 
For this reason, the setting corresponds more closely to scenarios wherein the honest parties try to deceive a coercer by presenting fake messages and randomness, e.g., deceiving a coercer who tries to verify a voter's claimed choice using an intercepted ciphertext of a ballot in the context of secure e-voting.

Contributions and Structure

In Section 3 we revisit the notion of deniability in QKE and provide more insight into the eavesdropping attack aimed at detecting attempts at denial described in [3]. Having shed light on the nature of this attack, we show that while coercer-deniability can be achieved by uncloneable encryption (UE) [19], QKE obtained from UE remains vulnerable to the same attack. We briefly elaborate on the differences between our model and simulation-based deniability [16]. To provide a firm foundation, we adopt the framework and security model for quantum authenticated key exchange (Q-AKE) developed by Mosca et al. [24] and extend them to introduce the notion of coercer-deniable QKE, which we formalize in terms of the indistinguishability of real and fake coercer views.

We establish a connection between the concept of covert communication and deniability in Section 4, which to the best of our knowledge has not been formally considered before. More precisely, we apply results from a recent work by Arrazola and Scarani on obtaining covert quantum communication and covert QKE via noise injection [1] to propose DC-QKE, a simple construction for coercer-deniable QKE. We prove the deniability of DC-QKE via a reduction to the security of covert QKE. Compared to the candidate PQECC protocol suggested in [3] that is claimed to be deniable, our construction does not require quantum computation and falls within the more practical realm of prepare-and-measure protocols. Finally, in Section 5 we consider how quantum entanglement distillation can be used not only to counter eavesdropping attacks, but also to achieve information-theoretic deniability. We conclude by presenting some open questions in Section 6. It is our hope that this work will rekindle interest, more broadly, in the notion of deniable communication in the quantum setting, a topic that has received very little attention from the quantum cryptography community.

Preliminaries in Quantum Information and QKE

We use the Dirac bra-ket notation and standard terminology from quantum computing. Here we limit ourselves to a description of the most relevant concepts in quantum information theory; more details can be found in standard textbooks [25,32]. For brevity, let A and B denote the honest parties, and E the adversary. Given an orthonormal basis formed by $|0\rangle$ and $|1\rangle$ in a two-dimensional complex Hilbert space $\mathcal{H}_2$, let $(+) \equiv \{|0\rangle, |1\rangle\}$ denote the computational basis and $(\times) \equiv \{\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle), \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\}$ the diagonal basis. If the state vector of a composite system cannot be expressed as a tensor product $|\psi_1\rangle \otimes |\psi_2\rangle$, the state of each subsystem cannot be described independently and we say the two qubits are entangled. This property is best exemplified by maximally entangled qubits (ebits), the so-called Bell states

$$|\Phi^\pm\rangle_{AB} = \frac{1}{\sqrt{2}}(|00\rangle_{AB} \pm |11\rangle_{AB}), \qquad |\Psi^\pm\rangle_{AB} = \frac{1}{\sqrt{2}}(|01\rangle_{AB} \pm |10\rangle_{AB}).$$

A noisy qubit that cannot be expressed as a linear superposition of pure states is said to be in a mixed state, a classical probability distribution of pure states: $\{p_X(x), |\psi_x\rangle\}_{x \in X}$. The density operator $\rho$, defined as a weighted sum of projectors, captures both pure and mixed states: $\rho \equiv \sum_{x \in X} p_X(x) |\psi_x\rangle\langle\psi_x|$. Given a density matrix $\rho_{AB}$ describing the joint state of a system held by A and B, the partial trace allows us to compute the local state of A (density operator $\rho_A$) if B's system is not accessible to A. To obtain $\rho_A$ from $\rho_{AB}$ (the reduced state of $\rho_{AB}$ on A), we trace out the system B: $\rho_A = \mathrm{Tr}_B(\rho_{AB})$. As a distance measure, we use the expected fidelity $F(|\psi\rangle, \rho)$ between a pure state $|\psi\rangle$ and a mixed state $\rho$, given by $F(|\psi\rangle, \rho) = \langle\psi| \rho |\psi\rangle$. A crucial distinction between quantum and classical information is captured by the well-known No-Cloning theorem [33], which states that an arbitrary unknown quantum state cannot be copied or cloned perfectly.
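As a quick illustration of these preliminaries (and of the maximally mixed reduced states that reappear in Section 5), the following NumPy snippet builds an ebit, traces out B, and evaluates a fidelity. This is a toy sketch of ours, not part of the original paper; all names are illustrative.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Bell state |Phi+>_AB = (|00> + |11>) / sqrt(2)
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_ab = np.outer(phi_plus, phi_plus.conj())

# Reduced state of A: trace out B using the (a, b, a', b') index layout.
rho_a = np.trace(rho_ab.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_a)  # I/2: each half of an ebit, viewed alone, is maximally mixed

# Expected fidelity F(|psi>, rho) = <psi| rho |psi>
fidelity = phi_plus.conj() @ rho_ab @ phi_plus
print(round(float(fidelity.real), 6))  # 1.0
```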
Quantum Key Exchange and Uncloneable Encryption

QKE allows two parties to establish a common secret key with information-theoretic security using an insecure quantum channel, and a public authenticated classical channel. In Protocol 1 we describe the BB84 protocol, the most well-known QKE variant due to Bennett and Brassard [5]. For consistency with related works, we use the well-established formalism based on error-correcting codes, developed by Shor and Preskill [28]. Let $C_1[n, k_1]$ and $C_2[n, k_2]$ be two classical linear binary codes encoding $k_1$ and $k_2$ bits in $n$ bits such that $\{0\} \subset C_2 \subset C_1 \subset \mathbb{F}_2^n$, where $\mathbb{F}_2^n$ is the binary vector space on $n$ bits. A mapping of vectors $v \in C_1$ to a set of basis states (codewords) for the Calderbank-Shor-Steane (CSS) [10,29] code subspace is given by $v \mapsto \frac{1}{\sqrt{|C_2|}} \sum_{w \in C_2} |v + w\rangle$. Due to the irrelevance of phase errors and their decoupling from bit flips in CSS codes, Alice can send $|v\rangle$ along with classical error-correction information $u + v$, where $u, v \in \mathbb{F}_2^n$ and $u \in C_1$, such that Bob can decode to a codeword in $C_1$ from $(v + \epsilon) - (u + v)$, where $\epsilon$ is an error codeword, with the final key being the coset leader of $u + C_2$.

Protocol 1 BB84 for an n-bit key with protection against $\delta n$ bit errors
1: Alice generates two random bit strings $a, b \in \{0,1\}^{(4+\delta)n}$, encodes $a_i$ into $|\psi_i\rangle$ in basis $(+)$ if $b_i = 0$ and in $(\times)$ otherwise, and $\forall i \in [1, |a|]$ sends $|\psi_i\rangle$ to Bob.
2: Bob generates a random bit string $b' \in \{0,1\}^{(4+\delta)n}$ and upon receiving the qubits, measures $|\psi_i\rangle$ in $(+)$ or $(\times)$ according to $b'_i$ to obtain $a'_i$.
3: Alice announces $b$ and Bob discards $a'_i$ where $b_i \neq b'_i$, ending up with at least $2n$ bits with high probability.
4: Alice picks a set $p$ of $2n$ bits at random from $a$, and a set $q$ containing $n$ elements of $p$ chosen as check bits at random. Let $v = p \setminus q$.
5: Alice and Bob compare their check bits and abort if the error exceeds a predefined threshold.
6: Alice announces $u + v$, where $v$ is the string of the remaining non-check bits, and $u$ is a random codeword in $C_1$.
7: Bob subtracts $u + v$ from his code qubits, $v + \epsilon$, and corrects the result, $u + \epsilon$, to a codeword in $C_1$.
8: Alice and Bob use the coset of $u + C_2$ as their final secret key of length $n$.
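To make the sift-and-check phases of Protocol 1 concrete, the following minimal Python sketch simulates an idealized, noiseless BB84 run at the bit level, with no eavesdropper and no error correction. It is our own illustration; all function and variable names are ours, not from the original paper.

```python
import random

def bb84_sift(num_qubits: int, check_fraction: float = 0.5):
    """Idealized, noiseless BB84 sifting: returns (key bits, check bits)."""
    # Alice's random raw bits and basis choices (0 = computational, 1 = diagonal).
    a = [random.randint(0, 1) for _ in range(num_qubits)]
    b = [random.randint(0, 1) for _ in range(num_qubits)]
    # Bob's random measurement bases.
    b_prime = [random.randint(0, 1) for _ in range(num_qubits)]
    # Without noise or an eavesdropper, Bob's outcome equals Alice's bit
    # whenever the bases match; otherwise his outcome is uniformly random.
    a_prime = [a[i] if b[i] == b_prime[i] else random.randint(0, 1)
               for i in range(num_qubits)]
    # Sifting: keep only positions where the bases agree (about half of them).
    sifted = [a_prime[i] for i in range(num_qubits) if b[i] == b_prime[i]]
    # Split off a random subset as check bits for error estimation.
    n_check = int(len(sifted) * check_fraction)
    check_positions = set(random.sample(range(len(sifted)), n_check))
    check_bits = [sifted[i] for i in check_positions]
    key_bits = [sifted[i] for i in range(len(sifted)) if i not in check_positions]
    return key_bits, check_bits

key, checks = bb84_sift(1024)
print(f"raw key length after sifting and checks: {len(key)}")
```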
Uncloneable encryption (UE) enables transmission of ciphertexts that cannot be perfectly copied and stored for later decoding, by encoding carefully prepared codewords into quantum states, thereby leveraging the No-Cloning theorem. We refer to Gottesman's original work [19] for a detailed explanation of the sketch in Protocol 2. Alice and Bob agree on a message length $n$, a Message Authentication Code (MAC) of length $s$, an error-correcting code $C_1$ having message length $K$ and codeword length $N$ with distance $2\delta N$ for average error rate $\delta$, and another error-correcting code $C_2$ (for privacy amplification) with message length $K'$ and codeword length $N$ and distance $2(\delta + \eta)N$ to correct more errors than $C_1$, satisfying $C_2^\perp \subset C_1$, where $C_2^\perp$ is the dual code containing all vectors orthogonal to $C_2$. The pre-shared key is broken down into four pieces, all chosen uniformly at random: an authentication key $k \in \{0,1\}^s$, a one-time pad $e \in \{0,1\}^{n+s}$, a syndrome $c_1 \in \{0,1\}^{N-K}$, and a basis sequence $b \in \{0,1\}^N$.

Protocol 2 Uncloneable Encryption for sending a message $m \in \{0,1\}^n$
1: Compute $\mathrm{MAC}_k(m) = \mu \in \{0,1\}^s$. Let $x = m\|\mu \in \{0,1\}^{n+s}$.
2: Mask $x$ with the one-time pad $e$ to obtain $y = x \oplus e$.
3: From the coset of $C_1$ given by the syndrome $c_1$, pick a random codeword $z \in \{0,1\}^N$ that has syndrome bits $y$ w.r.t. $C_2^\perp$, where $C_2^\perp \subset C_1$.
4: For $i \in [1, N]$, encode ciphertext bit $z_i$ in the basis $(+)$ if $b_i = 0$ and in the basis $(\times)$ if $b_i = 1$. The resulting state $|\psi_i\rangle$ is sent to Bob.

To perform decryption:
1: For $i \in [1, N]$, measure $|\psi'_i\rangle$ according to $b_i$ to obtain $z'_i \in \{0,1\}^N$.
2: Perform error-correction on $z'$ using code $C_1$ and evaluate the parity checks of $C_2/C_1^\perp$ for privacy amplification to get an $(n+s)$-bit string $y'$.
3: Invert the OTP step to obtain $x' = y' \oplus e$.
4: Parse $x'$ as the concatenation $m'\|\mu'$ and use $k$ to verify if $\mathrm{MAC}_k(m') = \mu'$.

QKE from UE. It is known [19] that any quantum authentication (QA) scheme can be used as a secure UE scheme, which can in turn be used to obtain QKE, with less interaction and more efficient error detection. We give a brief description of how QKE can be obtained from UE in Protocol 3.

Protocol 3 Obtaining QKE from Uncloneable Encryption
1: Alice generates random strings $k$ and $x$, and sends $x$ to Bob via UE, keyed with $k$.
2: Bob announces that he has received the message, and then Alice announces $k$.
3: Bob decodes the classical message $x$, and upon MAC verification, if the message is valid, he announces this to Alice and they will use $x$ as their secret key.

Coercer-Deniable Quantum Key Exchange

Following the setting in [3], in which it is implicitly assumed that the adversary has established a binding between the participants' identities and a given QKE session, we introduce the notion of coercer-deniability for QKE. This makes it possible to consider an adversarial setting similar to that of deniable encryption [11] and expect that the parties might be coerced into revealing their private coins after the termination of a session, in which case they would have to produce fake randomness such that the resulting transcript and the claimed values remain consistent with the adversary's observations. Beaver's analysis [3] is briefly addressed in a footnote in a paper by Ioannou and Mosca [20] and the issue is brushed aside based on the argument that the parties do not have to keep records of their raw key bits. It is argued that for deniability to be satisfied, it is sufficient that the adversary cannot provide binding evidence that attributes a particular key to the classical communication, as their measurements on the quantum channel do not constitute a publicly verifiable proof.
However, counterarguments for this view were already raised in the motivations for deniable encryption [11], in terms of secure erasure being difficult and unreliable, and of the fact that erasure cannot be externally verified. Moreover, it is also argued that if one were to make the physical security assumption that random choices made for encryption are physically unavailable, the deniability problem would disappear. We refer to [11] and references therein for more details.

Bindings, or lack thereof, lie at the core of deniability. Although we leave a formal comparison of our model with the one formulated in the simulation paradigm [16] as future work, a notable difference can be expressed in terms of the inputs presented to the adversary. In the simulation paradigm, deniability is modelled only according to the simulatability of the legal transcript that the adversary or a corrupt party produces naturally via a session with a party as evidence for the judge, whereas for coercer-deniability, the adversary additionally demands that the honest parties reveal their private randomness. Finally, note that viewing deniability in terms of "convincing" the adversary is bound to be problematic, and indeed a source of debate in the cryptographic research community, as the adversary may never be convinced given their knowledge of the existence of faking algorithms. Hence, deniability is formulated in terms of the indistinguishability of views (or their simulatability [16]), such that a judge would have no reason to believe that a given transcript provided by the adversary establishes a binding, as it could have been forged or simulated.

Defeating Deniability in QKE via Eavesdropping in a Nutshell

We briefly review the eavesdropping attack described in [3] and provide further insight. Suppose Alice sends qubit $|\psi_{m,b}\rangle$ to Bob, which encodes a single-bit message $m$ prepared in a basis determined by $b \in \{+, \times\}$. Let $\Phi(E, m)$ denote the state obtained after sending $|\psi_{m,b}\rangle$, relayed and possibly modified by an adversary E. Moreover, let $\rho(E, m)$ denote the view presented to the judge, obtained by tracing over inaccessible systems. Now, for a qubit measured correctly by Eve, if a party tries to deny by pretending to have sent $\sigma_1 = \rho(E, 1)$ instead of $\sigma_2 = \rho(E, 0)$, e.g., by using some local transformation $U_{neg}$ to simply negate a given qubit, then $F(\sigma_1, \sigma_2) = 0$, where $F$ denotes the fidelity between $\sigma_1$ and $\sigma_2$. Thus, the judge can successfully detect this attempt at denial.

This attack can be mounted successfully with non-negligible probability without causing the session to abort. Assume that $N$ qubits will be transmitted in a BB84 session and that the tolerable error rate is $\eta/N$, where clearly $\eta \sim N$. Eve measures each qubit with probability $\eta/N$ (choosing a basis at random) and passes on the remaining ones to Bob undisturbed, i.e., she plants a number of decoy states proportional to the tolerated error threshold. On average, $\eta/2$ measurements will come from matching bases, which can be used by Eve to detect attempts at denial, if Alice claims to have measured a different encoding. After discarding half the qubits in the sifting phase, this ratio will remain unchanged. Now Alice and/or Bob must flip at least one bit in order to deny without knowledge of where the decoy states lie in the transmitted sequence, thus getting caught with probability $\eta/(2N)$ upon flipping a bit at random.
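As a sanity check on the figure $\eta/(2N)$, the following short Python sketch (ours, with illustrative names; not from [3]) estimates the probability that a single randomly flipped bit in a denial attempt coincides with one of Eve's matching-basis decoy measurements.

```python
import random

def denial_detection_rate(N: int, eta: int, trials: int = 200_000) -> float:
    """Estimate the chance that one randomly flipped bit in a denial
    attempt hits a matching-basis decoy measurement by Eve."""
    caught = 0
    for _ in range(trials):
        measured = random.random() < eta / N   # Eve measured this position...
        matching = random.random() < 0.5       # ...in the matching basis
        caught += measured and matching
    return caught / trials

N, eta = 1000, 50  # i.e., a 5% tolerated error rate
print(f"simulated : {denial_detection_rate(N, eta):.4f}")
print(f"eta/(2N)  : {eta / (2 * N):.4f}")
```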
On the Coercer-Deniability of Uncloneable Encryption

The vulnerability described in Section 3.1 is made possible by an eavesdropping attack that induces a binding in the key coming from a BB84 session. Uncloneable encryption remains immune to this attack because the quantum encoding is done for an already one-time-padded classical input. More precisely, a binding established at the level of quantum states can still be perfectly denied because the actual raw information bits $m$ are not directly encoded into the sequence of qubits; instead, the concatenation of $m$ and the corresponding authentication tag $\mu = \mathrm{MAC}_k(m)$, i.e., $x = m\|\mu$, is masked with a one-time pad $e$ to obtain $y = x \oplus e$, which is then mapped onto a codeword $z$ that is encoded into quantum states. For this reason, in the context of coercer-deniability, regardless of a binding established on $z$ by the adversary, Alice can still deny to another input message, in that she can pick a different input $x' = m'\|\mu'$ and compute a fake pad $e' = y \oplus x'$, so that upon revealing $e'$ to Eve, she will simply decode $y \oplus e' = x'$, as intended.

However, note that a prepare-and-measure QKE obtained from UE still remains vulnerable to the same eavesdropping attack, due to the fact that we can no longer make use of the deniability of the one-time pad in UE, such that the bindings induced by Eve constrain the choice of the underlying codewords.
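The one-time-pad equivocation that underlies this denial is easy to illustrate. The following minimal Python sketch is ours: the MAC and the codeword encoding are elided and all names are illustrative. It shows that, for any claimed input of the right length, a consistent fake pad can be computed from the transmitted value $y$ alone.

```python
import secrets

def xor(u: bytes, v: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(u, v))

# Stand-in for x = m || MAC_k(m); MAC and codeword encoding are elided.
x = b"real input block"
e = secrets.token_bytes(len(x))   # real one-time pad
y = xor(x, e)                     # the value bound to the quantum encoding

# Under coercion, Alice claims x' and reveals the fake pad e' = y XOR x'.
x_fake = b"fake input block"
e_fake = xor(y, x_fake)

# Decoding y with the revealed pad yields exactly the claimed input.
assert xor(y, e_fake) == x_fake
```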
Security Model

We adopt the framework for quantum AKEs developed by Mosca et al. [24]. Due to space constraints, we mainly focus on our proposed extensions. Parties, including the adversary, are modelled as a pair of classical and quantum Turing machines (TM) that execute a series of interactive computations and exchange messages with each other through classical and quantum channels, collectively referred to as a protocol. An execution of a protocol is referred to as a session, identified with a unique session identifier. An ongoing session is called an active session, and upon completion, it either outputs an error term $\perp$ in case of an abort, or it outputs a tuple $(sk, pid, \mathbf{v}, \mathbf{u})$ in case of a successful termination. The tuple consists of a session key $sk$, a party identifier $pid$ and two vectors $\mathbf{u}$ and $\mathbf{v}$ that model public values and secret terms, respectively.

We adopt an extended version of the adversarial model described in [24], to account for coercer-deniability. Let E be an efficient, i.e. (quantum) polynomial-time, adversary with classical and quantum runtime bounds $t_c(k)$ and $t_q(k)$, and quantum memory bound $m_q(k)$, where bounds can be unlimited. Following standard assumptions, the adversary controls all communication between parties and carries the messages exchanged between them. We consider an authenticated classical channel and do not impose any special restrictions otherwise. Additionally, the adversary is allowed to approach either the sender or the receiver after the termination of a session and request access to a subset $r \subseteq \mathbf{v}$ of the private randomness used by the parties for a given session, i.e. the set of values to be faked. Security notions can be formulated in terms of security experiments in which the adversary interacts with the parties via a set of well-defined queries. These queries typically involve sending messages to an active session or initiating one, corrupting a party, learning their long-term secret key, revealing the ephemeral keys of an incomplete session, obtaining the computed session key for a given session, and a test-session(id) query capturing the winning condition of the game, which can be invoked only for a fresh session. Revealing secret values to the adversary is modeled via partnering. The notion of freshness captures the idea of excluding cases that would allow the adversary to trivially win the security experiment. This is done by imposing minimal restrictions on the set of queries the adversary can invoke for a given session, such that there exist protocols that can still satisfy the definition of session-key security. A session remains fresh as long as at least one element in $\mathbf{u}$ and $\mathbf{v}$ remains secret; see [24] for more details.

The transcript of a protocol consists of all publicly exchanged messages between the parties during a run or session of the protocol. The definition of "views" and "outputs" given in [3] coincides with that of transcripts in [16] in the sense that it allows us to model a transcript that can be obtained from observations made on the quantum channel. The view of a party P consists of their state in $\mathcal{H}_P$ along with any classical strings they produce or observe. More generally, for a two-party protocol, captured by the global density matrix $\rho_{AB}$ for the systems of A and B, the individual system A corresponds to a partial trace that yields a reduced density matrix, i.e., $\rho_A = \mathrm{Tr}_B(\rho_{AB})$, with a similar approach for any additional couplings.

Coercer-Deniable QKE via View Indistinguishability

We use the security model in Section 3.3 to introduce the notion of coercer-deniable QKE, formalized via the indistinguishability of real and fake views. Note that in this work we do not account for forward deniability and forward secrecy.

Coercer-Deniability Security Experiment. Let $\mathsf{CoercerDenQKE}^{\Pi}_{E,C}(\kappa)$ denote this experiment and $Q$ the same set of queries available to the adversary in a security game for session-key security, as described in Section 3.3 and [24]. Clearly, in addition to deniability, it is vital that the security of the session key remains intact as well. For this reason, we simply extend the requirements of the security game for a session-key secure KE by having the challenger C provide an additional piece of information to the adversary E when the latter calls the test-session() query. This means that the definition of a fresh session remains the same as the one given in [24]. E invokes queries from $Q \setminus \{\text{test-session()}\}$ until E issues test-session() to a fresh session of their choice. C decides on a random bit $b$; if $b = 0$, C provides E with the real session key $k$ and the real vector of private randomness $r$, and if $b = 1$, with a random (fake) key $k'$ and a random (fake) vector of private randomness $r'$. Finally, E guesses an output $b'$ and wins the game if $b = b'$. The experiment returns 1 if E succeeds, and 0 otherwise. Let $\mathrm{Adv}^{\Pi}_E(\kappa) = |\Pr[b = b'] - 1/2|$ denote the winning advantage of E.

Definition 1 (Coercer-Deniable QKE). For adversary E, let there be an efficient distinguisher $D_E$ on security parameter $\kappa$. We say that $\Pi_r$ is a coercer-deniable QKE protocol if, for any adversary E, transcript $t$, and for any $k$, $k'$, and a vector of private random inputs $r = (r_1, \ldots, r_\ell)$, there exists a denial/faking program $F_{A,B}$ that, running on $(k, k', t, r)$, produces $r' = (r'_1, \ldots, r'_\ell)$ such that the following conditions hold:
- $\Pi$ is a secure QKE protocol.
- The adversary E cannot do better than making a random guess for winning the coercer-deniability security experiment, i.e., $\mathrm{Adv}^{\Pi}_E(\kappa) \le \mathrm{negl}(\kappa)$, equivalently $\Pr[\mathsf{CoercerDenQKE}^{\Pi}_{E,C}(\kappa) = 1] \le \frac{1}{2} + \mathrm{negl}(\kappa)$.

Equivalently, we require that for all efficient distinguishers $D_E$:

$$|\Pr[D_E(\mathrm{View}_{Real}(k, t, r)) = 1] - \Pr[D_E(\mathrm{View}_{Fake}(k', t, r')) = 1]| \le \mathrm{negl}(\kappa),$$

where the transcript $t = (c, \rho_E(k))$ is a tuple consisting of a vector $c$, containing the classical message exchanges of a session, along with the local view of the adversary w.r.t. the quantum channel, obtained by tracing over inaccessible systems (see Section 3.3). A function $f: \mathbb{N} \to \mathbb{R}$ is negligible if for any constant $k$ there exists an $N_k$ such that $\forall N \ge N_k$ we have $f(N) < N^{-k}$. In other words, it approaches zero faster than any polynomial in the asymptotic limit.

Remark 1. We introduced a vector of private random inputs $r$ to avoid being restricted to a specific set of "fake coins" in a coercer-deniable setting, such as the raw key bits in BB84 as used in Beaver's analysis. This allows us to include other private inputs as part of the transcript that need to be forged by the denying parties, without having to provide a new security model for each variant. Indeed, in [24], Mosca et al. consider the security of QKE in case various secret values are compromised before or after a session. This means that these values can, in principle, be included in the set of random coins that might have to be revealed to the adversary, and it should therefore be possible to generate fake alternatives using a faking algorithm.

Deniable QKE via Covert Quantum Communication

We establish a connection between covert communication and deniability by providing a simple construction for coercer-deniable QKE using covert QKE. We then show that deniability is reduced to the covertness property, meaning that deniable QKE can be performed as long as covert QKE is not broken by the adversary, formalized via the security reduction given in Theorem 2.

Covert communication becomes relevant when parties wish to keep the very act of communicating secret or hidden from a malicious warden. This can be motivated by various requirements, such as the need for hiding one's communication with a particular entity when this act alone can be incriminating. While encryption can make it impossible for the adversary to access the contents of a message, it would not prevent them from detecting exchanges over a channel under their observation. Bash et al. [2,27] established a square-root law for covert communication in the presence of an unbounded quantum adversary, stating that $O(\sqrt{n})$ covert bits can be exchanged over $n$ channel uses. Recently, Arrazola and Scarani [1] extended covert communication to the quantum regime for transmitting qubits covertly. Covert quantum communication consists of two parties exchanging a sequence of qubits such that an adversary trying to detect this cannot succeed by doing better than making a random guess, i.e., $P_d \le \frac{1}{2} + \epsilon$ for sufficiently small $\epsilon > 0$, where $P_d$ denotes the probability of detection and $\epsilon$ the detection bias.

Covert Quantum Key Exchange

Since covert communication requires pre-shared secret randomness, a natural question to ask is whether QKE can be done covertly. This was also addressed in [1], and it was shown that covert QKE with unconditional security for the covertness property is impossible, because the amount of key consumed is greater than the amount produced.
However, a hybrid approach involving pseudo-random number generators (PRNGs) was proposed to achieve covert QKE with a positive key rate, such that the resulting secret key remains information-theoretically secure, while the covertness of QKE is shown to be at least as strong as the security of the PRNG. The PRNG is used to expand a truly random pre-shared key into an exponentially larger pseudorandom output, which is then used to determine the time-bins for sending signals in covert QKE.

Covert QKE Security Experiment. Let $\mathsf{CovertQKE}^{\Pi_{cov}}_{E,C}(\kappa)$ denote the security experiment. The main property of covert QKE, denoted by $\Pi_{cov}$, can be expressed as a game played by the adversary E against a challenger C, who decides on a random bit $b$: if $b = 0$, C runs $\Pi_{cov}$, otherwise (if $b = 1$), C does not run $\Pi_{cov}$. Finally, E guesses a random bit $b'$ and wins the game if $b = b'$. The experiment outputs 1 if E succeeds, and 0 otherwise. The winning advantage of E is given by $\mathrm{Adv}^{\Pi_{cov}}_E(\kappa) = |\Pr[b = b'] - 1/2|$, and we require that

$$\mathrm{Adv}^{\Pi_{cov}}_E(\kappa) \le \mathrm{negl}(\kappa), \quad \text{i.e.,} \quad \Pr[\mathsf{CovertQKE}^{\Pi_{cov}}_{E,C}(\kappa) = 1] \le \frac{1}{2} + \mathrm{negl}(\kappa).$$

We say that $\Pi^{cov}_G$ is a covert QKE protocol if the following conditions hold:
- $\Pi^{cov}_G$ is a secure QKE protocol.
- The probability that E guesses the bit $b$ correctly ($b' = b$), i.e., that E manages to distinguish between Alice and Bob running $\Pi^{cov}_G$ or not, is no more than $\frac{1}{2}$ plus a negligible function in the security parameter $\kappa$.

Theorem 1 (from [1]). The secret key obtained from the covert QKE protocol $\Pi^{cov}_G$ is information-theoretically secure, and the covertness of $\Pi^{cov}_G$ is as secure as the underlying PRNG.

Deniable Covert Quantum Key Exchange (DC-QKE)

We are now in a position to describe DC-QKE, a simple construction shown in Protocol 4, which preserves unconditional security for the final secret key, while its deniability is as secure as the underlying PRNG used in $\Pi^{cov}_{r,G}$. In terms of Security Experiment 3.4, $\Pi^{cov}_{r,G}$ is run to establish a real key $k$, while non-covert QKE $\Pi_{r'}$ is used to produce a fake key $k'$ aimed at achieving deniability, where $r$ and $r'$ are the respective vectors of real and fake private inputs. Operationally, consider a setting wherein the parties suspect in advance that they might be coerced into revealing their private coins for a given run: their joint strategy consists of running both components in Protocol 4 and claiming to have employed $\Pi_{r'}$ to establish the fake key $k'$ using the fake private randomness $r'$ (e.g. raw key bits in BB84), and providing these as input to the adversary upon termination of a session. Thus, for Eve to be able to produce a proof showing that the revealed values are fake, she would have to break the security of covert QKE to detect the presence of $\Pi^{cov}_{r,G}$, as shown in Theorem 2. Moreover, note that covert communication can be used for dynamically agreeing on a joint strategy for denial, further highlighting its relevance for deniability.

Remark 2. The original analysis in [3] describes an attack based solely on revealing fake raw key bits that may be inconsistent with the adversary's observations. An advantage of DC-QKE in this regard is that Alice's strategy for achieving coercer-deniability consists of revealing all the secret values of the non-covert QKE $\Pi_{r'}$ honestly. This allows her to cover the full range of private randomness that could be considered in different variants of deniability, as discussed in Remark 1. A potential drawback is the extra cost induced by $F_{A,B}$, which could, in principle, be mitigated using a less interactive solution such as QKE via UE.
Remark 3. If the classical channel is authenticated by an information-theoretically secure algorithm, the minimal entropy overhead in terms of pre-shared key (logarithmic in the input size) for $\Pi$ can be generated by $\Pi^{cov}_r$.

Example 1. In the case of encryption, A can send $c = m \oplus k$ over a covert channel to B, while for denying to $m'$, she can send $c' = m' \oplus k'$ over a non-covert channel. Alternatively, she can transmit a single ciphertext over a non-covert channel such that it can be opened to two different messages. To do so, given $c = m \oplus k$, Alice computes $k' = m' \oplus c = m' \oplus m \oplus k$, and she can then either encode $k'$ as a codeword, as described in Section 2.1, and run $\Pi_{r'}$ via uncloneable encryption, thus allowing her to reveal the entire transcript to Eve honestly, or she can agree with Bob on a suitable privacy amplification (PA) function (with PA being many-to-one) as part of their denying program in order to obtain $k'$.
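A few lines of Python make the second variant of Example 1 concrete: a single one-time-pad ciphertext can be opened to two different equal-length messages by revealing different keys. This is a toy sketch of ours with illustrative names; the codeword encoding and PA steps are omitted.

```python
import secrets

def xor(u: bytes, v: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(u, v))

m = b"attack at dawn"                # real message
k = secrets.token_bytes(len(m))      # real key, e.g. established covertly
c = xor(m, k)                        # the single transmitted ciphertext

m_fake = b"cancel meeting"           # the message Alice wants to deny to
k_fake = xor(m_fake, c)              # k' = m' XOR c = m' XOR m XOR k

assert xor(c, k) == m                # real opening
assert xor(c, k_fake) == m_fake      # fake opening shown to the judge
```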
Theorem 2. If $\Pi^{cov}_{r,G}$ is a covert QKE protocol, then DC-QKE given in Protocol 4 is a coercer-deniable QKE protocol that satisfies Definition 1.

Proof. The main idea consists of showing that breaking the deniability property of DC-QKE amounts to breaking the security of covert QKE, such that coercer-deniability follows from the contrapositive of this implication, i.e., if there exists no efficient algorithm for compromising the security of covert QKE, then there exists no efficient algorithm for breaking the deniability of DC-QKE. We formalize this via a reduction, sketched as follows. Let $w' = \mathrm{View}_{Fake}(k', t_E, r')$ and $w = \mathrm{View}_{Real}(k, t_E, r)$ denote the two views. Flip a coin $b$ for an attempt at denial: if $b = 0$, then $t_E = (t', \emptyset)$, else ($b = 1$), $t_E = (t', t_{cov})$, where $t_{cov}$ and $t'$ denote the transcripts of the covert and non-covert exchanges from $\Pi^{cov}_{r,G}$ and $\Pi_{r'}$. Now, if DC-QKE is constructed from $\Pi_{cov}$, then given an efficient adversary E that can distinguish $w$ from $w'$ with probability $p_1$, we can use E to construct an efficient distinguisher D that breaks the security of $\Pi_{cov}$.

Deniable QKE via Entanglement Distillation and Teleportation

We now argue why performing randomness distillation at the quantum level, thus requiring quantum computation, plays an important role w.r.t. deniability. The subtleties alluded to in [3] arise from the fact that randomness distillation is performed in the classical post-processing step. This allows Eve to leverage her tampering, in that she can verify the parties' claims against her decoy states. However, this attack can be countered by removing Eve's knowledge before the classical exchanges begin. Most security proofs of QKE [22,28,23] are based on a reduction to an entanglement-based variant, such that the fidelity of Alice and Bob's final state with $|\Psi^+\rangle^{\otimes m}$ is shown to be exponentially close to 1. Moreover, secret key distillation techniques involving entanglement distillation (ED) and quantum teleportation [7,14] can be used to faithfully transfer qubits from A to B by consuming ebits. To illustrate the relevance of distillation for deniability in QKE, consider the generalized template shown in Protocol 5, based on these well-known techniques.

Protocol 5 Template for deniable QKE via entanglement distillation and teleportation
1: A and B share $n$ noisy entangled pairs (assume i.i.d. states for simplicity).
2: They perform entanglement distillation to convert them into a state $\rho$ such that $F(|\Psi^+\rangle^{\otimes m}, \rho)$ is arbitrarily close to 1, where $m < n$.
3: They perform verification to make sure they share $m$ maximally entangled states $|\Psi^+\rangle^{\otimes m}$, and abort otherwise.
4: A prepares $m$ qubits (e.g. BB84 states) and performs quantum teleportation to send them to B, at the cost of consuming $m$ ebits and exchanging $2m$ classical bits.
5: A and B proceed with standard classical distillation techniques to agree on a key based on their measurements.

By performing ED, Alice and Bob make sure that the resulting state cannot be correlated with anything else due to the monogamy of entanglement (see e.g. [21,30]), thus factoring out Eve's system. The parties can open their records for steps (2) and (3) honestly, and open to arbitrary classical inputs for steps (3), (4) and (5): deniability follows from decoupling Eve's system, meaning that she is faced with a reduced density matrix on a pure bipartite maximally entangled state, i.e., a maximally mixed state $\rho_E = I/2$, thus obtaining key equivocation. In terms of the hierarchy of entanglement-based constructions mentioned in [3], this approach mainly constitutes a generalization of such schemes. It should therefore be viewed more as a step towards a theoretical characterization of entanglement-based schemes for achieving information-theoretic deniability. Due to lack of space, we omit a discussion of how techniques from device-independent cryptography can deal with maliciously prepared initial states.

Going beyond QKE, note that quantum teleportation allows the transfer of an unknown quantum state, meaning that even the sender would be oblivious as to what state is sent. Moreover, ebits can enable uniquely quantum tasks such as traceless exchange in the context of quantum anonymous transmission [12], to achieve incoercible protocols that allow parties to deny to any random input.

Studying the deniability of public-key authenticated QKE, both in our model and in the simulation paradigm, and the existence of an equivalence relation between our indistinguishability-based definition and a simulation-based one, would be a natural continuation of this work. Other lines of inquiry include forward deniability, deniable QKE in conjunction with forward secrecy, deniability using covert communication in stronger adversarial models, a further analysis of the relation between the impossibility of unconditional quantum bit commitment and deniability mentioned in [3], and deniable QKE via uncloneable encryption. Finally, gaining a better understanding of entanglement distillation w.r.t. potential pitfalls in various adversarial settings, and proposing concrete deniable protocols for QKE and other tasks beyond key exchange, represent further research avenues.
6,580
1812.02245
2952049456
We revisit the notion of deniability in quantum key exchange (QKE), a topic that remains largely unexplored. In the only work on this subject by Donald Beaver, it is argued that QKE is not necessarily deniable due to an eavesdropping attack that limits key equivocation. We provide more insight into the nature of this attack and how it extends to other constructions such as QKE obtained from uncloneable encryption. We then adopt the framework for quantum authenticated key exchange, developed by Mosca et al., and extend it to introduce the notion of coercer-deniable QKE, formalized in terms of the indistinguishability of real and fake coercer views. Next, we apply results from a recent work by Arrazola and Scarani on covert quantum communication to establish a connection between covert QKE and deniability. We propose DC-QKE, a simple deniable covert QKE protocol, and prove its deniability via a reduction to the security of covert QKE. Finally, we consider how entanglement distillation can be used to enable information-theoretically deniable protocols for QKE and tasks beyond key exchange.
To the best of our knowledge, the only work related to deniability in QKE is a single paper by Beaver @cite_32, in which the author presents a negative result arguing that existing QKE schemes are not necessarily deniable.
{ "abstract": [ "We show that claims of perfect security for keys produced by quantum key exchange (QKE) are limited to privacy and integrity. Unlike a one-time pad, QKE does not necessarily enable Sender and Receiver to pretend later to have established a different key. This result is puzzling in light of Mayers' No-Go theorem showing the impossibility of quantum bit commitment. But even though a simple and intuitive application of Mayers' protocol transformation appears sufficient to provide deniability (else QBC would be possible), we show several reasons why such conclusions are ill-founded. Mayers' transformation arguments, while sound for QBC, are insufficient to establish deniability in QKE. Having shed light on several unadvertised pitfalls, we then provide a candidate deniable QKE protocol. This itself indicates further shortfalls in current proof techniques, including reductions that preserve privacy but fail to preserve deniability. In sum, purchasing undeniability with an off-the-shelf QKE protocol is significantly more expensive and dangerous than the mere optic fiber for which perfect security is advertised." ], "cite_N": [ "@cite_32" ], "mid": [ "2583450375" ] }
Revisiting Deniability in Quantum Key Exchange via Covert Communication and Entanglement Distillation
Deniability represents a fundamental privacy-related notion in cryptography. The ability to deny a message or an action is a desired property in many contexts such as off-the-record communication, anonymous reporting, whistle-blowing and coercionresistant secure electronic voting. The concept of non-repudiation is closely related to deniability in that the former is aimed at associating specific actions with legitimate parties and thereby preventing them from denying that they have performed a certain task, whereas the latter achieves the opposite property by allowing legitimate parties to deny having performed a particular action. For this reason, deniability is sometimes referred to as repudiability. The definitions and requirements for deniable exchange can vary depending on the cryptographic task in question, e.g., encryption, authentication or key exchange. Roughly speaking, the common underlying idea for a deniable scheme can be understood as the impossibility for an adversary to produce cryptographic proofs, using only algorithmic evidence, that would allow a third-party, often referred to as a judge, to decide if a particular entity has either taken part in a given exchange or exchanged a certain message, which can be a secret key, a digital signature, or a plaintext message. In the context of key exchange, this can be also formulated in terms of a corrupt party (receiver) proving to a judge that a message can be traced back to the other party [16]. In the public-key setting, an immediate challenge for achieving deniability is posed by the need for remote authentication as it typically gives rise to binding evidence, e.g., digital signatures, see [16,17]. The formal analysis of deniability in classical cryptography can be traced back to the original works of Canetti et al. and Dwork et al. on deniable encryption [11] and deniable authentication [18], respectively. These led to a series of papers on this topic covering a relatively wide array of applications. Deniable key exchange was first formalized by Di Raimondo et al. in [16] using a framework based on the simulation paradigm, which is closely related to that of zero-knowledge proofs. Despite being a well-known and fundamental concept in classical cryptography, rather surprisingly, deniability has been largely ignored by the quantum cryptography community. To put things into perspective, with the exception of a single paper by Donald Beaver [3], and a footnote in [20] commenting on the former, there are no other works that directly tackle deniable QKE. In the adversarial setting described in [3], it is assumed that the honest parties are approached by the adversary after the termination of a QKE session and demanded to reveal their private randomness, i.e., the raw key bits encoded in their quantum states. It is then claimed that QKE schemes, despite having perfect and unconditional security, are not necessarily deniable due to an eavesdropping attack. In the case of the BB84 protocol, this attack introduces a binding between the parties' inputs and the final key, thus constraining the space of the final secret key such that key equivocation is no longer possible. Note that since Beaver's work [3] appeared a few years before a formal analysis of deniability for key exchange was published, its analysis is partly based on the adversarial model formulated earlier in [11] for deniable encryption. 
For this reason, the setting corresponds more closely to scenarios wherein the honest parties try to deceive a coercer by presenting fake messages and randomness, e.g., deceiving a coercer who tries to verify a voter's claimed choice using an intercepted ciphertext of a ballot in the context of secure e-voting. Contributions and Structure In Section 3 we revisit the notion of deniability in QKE and provide more insight into the eavesdropping attack aimed at detecting attempts at denial described in [3]. Having shed light on the nature of this attack, we show that while coercer-deniability can be achieved by uncloneable encryption (UE) [19], QKE obtained from UE remains vulnerable to the same attack. We briefly elaborate on the differences between our model and simulation-based deniability [16]. To provide a firm foundation, we adopt the framework and security model for quantum authenticated key exchange (Q-AKE) developed by Mosca et al. [24] and extend them to introduce the notion of coercerdeniable QKE, which we formalize in terms of the indistinguishability of real and fake coercer views. A. Atashpendar et al. We establish a connection between the concept of covert communication and deniability in Section 4, which to the best of our knowledge has not been formally considered before. More precisely, we apply results from a recent work by Arrazola and Scarani on obtaining covert quantum communication and covert QKE via noise injection [1] to propose DC-QKE, a simple construction for coercer-deniable QKE. We prove the deniability of DC-QKE via a reduction to the security of covert QKE. Compared to the candidate PQECC protocol suggested in [3] that is claimed to be deniable, our construction does not require quantum computation and falls within the more practical realm of prepare-and-measure protocols. Finally, in Section 5 we consider how quantum entanglement distillation can be used not only to counter eavesdropping attacks, but also to achieve informationtheoretic deniability. We conclude by presenting some open questions in Section 6. It is our hope that this work will rekindle interest, more broadly, in the notion of deniable communication in the quantum setting, a topic that has received very little attention from the quantum cryptography community. Preliminaries in Quantum Information and QKE We use the Dirac bra-ket notation and standard terminology from quantum computing. Here we limit ourselves to a description of the most relevant concepts in quantum information theory. More details can be found in standard textbooks [25,32]. For brevity, let A and B denote the honest parties, and E the adversary. Given an orthonormal basis formed by |0 and |1 in a two-dimensional complex Hilbert space H 2 , let (+) ≡ {|0 , |1 } denote the computational basis and (×) ≡ {( 1 / √ 2)(|0 + |1 ), ( 1 / √ 2)(|0 − |1 )} the diagonal basis. If the state vector of a composite system cannot be expressed as a tensor product |ψ 1 ⊗ |ψ 2 , the state of each subsystem cannot be described independently and we say the two qubits are entangled. This property is best exemplified by maximally entangled qubits (ebits), the so-called Bell states Φ ± AB = 1 √ 2 (|00 AB ± |11 AB ) , Ψ ± AB = 1 √ 2 (|01 AB ± |10 AB ) A noisy qubit that cannot be expressed as a linear superposition of pure states is said to be in a mixed state, a classical probability distribution of pure states: {p X (x), |ψ x } x∈X . 
The density operator ρ, defined as a weighted sum of projectors, captures both pure and mixed states: ρ ≡ x∈X p X (x) |ψ x ψ x |. Given a density matrix ρ AB describing the joint state of a system held by A and B, the partial trace allows us to compute the local state of A (density operator ρ A ) if B's system is not accessible to A. To obtain ρ A from ρ AB (the reduced state of ρ AB on A), we trace out the system B: ρ A = Tr B (ρ AB ). As a distance measure, we use the expected fidelity F (|ψ , ρ) between a pure state |ψ and a mixed state ρ given by F (|ψ , ρ) = ψ| ρ |ψ . A crucial distinction between quantum and classical information is captured by the well-known No-Cloning theorem [33], which states that an arbitrary unknown quantum state cannot be copied or cloned perfectly. Quantum Key Exchange and Uncloneable Encryption QKE allows two parties to establish a common secret key with information-theoretic security using an insecure quantum channel, and a public authenticated classical channel. In Protocol 1 we describe the BB84 protocol, the most well-known QKE variant due to Bennett and Brassard [5]. For consistency with related works, we use the well-established formalism based on error-correcting codes, developed by Shor and Preskill [28]. Let C 1 [n, k 1 ] and C 2 [n, k 2 ] be two classical linear binary codes encoding k 1 and k 2 bits in n bits such that {0} ⊂ C 2 ⊂ C 1 ⊂ F n 2 where F n 2 is the binary vector space on n bits. A mapping of vectors v ∈ C 1 to a set of basis states (codewords) for the Calderbank-Shor-Steane (CSS) [10,29] code subspace is given by : v → ( 1 / |C2|) w∈C2 |v + w . Due to the irrelevance of phase errors and their decoupling from bit flips in CSS codes, Alice can send |v along with classical error-correction information u + v where u, v ∈ F n 2 and u ∈ C 1 , such that Bob can decode to a codeword in C 1 from (v + ǫ) − (u + v) where ǫ is an error codeword, with the final key being the coset leader of u + C 2 . Protocol 1 BB84 for an n-bit key with protection against δn bit errors 1: Alice generates two random bit strings a, b ∈ {0, 1} (4+δ)n , encodes ai into |ψi in basis (+) if bi = 0 and in (×) otherwise, and ∀i ∈ [1, |a|] sends |ψi to Bob. 2: Bob generates a random bit string b ′ ∈ {0, 1} (4+δ)n and upon receiving the qubits, measures |ψi in (+) or (×) according to b ′ i to obtain a ′ i . 3: Alice announces b and Bob discards a ′ i where bi = b ′ i , ending up with at least 2n bits with high probability. 4: Alice picks a set p of 2n bits at random from a, and a set q containing n elements of p chosen as check bits at random. Let v = p \ q. 5: Alice and Bob compare their check bits and abort if the error exceeds a predefined threshold. 6: Alice announces u + v, where v is the string of the remaining non-check bits, and u is a random codeword in C1. 7: Bob subtracts u + v from his code qubits, v + ǫ, and corrects the result, u + ǫ, to a codeword in C1. 8: Alice and Bob use the coset of u + C2 as their final secret key of length n. Uncloneable encryption (UE) enables transmission of ciphertexts that cannot be perfectly copied and stored for later decoding, by encoding carefully prepared codewords into quantum states, thereby leveraging the No-Cloning theorem. We refer to Gottesman's original work [19] for a detailed explanation of the sketch in Protocol 2. 
Alice and Bob agree on a message length n, a Message Authentication Code (MAC) of length s, an error-correcting code C 1 having message length K and codeword length N with distance 2δN for average error rate δ, and another error-correcting code C 2 (for privacy amplification) with message length K ′ and codeword length N and distance 2(δ + η)N to correct more errors than C 1 , satisfying C ⊥ 2 ⊂ C 1 , where C ⊥ 2 is the dual code containing all vectors orthogonal to C 2 . The pre-shared key is broken down into four pieces, all chosen uniformly at random: an authentication key k ∈ {0, 1} s , a one-time pad e ∈ {0, 1} n+s , a syndrome c 1 ∈ {0, 1} N −K , and a basis sequence b ∈ {0, 1} N . Protocol 2 Uncloneable Encryption for sending a message m ∈ {0, 1} n 1: Compute MAC(m) k = µ ∈ {0, 1} s . Let x = m||µ ∈ {0, 1} n+s . 2: Mask x with the one-time pad e to obtain y = x ⊕ e. 3: From the coset of C1 given by the syndrome c1, pick a random codeword z ∈ {0, 1} N that has syndrome bits y w. r.t. C ⊥ 2 , where C ⊥ 2 ⊂ C1. 4: For i ∈ [1, N ] encode ciphertext bit zi in the basis (+) if bi = 0 and in the basis (×) if bi = 1. The resulting state |ψi is sent to Bob. To perform decryption: 1: For i ∈ [1, N ], measure |ψ ′ i according to bi, to obtain z ′ i ∈ {0, 1} N . 2: Perform error-correction on z ′ using code C1 and evaluate the parity checks of C2/C ⊥ 1 for privacy amplification to get an (n + s)-bit string y ′ . 3: Invert the OTP step to obtain x ′ = y ′ ⊕ e. 4: Parse x ′ as the concatenation m ′ ||µ ′ and use k to verify if MAC(m ′ ) k = µ ′ . This is a copy of the author preprint. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-03638-6_7 QKE from UE. It is known [19] that any quantum authentication (QA) scheme can be used as a secure UE scheme, which can in turn be used to obtain QKE, with less interaction and more efficient error detection. We give a brief description of how QKE can be obtained from UE in Protocol 3. Protocol 3 Obtaining QKE from Uncloneable Encryption 1: Alice generates random strings k and x, and sends x to Bob via UE, keyed with k. 2: Bob announces that he has received the message, and then Alice announces k. 3: Bob decodes the classical message x, and upon MAC verification, if the message is valid, he announces this to Alice and they will use x as their secret key. Coercer-Deniable Quantum Key Exchange Following the setting in [3], in which it is implicitly assumed that the adversary has established a binding between the participants' identities and a given QKE session, we introduce the notion of coercer-deniability for QKE. This makes it possible to consider an adversarial setting similar to that of deniable encryption [11] and expect that the parties might be coerced into revealing their private coins after the termination of a session, in which case they would have to produce fake randomness such that the resulting transcript and the claimed values remain consistent with the adversary's observations. Beaver's analysis [3] is briefly addressed in a footnote in a paper by Ioannou and Mosca [20] and the issue is brushed aside based on the argument that the parties do not have to keep records of their raw key bits. It is argued that for deniability to be satisfied, it is sufficient that the adversary cannot provide binding evidence that attributes a particular key to the classical communication as their measurements on the quantum channel do not constitute a publicly verifiable proof. 
However, counterarguments for this view were already raised in the motivations for deniable encryption [11] in terms of secure erasure being difficult and unreliable, and that erasing cannot be externally verified. Moreover, it is also argued that if one were to make the physical security assumption that random choices made for encryption are physically unavailable, the deniability problem would disappear. We refer to [11] and references therein for more details. Bindings, or lack thereof, lie at the core of deniability. Although we leave a formal comparison of our model with the one formulated in the simulation paradigm [16] as future work, a notable difference can be expressed in terms of the inputs presented to the adversary. In the simulation paradigm, deniability is modelled only according to the simulatability of the legal transcript that the adversary or a corrupt party produces naturally via a session with a party as evidence for the judge, whereas for coercer-deniability, the adversary additionally demands that the honest parties reveal their private randomness. Finally, note that viewing deniability in terms of "convincing" the adversary is bound to be problematic and indeed a source of debate in the cryptographic research community as the adversary may never be convinced given their knowledge of the existence of faking algorithms. Hence, deniability is formulated in terms of the indistinguishability of views (or their simulatability [16]) such that a judge would have no reason to believe a given transcript provided by the adversary establishes a binding as it could have been forged or simulated. Defeating Deniability in QKE via Eavesdropping in a Nutshell We briefly review the eavesdropping attack described in [3] and provide further insight. Suppose Alice sends qubit |ψ m,b to Bob, which encodes a single-bit message m prepared in a basis determined by b ∈ {+, ×}. Let Φ(E, m) denote the state obtained after sending |ψ m,b , relayed and possibly modified by an adversary E. Moreover, let ρ(E, m) denote the view presented to the judge, obtained by tracing over inaccessible systems. Now for a qubit measured correctly by Eve, if a party tries to deny by pretending to have sent σ 1 = ρ(E, 1) instead of σ 2 = ρ(E, 0), e.g., by using some local transformation U neg to simply negate a given qubit, then F (σ 1 , σ 2 ) = 0, where F denotes the fidelity between σ 1 and σ 2 . Thus, the judge can successfully detect this attempt at denial. This attack can be mounted successfully with non-negligible probability without causing the session to abort: Assume that N qubits will be transmitted in a BB84 session and that the tolerable error rate is η N , where clearly η ∼ N . Eve measures each qubit with probability η N (choosing a basis at random) and passes on the remaining ones to Bob undisturbed, i.e., she plants a number of decoy states proportional to the tolerated error threshold. On average, η 2 measurements will come from matching bases, which can be used by Eve to detect attempts at denial, if Alice claims to have measured a different encoding. After discarding half the qubits in the sifting phase, this ratio will remain unchanged. Now Alice and/or Bob must flip at least one bit in order to deny without knowledge of where the decoy states lie in the transmitted sequence, thus getting caught with probability η 2N upon flipping a bit at random. 
On the Coercer-Deniability of Uncloneable Encryption The vulnerability described in Section 3.1 is made possible by an eavesdropping attack that induces a binding in the key coming from a BB84 session. Uncloneable encryption remains immune to this attack because the quantum encoding is done for an already one-time padded classical input. More precisely, a binding established at the level of quantum states can still be perfectly denied because the actual raw information bits m are not directly encoded into the sequence of qubits, instead the concatenation of m and the corresponding authentication tag µ = MAC k (m), i.e., x = m||µ, is masked with a one-time pad e to obtain y = x ⊕ e, which is then mapped onto a codeword z that is encoded into quantum states. For this reason, in the context of coercerdeniability, regardless of a binding established on z by the adversary, Alice can still deny to another input message in that she can pick a different input x ′ = m ′ ||µ ′ to compute a fake pad e ′ = y ⊕ x ′ , so that upon revealing e ′ to Eve, she will simply decode y ⊕ e ′ = x ′ , as intended. This is a copy of the author preprint. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-03638-6_7 However, note that a prepare-and-measure QKE obtained from UE still remains vulnerable to the same eavesdropping attack due to the fact that we can no longer make use of the deniability of the one-time pad in UE such that the bindings induced by Eve constrain the choice of the underlying codewords. Security Model We adopt the framework for quantum AKEs developed by Mosca et al. [24]. Due to space constraints, we mainly focus on our proposed extensions. Parties, including the adversary, are modelled as a pair of classical and quantum Turing machines (TM) that execute a series of interactive computations and exchange messages with each other through classical and quantum channels, collectively referred to as a protocol. An execution of a protocol is referred to as a session, identified with a unique session identifier. An ongoing session is called an active session, and upon completion, it either outputs an error term ⊥ in case of an abort, or it outputs a tuple (sk, pid, v, u) in case of a successful termination. The tuple consists of a session key sk, a party identifier pid and two vectors u and v that model public values and secret terms, respectively. We adopt an extended version of the adversarial model described in [24], to account for coercer-deniability. Let E be an efficient, i.e. (quantum) polynomial time, adversary with classical and quantum runtime bounds t c (k) and t q (k), and quantum memory bound m q (k), where bounds can be unlimited. Following standard assumptions, the adversary controls all communication between parties and carries the messages exchanged between them. We consider an authenticated classical channel and do not impose any special restrictions otherwise. Additionally, the adversary is allowed to approach either the sender or the receiver after the termination of a session and request access to a subset r ⊆ v of the private randomness used by the parties for a given session, i.e. set of values to be faked. Security notions can be formulated in terms of security experiments in which the adversary interacts with the parties via a set of well-defined queries. 
Security Model We adopt the framework for quantum AKEs developed by Mosca et al. [24]. Due to space constraints, we mainly focus on our proposed extensions. Parties, including the adversary, are modelled as pairs of classical and quantum Turing machines (TMs) that execute a series of interactive computations and exchange messages with each other through classical and quantum channels, collectively referred to as a protocol. An execution of a protocol is referred to as a session, identified with a unique session identifier. An ongoing session is called an active session, and upon completion it either outputs an error term ⊥ in case of an abort, or a tuple (sk, pid, v, u) in case of a successful termination. The tuple consists of a session key sk, a party identifier pid, and two vectors u and v that model public values and secret terms, respectively. We adopt an extended version of the adversarial model described in [24] to account for coercer-deniability. Let E be an efficient, i.e., (quantum) polynomial-time, adversary with classical and quantum runtime bounds t_c(k) and t_q(k), and quantum memory bound m_q(k), where the bounds can be unlimited. Following standard assumptions, the adversary controls all communication between parties and carries the messages exchanged between them. We consider an authenticated classical channel and do not impose any special restrictions otherwise. Additionally, the adversary is allowed to approach either the sender or the receiver after the termination of a session and request access to a subset r ⊆ v of the private randomness used by the parties for a given session, i.e., the set of values to be faked. Security notions can be formulated in terms of security experiments in which the adversary interacts with the parties via a set of well-defined queries. These queries typically involve sending messages to an active session or initiating one, corrupting a party, learning their long-term secret key, revealing the ephemeral keys of an incomplete session, obtaining the computed session key for a given session, and a test-session(id) query capturing the winning condition of the game, which can be invoked only for a fresh session. Revealing secret values to the adversary is modelled via partnering. The notion of freshness captures the idea of excluding cases that would allow the adversary to trivially win the security experiment. This is done by imposing minimal restrictions on the set of queries the adversary can invoke for a given session, such that there still exist protocols satisfying the definition of session-key security. A session remains fresh as long as at least one element in u and v remains secret; see [24] for more details. The transcript of a protocol consists of all publicly exchanged messages between the parties during a run or session of the protocol. The definition of "views" and "outputs" given in [3] coincides with that of transcripts in [16] in the sense that it allows us to model a transcript that can be obtained from observations made on the quantum channel. The view of a party P consists of their state in H_P along with any classical strings they produce or observe. More generally, for a two-party protocol captured by the global density matrix ρ_AB for the systems of A and B, the individual system A corresponds to a partial trace that yields a reduced density matrix, i.e., ρ_A = Tr_B(ρ_AB), with a similar approach for any additional couplings.
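The reduced-density-matrix view can be made concrete in a few lines of numpy; this is a generic illustration of ρ_A = Tr_B(ρ_AB) on a maximally entangled pair, not code from the framework of [24].

```python
import numpy as np

def partial_trace_B(rho_AB: np.ndarray, dA: int = 2, dB: int = 2) -> np.ndarray:
    # reshape the joint state to (A, B, A', B') and trace out the B indices
    return np.trace(rho_AB.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)    # |Phi+> = (|00> + |11>)/sqrt(2)
rho_AB = np.outer(phi, phi.conj())
print(partial_trace_B(rho_AB))               # I/2: the maximally mixed state
```

The output I/2 is exactly the decoupled view that reappears later in the entanglement-distillation argument.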
Coercer-Deniable QKE via View Indistinguishability We use the security model of Section 3.3 to introduce the notion of coercer-deniable QKE, formalized via the indistinguishability of real and fake views. Note that in this work we do not account for forward deniability and forward secrecy. Coercer-Deniability Security Experiment. Let CoercerDenQKE^Π_{E,C}(κ) denote this experiment, and Q the same set of queries available to the adversary in a security game for session-key security, as described in Section 3.3 and [24]. Clearly, in addition to deniability, it is vital that the security of the session key remain intact as well. For this reason, we simply extend the requirements of the security game for a session-key secure KE by having the challenger C provide an additional piece of information to the adversary E when the latter calls the test-session() query. This means that the definition of a fresh session remains the same as the one given in [24]. E invokes queries from Q \ {test-session()} until E issues test-session() to a fresh session of their choice. C decides on a random bit b and, if b = 0, provides E with the real session key k and the real vector of private randomness r; if b = 1, C provides a random (fake) key k′ and a random (fake) vector of private randomness r′. Finally, E outputs a guess b′ and wins the game if b = b′. The experiment returns 1 if E succeeds, and 0 otherwise. Let Adv^Π_E(κ) = |Pr[b = b′] − 1/2| denote the winning advantage of E. Definition 1 (Coercer-Deniable QKE). For adversary E, let there be an efficient distinguisher D_E on security parameter κ. We say that Π_r is a coercer-deniable QKE protocol if, for any adversary E, transcript t, and for any k, k′, and a vector of private random inputs r = (r_1, . . . , r_ℓ), there exists a denial/faking program F_{A,B} that, running on (k, k′, t, r), produces r′ = (r′_1, . . . , r′_ℓ) such that the following conditions hold: – Π is a secure QKE protocol. – The adversary E cannot do better than making a random guess for winning the coercer-deniability security experiment, i.e., Adv^Π_E(κ) ≤ negl(κ), or equivalently, Pr[CoercerDenQKE^Π_{E,C}(κ) = 1] ≤ 1/2 + negl(κ). Equivalently, we require that for every efficient distinguisher D_E, |Pr[D_E(View_Real(k, t, r)) = 1] − Pr[D_E(View_Fake(k′, t, r′)) = 1]| ≤ negl(κ), where the transcript t = (c, ρ_E(k)) is a tuple consisting of a vector c, containing the classical message exchanges of a session, along with the local view of the adversary w.r.t. the quantum channel, obtained by tracing over inaccessible systems (see Section 3.3). A function f : N → R is negligible if, for every constant k, there exists an N_k such that ∀N ≥ N_k we have f(N) < N^{−k}; in other words, it approaches zero faster than any inverse polynomial in the asymptotic limit. Remark 1. We introduced a vector of private random inputs r to avoid being restricted to a specific set of "fake coins" in a coercer-deniable setting, such as the raw key bits in BB84 as used in Beaver's analysis. This allows us to include other private inputs as part of the transcript that need to be forged by the denying parties, without having to provide a new security model for each variant. Indeed, in [24], Mosca et al. consider the security of QKE in case various secret values are compromised before or after a session. This means that these values can, in principle, be included in the set of random coins that might have to be revealed to the adversary, and it should therefore be possible to generate fake alternatives using a faking algorithm.
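The structure of the experiment can be sketched as a toy game loop; all primitives below are placeholder random bytes (so the two branches are trivially identically distributed), and every name is our own, not part of the formal model.

```python
import secrets

def coercer_den_experiment(adversary) -> bool:
    b = secrets.randbelow(2)
    transcript = secrets.token_bytes(32)        # stand-in for t = (c, rho_E)
    if b == 0:
        k, r = secrets.token_bytes(16), secrets.token_bytes(16)   # real (k, r)
    else:
        k, r = secrets.token_bytes(16), secrets.token_bytes(16)   # fake (k', r')
    return adversary(transcript, k, r) == b

# With placeholder randomness the branches are identical, so any adversary
# wins with probability exactly 1/2; a real protocol must approximate this.
wins = sum(coercer_den_experiment(lambda t, k, r: secrets.randbelow(2))
           for _ in range(10_000))
print(wins / 10_000)   # ~0.5
```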
Deniable QKE via Covert Quantum Communication We establish a connection between covert communication and deniability by providing a simple construction for coercer-deniable QKE using covert QKE. We then show that deniability is reduced to the covertness property, meaning that deniable QKE can be performed as long as covert QKE is not broken by the adversary, formalized via the security reduction given in Theorem 2. Covert communication becomes relevant when parties wish to keep the very act of communicating secret or hidden from a malicious warden. This can be motivated by various requirements, such as the need to hide one's communication with a particular entity when this act alone can be incriminating. While encryption can make it impossible for the adversary to access the contents of a message, it does not prevent them from detecting exchanges over a channel under their observation. Bash et al. [2,27] established a square-root law for covert communication in the presence of an unbounded quantum adversary, stating that O(√n) covert bits can be exchanged over n channel uses. Recently, Arrazola and Scarani [1] extended covert communication to the quantum regime for transmitting qubits covertly. Covert quantum communication consists of two parties exchanging a sequence of qubits such that an adversary trying to detect this cannot do better than making a random guess, i.e., P_d ≤ 1/2 + ε for sufficiently small ε > 0, where P_d denotes the probability of detection and ε the detection bias. Covert Quantum Key Exchange Since covert communication requires pre-shared secret randomness, a natural question to ask is whether QKE can be done covertly. This was also addressed in [1], where it was shown that covert QKE with unconditional security for the covertness property is impossible, because the amount of key consumed is greater than the amount produced. However, a hybrid approach involving pseudo-random number generators (PRNGs) was proposed to achieve covert QKE with a positive key rate, such that the resulting secret key remains information-theoretically secure, while the covertness of the QKE is shown to be at least as strong as the security of the PRNG. The PRNG is used to expand a truly random pre-shared key into an exponentially larger pseudorandom output, which is then used to determine the time-bins for sending signals in covert QKE. Covert QKE Security Experiment. Let CovertQKE^{Π_cov}_{E,C}(κ) denote the security experiment. The main property of covert QKE, denoted by Π_cov, can be expressed as a game played by the adversary E against a challenger C who decides on a random bit b: if b = 0, C runs Π_cov; otherwise (if b = 1), C does not run Π_cov. Finally, E outputs a guess b′ and wins the game if b = b′. The experiment outputs 1 if E succeeds, and 0 otherwise. The winning advantage of E is given by Adv^{Π_cov}_E(κ) = |Pr[b = b′] − 1/2|, and we say that Π_cov^G is a covert QKE protocol if the following conditions hold: – Π_cov^G is a secure QKE protocol. – The probability that E guesses the bit b correctly (b′ = b), i.e., that E manages to distinguish whether or not Alice and Bob are running Π_cov^G, is no more than 1/2 plus a negligible function in the security parameter κ, i.e., Adv^{Π_cov}_E(κ) ≤ negl(κ), or equivalently, Pr[CovertQKE^{Π_cov}_{E,C}(κ) = 1] ≤ 1/2 + negl(κ). Theorem 1 (sourced from [1]). The secret key obtained from the covert QKE protocol Π_cov^G is information-theoretically secure, and the covertness of Π_cov^G is as secure as the underlying PRNG.
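A minimal sketch of the hybrid idea follows, assuming a SHA-256-based stream as the PRNG and an illustrative bin-selection rule; the actual covert QKE proposal of [1] fixes these details differently, so everything below is our own assumption.

```python
import hashlib

def prng_stream(seed: bytes, n_bytes: int) -> bytes:
    # expand a short truly random seed into a long pseudorandom stream
    out, ctr = b"", 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n_bytes]

def covert_time_bins(seed: bytes, n_bins: int, threshold: int = 8) -> list:
    # a bin carries a signal iff its pseudorandom byte falls under the
    # threshold, so roughly threshold/256 of the bins are used
    stream = prng_stream(seed, n_bins)
    return [i for i, byte in enumerate(stream) if byte < threshold]

bins = covert_time_bins(b"shared-secret-seed", n_bins=1000)
print(len(bins), bins[:10])   # both parties derive the same sparse schedule
```

Since both parties expand the same pre-shared seed, they agree on the sparse schedule without further communication, while a warden without the seed sees only sparse, seemingly random channel activity.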
Deniable Covert Quantum Key Exchange (DC-QKE) We are now in a position to describe DC-QKE, a simple construction shown in Protocol 4, which preserves unconditional security for the final secret key, while its deniability is as secure as the underlying PRNG used in Π^cov_{r,G}. In terms of the Security Experiment 3.4, Π^cov_{r,G} is run to establish a real key k, while a non-covert QKE Π_{r′} is used to produce a fake key k′ aimed at achieving deniability, where r and r′ are the respective vectors of real and fake private inputs. Operationally, consider a setting wherein the parties suspect in advance that they might be coerced into revealing their private coins for a given run: their joint strategy consists of running both components in Protocol 4 and claiming to have employed Π_{r′} to establish the fake key k′ using the fake private randomness r′ (e.g., raw key bits in BB84), providing these as input to the adversary upon termination of a session. Thus, for Eve to be able to produce a proof showing that the revealed values are fake, she would have to break the security of covert QKE to detect the presence of Π^cov_{r,G}, as shown in Theorem 2. Moreover, note that covert communication can be used for dynamically agreeing on a joint strategy for denial, further highlighting its relevance for deniability. Remark 2. The original analysis in [3] describes an attack based solely on revealing fake raw key bits that may be inconsistent with the adversary's observations. An advantage of DC-QKE in this regard is that Alice's strategy for achieving coercer-deniability consists of revealing all the secret values of the non-covert QKE Π_{r′} honestly. This allows her to cover the full range of private randomness that could be considered in different variants of deniability, as discussed in Remark 1. A potential drawback is the extra cost induced by F_{A,B}, which could, in principle, be mitigated using a less interactive solution such as QKE via UE. Remark 3. If the classical channel is authenticated by an information-theoretically secure algorithm, the minimal entropy overhead in terms of pre-shared key (logarithmic in the input size) for Π can be generated by Π^cov_r. Example 1. In the case of encryption, A can send c = m ⊕ k over a covert channel to B, while for denying to m′, she can send c′ = m′ ⊕ k′ over a non-covert channel. Alternatively, she can transmit a single ciphertext over a non-covert channel such that it can be opened to two different messages. To do so, given c = m ⊕ k, Alice computes k′ = m′ ⊕ c = m′ ⊕ m ⊕ k, and she can then either encode k′ as a codeword, as described in Section 2.1, and run Π_{r′} via uncloneable encryption, thus allowing her to reveal the entire transcript to Eve honestly, or she can agree with Bob on a suitable privacy amplification (PA) function (with PA being many-to-one) as part of their denying program in order to obtain k′.
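Example 1's single-ciphertext denial is again plain XOR algebra; a minimal sketch (our own variable names, illustrative message lengths):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

m = b"meet at dawn"
k = os.urandom(len(m))
c = xor(m, k)                    # the ciphertext Eve observes

m_fake = b"pick up milk"         # alternative message of the same length
k_fake = xor(m_fake, c)          # k' = m' XOR c = m' XOR m XOR k
assert xor(c, k_fake) == m_fake  # the revealed k' opens c to m'
```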
Theorem 2. If Π^cov_{r,G} is a covert QKE protocol, then DC-QKE, given in Protocol 4, is a coercer-deniable QKE protocol that satisfies Definition 1. Proof. The main idea consists of showing that breaking the deniability property of DC-QKE amounts to breaking the security of covert QKE, such that coercer-deniability follows from the contrapositive of this implication, i.e., if there exists no efficient algorithm for compromising the security of covert QKE, then there exists no efficient algorithm for breaking the deniability of DC-QKE. We formalize this via a reduction, sketched as follows. Let w′ = View_Fake(k′, t_E, r′) and w = View_Real(k, t_E, r) denote the two views. Flip a coin b for an attempt at denial: if b = 0, then t_E = (t′, ∅); else (b = 1), t_E = (t′, t_cov), where t_cov and t′ denote the transcripts of the covert and non-covert exchanges from Π^cov_{r,G} and Π_{r′}, respectively. Now, if DC-QKE is constructed from Π_cov, then given an efficient adversary E that can distinguish w from w′ with probability p_1, we can use E to construct an efficient distinguisher D to break the security of the underlying covert QKE protocol Π^cov_{r,G}. Deniable QKE via Entanglement Distillation and Teleportation We now argue why performing randomness distillation at the quantum level, thus requiring quantum computation, plays an important role w.r.t. deniability. The subtleties alluded to in [3] arise from the fact that randomness distillation is performed in the classical post-processing step. This allows Eve to leverage her tampering, in that she can verify the parties' claims against her decoy states. However, this attack can be countered by removing Eve's knowledge before the classical exchanges begin. Most security proofs of QKE [22,28,23] are based on a reduction to an entanglement-based variant, such that the fidelity of Alice and Bob's final state with |Ψ⁺⟩^{⊗m} is shown to be exponentially close to 1. Moreover, secret key distillation techniques involving ED and quantum teleportation [7,14] can be used to faithfully transfer qubits from A to B by consuming ebits. To illustrate the relevance of distillation for deniability in QKE, consider the generalized template shown in Protocol 5, based on these well-known techniques. Protocol 5 Template for deniable QKE via entanglement distillation and teleportation 1: A and B share n noisy entangled pairs (assume i.i.d. states for simplicity). 2: They perform entanglement distillation to convert them into a state ρ such that F(|Ψ⁺⟩^{⊗m}, ρ) is arbitrarily close to 1, where m < n. 3: They perform verification to make sure they share m maximally entangled states |Ψ⁺⟩^{⊗m}, and abort otherwise. 4: A prepares m qubits (e.g., BB84 states) and performs quantum teleportation to send them to B, at the cost of consuming m ebits and exchanging 2m classical bits. 5: A and B proceed with standard classical distillation techniques to agree on a key based on their measurements. By performing ED, Alice and Bob make sure that the resulting state cannot be correlated with anything else, due to the monogamy of entanglement (see, e.g., [21,30]), thus factoring out Eve's system. The parties can open their records for steps (2) and (3) honestly, and open to arbitrary classical inputs for steps (3), (4) and (5): deniability follows from decoupling Eve's system, meaning that she is faced with a reduced density matrix of a pure bipartite maximally entangled state, i.e., a maximally mixed state ρ_E = I/2, thus obtaining key equivocation. In terms of the hierarchy of entanglement-based constructions mentioned in [3], this approach mainly constitutes a generalization of such schemes. It should therefore be viewed more as a step towards a theoretical characterization of entanglement-based schemes for achieving information-theoretic deniability. Due to lack of space, we omit a discussion of how techniques from device-independent cryptography can deal with maliciously prepared initial states. Going beyond QKE, note that quantum teleportation allows the transfer of an unknown quantum state, meaning that even the sender would be oblivious as to what state is sent. Moreover, ebits can enable uniquely quantum tasks such as traceless exchange in the context of quantum anonymous transmission [12], to achieve incoercible protocols that allow parties to deny to any random input. Studying the deniability of public-key authenticated QKE, both in our model and in the simulation paradigm, and the existence of an equivalence relation between our indistinguishability-based definition and a simulation-based one would be a natural continuation of this work. Other lines of inquiry include forward deniability, deniable QKE in conjunction with forward secrecy, deniability using covert communication in stronger adversarial models, a further analysis of the relation between the impossibility of unconditional quantum bit commitment and deniability mentioned in [3], and deniable QKE via uncloneable encryption. Finally, gaining a better understanding of entanglement distillation w.r.t. potential pitfalls in various adversarial settings and proposing concrete deniable protocols for QKE and other tasks beyond key exchange represent further research avenues.
6,580
1812.02122
2902517736
This paper presents a region-partition based attraction field dual representation for line segment maps, and thus poses the problem of line segment detection (LSD) as the region coloring problem. The latter is then addressed by learning deep convolutional neural networks (ConvNets) for accuracy, robustness and efficiency. For a 2D line segment map, our dual representation consists of three components: (i) A region-partition map in which every pixel is assigned to one and only one line segment; (ii) An attraction field map in which every pixel in a partition region is encoded by its 2D projection vector w.r.t. the associated line segment; and (iii) A squeeze module which squashes the attraction field to a line segment map that almost perfectly recovers the input one. By leveraging the duality, we learn ConvNets to compute the attraction field maps for raw input images, followed by the squeeze module for LSD, in an end-to-end manner. Our method rigorously addresses several challenges in LSD such as local ambiguity and class imbalance. Our method also harnesses the best practices developed in ConvNets based semantic segmentation methods such as the encoder-decoder architecture and the a-trous convolution. In experiments, our method is tested on the WireFrame dataset and the YorkUrban dataset with state-of-the-art performance obtained. In particular, we advance the performance by 4.5 percent on the WireFrame dataset. Our method is also fast with 6.6∼10.4 FPS, outperforming most existing line segment detectors.
The study of line segment detection has a very long history, dating back to the 1980s @cite_9 . Early pioneers tried to detect line segments based upon edge map estimation. Then, perceptual grouping approaches were proposed. Both of these lines of methods concentrate on hand-crafted low-level features for the detection, which has become a limitation. Recently, line segment detection and the related problem of edge detection have been studied from the perspective of deep learning, which has dramatically improved detection performance and is of great practical importance for real applications.
{ "abstract": [ "Abstract The Hough transform is a method for detecting curves by exploiting the duality between points on a curve and parameters of that curve. The initial work showed how to detect both analytic curves (1,2) and non-analytic curves, (3) but these methods were restricted to binary edge images. This work was generalized to the detection of some analytic curves in grey level images, specifically lines, (4) circles (5) and parabolas. (6) The line detection case is the best known of these and has been ingeniously exploited in several applications. (7,8,9) We show how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space. Such a mapping can be exploited to detect instances of that particular shape in an image. Furthermore, variations in the shape such as rotations, scale changes or figure ground reversals correspond to straightforward transformations of this mapping. However, the most remarkable property is that such mappings can be composed to build mappings for complex shapes from the mappings of simpler component shapes. This makes the generalized Hough transform a kind of universal transform which can be used to find arbitrarily complex shapes." ], "cite_N": [ "@cite_9" ], "mid": [ "22745672" ] }
Learning Attraction Field Representation for Robust Line Segment Detection
Line segment detection (LSD) is an important yet challenging low-level task in computer vision. The resulting line segment maps provide compact structural information that facilitates many higher-level vision tasks such as 3D reconstruction [2,3], image partition [4], stereo matching [5], scene parsing [6,7], camera pose estimation [8], and image stitching [9]. LSD usually consists of two steps: line heat map generation and line segment model fitting. The former can be computed either simply by the gradient magnitude map (mainly used before the recent resurgence of deep learning) [10][11][12], or by a learned convolutional neural network (ConvNet) [13,14] in state-of-the-art methods [1]. The latter needs to address the challenging issue of handling unknown multi-scale discretization nuisance factors (e.g., the classic zig-zag artifacts of line segments in digital images) when aligning pixels or linelets to form line segments in the line heat map. Different schemes have been proposed, e.g., the ε-meaningful alignment method proposed in [10] and the junction-guided [15] alignment method proposed in [1]. The main drawbacks of existing two-stage methods are two-fold: they lack elegant solutions to the local ambiguity and/or class imbalance in line heat map generation, and they require extra carefully designed heuristics or supervisedly learned contextual information to infer line segments in the line heat map. In this paper, we focus on a learning based LSD framework and propose a single-stage method which rigorously addresses the drawbacks of existing LSD approaches. Our method is motivated by two observations: • The duality between region representation and boundary contour representation of objects or surfaces, which is a well-known fact in computer vision. • The recent remarkable progress in image semantic segmentation by deep ConvNet based methods such as U-Net [16] and DeepLab V3+ [17]. So, the intuitive idea of this paper is that if we can bridge line segment maps and their dual region representations, we will pose the problem of LSD as the problem of region coloring, and thus open the door to leveraging the best practices developed in state-of-the-art deep ConvNet based image semantic segmentation methods to improve performance for LSD. By dual region representations, we mean they are capable of recovering the input line segment maps in a nearly perfect way via a simple algorithm. We present an efficient and straightforward method for computing the dual region representation. By re-formulating LSD as the equivalent region coloring problem, we address the aforementioned challenges of handling local ambiguity and class imbalance in a principled way. Figure 1 illustrates the proposed method. Given a 2D line segment map, we represent each line segment by its geometry model using the two end-points 1 . In computing the dual region representation, there are three components (detailed in Section 3). Method Overview • A region-partition map. It is computed by assigning every pixel to one and only one line segment based on a proposed point-to-line-segment distance function. The pixels associated with one line segment form a region. All regions form a partition of the image lattice (i.e., mutually exclusive, and their union occupies the entire image lattice). • An attraction field map. Each pixel in a partition region has one and only one corresponding projection point on the geometry line segment (but the reverse is often a one-to-many mapping).
In the attraction field map, every pixel in a partition region is then represented by its attraction/projection vector between the pixel and its projection point on the geometry line segment 2 . • A light-weight squeeze module. It follows the attraction field to squash partition regions in an attraction field map into line segments that almost perfectly recover the input ones, thus bridging the duality between region-partition based attraction field maps and line segment maps. The proposed method can also be viewed as an intuitive expansion-and-contraction operation between 1D line segments and 2D regions in a simple projection vector field: the region-partition map generation jointly expands all line segments into partition regions, and the squeeze module degenerates regions into line segments. With the duality between a line segment map and the corresponding region-partition based attraction field map, we first convert all line segment maps in the training dataset to their attraction field maps. Then, we learn ConvNets to predict the attraction field maps from raw input images in an end-to-end way. We utilize U-Net [16] and a modified network based on DeepLab V3+ [17] in our experiments. After the attraction field map is computed, we use the squeeze module to compute its line segment map. In experiments, the proposed method is tested on the WireFrame dataset [1] and the YorkUrban dataset [2], with state-of-the-art performance obtained compared with [1,10,12,18]. In particular, we improve the performance by 4.5% on the WireFrame dataset. Our method is also fast with 6.6 ∼ 10.4 FPS, outperforming most existing line segment detectors. Detection based on Hand-crafted Features For a long time, hand-crafted low-level features (especially image gradients) were heavily used for line segment detection. These approaches can be divided into edge map based approaches [18,[20][21][22][23][24] and perceptual grouping approaches [10,12,25]. The edge map based approaches use visual features to estimate an edge map and subsequently apply the Hough transform [19] to globally search for line configurations, which are then cut using thresholds. In contrast to the edge map based approaches, the grouping methods directly use image gradients as local geometry cues to group pixels into line segment candidates and filter out the false positives [10,12]. However, such features can only characterize local responses of the image appearance; for edge detection, local responses without global context cannot avoid false detections. On the other hand, both the magnitude and orientation of image gradients are easily affected by external imaging conditions (e.g., noise and illumination). Therefore, the local nature of these features limits the robustness of line segment extraction. In this paper, we break the limitation of locally estimated features and turn to learning deep features that hierarchically represent the information of images, from low-level cues to high-level semantics. Deep Edge and Line Segment Detection Recently, HED [13] opened up a new era for edge perception from images by using ConvNets. The learned multi-scale and multi-level features dramatically addressed the problem of false detections in edge-like texture regions and approached human-level performance on the BSDS500 dataset [26].
Following this breakthrough, a tremendous number of deep learning based edge detection approaches have been proposed [14,17,[27][28][29][30]. From the perspective of binary classification, edge detection has been solved to some extent. It is natural to upgrade traditional edge map based line segment detection by alternatively using an edge map estimated by ConvNets. However, the edge maps estimated by ConvNets are usually over-smoothed, which leads to local ambiguities for accurate localization. Further, the edge maps do not contain enough geometric information for the detection. Given the development of deep learning, it is more reasonable to propose an end-to-end line segment detector instead of only applying the advances of deep edge detection. Most recently, Huang et al. [1] have taken an important step towards this goal by proposing a large-scale dataset with high-quality line segment annotations and approaching the problem of line segment detection as two parallel tasks, i.e., edge map detection and junction detection. As a final step of the detection, the resulting edge map and junctions are fused to produce line segments. To the best of our knowledge, this is the first attempt to develop a deep learning based line segment detector. However, due to the sophisticated relation between edge maps and junctions, the problem remains unsolved. Benefiting from our proposed formulation, we can directly learn the line segments from attraction field maps, which can be easily obtained from the line segment annotations without junction cues. Our Contributions The proposed method makes the following main contributions to robust line segment detection. • A novel dual representation is proposed by bridging line segment maps and region-partition-based attraction field maps. To our knowledge, it is the first work that utilizes this simple yet effective representation in LSD. • With the proposed dual representation, the LSD problem is re-formulated as the region coloring problem, thus opening the door to leveraging state-of-the-art semantic segmentation methods in addressing the challenges of local ambiguity and class imbalance in existing LSD approaches in a principled way. • The proposed method obtains state-of-the-art performance on two widely used LSD benchmarks, the WireFrame dataset (with a significant 4.5% improvement) and the YorkUrban dataset. The Attraction Field Representation In this section, we present details of the proposed region-partition representation for LSD. The Region-Partition Map Let Λ be an image lattice (e.g., 800 × 600). A line segment is denoted by l_i = (x^s_i, x^e_i), with the two end-points being x^s_i and x^e_i (non-negative real-valued positions, since sub-pixel precision is used in annotating line segments), respectively. The set of line segments in a 2D line segment map is denoted by L = {l_1, · · · , l_n}. For simplicity, we also denote the line segment map by L. Figure 2 illustrates a line segment map with 3 line segments in a 10 × 10 image lattice. Computing the region-partition map for L amounts to assigning every pixel in the lattice to one and only one of the n line segments. To that end, we utilize the point-to-line-segment distance function. Consider a pixel p ∈ Λ and a line segment l_i = (x^s_i, x^e_i) ∈ L: we first project the pixel p onto the straight line going through l_i in the continuous geometry space. If the projection point is not on the line segment, we use the closest end-point of the line segment as the projection point.
Then, we compute the Euclidean distance between the pixel and the projection point. Formally, we define the distance between p and l_i by d(p, l_i) = min_{t∈[0,1]} ||x^s_i + t·(x^e_i − x^s_i) − p||²_2, with t*_p = arg min_{t∈[0,1]} ||x^s_i + t·(x^e_i − x^s_i) − p||²_2, (1) where the projection point is the original point-to-line projection point if t*_p ∈ (0, 1), and the closest end-point if t*_p = 0 or 1. So, the region in the image lattice for a line segment l_i is defined by R_i = {p | p ∈ Λ; d(p, l_i) < d(p, l_j), ∀j ≠ i, l_j ∈ L}. (2) It is straightforward to see that R_i ∩ R_j = ∅ for i ≠ j and ∪^n_{i=1} R_i = Λ, i.e., all R_i's form a partition of the image lattice. Figure 2(a) illustrates the partition region generation for a line segment in the toy example (Figure 2). Denote by R = {R_1, · · · , R_n} the region-partition map for a line segment map L. Computing the Attraction Field Map Consider the partition region R_i associated with a line segment l_i; for each pixel p ∈ R_i, its projection point p′ on l_i is defined by p′ = x^s_i + t*_p · (x^e_i − x^s_i). (3) We define the 2D attraction or projection vector for a pixel p as a(p) = p′ − p, (4) where the attraction vector is perpendicular to the line segment if t*_p ∈ (0, 1) (see Figure 2(b)). Figure 1 shows examples of the x- and y-components of an attraction field map (AFM). Denote by A = {a(p) | p ∈ Λ} the attraction field map for a line segment map L. The Squeeze Module Given an attraction field map A, we first reverse it by computing the real-valued projection point for each pixel p in the lattice, v(p) = p + a(p), (5) and its corresponding discretized point in the image lattice, v_Λ(p) = ⌊v(p) + 0.5⌋, (6) where ⌊·⌋ represents the floor operation and v_Λ(p) ∈ Λ. Then, we compute a line proposal map in which each pixel q ∈ Λ collects the attraction field vectors whose discretized projection points are q. The candidate set of attraction field vectors collected by a pixel q is then defined by C(q) = {a(p) | p ∈ Λ, v_Λ(p) = q}, (7) where the C(q)'s are usually non-empty only for a sparse set of pixels q corresponding to points on the line segments (a toy numerical sketch of these maps follows below). An example of the line proposal map is shown in Figure 2(c), which projects the pixels of the support region of a line segment onto pixels near the line segment. With the line proposal map, our squeeze module utilizes an iterative and greedy grouping algorithm to fit line segments, similar in spirit to the region growing algorithm used in [10]. • Given the current set of active pixels, each of which has a non-empty candidate set of attraction field vectors, we randomly select a pixel q and one of its attraction field vectors a(p) ∈ C(q). The tangent direction of the selected attraction field vector a(p) is used as the initial direction of the line segment passing through the pixel q. • Then, we search the local observation window centered at q (e.g., a 3 × 3 window is used in this paper) to find the attraction field vectors which are aligned with a(p) with an angular distance less than a threshold τ (e.g., τ = 10° used in this paper). – If the search fails, we discard a(p) from C(q), and further discard the pixel q if C(q) becomes empty. – Otherwise, we grow q into a set and update its direction by averaging the aligned attraction vectors. The aligned attraction vectors are marked as used (and thus inactive for the next round of search). For the two end-points of the set, we recursively apply the greedy search algorithm to grow the line segment. • Once terminated, we obtain a candidate line segment l_q = (x^s_q, x^e_q) with a support set of real-valued projection points. We fit the minimum outer rectangle using the support set. We verify the candidate line segment by checking the aspect ratio between the width and length of the approximated rectangle against a predefined threshold, to ensure the approximated rectangle is "thin enough". If the check fails, we mark the pixel q inactive and release the support set to be active again.
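Pulling eqs. (1)-(7) together, the following toy numpy sketch (our own variable names and a 10 × 10 lattice, not the authors' code) computes the region-partition map and the AFM for a small line segment map, then reverses the field into discretized projection points.

```python
import numpy as np

def attraction_field(segments, H, W):
    ys, xs = np.mgrid[0:H, 0:W]
    pts = np.stack([xs, ys], axis=-1).astype(float)       # pixels p as (x, y)
    dists, vecs = [], []
    for s, e in segments:                                  # l_i = (x_i^s, x_i^e)
        s, e = np.asarray(s, float), np.asarray(e, float)
        d = e - s
        t = np.clip(((pts - s) @ d) / (d @ d), 0.0, 1.0)   # t*_p from eq. (1)
        proj = s + t[..., None] * d                        # projection p' (eq. 3)
        dists.append(np.linalg.norm(proj - pts, axis=-1))
        vecs.append(proj - pts)                            # a(p) = p' - p (eq. 4)
    region = np.argmin(np.stack(dists), axis=0)            # region-partition map
    afm = np.stack(vecs)[region, ys, xs]                   # attraction field map
    return region, afm

region, afm = attraction_field([((1, 1), (8, 1)), ((4, 3), (4, 9))], H=10, W=10)
ys, xs = np.mgrid[0:10, 0:10]
v = np.stack([xs, ys], axis=-1) + afm                      # v(p) = p + a(p), eq. (5)
v_lattice = np.floor(v + 0.5).astype(int)                  # eq. (6): proposal pixels
```

The non-empty entries of the resulting line proposal map cluster along the two input segments, which is what the greedy grouping of the squeeze module then traverses.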
Verifying the Duality and its Scale Invariance We test the proposed attraction field representation on the WireFrame dataset [1]. We first compute the attraction field map for each annotated line segment map and then compute the estimated line segment map using the squeeze module. We run the test across multiple scales, ranging from 0.5 to 2.0 with step size 0.1. We evaluate the estimated line segment maps by measuring the precision and recall following the protocol provided with the dataset. Figure 3 shows the precision-recall curves. The average precision and recall rates are above 0.99 and 0.93 respectively, thus verifying the duality between line segment maps and the corresponding region-partition based attraction field maps, as well as the scale invariance of this duality. So, the problem of LSD can be posed as the region coloring problem almost without hurting performance. In the region coloring formulation, our goal is to learn ConvNets to infer the attraction field maps for input images. The attraction field representation eliminates the local ambiguity of traditional gradient-magnitude based line heat maps, and predicting the attraction field in learning gets rid of the imbalance problem of line vs. non-line classification. Robust Line Segment Detector In this section, we present details of learning ConvNets for robust LSD. ConvNets are used to predict AFMs from raw input images under the image-to-image transformation framework, and thus we adopt encoder-decoder network architectures. Data Processing Denote by D = {(I_i, L_i); i = 1, · · · , N} the provided training dataset, consisting of N pairs of raw images and annotated line segment maps. We first compute the AFM for each training image. Then, let D = {(I_i, a_i); i = 1, · · · , N} be the dual training dataset. To make the AFMs insensitive to the sizes of raw images, we adopt a simple normalization scheme. For an AFM a with spatial dimensions W × H, the size-normalization is done by a_x := a_x/W, a_y := a_y/H, where a_x and a_y are the components of a along the x and y axes, respectively. However, the size-normalization will make the values in a small and thus numerically unstable in training. We apply a point-wise invertible value-stretching transformation to the size-normalized AFM, z′ := S(z) = −sign(z) · log(|z| + ε), where ε = 1e−6 to avoid log(0). The inverse function S⁻¹(·) is defined by z := S⁻¹(z′) = sign(z′) · e^{−|z′|}. For notational simplicity, denote by R(·) the composite reverse function, and we still denote by D = {(I_i, a_i); i = 1, · · · , N} the final training dataset. Inference Denote by f_Θ(·) a ConvNet with the parameters collected in Θ. As illustrated in Figure 1(b), for an input image I_Λ, our robust LSD is defined by â = f_Θ(I_Λ), (11) L̂ = Squeeze(R(â)), (12) where â is the predicted AFM for the input image (the size-normalized and value-stretched one), Squeeze(·) the squeeze module, and L̂ the inferred line segment map.
Network Architectures We utilize two network architectures to realize f_Θ(·): one is U-Net [16], and the other is a modified U-Net, called a-trous Residual U-Net, which uses the ASPP module proposed in DeepLab v3+ [31] and skip connections as done in ResNet [32]. Table 1 shows the configurations of the two architectures. The network consists of 5 encoder and 4 decoder stages, indexed by c1, . . . , c5 and d1, . . . , d4 respectively. • For U-Net, the double conv operator that contains two convolution layers is applied and denoted as {·}. The {·}* operator of the d_i stage upscales the output feature map of its previous stage and then concatenates it with the feature map of the c_i stage before applying the double conv operator. • For the a-trous Residual U-Net, we replace the double conv operator with the Residual block, denoted as [·]. Different from ResNet, we use plain convolution layers with 3 × 3 kernel size and stride 1. Similar to {·}*, the operator [·]* also takes input from two sources and upscales the feature map of the first source. The first layer of [·]* contains two parallel convolution operators to reduce the depth of the feature maps, whose outputs are then concatenated for the subsequent calculations. In the stage d_4, we apply 4 ASPP operators with output channel size 256 and dilation rates 1, 6, 12, 18, and then concatenate their outputs. The output stage uses a convolution operator with 1 × 1 kernel size and stride 1, without batch normalization [33] and ReLU [34], for the attraction field map prediction. Training We follow the standard deep learning protocol to estimate the parameters Θ. Loss function. We adopt the l_1 loss function in training: ℓ(â, a) = Σ_{(x,y)∈Λ} ||a(x, y) − â(x, y)||_1. (13) Implementation details. We train the two networks (U-Net and a-trous Residual U-Net) from scratch on the training set of the Wireframe dataset [1]. Similar to [1], we follow the standard data augmentation strategy to enrich the training samples with image-domain operations including mirroring and flipping upside-down. The stochastic gradient descent (SGD) optimizer with momentum 0.9 and an initial learning rate of 0.01 is applied for network optimization. We train these networks for 200 epochs, and the learning rate is decayed by a factor of 0.1 after every 50 epochs. In the training phase, we resize the images to 320 × 320 and then generate the offset maps from the resized line segment annotations to form the mini-batches. As discussed in Section 3, the rescaling step with a reasonable factor will not affect the results. The mini-batch sizes for the two networks are 16 and 4, respectively, due to GPU memory. In testing, a test image is also resized to 320 × 320 as input to the network. Then, we use the squeeze module to convert the attraction field map to line segments. Since the line segments are insensitive to scale, we can directly resize them to the original image size without loss of accuracy. The squeeze module is implemented in C++ on CPU.
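A hedged PyTorch sketch of the training setup just described follows; the tiny stand-in network and tensor shapes are our own assumptions (the paper uses U-Net / a-trous Residual U-Net), but the loss and optimizer settings mirror the text.

```python
import torch
import torch.nn as nn

net = nn.Sequential(                      # stand-in for the encoder-decoder nets
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 2, 1),                  # 2 output channels: a_x, a_y
)
opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=50, gamma=0.1)

l1 = nn.L1Loss(reduction="sum")           # eq. (13): sum of |a - a_hat| over the lattice

def train_step(images, afm_gt):
    # images: (B, 3, 320, 320); afm_gt: (B, 2, 320, 320) size-normalized,
    # value-stretched AFMs; call sched.step() once per epoch
    opt.zero_grad()
    loss = l1(net(images), afm_gt)
    loss.backward()
    opt.step()
    return loss.item()
```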
Experiments In this section, we evaluate the proposed line segment detector and compare it with existing state-of-the-art line segment detectors [1,10,12,18]. As shown below, our proposed line segment detector outperforms these existing methods on the WireFrame dataset [1] and the YorkUrban dataset [2]. Datasets and Evaluation Metrics We follow the evaluation protocol of the deep wireframe parser [1] for comparison. Since we train on the Wireframe dataset [1], it is necessary to evaluate our proposed method on its testing set, which includes 462 images of man-made environments (especially indoor scenes). To validate generalization ability, we also evaluate our proposed approach on the YorkUrban Line Segment Dataset [2]. After that, we also compare our proposed line segment detector on images fetched from the Internet. All methods are evaluated quantitatively by precision and recall as described in [1,26]. The precision rate indicates the proportion of positive detections among all detected line segments, whereas recall reflects the fraction of detected line segments among all line segments in the scene. The detected and ground-truth line segments are digitized to the image domain, and we define "positive detection" pixel-wise: line segment pixels within 0.01 of the image diagonal are regarded as positive. Given the precision (P) and recall (R), we compare the performance of algorithms with the F-measure F = 2·P·R/(P + R).
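For reference, the F-measure computation amounts to a few lines; the benchmark's own evaluation code additionally handles the digitization and matching steps, which this helper does not attempt.

```python
def f_measure(tp: int, n_detected: int, n_gt: int) -> float:
    # tp: matched (positive) line segment pixels; the two denominators are
    # the detected and ground-truth pixel counts, respectively
    precision = tp / n_detected if n_detected else 0.0
    recall = tp / n_gt if n_gt else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f_measure(tp=800, n_detected=1000, n_gt=1100))  # ~0.762
```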
Comparisons for Line Segment Detection We compare our proposed method with the Deep Wireframe Parser [1], Linelet [12], the Markov Chain Marginal Line Segment Detector (MCMLSD) [18], and the Line Segment Detector (LSD) [10]. The source code of the compared methods was obtained from the links provided by the authors. Notably, the authors of the Deep Wireframe Parser do not provide a pre-trained model for line segment detection, so we reproduced their results ourselves. Threshold Configuration In our proposed method, we finally use the aspect ratio to filter out false detections. Here, we vary the aspect-ratio threshold in the range (0, 1] with a fixed step size ∆τ. For the Deep Wireframe Parser [1], both the threshold for the junction localization confidence and the orientation confidence of junction branches are fixed to 0.5. Then, we use the author-recommended threshold array [2,6,10,20,30,50,80,100,150,200,250,255] to binarize the line heat map and detect line segments. The precision-recall curves and F-measures are reported in Figure 4 and Table 2. Without bells and whistles, our proposed method outperforms all of these approaches on the Wireframe and YorkUrban datasets by a significant margin, even with an 18-layer network. A deeper network architecture with the ASPP module further improves the F-measure. Since the YorkUrban dataset is aimed at Manhattan frame estimation, some line segments in its images are not labeled, which causes the F-measure of all methods on this dataset to decrease. Speed We evaluate the computational cost of the above-mentioned approaches on the Wireframe dataset. We process all 462 frames, including image reading and result writing, and report the average time per frame, since the test images differ in size. As reported in Table 2, our proposed method detects line segments fast (outperforming all methods except LSD) while obtaining the best performance. All experiments are performed on a PC workstation equipped with an Intel Xeon E5-2620 2.10 GHz CPU and 4 NVIDIA Titan X GPU devices. Only one GPU is used, and the CPU programs are executed in a single thread. Benefiting from the simplicity of the original U-Net, our method detects line segments fast. The deep wireframe parser [1] spends much time on junction and line map fusion. On the other hand, benefiting from our novel formulation, we can resize the input images to 320 × 320 and then transform the output line segments back to the original scales, which further reduces the computational cost. Visualization Further, we visualize the line segments detected by different methods on the Wireframe dataset (see Figure 5), the YorkUrban dataset (see Figure 6), and images fetched from the Internet (see Figure 7). Since the images fetched from the Internet have no ground-truth annotations, we display the input images as a reference for comparison. (Figure 6 caption: results of line segment detection on the YorkUrban [2] dataset with LSD [10], MCMLSD [18], Linelet [12], Deep Wireframe Parser [1], and ours with the a-trous Residual U-Net, shown from left to right; the ground truths are listed in the last column as reference.) Observing these figures, it is easy to find that the Deep Wireframe Parser [1] detects more complete line segments than the earlier methods; however, our proposed approach obtains even better results in terms of completeness. On the other hand, the junction-driven approach induces some uncertainty in the detection: the orientation of line segments estimated by the junction branches is not accurate, which affects the orientation of the resulting line segments, and some junctions are misconnected, yielding false detections. In contrast, our proposed method gets rid of junction detection and directly detects line segments from images. (Figure 7 caption: some results of line segment detection on images fetched from the Internet with LSD [10], MCMLSD [18], Linelet [12], Deep Wireframe Parser [1], and ours with the a-trous Residual U-Net, shown from left to right; the input images are listed in the last column as reference.) Compared with the remaining approaches [10,12,18], the deep learning based methods (including ours) utilize global information to obtain complete results in low-contrast regions while suppressing false detections in edge-like texture regions. Due to the limitation of local features, the approaches [10,12,18] cannot exploit global information and still produce some false detections even with powerful validation steps. Although the overall F-measure of LSD is slightly better than that of Linelet, the visualization results of Linelet are cleaner. Conclusion In this paper, we proposed a method for building the duality between the region-partition based attraction field representation and the line segment representation. We then pose the problem of line segment detection (LSD) as the region coloring problem, which is addressed by learning convolutional neural networks. The proposed attraction field representation rigorously addresses several challenges in LSD such as local ambiguity and class imbalance. The region coloring formulation of LSD harnesses the best practices developed in ConvNet based semantic segmentation methods, such as the encoder-decoder architecture and the a-trous convolution. In experiments, our method is tested on two widely used LSD benchmarks, the WireFrame dataset [1] and the YorkUrban dataset [2], with state-of-the-art performance obtained at a speed of 6.6 ∼ 10.4 FPS.
4,587
1812.02122
2902517736
This paper presents a region-partition based attraction field dual representation for line segment maps, and thus poses the problem of line segment detection (LSD) as the region coloring problem. The latter is then addressed by learning deep convolutional neural networks (ConvNets) for accuracy, robustness and efficiency. For a 2D line segment map, our dual representation consists of three components: (i) A region-partition map in which every pixel is assigned to one and only one line segment; (ii) An attraction field map in which every pixel in a partition region is encoded by its 2D projection vector w.r.t. the associated line segment; and (iii) A squeeze module which squashes the attraction field to a line segment map that almost perfectly recovers the input one. By leveraging the duality, we learn ConvNets to compute the attraction field maps for raw input images, followed by the squeeze module for LSD, in an end-to-end manner. Our method rigorously addresses several challenges in LSD such as local ambiguity and class imbalance. Our method also harnesses the best practices developed in ConvNets based semantic segmentation methods such as the encoder-decoder architecture and the a-trous convolution. In experiments, our method is tested on the WireFrame dataset and the YorkUrban dataset with state-of-the-art performance obtained. In particular, we advance the performance by 4.5 percent on the WireFrame dataset. Our method is also fast with 6.6∼10.4 FPS, outperforming most existing line segment detectors.
For a long time, hand-crafted low-level features (especially image gradients) were heavily used for line segment detection. These approaches can be divided into edge map based approaches @cite_15 @cite_16 @cite_34 @cite_0 @cite_6 @cite_8 and perceptual grouping approaches @cite_21 @cite_12 @cite_26 . The edge map based approaches use visual features to estimate an edge map and subsequently apply the Hough transform @cite_9 to globally search for line configurations, which are then cut using thresholds. In contrast to the edge map based approaches, the grouping methods directly use image gradients as local geometry cues to group pixels into line segment candidates and filter out the false positives @cite_12 @cite_26 .
{ "abstract": [ "", "", "Abstract The Hough transform is a method for detecting curves by exploiting the duality between points on a curve and parameters of that curve. The initial work showed how to detect both analytic curves (1,2) and non-analytic curves, (3) but these methods were restricted to binary edge images. This work was generalized to the detection of some analytic curves in grey level images, specifically lines, (4) circles (5) and parabolas. (6) The line detection case is the best known of these and has been ingeniously exploited in several applications. (7,8,9) We show how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space. Such a mapping can be exploited to detect instances of that particular shape in an image. Furthermore, variations in the shape such as rotations, scale changes or figure ground reversals correspond to straightforward transformations of this mapping. However, the most remarkable property is that such mappings can be composed to build mappings for complex shapes from the mappings of simpler component shapes. This makes the generalized Hough transform a kind of universal transform which can be used to find arbitrarily complex shapes.", "This paper presents a new approach to the extraction of straight lines in intensity images. Pixels are grouped into line-support regions of similar gradient orientation, and then the structure of the associated intensity surface is used to determine the location and properties of the edge. The resulting regions and extracted edge parameters form a low-level representation of the intensity variations in the image that can be used for a variety of purposes. The algorithm appears to be more effective than previous techniques for two key reasons: 1) the gradient orientation (rather than gradient magnitude) is used as the initial organizing criterion prior to the extraction of straight lines, and 2) the global context of the intensity variations associated with a straight line is determined prior to any local decisions about participating edge elements.", "Voting in each column around an initial peak is considered to be a random variable.The optimal ? is determined by fitting and minimizing a 2nd-order curve.The optimal ? is determined by fitting and interpolating a sine curve.We calculate voting boundaries instead of searching for non-zero voting cells.The endpoint coordinates are determined by fitting instead of by solving equations. Line segment detection is a fundamental procedure in computer vision, pattern recognition, or image analysis applications. This paper proposes a statistical method based on the Hough transform for line segment detection by considering quantization error, image noise, pixel disturbance, and peak spreading, also taking the choice of the coordinate origin into account.A random variable is defined in each column in a peak region. Statistical means and statistical variances are calculated; the statistical non-zero cells are analyzed and computed. The normal angle is determined by minimizing the function which fits the statistical variances; the normal distance is calculated by interpolating the function which fits the statistical means. 
Endpoint coordinates of a detected line segment are determined by fitting a sine curve (rather than searching for the first and last non-zero voting cells, and solving equations containing coordinates of such cells).Experimental results on simulated data and real world images validate the performance of the proposed method for line segment detection.", "The Hough transform is a popular technique used in the field of image processing and computer vision. With a Hough transform technique, not only the normal angle and distance of a line but also the line-segment’s length and midpoint (centroid) can be extracted by analysing the voting distribution around a peak in the Hough space. In this paper, a method based on minimum-entropy analysis is proposed to extract the set of parameters of a line segment. In each column around a peak in Hough space, the voting values specify probabilistic distributions. The corresponding entropies and statistical means are computed. The line-segment’s normal angle and length are simultaneously computed by fitting a quadratic polynomial curve to the voting entropies. The line-segment’s midpoint and normal distance are computed by fitting and interpolating a linear curve to the voting means. The proposed method is tested on simulated images for detection accuracy by providing comparative results. Experimental results on real-world images verify the method as well. The proposed method for line-segment detection is both accurate and robust in the presence of quantization error, background noise, or pixel disturbances.", "This paper proposes a novel closed-form solution to complete line-segment extraction. Given a voting angle in image space, the voting distribution is analyzed and two functional relationships are deduced. Regarding the corresponding column in Hough space, voting along the distance axis is considered as being a random variable, and voting values in cells are considered as forming a probability distribution. Statistical characteristics of this distribution are used to fit a quadratic polynomial curve and a linear curve. Direction, length, and width of a line segment are simultaneously computed in a closed form based on coefficients of fitted quadratic polynomial curves. The midpoint of a line segment is determined based on the fitted linear curve. The method is tested on simulated and real-world images; results show that the proposed closed-form solution is feasible in the presence of quantization errors or image noise. HighlightsWe proposed a method for obtaining the complete set of line-segment parameters.The direction, length and width of a line-segment are extracted simul- taneously in a closed form.We provided a complete theoretical derivation of voting variance with respect to voting angle.The midpoint of a line-segment is determined by the coefficients of the fitted linear curve.Parallel, crossing, and aligned line-segments are discussed by analysing events in image space and Hough space.", "Hough transform (HT) is a well-known technique for extracting lines. However, it is difficult for most existing HT methods to extract line segments robustly from complicated images, mainly because the influence from various objects other than line segments are not taken into account. This paper proposes an accurate and robust evaluator that dynamically removes contributions of backgrounds and analyzes voting patterns around peaks in the accumulator space. 
In the experiments, four peak detection algorithms are tested against seven images completely automatically. Results show that our method is superior to existing methods in terms of accuracy and robustness while there are no clear differences in execution time. The proposed evaluator detects peaks after the HT and hence it can be applied to any HT that keeps the basic characteristics of the voting process.", "Abstract The Hough transform (HT) is commonly used in machine vision applications for detecting discontinuous patterns in noisy images. The process of using the HT to detect lines in an image involves the computation of the HT for the entire image, accumulating votes in an accumulator array and searching the array for peaks which hold information of potential lines present in the input image. The peaks provide only the length of the normal to the line and the angle that the normal makes with the x -axis. They do not provide any information regarding the length, position or end points of the line segments. However, the butterfly shaped [1] spread of votes in the accumulator array, generated by the process of peak formation, holds vital information like the length and position of the input line segment. Some authors [2] have used this property to develop an algorithm to determine the coordinates of the end points, the length, and the normal parameters of straight lines. A limitation of this method, making it unsuitable for application to a real machine vision problem, is that it would yield erroneous results if applied to an image consisting of anything more than a single line segment. Moreover, the precision of this algorithm is dependent on the sharpness of the peak. In this paper, new techniques which address the above mentioned shortcomings have been described. This paper details the method developed to provide complete line segment description for an image consisting of multiple line segments. In addition, the developed techniques are more robust and accurate than the previously proposed methods as they do not depend upon the sharpness of the peak.", "We propose a linear-time line segment detector that gives accurate results, a controlled number of false detections, and requires no parameter tuning. This algorithm is tested and compared to state-of-the-art algorithms on a wide set of natural images." ], "cite_N": [ "@cite_26", "@cite_8", "@cite_9", "@cite_21", "@cite_6", "@cite_34", "@cite_0", "@cite_15", "@cite_16", "@cite_12" ], "mid": [ "", "", "22745672", "2145158371", "418579443", "2057710539", "2193950854", "1964005581", "1998447437", "2160072137" ] }
Learning Attraction Field Representation for Robust Line Segment Detection
Line segment detection (LSD) is an important yet challenging low-level task in computer vision. The resulting line segment maps provide compact structural information that facilitates many higher-level vision tasks such as 3D reconstruction [2,3], image partition [4], stereo matching [5], scene parsing [6,7], camera pose estimation [8], and image stitching [9]. LSD usually consists of two steps: line heat map generation and line segment model fitting. The former can be computed either simply from the gradient magnitude map (mainly used before the recent resurgence of deep learning) [10][11][12], or by a learned convolutional neural network (ConvNet) [13,14] in state-of-the-art methods [1]. The latter needs to address the challenging issue of handling unknown multi-scale discretization nuisance factors (e.g., the classic zig-zag artifacts of line segments in digital images) when aligning pixels or linelets to form line segments in the line heat map. Different schemes have been proposed, e.g., the ε-meaningful alignment method proposed in [10] and the junction [15] guided alignment method proposed in [1]. The main drawbacks of existing two-stage methods are twofold: they lack elegant solutions to the local ambiguity and/or class imbalance in line heat map generation, and they require extra carefully designed heuristics or supervisedly learned contextual information to infer line segments from the line heat map.

In this paper, we focus on a learning-based LSD framework and propose a single-stage method which rigorously addresses the drawbacks of existing LSD approaches. Our method is motivated by two observations:

• The duality between the region representation and the boundary contour representation of objects or surfaces, which is a well-known fact in computer vision.

• The recent remarkable progress in image semantic segmentation by deep ConvNet based methods such as U-Net [16] and DeepLab V3+ [17].

So, the intuitive idea of this paper is that if we can bridge line segment maps and their dual region representations, we can pose the problem of LSD as the problem of region coloring, and thus open the door to leveraging the best practices developed in state-of-the-art deep ConvNet based image semantic segmentation methods to improve performance for LSD. By dual region representations, we mean representations capable of recovering the input line segment maps in a nearly perfect way via a simple algorithm. We present an efficient and straightforward method for computing the dual region representation. By re-formulating LSD as the equivalent region coloring problem, we address the aforementioned challenges of handling local ambiguity and class imbalance in a principled way.

Method Overview

Figure 1 illustrates the proposed method. Given a 2D line segment map, we represent each line segment by its geometric model using the two end-points. In computing the dual region representation, there are three components (detailed in Section 3).

• A region-partition map. It is computed by assigning every pixel to one and only one line segment based on a proposed point-to-line-segment distance function. The pixels associated with one line segment form a region. All regions represent a partition of the image lattice (i.e., mutually exclusive and the union occupies the entire image lattice).

• An attraction field map. Each pixel in a partition region has one and only one corresponding projection point on the geometric line segment (but the reverse is often a one-to-many mapping).
In the attraction field map, every pixel in a partition region is then represented by its attraction/projection vector between the pixel and its projection point on the geometric line segment.

• A light-weight squeeze module. It follows the attraction field to squash partition regions in an attraction field map into line segments that almost perfectly recover the input ones, thus bridging the duality between region-partition based attraction field maps and line segment maps.

The proposed method can also be viewed as an intuitive expansion-and-contraction operation between 1D line segments and 2D regions in a simple projection vector field: the region-partition map generation jointly expands all line segments into partition regions, and the squeeze module collapses the regions back into line segments.

With the duality between a line segment map and the corresponding region-partition based attraction field map, we first convert all line segment maps in the training dataset to their attraction field maps. Then, we learn ConvNets to predict the attraction field maps from raw input images in an end-to-end way. We utilize U-Net [16] and a modified network based on DeepLab V3+ [17] in our experiments. After the attraction field map is computed, we use the squeeze module to compute its line segment map.

In experiments, the proposed method is tested on the WireFrame dataset [1] and the YorkUrban dataset [2], and obtains state-of-the-art performance compared with [1,10,12,18]. In particular, we improve the performance by 4.5% on the WireFrame dataset. Our method is also fast, running at 6.6 ∼ 10.4 FPS and outperforming most line segment detectors.

Detection based on Hand-crafted Features

For a long time, hand-crafted low-level features (especially image gradients) were heavily used for line segment detection. These approaches can be divided into edge-map based approaches [18,[20][21][22][23][24] and perceptual grouping approaches [10,12,25]. The edge-map based approaches treat the visual features as discriminative features for edge map estimation and subsequently apply the Hough transform [19] to globally search for line configurations, which are then cut using thresholds. In contrast to the edge-map based approaches, the grouping methods directly use the image gradients as local geometric cues to group pixels into line segment candidates and filter out the false positives [10,12].

In fact, such features can only characterize the local response of the image appearance. For edge detection, a local response without global context cannot avoid false detections. On the other hand, both the magnitude and orientation of image gradients are easily affected by external imaging conditions (e.g., noise and illumination). Therefore, the local nature of these features limits the ability to extract line segments from images robustly. In this paper, we break the limitation of locally estimated features and instead learn deep features that hierarchically represent the information of images from low-level cues to high-level semantics.

Deep Edge and Line Segment Detection

Recently, HED [13] opened up a new era for edge perception from images by using ConvNets. The learned multi-scale and multi-level features dramatically addressed the problem of false detections in edge-like texture regions and approached human-level performance on the BSDS500 dataset [26].
Following this breakthrough, a large number of deep learning based edge detection approaches have been proposed [14,17,[27][28][29][30]. From the perspective of binary classification, edge detection has been solved to some extent. It is natural to upgrade traditional edge-map based line segment detection by instead using an edge map estimated by ConvNets. However, the edge maps estimated by ConvNets are usually over-smoothed, which leads to local ambiguities for accurate localization. Further, the edge maps do not contain enough geometric information for the detection. Given the development of deep learning, it is more reasonable to propose an end-to-end line segment detector than to merely apply the advances of deep edge detection.

Most recently, Huang et al. [1] have taken an important step towards this goal by proposing a large-scale dataset with high-quality line segment annotations and approaching the problem of line segment detection as two parallel tasks, i.e., edge map detection and junction detection. As the final step of the detection, the resulting edge map and junctions are fused to produce line segments. To the best of our knowledge, this is the first attempt to develop a deep learning based line segment detector. However, due to the sophisticated relation between the edge map and the junctions, the problem still remains unsolved. Benefiting from our proposed formulation, we can directly learn the line segments from attraction field maps, which can be easily obtained from the line segment annotations without junction cues.

Our Contributions

The proposed method makes the following main contributions to robust line segment detection.

• A novel dual representation is proposed by bridging line segment maps and region-partition-based attraction field maps. To our knowledge, it is the first work that utilizes this simple yet effective representation in LSD.

• With the proposed dual representation, the LSD problem is re-formulated as a region coloring problem, thus opening the door to leveraging state-of-the-art semantic segmentation methods in addressing the challenges of local ambiguity and class imbalance in existing LSD approaches in a principled way.

• The proposed method obtains state-of-the-art performance on two widely used LSD benchmarks, the WireFrame dataset (with a significant 4.5% improvement) and the YorkUrban dataset.

The Attraction Field Representation

In this section, we present details of the proposed region-partition representation for LSD.

The Region-Partition Map

Let $\Lambda$ be an image lattice (e.g., $800 \times 600$). A line segment is denoted by $l_i = (\mathbf{x}_i^s, \mathbf{x}_i^e)$ with the two end-points being $\mathbf{x}_i^s$ and $\mathbf{x}_i^e$ (non-negative real-valued positions, since sub-pixel precision is used in annotating line segments), respectively. The set of line segments in a 2D line segment map is denoted by $L = \{l_1, \cdots, l_n\}$. For simplicity, we also denote the line segment map by $L$. Figure 2 illustrates a line segment map with 3 line segments in a $10 \times 10$ image lattice.

Computing the region-partition map for $L$ amounts to assigning every pixel in the lattice to one and only one of the $n$ line segments. To that end, we utilize the point-to-line-segment distance function. Consider a pixel $p \in \Lambda$ and a line segment $l_i = (\mathbf{x}_i^s, \mathbf{x}_i^e) \in L$: we first project the pixel $p$ onto the straight line going through $l_i$ in the continuous geometry space. If the projection point is not on the line segment, we use the closest end-point of the line segment as the projection point.
Then, we compute the Euclidean distance between the pixel and the projection point. Formally, we define the distance between $p$ and $l_i$ by

$d(p, l_i) = \min_{t \in [0,1]} \|\mathbf{x}_i^s + t \cdot (\mathbf{x}_i^e - \mathbf{x}_i^s) - p\|_2^2, \quad t_p^* = \arg\min_t d(p, l_i),$  (1)

where the projection point is the original point-to-line projection point if $t_p^* \in (0, 1)$, and the closest end-point if $t_p^* = 0$ or $1$. So, the region in the image lattice for a line segment $l_i$ is defined by

$R_i = \{p \mid p \in \Lambda;\ d(p, l_i) < d(p, l_j),\ \forall j \neq i,\ l_j \in L\}.$  (2)

It is straightforward to see that $R_i \cap R_j = \emptyset$ and $\cup_{i=1}^n R_i = \Lambda$, i.e., all $R_i$'s form a partition of the image lattice. Figure 2(a) illustrates the partition region generation for a line segment in the toy example (Figure 2). Denote by $R = \{R_1, \cdots, R_n\}$ the region-partition map for a line segment map $L$.

Computing the Attraction Field Map

Consider the partition region $R_i$ associated with a line segment $l_i$. For each pixel $p \in R_i$, its projection point $p'$ on $l_i$ is defined by

$p' = \mathbf{x}_i^s + t_p^* \cdot (\mathbf{x}_i^e - \mathbf{x}_i^s).$  (3)

We define the 2D attraction or projection vector for a pixel $p$ as

$\mathbf{a}(p) = p' - p,$  (4)

where the attraction vector is perpendicular to the line segment if $t_p^* \in (0, 1)$ (see Figure 2(b)). Figure 1 shows examples of the x- and y-components of an attraction field map (AFM). Denote by $A = \{\mathbf{a}(p) \mid p \in \Lambda\}$ the attraction field map for a line segment map $L$.

The Squeeze Module

Given an attraction field map $A$, we first reverse it by computing the real-valued projection point for each pixel $p$ in the lattice,

$v(p) = p + \mathbf{a}(p),$  (5)

and its corresponding discretized point in the image lattice,

$v_\Lambda(p) = \lfloor v(p) + 0.5 \rfloor,$  (6)

where $\lfloor \cdot \rfloor$ represents the floor operation, and $v_\Lambda(p) \in \Lambda$. Then, we compute a line proposal map in which each pixel $q \in \Lambda$ collects the attraction field vectors whose discretized projection points are $q$. The candidate set of attraction field vectors collected by a pixel $q$ is then defined by

$C(q) = \{\mathbf{a}(p) \mid p \in \Lambda,\ v_\Lambda(p) = q\},$  (7)

where the $C(q)$'s are usually non-empty only for a sparse set of pixels $q$, which correspond to points on the line segments. An example of the line proposal map is shown in Figure 2(c), which projects the pixels of the support region of a line segment to pixels near the line segment.

With the line proposal map, our squeeze module utilizes an iterative and greedy grouping algorithm to fit line segments, similar in spirit to the region growing algorithm used in [10].

• Given the current set of active pixels, each of which has a non-empty candidate set of attraction field vectors, we randomly select a pixel $q$ and one of its attraction field vectors $\mathbf{a}(p) \in C(q)$. The tangent direction of the selected attraction field vector $\mathbf{a}(p)$ is used as the initial direction of the line segment passing the pixel $q$.

• Then, we search the local observation window centered at $q$ (e.g., a $3 \times 3$ window is used in this paper) to find the attraction field vectors which are aligned with $\mathbf{a}(p)$ with angular distance less than a threshold $\tau$ (e.g., $\tau = 10°$ used in this paper).

– If the search fails, we discard $\mathbf{a}(p)$ from $C(q)$, and further discard the pixel $q$ if $C(q)$ becomes empty.

– Otherwise, we grow $q$ into a set and update its direction by averaging the aligned attraction vectors. The aligned attraction vectors are marked as used (and thus inactive for the next round of search). For the two end-points of the set, we recursively apply the greedy search algorithm to grow the line segment.
• Once terminated, we obtain a candidate line segment $l_q = (\mathbf{x}_q^s, \mathbf{x}_q^e)$ with a support set of real-valued projection points. We fit the minimum outer rectangle using the support set. We verify the candidate line segment by checking the aspect ratio between the width and length of the approximating rectangle against a predefined threshold, to ensure the approximating rectangle is "thin enough". If the check fails, we mark the pixel $q$ inactive and release the support set to be active again.

Verifying the Duality and its Scale Invariance

We test the proposed attraction field representation on the WireFrame dataset [1]. We first compute the attraction field map for each annotated line segment map and then compute the estimated line segment map using the squeeze module. We run the test across multiple scales, ranging from 0.5 to 2.0 with step size 0.1. We evaluate the estimated line segment maps by measuring the precision and recall following the protocol provided with the dataset. Figure 3 shows the precision-recall curves. The average precision and recall rates are above 0.99 and 0.93 respectively, thus verifying the duality between line segment maps and the corresponding region-partition based attraction field maps, as well as the scale invariance of the duality. So, the problem of LSD can be posed as a region coloring problem almost without hurting the performance. In the region coloring formulation, our goal is to learn ConvNets to infer the attraction field maps for input images. The attraction field representation eliminates the local ambiguity of traditional gradient magnitude based line heat maps, and predicting the attraction field during learning gets rid of the imbalance problem of line vs. non-line classification.

Robust Line Segment Detector

In this section, we present details of learning ConvNets for robust LSD. ConvNets are used to predict AFMs from raw input images under the image-to-image transformation framework, and thus we adopt encoder-decoder network architectures.

Data Processing

Denote by $D = \{(I_i, L_i);\ i = 1, \cdots, N\}$ the provided training dataset consisting of $N$ pairs of raw images and annotated line segment maps. We first compute the AFM for each training image, yielding the dual training dataset $D = \{(I_i, \mathbf{a}_i);\ i = 1, \cdots, N\}$. To make the AFMs insensitive to the sizes of the raw images, we adopt a simple normalization scheme. For an AFM $\mathbf{a}$ with spatial dimensions $W \times H$, the size-normalization is done by

$\mathbf{a}_x := \mathbf{a}_x / W, \quad \mathbf{a}_y := \mathbf{a}_y / H,$

where $\mathbf{a}_x$ and $\mathbf{a}_y$ are the components of $\mathbf{a}$ along the x and y axes, respectively. However, the size-normalization makes the values in $\mathbf{a}$ small and thus numerically unstable in training. We apply a point-wise invertible value-stretching transformation to the size-normalized AFM,

$z' := S(z) = -\mathrm{sign}(z) \cdot \log(|z| + \varepsilon),$

where $\varepsilon = 10^{-6}$ to avoid $\log(0)$. The inverse function $S^{-1}(\cdot)$ is defined by

$z := S^{-1}(z') = \mathrm{sign}(z')\, e^{-|z'|}.$

For notational simplicity, denote by $R(\cdot)$ the composite reverse function, and we still denote by $D = \{(I_i, \mathbf{a}_i);\ i = 1, \cdots, N\}$ the final training dataset.

Inference

Denote by $f_\Theta(\cdot)$ a ConvNet with the parameters collected in $\Theta$. As illustrated in Figure 1(b), for an input image $I_\Lambda$, our robust LSD is defined by

$\hat{\mathbf{a}} = f_\Theta(I_\Lambda),$  (11)

$\hat{L} = \mathrm{Squeeze}(R(\hat{\mathbf{a}})),$  (12)

where $\hat{\mathbf{a}}$ is the predicted AFM for the input image (the size-normalized and value-stretched one), $\mathrm{Squeeze}(\cdot)$ is the squeeze module, and $\hat{L}$ is the inferred line segment map.
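To make the construction and its reversal concrete, the following is a minimal NumPy sketch, written for this exposition rather than taken from the authors' code, of computing the attraction field map of Eqs. (1)-(4) and the voting step of the squeeze module in Eqs. (5)-(7); all function and variable names are illustrative.

```python
import numpy as np

def attraction_field(segments, H, W):
    """Region partition (Eq. 2) and attraction field (Eqs. 1, 3, 4) for an
    (n, 2, 2) array of segments, each given by its two end-points (x, y)."""
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(np.float64)
    best_d = np.full(len(pix), np.inf)
    afm = np.zeros((len(pix), 2))
    for xs_i, xe_i in segments:
        d = xe_i - xs_i
        # t*_p clamped to [0, 1]: off-segment projections fall back to the
        # closest end-point, as in Eq. (1)
        t = np.clip((pix - xs_i) @ d / max(d @ d, 1e-12), 0.0, 1.0)
        proj = xs_i + t[:, None] * d              # projection point p'
        dist = np.sum((proj - pix) ** 2, axis=1)
        closer = dist < best_d                    # nearest segment wins (Eq. 2)
        best_d[closer] = dist[closer]
        afm[closer] = (proj - pix)[closer]        # a(p) = p' - p (Eq. 4)
    return afm.reshape(H, W, 2)

def line_proposal_map(afm):
    """Reversal step (Eqs. 5-7): every pixel votes for the lattice point
    nearest to its projection point; returns the per-pixel vote counts."""
    H, W, _ = afm.shape
    ys, xs = np.mgrid[0:H, 0:W]
    vx = np.floor(xs + afm[..., 0] + 0.5).astype(int).clip(0, W - 1)
    vy = np.floor(ys + afm[..., 1] + 0.5).astype(int).clip(0, H - 1)
    votes = np.zeros((H, W), dtype=int)
    np.add.at(votes, (vy, vx), 1)
    return votes   # non-zero cells lie (almost) on the line segments
```

The greedy grouping described above then operates only on the sparse set of non-zero cells of this vote map, which is what makes the squeeze module light-weight.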
Network Architectures

We utilize two network architectures to realize $f_\Theta(\cdot)$: one is U-Net [16], and the other is a modified U-Net, called a-trous Residual U-Net, which uses the ASPP module proposed in DeepLab v3+ [31] and skip-connections as done in ResNet [32]. Table 1 shows the configurations of the two architectures. The network consists of 5 encoder and 4 decoder stages indexed by c1, ..., c5 and d1, ..., d4, respectively.

• For U-Net, the double conv operator, which contains two convolution layers, is applied and denoted as {·}. The {·}* operator of stage $d_i$ upscales the output feature map of the previous stage and then concatenates it with the feature map of stage $c_i$ before applying the double conv operator.

• For the a-trous Residual U-Net, we replace the double conv operator with the Residual block, denoted as [·]. Different from ResNet, we use plain convolution layers with $3 \times 3$ kernel size and stride 1. Similar to {·}*, the operator [·]* also takes input from two sources and upscales the feature map of the first input source. The first layer of [·]* contains two parallel convolution operators to reduce the depth of the feature maps, which are then concatenated for the subsequent calculations. In stage $d_4$, we apply 4 ASPP operators with output channel size 256 and dilation rates 1, 6, 12, and 18, and then concatenate their outputs. The output stage uses a convolution operator with $1 \times 1$ kernel size and stride 1, without batch normalization [33] or ReLU [34], for the attraction field map prediction.

Training

We follow the standard deep learning protocol to estimate the parameters $\Theta$.

Loss function. We adopt the $\ell_1$ loss function in training:

$\ell(\hat{\mathbf{a}}, \mathbf{a}) = \sum_{(x,y) \in \Lambda} \|\mathbf{a}(x, y) - \hat{\mathbf{a}}(x, y)\|_1.$  (13)

Implementation details. We train the two networks (U-Net and a-trous Residual U-Net) from scratch on the training set of the Wireframe dataset [1]. Similar to [1], we follow the standard data augmentation strategy to enrich the training samples with image domain operations, including mirroring and upside-down flipping. The stochastic gradient descent (SGD) optimizer with momentum 0.9 and initial learning rate 0.01 is applied for network optimization. We train these networks for 200 epochs, and the learning rate is decayed by a factor of 0.1 after every 50 epochs. In the training phase, we resize the images to $320 \times 320$ and then generate the offset maps from the resized line segment annotations to form the mini-batches. As discussed in Section 3, a rescaling step with a reasonable factor does not affect the results. The mini-batch sizes for the two networks are 16 and 4, respectively, due to GPU memory. In testing, a test image is also resized to $320 \times 320$ as input to the network. Then, we use the squeeze module to convert the attraction field map to line segments. Since the line segments are insensitive to scale, we can directly resize them to the original image size without loss of accuracy. The squeeze module is implemented in C++ on the CPU.

Experiments

In this section, we evaluate the proposed line segment detector and compare it with existing state-of-the-art line segment detectors [1,10,12,18]. As shown below, our proposed line segment detector outperforms these existing methods on the WireFrame dataset [1] and the YorkUrban dataset [2].

Datasets and Evaluation Metrics

We follow the evaluation protocol of the deep wireframe parser [1] to make the comparison.
Since we train on the Wireframe dataset [1], it is necessary to evaluate our proposed method on its testing set, which includes 462 images of man-made environments (especially indoor scenes). To validate the generalization ability, we also evaluate our proposed approach on the YorkUrban Line Segment Dataset [2]. After that, we also compare our proposed line segment detector on images fetched from the Internet.

All methods are evaluated quantitatively by precision and recall as described in [1,26]. The precision rate indicates the proportion of positive detections among all of the detected line segments, whereas recall reflects the fraction of detected line segments among all line segments in the scene. The detected and ground-truth line segments are digitized to the image domain, and we define "positive detection" pixel-wise. Line segment pixels within 0.01 of the image diagonal are regarded as positive. After obtaining the precision (P) and recall (R), we compare the performance of the algorithms with the F-measure $F = \frac{2 \cdot P \cdot R}{P + R}$.

Comparisons for Line Segment Detection

We compare our proposed method with the Deep Wireframe Parser [1], Linelet [12], the Markov Chain Marginal Line Segment Detector (MCMLSD) [18], and the Line Segment Detector (LSD) [10]. The source code of the compared methods is obtained from the links provided by the authors. Notably, the authors of the Deep Wireframe Parser do not provide a pre-trained model for line segment detection, so we reproduced their results ourselves.

Threshold Configuration

In our proposed method, we finally use the aspect ratio to filter out false detections. Here, we vary the threshold of the aspect ratio in the range (0, 1] with step size $\Delta\tau = 0.1$. For the deep wireframe parser [1], both the threshold for the junction localization confidence and the orientation confidence of junction branches are fixed to 0.5. Then, we use the author-recommended threshold array [2, 6, 10, 20, 30, 50, 80, 100, 150, 200, 250, 255] to binarize the line heat map and detect line segments.

The precision-recall curves and F-measures are reported in Figure 4 and Table 2. Without bells and whistles, our proposed method outperforms all of these approaches on the Wireframe and YorkUrban datasets by a significant margin, even with an 18-layer network. A deeper network architecture with the ASPP module further improves the F-measure. Since the YorkUrban dataset is aimed at Manhattan frame estimation, some line segments in its images are not labeled, which causes the F-measure of all methods on this dataset to decrease.

Speed

We evaluate the computational cost of the above-mentioned approaches on the Wireframe dataset. We run all 462 frames, including the image reading and result writing steps, and report the average time, because the sizes of the testing images are not equal. As reported in Table 2, our proposed method detects line segments fast (outperforming all methods except LSD) while getting the best detection performance. All experiments are performed on a PC workstation equipped with an Intel Xeon E5-2620 2.10 GHz CPU and 4 NVIDIA Titan X GPU devices. Only one GPU is used, and the CPU programs are executed in a single thread. Benefiting from the simplicity of the original U-Net, our method can detect line segments fast. The deep wireframe parser [1] spends much time on junction and line map fusion.
On the other hand, benefiting from our novel formulation, we can resize the input images to $320 \times 320$ and then transform the output line segments back to the original scales, which further reduces the computational cost.

Visualization

Further, we visualize the line segments detected by the different methods on the Wireframe dataset (see Figure 5), the YorkUrban dataset (see Figure 6), and images fetched from the Internet (see Figure 7). Since the images fetched from the Internet do not have ground-truth annotations, we display the input images as reference for comparison. The threshold configurations for visualization are as follows.

(Figure 6: results of line segment detection on the YorkUrban dataset [2] with the approaches LSD [10], MCMLSD [18], Linelet [12], Deep Wireframe Parser [1], and ours with the a-trous Residual U-Net, shown from left to right; the ground truths are listed in the last column as reference. Figure 7: results of line segment detection on images fetched from the Internet with the same approaches shown from left to right; the input images are listed in the last column as reference.)

By observing these figures, it is easy to find that the Deep Wireframe Parser [1] detects more complete line segments than the earlier methods; however, our proposed approach obtains even better results from the perspective of completeness. On the other hand, this junction-driven approach indeed induces some uncertainty in the detection: the orientation estimated by the junction branches is not accurate, which affects the orientation of the resulting line segments. Meanwhile, some junctions are misconnected, producing false detections. In contrast, our proposed method gets rid of junction detection and directly detects the line segments from images. Compared with the remaining approaches [10,12,18], the deep learning based methods (including ours) can utilize global information to obtain complete results in low-contrast regions while suppressing false detections in edge-like texture regions. Due to the limitation of local features, the approaches [10,12,18] cannot exploit global information and still produce some false detections even with powerful validation schemes. Although the overall F-measure of LSD is slightly better than Linelet, the visualization results of Linelet are cleaner.

Conclusion

In this paper, we proposed a method of building the duality between the region-partition based attraction field representation and the line segment representation. We then pose the problem of line segment detection (LSD) as a region coloring problem, which is addressed by learning convolutional neural networks. The proposed attraction field representation rigorously addresses several challenges in LSD such as local ambiguity and class imbalance. The region coloring formulation of LSD harnesses the best practices developed in ConvNet based semantic segmentation methods, such as the encoder-decoder architecture and the a-trous convolution. In experiments, our method is tested on two widely used LSD benchmarks, the WireFrame dataset [1] and the YorkUrban dataset [2], with state-of-the-art performance obtained at a speed of 6.6 ∼ 10.4 FPS.
The aim of tool path planning is to maximize efficiency against given precision criteria. In practice, the scallop height should be kept constant to avoid unnecessary cutting, while the tool path should be smooth enough to maintain a high feed rate. However, iso-scallop and smoothness often conflict with each other. Existing methods smooth iso-scallop paths one by one, which makes the final tool path far from globally optimal. This paper proposes a new framework for tool path optimization. It views a family of iso-level curves of a scalar function defined over the surface as the tool path, so that the desired tool path can be generated by finding the function that minimizes a certain energy functional, and different objectives can be considered simultaneously. We use the framework to plan globally optimal tool paths with respect to iso-scallop and smoothness. The energy functionals for planning iso-scallop, smooth, and optimal tool paths are respectively derived, and the path topology is studied too. Experimental results are given to show the effectiveness of the proposed methods.
The last decade has seen a great deal of literature on tool path planning for free-form surfaces, such as the iso-parametric method @cite_25 @cite_12 @cite_1, the iso-planar method @cite_19 @cite_9 @cite_18, the iso-scallop method @cite_23 @cite_32 @cite_13 @cite_17 @cite_3 @cite_8 @cite_21, the iso-phote method @cite_4, and the C-space method @cite_33, to name a few. Surveys of much more work on tool path planning research can be found in @cite_0 @cite_30. Since we aim at tool paths that are optimal with respect to iso-scallop and smoothness, we take special interest in the iso-scallop method, in which the height of the points on the scallop curves stays at a given value so that the tool path has no unnecessary cutting. Conventionally, constant scallop height is obtained by varying the offset magnitude along each path. A mathematical method for generating iso-scallop tool paths following such a strategy was first proposed in @cite_23. Afterwards, methods to improve the computational efficiency @cite_2 @cite_3 and accuracy @cite_16 @cite_17 @cite_21 were proposed. In 2007, Kim @cite_27 reformulated the iso-scallop tool path as geodesic parallel curves on the design surface by defining a new Riemannian metric.
Abstracts of the cited references (where available):

@cite_2: This paper presents an analytical method for planning an efficient tool-path in machining free-form surfaces on 3-axis milling machines. This new approach uses a non-constant offset of the previous tool-path, which guarantees the cutter moving in an unmachined area of the part surface and without redundant machining. The method comprises three steps: (1) the calculation of the tool-path interval, (2) the conversion from the path interval to the parametric interval, and (3) the synthesis of efficient tool-path planning.

@cite_4: This paper presents a novel approach for generating efficient tool paths in machining free-form surfaces. The concept of the iso-phote is used to facilitate tool-path generation. An iso-phote is defined as a region on a surface where the normal vector does not differ by more than a prescribed angle from a fixed reference vector. The boundary curves of the iso-phote, called iso-inclination curves, are numerically generated and serve as the initial master tool paths. These iso-inclination curves are then projected onto a 2D plane perpendicular to the fixed reference vector. 2D curve offsetting of the projected iso-inclination curve is then performed, and the resulting 2D offset curves are projected back to the 3D surface to form the final tool paths. The resulting tool paths can guarantee satisfaction of the machining tolerance requirements. A comparison study of this iso-phote based machining with conventional iso-parametric machining and iso-planar machining shows favorable results for the new approach.

@cite_23: A novel approach for the NC tool-path generation of free-form surfaces is presented. Traditionally, the distance between adjacent tool-paths in either the Euclidean space or the parametric space is kept constant. Instead, in this work, the scallop height is kept constant. This leads to a significant reduction in the size of the CL (cutter location) data, accompanied by a reduction in the machining time. This work focuses on zig-zag (meander) finishing using a ball-end milling cutter.

@cite_19: An algorithm for three-axis NC tool path generation on sculptured surfaces is presented. Non-constant parameter tool contact curves are defined on the part by intersecting parallel planes with the part model surface. Four essential elements of this algorithm are introduced: initial chordal approximation, true machining error calculation, direct gouge elimination, and non-constant parameter tool pass interval adjustment. A software implementation of this algorithm produces graphical output depicting the tool path superimposed over the part surface, and it outputs cutter location (CL) data for further post-processing. Several application examples are presented to demonstrate the capabilities of the algorithm. The results of this technique are compared to those generated by a commercially available computer-aided manufacturing program, and indicate that equivalent accuracy is obtained with many fewer CL points.

@cite_27: In free-form surface milling, cusps on a part surface need to be regulated. They should be small enough for precision purposes; on the other hand, high enough cusps should be maintained so as not to waste effort making unnecessary cuts. A widely accepted practice is to maintain a constant cusp height over the surface. This paper introduces a new approach to generating constant cusp height tool paths. First, a Riemannian manifold is defined by assigning a new metric to a part surface without embedding. This new metric is constructed from the curvature tensors of a part and a tool surface, referred to as a cusp-metric. Then, geodesic parallels are constructed on the new Riemannian manifold, and it is proven that a selection from such a family of geodesic parallels constitutes a "rational" approximation of accurate constant cusp height tool paths.

@cite_16: Numerically controlled milling is the primary method for generating complex die surfaces. These complex surfaces are generated by a milling cutter which removes material as it traces out pre-specified tool paths. The accuracy of the tool paths directly affects the accuracy of the manufactured surface, and the geometry and spacing of the tool paths impact the scallop height and time of manufacturing, respectively. This paper proposes a new method for generating NC tool paths that gives the part programmer direct control over the scallop height of the manufactured surface, along with options for generating a variety of tool paths based on practical metrics such as tool path length, tool path curvature, and number of tool retractions.

@cite_25: The continuation of a trend towards more user-friendly, interactive CAD/CAM systems prompted the creation of a prototype software package called CISPA (Computer Interactive Surfaces Pre-APT). This system uses a menu-driven front end with graphical feedback to guide a user through curve and free-form surface definition, resulting in a mathematical model which may be used to generate NC machine CLSF. This front end was built upon the curve and surface complex taken from CAM-I's CASPA (Computer-Aided Sculptured Pre-APT), part of the execution phase of APT 4. Improvements were made in the definition and calculation of step size, basing the input on geometric attributes such as chordal deviation and scallop height rather than on abstract algebraic quantities. Further additions include increased flexibility in milling surfaces, by roughing to depth.

@cite_33: Presented in the paper is a new approach to tool-path generation for sculptured surface machining. In the proposed C-space approach, the geometric data describing the design surface, stock surface, and tool shape are transformed into C-space elements, and all tool-path generation decisions are then made in the configuration space (C-space). The C-space approach provides a number of distinctive features suitable for sculptured surface machining, including: 1) gouge-free tool-paths; 2) uncut handling; 3) balanced cutting load; 4) smooth cutter movement; and 5) collision-free tool-paths.

@cite_0: Recently, a large amount of new research related to numerical control (NC) tool path generation has appeared in the literature. Unfortunately, finding information on a particular topic can be difficult: not only does path generation span several disciplines, but the material tends to vary both in content and focus. In this paper, the literature is partitioned into categories and papers related to path generation are classified according to the topics they cover. This should be useful to those looking for references on specific topics as well as those seeking an introduction to the literature as a whole.
Iso-level tool path planning for free-form surfaces
The terminology "tool path" refers to a specified trajectory along which machine tools move their ends (i.e., cutter and table) to form desired surfaces. The automatic generation of such trajectories is of central importance in modern CAD/CAM systems. There are two fundamental criteria for automatic tool path generation: precision and efficiency. Precision concerns the error of approximating a surface with a family of curves, and of approximating a curve with a family of segments or arcs. Efficiency concerns the machining time along the tool path. The aim of tool path planning is to maximize efficiency under the given precision criteria. In this paper, we propose a method that takes these two criteria into consideration together to generate globally optimal tool paths.

Our approach

In this paper, we aim to plan tool paths that are optimal with respect to iso-scallop and smoothness. We propose a framework that is able to obtain a globally optimal tool path by considering several objectives together. The tool path is represented as a family of level set curves of a scalar function defined over the surface, and our method computes an optimal scalar function by solving a single optimization problem, instead of generating the curves one by one. We refer to the level sets as iso-level curves, and to the proposed tool path planning method as the iso-level method, in order to be consistent with other terminology in the literature such as iso-parametric, iso-planar, iso-scallop, and iso-phote.

As the tool path is represented by the iso-level curves of some optimized scalar function, desired properties of the tool path are encoded into the properties of the scalar function. In this work, we give the details of how to control the scalar function so that the desired tool path, e.g., an iso-scallop tool path, can be generated. We first propose an iso-scallop condition for the target function, which shapes two neighboring iso-level curves to be iso-scallop. Then we propose a smoothness objective. Finally, we combine them to form the objective energy functional, so that its minimizer corresponds to an optimal tool path with respect to iso-scallop and smoothness. To the best of our knowledge, this paper is the first work in which these formulas are given, through which the interval between iso-level curves and their smoothness can be controlled globally. The minimizer of the iso-scallop objective can not only be exploited to plan tool paths of constant scallop, but also has an interesting machining meaning: the level increment between two neighboring iso-level curves equals the square root of the scallop height. In addition, the optimal scalar function can be reused to generate tool paths for different scallop height tolerances.

Compared with existing tool path generation methods, the proposed method solves the tool path planning problem in a global optimization manner. Besides, the proposed iso-level tool path planning method frees us from the tedious post-processing steps for self-intersection and disjunction, which will be demonstrated in more detail in Section 2.4. In addition, since the scalar function is defined over the whole surface, the model is completely covered by the iso-level curves, i.e., there are no regions left unmachined, as opposed to offset based methods (illustrated in Fig. 1). Our optimization framework can also be easily extended to include other objectives, such as tool wear, machine kinematics, and dynamics.
The remainder of this paper is organized as follows: Section 2 describes the optimization models for the iso-level method, including iso-scallop tool path generation (Section 2.1), smooth tool path generation (Section 2.2), and optimal tool path generation (Section 2.3), followed by a discussion of tool path topology (Section 2.4). In Section 3, we present the numerical solution of the optimization models. Section 4 summarizes the overall procedure for planning iso-level tool paths. Section 5 shows the experimental results. Finally, we conclude the paper in Section 6.

Optimal iso-level tool path

Consider a surface $S$ embedded in $\mathbb{R}^3$ and a scalar function $\varphi : S \to \mathbb{R}$ defined over it. The curves on $S$ which correspond to a set of values $\{l_i\}_{i=1}^n$ bounded by the range of the scalar function are selected as the tool path for the surface. There are two problems to address when generating tool paths following this strategy: the design of $\varphi$ and the mathematical method for determining $\{l_i\}$. In this section, we describe our solutions to them and demonstrate how to plan iso-level tool paths.

Iso-scallop tool path generation

In general, a tool path is discretized as a family of curves on the surface. Scallop refers to the material remaining when the cutter sweeps along two neighboring paths, which results in deviation between the machined surface and the design surface. Generally, we use the height from the points on the ridge of the scallop to the design surface to quantify this error, as illustrated in Fig. 2(a). On one hand, the closer the two neighboring curves are, the lower the scallop height becomes. On the other hand, closer curves lead to a longer path and more time to machine the whole surface. Therefore, the iso-scallop method generates tool paths whose scallop height is as high as a specified tolerance, in order to avoid redundant machining, and thus achieves higher efficiency. The scallop height is determined by the interval between two neighboring paths, and they are related by the following formula [10]:

$h = \frac{\kappa_s + \kappa_c}{8} w^2 + O(w^3),$  (1)

where $w$ denotes the interval $\|p_{i+1} - p_i\|$, $h$ is the scallop height, $\kappa_s$ is the normal curvature along the direction normal to path $C_i$, as shown in Fig. 2(b), and $\kappa_c$ is the curvature of the cutter.

Let $C_i$, $C_{i+1}$ be the iso-level curves $\{p \in S \mid \varphi(p) = l_i\}$ and $\{p \in S \mid \varphi(p) = l_{i+1}\}$, respectively. Then, appealing to Taylor's theorem, we have

$l_{i+1} - l_i = (\nabla\varphi)^T (p_{i+1} - p_i) + O(\|p_{i+1} - p_i\|^2).$  (2)

The gradient $\nabla\varphi$ is a vector in the tangent plane of the surface at point $p_i$ and normal to the path. Therefore, the expression can be rewritten as

$|l_{i+1} - l_i| = \|\nabla\varphi\| \cdot \|p_{i+1} - p_i\| + O(\|p_{i+1} - p_i\|^2).$  (3)

Then

$\|\nabla\varphi\| = \lim_{\|p_{i+1} - p_i\| \to 0} \frac{|l_{i+1} - l_i|}{\|p_{i+1} - p_i\|}.$  (4)

If the level increment $|l_{i+1} - l_i|$ of the scalar function is endowed with a machining meaning by letting it equal the square root of the scallop height, Eq. (4) becomes

$\|\nabla\varphi\| = \sqrt{\frac{\kappa_s + \kappa_c}{8}},$  (5)

and the scallop height between two neighboring iso-level curves of $\varphi$ will be constant and equal to the square of the increment. This can be easily verified by substituting Eq. (1) into Eq. (4). Thus an iso-scallop tool path can be generated by finding a scalar function satisfying Eq. (5). We obtain such a $\varphi$ by solving the nonlinear least squares problem

$\min_\varphi E_w(\varphi) = \int_S \left( \|\nabla\varphi\| - \sqrt{\frac{\kappa_s + \kappa_c}{8}} \right)^2 dS,$  (6)

where $\kappa_c$ is a user input and $\kappa_s$ is computed by

$\kappa_s = \left( \frac{\nabla\varphi}{\|\nabla\varphi\|} \right)^T \mathbf{T} \left( \frac{\nabla\varphi}{\|\nabla\varphi\|} \right),$  (7)

with $\mathbf{T}$ denoting the curvature tensor (see [26,27] for its definition and numerical computation).
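To make the discrete evaluation of this objective concrete, here is a minimal NumPy sketch, an illustration under assumed per-face arrays rather than the authors' implementation; the function and array names are hypothetical.

```python
import numpy as np

def iso_scallop_energy(grad_norm, kappa_s, kappa_c, face_area):
    """Discrete Eq. (6): sum over faces of A_j (||grad phi||_j - target_j)^2,
    where target is the right-hand side of the iso-scallop condition (Eq. 5)."""
    target = np.sqrt((kappa_s + kappa_c) / 8.0)
    return np.sum(face_area * (grad_norm - target) ** 2)

def iso_scallop_levels(phi_min, phi_max, h):
    """With Eq. (5) satisfied, a level increment of sqrt(h) between neighboring
    iso-level curves yields a constant scallop height h, so the level values
    for a tolerance h are simply multiples of sqrt(h)."""
    step = np.sqrt(h)
    return np.arange(phi_min, phi_max + 1e-12, step)
```

The second function anticipates the level-value selection described next; note that changing the tolerance h only changes the sampled levels, not the scalar function itself.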
Finally, the iso-level curves corresponding to the level values $\{i\sqrt{h}\}_{i=1}^n$ form an iso-scallop tool path with constant height $h$; that is, the level increments between neighboring iso-level curves all equal $\sqrt{h}$. Thus, a large $\sqrt{h}$ generates a tool path for rough machining, and a small increment one for finish machining. The novelty here is that they share the same scalar function. We refer to this as the multiresolution property.

Smooth tool path generation

As explained in the introduction, a smooth tool path is preferred, as we can obtain a nearly constant feed rate along it. For a curve in 3D space, its curvature measures how much it bends at a given point. This is quantified by the norm of its second derivative with respect to the arc-length parameter, which measures the rate at which the unit tangent turns along the curve [26]. It is the natural metric for measuring the smoothness of the curve. As the tool path is embedded in the design surface, its second derivative with respect to the arc-length parameter can be decomposed into two components, one tangent to the surface and the other normal to the surface (see Fig. 3) [26]. The norms of these components are called the geodesic curvature and the normal curvature, respectively, and they are related to the curve curvature by

$\kappa^2 = \kappa_g^2 + \kappa_n^2,$  (8)

where $\kappa$ is the curve curvature, and $\kappa_g$, $\kappa_n$ are the geodesic curvature and the normal curvature, respectively. For an iso-level curve $\varphi = \mathrm{const}$, its normal curvature can be computed by

$\kappa_n = \left( \mathbf{n} \times \frac{\nabla\varphi}{\|\nabla\varphi\|} \right)^T \mathbf{T} \left( \mathbf{n} \times \frac{\nabla\varphi}{\|\nabla\varphi\|} \right) = \left( A \frac{\nabla\varphi}{\|\nabla\varphi\|} \right)^T \mathbf{T} \left( A \frac{\nabla\varphi}{\|\nabla\varphi\|} \right) = \left( \frac{\nabla\varphi}{\|\nabla\varphi\|} \right)^T \tilde{\mathbf{T}} \left( \frac{\nabla\varphi}{\|\nabla\varphi\|} \right),$  (9)

where $\mathbf{T}$ is the curvature tensor, $\mathbf{n} = (n_x, n_y, n_z)^T$ is the unit normal vector of the surface, and

$A = \begin{pmatrix} 0 & -n_z & n_y \\ n_z & 0 & -n_x \\ -n_y & n_x & 0 \end{pmatrix}, \quad \tilde{\mathbf{T}} = A^T \mathbf{T} A.$  (10)

The geodesic curvature can be computed by

$\kappa_g = \operatorname{div}\left( \frac{\nabla\varphi}{\|\nabla\varphi\|} \right),$  (11)

where $\operatorname{div}(\cdot)$ is the divergence operator. For a planar curve, its normal curvature is zero and we have

$\kappa = \kappa_g = \operatorname{div}\left( \frac{\nabla\varphi}{\|\nabla\varphi\|} \right) \neq \operatorname{div}(\nabla\varphi) = \nabla^2\varphi.$  (12)

Therefore, the Laplacian cannot ensure the smoothness of a tool path for pocket milling. To guarantee the smoothness of all iso-level curves on the surface $S$, we define the smoothness energy as

$E_\kappa(\varphi) = \int_S \kappa^2\, dS = \int_S \kappa_g^2\, dS + \int_S \kappa_n^2\, dS.$  (13)

In Section 2.1, we employ the formula $|l_{i+1} - l_i| = \sqrt{h}$ to generate the iso-level tool path. For a smooth tool path, however, the following strategy is exploited: first, a certain number of points are sampled from the iso-level curve $C_i$; then the level increment $|l_{i+1} - l_i|$ is computed for each point with respect to a given scallop height $h$ using Eq. (1) and Eq. (3); finally, the smallest level increment is chosen as the level increment between $C_i$ and its next path $C_{i+1}$. This results in level increments of different values, as opposed to the iso-scallop method, while the scalar function remains unchanged, i.e., the multiresolution property still holds.

Figure 3: The curvature vector $\kappa\mathbf{n}$ of curve $C$ on $S$ has two orthogonal components: the normal curvature vector $\kappa_n \mathbf{n}_n$ and the geodesic curvature vector $\kappa_g \mathbf{n}_g$.

Optimal tool path generation

The width term Eq. (5) and the smoothness term Eq. (8) control the interval between neighboring paths and the smoothness of the paths, respectively. Thus an optimal tool path in terms of iso-scallop and smoothness can be obtained by computing $\varphi$ through a nonlinear least squares optimization which minimizes a linear combination of the two energies Eq. (6) and Eq. (13):

$E(\varphi) = E_w(\varphi) + \lambda E_\kappa(\varphi),$  (14)

where $\lambda$ is a positive weight controlling the trade-off between the two terms.
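To illustrate how these terms can be evaluated together, here is a small sketch in the same spirit as before: it computes the normal curvature of Eq. (9) pointwise and combines the two energies as in Eq. (14). The curvature tensor is assumed to be supplied as a 3×3 matrix in ambient coordinates and the geodesic curvatures as a precomputed array (their discretization is given in Section 3); all names are illustrative.

```python
import numpy as np

def normal_curvature(grad, n, T):
    """Eq. (9): kappa_n = (A u)^T T (A u), with u = grad/||grad|| and A the
    cross-product matrix of the unit surface normal n (Eq. 10)."""
    u = grad / np.linalg.norm(grad)
    A = np.array([[0.0,  -n[2],  n[1]],
                  [n[2],  0.0,  -n[0]],
                  [-n[1], n[0],  0.0]])
    t = A @ u                 # n x u: unit tangent of the iso-level curve
    return t @ T @ t

def combined_energy(E_w, kappa_g, kappa_n, areas, lam):
    """Eq. (14): E = E_w + lambda * E_kappa, with E_kappa of Eq. (13)
    discretized as an area-weighted sum of squared curvatures."""
    return E_w + lam * np.sum(areas * (kappa_g ** 2 + kappa_n ** 2))
```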
In order to ensure the tool path is regular (i.e., either contour parallel or direction parallel), we introduce the hard constraint $\|\nabla\varphi\| > 0$. The impact of this constraint is demonstrated in Section 2.4. The optimization problem then becomes

$\min_\varphi \int_S \left( \|\nabla\varphi\| - \sqrt{\frac{\kappa_s + \kappa_c}{8}} \right)^2 + \lambda \left( \kappa_g^2 + \kappa_n^2 \right) dS \quad \text{s.t.}\ \|\nabla\varphi\| > 0.$  (15)

However, because of the smoothness energy, the optimization result may violate Eq. (5), and thus the formula $|l_{i+1} - l_i| = \sqrt{h}$ would be invalid. Therefore, we employ the method described in Section 2.2 to select iso-level curves with respect to a given scallop height tolerance. Note that we can use the same scalar function for planning tool paths of different scallop height tolerances, which again shows the multiresolution property of our approach. Since different machine tools have different feed rate capabilities, for those with good capability we can choose a lower weight on the smoothness term. Thus the freedom in choosing the weight $\lambda$ makes it possible to apply the proposed method to various machine tools.

Path topology

In this section, we show that each iso-level curve generated by the proposed method is either a closed loop or a curve segment without self-intersection and disjunction. In addition, this kind of path topology can be exploited to quickly extract the iso-level curves.

Lemma 1. For a given scalar function $\varphi$ defined over a surface $S$, if the norm of its gradient does not vanish anywhere, the endpoints of the iso-level curves (if they exist) lie on the boundary.

Proof. For an interior point $p$, $\|\nabla\varphi\| \neq 0$ implies that along the two directions $\Delta p_1$, $\Delta p_2$ orthogonal to $\nabla\varphi$, we have, in a small range,

$\varphi(p + \Delta p_i) - \varphi(p) = (\nabla\varphi)^T \Delta p_i = 0 \quad \text{for } i = 1, 2.$  (16)

Namely, each interior point has exactly two directions sharing the same level value with it. Therefore, the endpoints can only be on the boundary.

Lemma 2. For the scalar function $\varphi$, its iso-level curves never intersect each other and do not have self-intersections.

Proof. Since each point corresponds to a unique value, iso-level curves of different values do not intersect each other. Generally, there are two types of self-intersections, as shown in Fig. 4. The difference between them is that in case (b) the self-intersection is tangential. For case (a), the two curve segments have different tangent directions at the self-intersection point. It is well known that the gradient direction at a point is orthogonal to the tangent direction of the iso-level curve. Thus the two different tangent directions at the self-intersection point result in the contradiction that there are two different gradient directions at that point. For case (b), we view the self-intersection as two iso-level curves that have the same level value, are tangential at the intersection point, and separate from each other in its neighborhood. But $\|\nabla\varphi\| \neq 0$ along an iso-level curve implies that the iso-level curves near it have different level values, which again leads to a contradiction.

Note that these properties are employed in the following sections to extract each desired iso-level curve, so that a full traversal is not needed. As Lemma 1 and Lemma 2 show, every interior point of the surface has exactly two directions that share its level value.
Accordingly, we can use a "seed growth"-like algorithm to find the iso-level curves. Namely, to find the iso-level curve of a given level value, say $l$, we start from an initial edge on which there exists a point whose level value is $l$, and then search the edge's adjacent triangles (if the point is a vertex of the mesh, i.e., an endpoint of the edge, the adjacent triangles are all the 1-ring triangles) to get exactly two edges containing the level value $l$. Repeating this procedure, the initial point finally grows into the iso-level curve of interest. In addition, it follows immediately from Lemma 1 and Lemma 2 that:

Proposition 1. Each iso-level path generated by the proposed method is either contour parallel or direction parallel and free from self-intersection and disjunction.

Numerical solution

Numerically, the iso-level method described in the above section can be applied to any domain with a discrete gradient operator $\nabla$, divergence operator $\operatorname{div}(\cdot)$, and curvature tensor $\mathbf{T}$. To solve the optimization models for free-form surfaces, we appeal to the Finite Element Method (FEM); in this work, we focus on triangular meshes. However, the method can easily be extended to other domains, such as point clouds.

Assume that $M \subset \mathbb{R}^3$ is a compact triangulated surface with no degenerate triangles. Let $N_1(i)$ be the 1-neighborhood of vertex $v_i$, which is the index set of vertices connecting to $v_i$. Let $D_1(i)$ be the 1-disk of the vertex $v_i$, which is the index set of triangles containing $v_i$. The dual cell of a vertex $v_i$ is the part of its 1-disk that is closer to $v_i$ than to its 1-neighborhood. Fig. 5(a) shows the dual cell $C_i$ for an interior vertex $v_i$, while Fig. 5(b) shows the dual cell for a boundary vertex. A function $\varphi$ defined over the triangulated surface $M$ is considered to be a piecewise linear function, such that $\varphi$ takes the value $\varphi_i$ at vertex $v_i$ and is linear within each triangle. Based on these, the energies shown in Eq. (6), Eq. (13), and Eq. (15) are computed by integrating the width term and the smoothness term over the whole mesh domain, where the mesh domain can be decomposed into a set of triangles or a set of dual cells.

To compute the width term and the smoothness term on a mesh, we need to discretize the gradient, the divergence, and the curvature tensor, which we describe briefly, since they are standard in FEM. The gradient of $\varphi$ over each triangle is constant, as the function $\varphi$ is linear within the triangle. The gradient in a given triangle can be expressed as

$\nabla\varphi(f_i) = \frac{1}{2A_i} \sum_{j \in \Omega_i} \varphi_j (N_i \times e_j),$  (17)

where $A_i$ is the area of the face $f_i$, $N_i$ is its unit normal, $\Omega_i$ is the set of edge indices for face $f_i$, $e_j$ is the $j$-th edge vector (oriented counter-clockwise), and $\varphi_j$ is the opposing value of $\varphi$, as shown in Fig. 6. According to Stokes' theorem, the integral of the divergence over the dual cell is equal to the outward flux along the boundary of the dual cell. Thus the divergence operator associated with vertex $v_i$ is discretized by dividing the outward flux by the dual cell area:

$\operatorname{div}(X) = \frac{1}{2C_i} \sum_{j \in D_1(i)} \cot\theta_j^1 (e_j^1 \cdot X_j) + \cot\theta_j^2 (e_j^2 \cdot X_j),$  (18)

where the sum is taken over the vertex's incident triangles $f_j$ with a vector $X_j$, $e_j^1$ and $e_j^2$ are the two edge vectors of triangle $f_j$ containing vertex $v_i$, $\theta_j^1$ and $\theta_j^2$ are the opposing angles, and $C_i$ is the dual cell area of vertex $v_i$.
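For concreteness, the following is a minimal NumPy sketch of these two discrete operators on an indexed triangle mesh. The data layout, a float vertex array V of shape (n, 3), an integer face array F of shape (m, 3), and a per-vertex dual cell area array, is an assumption of this illustration, not the authors' code.

```python
import numpy as np

def cotan(a, b):
    """Cotangent of the angle between vectors a and b."""
    return (a @ b) / np.linalg.norm(np.cross(a, b))

def face_gradients(V, F, phi):
    """Per-face gradient of a piecewise-linear function phi (Eq. 17)."""
    grads = np.zeros((len(F), 3))
    for fi, (i, j, k) in enumerate(F):
        N = np.cross(V[j] - V[i], V[k] - V[i])
        A2 = np.linalg.norm(N)                  # twice the face area
        N = N / A2
        # e_m is the edge opposite vertex m, oriented counter-clockwise
        for m, e in ((i, V[k] - V[j]), (j, V[i] - V[k]), (k, V[j] - V[i])):
            grads[fi] += phi[m] * np.cross(N, e)
        grads[fi] /= A2
    return grads

def vertex_divergence(V, F, X, cell_area):
    """Cotangent-weighted divergence (Eq. 18) of a per-face vector field X,
    accumulated at the vertices and divided by twice the dual cell area."""
    div = np.zeros(len(V))
    for fi, tri in enumerate(F):
        for t in range(3):
            vi, vj, vk = tri[t], tri[(t + 1) % 3], tri[(t + 2) % 3]
            e1, e2 = V[vj] - V[vi], V[vk] - V[vi]        # edges of f_j at v_i
            cot1 = cotan(V[vj] - V[vk], V[vi] - V[vk])   # angle opposing e1
            cot2 = cotan(V[vk] - V[vj], V[vi] - V[vj])   # angle opposing e2
            div[vi] += cot1 * (e1 @ X[fi]) + cot2 * (e2 @ X[fi])
    return div / (2.0 * cell_area)
```

A practical implementation would vectorize these loops; they are kept explicit here to mirror Eqs. (17) and (18) term by term.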
Accordingly, the geodesic curvature of the curve $\varphi = \mathrm{const}$ associated with the vertex $v_i$ can be computed by

$\kappa_g^i = \frac{1}{2C_i} \sum_{j \in D_1(i)} \frac{\cot\theta_j^1 \left( e_j^1 \cdot \nabla\varphi(j) \right) + \cot\theta_j^2 \left( e_j^2 \cdot \nabla\varphi(j) \right)}{\|\nabla\varphi(j)\|}.$  (19)

The curvature tensor (second fundamental tensor) $\mathbf{T}$ is defined in terms of the directional derivatives of the surface normal:

$\mathbf{T} = \begin{pmatrix} D_u \mathbf{n} & D_v \mathbf{n} \end{pmatrix} = \begin{pmatrix} \frac{\partial \mathbf{n}}{\partial u} \cdot u & \frac{\partial \mathbf{n}}{\partial v} \cdot u \\ \frac{\partial \mathbf{n}}{\partial u} \cdot v & \frac{\partial \mathbf{n}}{\partial v} \cdot v \end{pmatrix},$  (20)

where $(u, v)$ are the directions of an orthogonal coordinate system in the tangent frame (the sign convention used here yields positive curvatures for convex surfaces with outward-facing normals). Multiplying this tensor by any vector in the tangent plane gives the derivative of the normal in that direction. Although this definition holds only for smooth surfaces, we can approximate it in the discrete case using finite differences. In this work, the curvature tensor for each face is computed by the method in [27]. The whole optimization model can then be formulated as

$\min_\varphi \sum_{j=1}^{|F|} A_j \left( \|\nabla\varphi(j)\| - \sqrt{\frac{\kappa_s^j + \kappa_c}{8}} \right)^2 + \lambda \left( \sum_{j=1}^{|F|} A_j (\kappa_n^j)^2 + \sum_{i=1}^{|V|} C_i (\kappa_g^i)^2 \right) \quad \text{s.t.}\ \|\nabla\varphi(j)\| > 0,$  (21)

where $|F|$ is the number of faces and $|V|$ is the number of vertices. This is a well-established nonlinear least squares optimization problem with inequality constraints, which can be solved by the interior point method [28][29][30]. The interior point solver requires the gradient of the target function and of the constraint functions. The gradient calculation boils down to computing the derivatives of $\nabla\varphi$ and $\nabla\varphi / \|\nabla\varphi\|$, which we do as follows. As demonstrated previously, the gradient of a piecewise linear scalar function within a given triangle $f_k$ is a linear combination of the constant vectors $N_k \times e_i$, and thus the partial derivative of $\nabla\varphi(k)$ with respect to $\varphi_j$ is

$\frac{\partial}{\partial\varphi_j} \nabla\varphi(k) = \frac{1}{2A_k} \frac{\partial}{\partial\varphi_j} \sum_{i \in \Omega_k} \varphi_i (N_k \times e_i) = \frac{1}{2A_k} \sum_{i \in \Omega_k} \delta_{ij} (N_k \times e_i),$  (22)

where $\delta_{ij}$ is the Kronecker delta function ($\delta_{ij} = 1$ if $i = j$ and $0$ otherwise). As for the gradient of $\nabla\varphi / \|\nabla\varphi\|$, it is

$\frac{\partial}{\partial\varphi_j} \frac{\nabla\varphi(k)}{\|\nabla\varphi(k)\|} = \frac{ \frac{\partial}{\partial\varphi_j}\nabla\varphi(k)\, \|\nabla\varphi(k)\| - \nabla\varphi(k)\, \frac{(\nabla\varphi(k))^T \frac{\partial}{\partial\varphi_j}\nabla\varphi(k)}{\|\nabla\varphi(k)\|} }{ \|\nabla\varphi(k)\|^2 }.$  (23)

The final solution of the optimization problem Eq. (21) is affected by the initial value. In this work, we initialize the tool path with the paths from [31].

Tool path planning algorithm

Planning a tool path means representing a surface with a series of curves against some error criteria (i.e., chord deviation and scallop height). We summarize the overall process for generating such curves on a surface by the iso-level method as follows:

1. Select an initial curve $C_0$ on the surface $S$ and fix its level value to zero, i.e., $l_0 = 0$. $C_0$ is a part of the boundary for a direction parallel tool path and the whole boundary for a contour parallel tool path.

2. Find the solution of the models Eq. (6), Eq. (13), and Eq. (15), including meshing and numerical optimization.

3. Select level values $\{l_i\}_{i=1}^n$, where $l_1 = \varphi_{\min}$ and $l_n = \varphi_{\max}$, with the method described in Section 2.1 for an iso-scallop tool path and the method in Section 2.2 for a smooth or optimal tool path. For a direction parallel tool path the last path corresponds to $l_n = \varphi_{\max}$, while for a contour parallel tool path the last corresponds to $l_{n-1}$. Then quickly extract the iso-level curves on the triangular mesh based on the method described in Section 2.4.

4. Convert the iso-level curves on the mesh, which are actually polygons, to the surface $S$. The vertices of an iso-level curve on the mesh are either vertices of the mesh or points on edges of the mesh. For the former case, the vertices are also on $S$.
For the latter case, a vertex is first proportionally mapped to the parameter domain with respect to the two ends of the edge it is on and then find its corresponding point on the surface. 5. Greedily merge short segments of the polygons to approach the chord deviation tolerance as closely as possible. Then finally, these reduced iso-level curves (polygons) are desired tool path. Experimental results In this section, the proposed tool path planning method is implemented on real data. A free-form surface and a human face are chosen to illustrate the effectiveness of it, as in Fig. 7. The free-form surface is exploited to show the generation of direction parallel tool path. The human face was generated by a coordinate measuring machine. We utilize it to show the generation of contour parallel tool path. To plan iso-level tool path, the first thing to do is to construct a proper scalar function over the surface. Since the Finite Element Method is employed to find the optimizer of the optimization models, meshing is needed. We choose the element to be triangular. Fig. 8(a) shows the meshing results of the free-form surface and Fig. 9(a) shows that of the human face. The optimal scalar functions are illustrated in Fig. 8(b) and Fig. 9(b) by varying color. Fig. 8(b) shows the scalar function of the free-form surface for generating direction parallel tool path and Fig. 9(b) shows that of the human face for generating contour parallel tool path. And the varying from blue to red represents the rising of level value. As the optimal scalar functions have been constructed for both surfaces, tool path that is optimal with respect to iso-scallop and smoothness can be generated. A ball-end cutter with radius 4mm is chosen to show the path generation so that tool orientation doesn't matter. The limited scallop height is 1mm and chord deviation is 0.01mm. In order to clearly show tool paths, the error criterion (i.e., scallop height) is set to be much greater than those in real cases. Fig. 8(c) shows the optimal direction parallel paths on the freeform surface and Fig. 9(c) shows corresponding result of contour parallel tool path on the human face. Their weights are both λ = 1. We next show some comparisons and analyses of the generated tool paths. According to the demonstration of [32], the contour parallel tool path will be emphasized. Fig. 10 shows the tool paths form smooth to iso-scallop generated by the proposed method. Fig. 10(a) shows the smooth contour parallel tool path generated by the proposed method with λ = 10, Fig. 10(b) shows the optimal tool path with λ = 1, and Fig. 10(c) shows the iso-scallop tool path with λ = 0. As described in above sections, the iso-scallop condition Equ. (5) characterizes the overlapping between neighbor paths. Therefore, to analyze the overlapping of the generated tool paths, we conduct statistics on the relative deviation w.r.t. the iso-scallop condition along the paths. It is computed by ∆ = ∇ϕ − κs+κc 8 κs+κc 8 = 1 − ∇ϕ κs+κc 8 .(24) And the statistics results are depicted in Fig. 10(d), 10(e), 10(f). As the figures show, for the iso-scallop tool path, the relative deviation are all less than 5%, and centered around 1%. For the optimal tool path, we could find the ratio move to the greater side, as imagined, and there are a few points which are much greater than the rest. Most of these points are located in the corner parts of the tool path. 
And for the smooth tool path, its overlapping is much more obvious and there are about 2% of points whose ratio are greater than 10%. But the losing of iso-scallop condition brings smoothness to the tool paths, which is shown in Fig. 10(g), 10(h), 10(i). In conclusion, the optimal tool path tries to find a balance between the overlapping and smoothness. We also compare the optimal tool path with the Laplacian based one in Fig. 11. Although the Laplacian based tool path is obviously smooth than the optimal one, from the overlapping analysis figures, i.e., Fig. 11(c), 11(d), we can find that it is much more severely overlapped for neighbor paths. (b) optimal tool path; (c) iso-scallop tool path; (d) overlapping analysis for smooth tool path; (e) overlapping analysis for optimal tool path; (f) overlapping analysis for iso-scallop tool path; (g) curvature analysis for smooth tool path; (h) curvature analysis for optimal tool path; (i) curvature analysis for iso-scallop tool path. Conclusion In this paper, a new framework of tool path planning is proposed. The novelty of our method is that it allows several objectives to be considered in a unified framework and thus making global optimization of tool paths possible. Moreover, the scalar function only has to be constructed once, then it can be utilized to generate tool paths for machining from rough to fine. The proposed framework is applied to find an optimal tool path that takes smoothness and iso-scallop requirements into consideration simultane- ously. Equ. (5) for controlling interval between neighbor iso-level curves and Equ. (8) for measuring curvature of an iso-level curve are derived to lay a foundation for the formulation of optimization models. It is likely that this theory has further potential in planning other optimal tool path, and the derived formulas can also be directly applied to level set based tool path planning methods, e.g., [22].
4,807
1811.07580
1986573745
The aim of tool path planning is to maximize efficiency against given precision criteria. In practice, the scallop height should be kept constant to avoid unnecessary cutting, while the tool path should be smooth enough to maintain a high feed rate. However, iso-scallop and smoothness often conflict with each other. Existing methods smooth iso-scallop paths one by one, which makes the final tool path far from globally optimal. This paper proposes a new framework for tool path optimization. It views a family of iso-level curves of a scalar function defined over the surface as the tool path, so that the desired tool path can be generated by finding the function that minimizes a certain energy functional, and different objectives can be considered simultaneously. We use the framework to plan globally optimal tool paths with respect to iso-scallop and smoothness. The energy functionals for planning iso-scallop, smooth, and optimal tool paths are derived, respectively, and the path topology is studied as well. Experimental results are given to show the effectiveness of the proposed methods.
There also exist some efforts to generate smooth tool paths without considering the overlapping between neighbor machining strips (i.e., the iso-scallop condition). Generally, such methods are based on the Laplacian. For example, Bieterman and Sandstrom @cite_10 proposed a Laplacian-based contour parallel tool path generation method that selects the level sets of a harmonic function defined over a pocket as the tool path. However, how to choose the level sets remains an open problem; no formula for computing the path interval has been available so far. Similarly, Chuang and Yang @cite_34 combined the Laplacian method and the iso-parametric method to generate tool paths for pockets with complex topology, i.e., complex boundaries and islands. The smoothness of the tool path, however, cannot be guaranteed through the Laplacian energy, as a small Laplacian value does not necessarily imply small curvature of the level-set curves. Moreover, solving a Laplace equation over a surface generates a unique, uncontrollable scalar function (scaling has no impact on the shape of the tool paths). Another drawback of the Laplacian-based approach is the severe overlapping between machining strips of neighbor paths, especially for paths near the boundary, which results in much redundant machining.
{ "abstract": [ "In this paper, a method for generating boundary-conformed pocketing toolpaths is developed. Based on the 2D Laplace parameterization of pocket contours and the redistribution of the original Laplace isoparametrics, continuous toolpaths are generated. These generated toolpaths have neither thin walls nor leftover tool marks. Detailed algorithms are formulated in steps. The method can be applied to general pockets either with or without islands. Some examples are provided to demonstrate the applicability of this method. In most cases, the method can successfully generate satisfactory toolpaths for arbitrary shaped pockets. However, according to the shape of the pockets and the distribution of the islands, when using this method, over machining may occur in some narrow or bottlenecked areas. Further investigation on how to alleviate this problem is needed. We believe that this method provides an alternative choice for pocket machining.", "A novel curvilinear tool-path generation method is described for planar milling of pockets. The method uses the solution of an elliptic partial differential equation boundary value problem defined on a pocket region. This mathematical function helps morph a smooth low-curvature spiral path in a pocket interior to one that conforms to the pocket boundary. This morphing leads to substantial reductions of tool wear in cutting hard met als and of machining time in cutting all met als, as experiments described here show. A variable feed-rate optimization procedure is also described. This procedure incorporates path, tool-engagement, and machine constraints and can be applied to maximize machine performance for any tool path." ], "cite_N": [ "@cite_34", "@cite_10" ], "mid": [ "1998951994", "2010512922" ] }
Iso-level tool path planning for free-form surfaces
The terminology "tool path" refers to a specified trajectory along which machine tools move their ends (i.e., cutter and table) to form desired surfaces. The automatic generation of such trajectories are of central importance in modern CAD/CAM systems. There are two fundamental criteria, i.e., precision and efficiency, for automatic tool path generation. Precision means the error of approximating a surface with a family of curves, and approximating a curve with a family of segments or arcs. Efficiency concerns the time of machining along the tool path. The aim of tool path planning is to maximize the efficiency under the given precision criteria. In this paper, we propose a method, which can take these two criteria into consideration together, to generate globally optimal tool paths. Our approach In this paper, we aim to plan optimal tool path regarding iso-scallop and smoothness. We propose a framework that is able to obtain a globally optimal tool path by considering several objectives together. The tool path is represented as a family of level set curves from a scalar function defined over the surface, and our method computes an optimal scalar function by solving a single optimization problem, instead of generating the curves one-by-one. We refer to the level sets as iso-level curves, and the proposed tool path planning method as iso-level method, in order to be consistent with other terminologies in the literature such as iso-parametric, iso-planar, iso-scallop and iso-phote. As the tool path is represented by the iso-level curves of some optimized scalar function, desired properties of the tool path are encoded into the properties of the scalar function. In this work, we give the details of how to control the scalar function so that the desired tool path, e.g., iso-scallop tool path, can be generated. We first propose an iso-scallop condition for the target function, which shapes two neighboring iso-level curves to be iso-scallop. Then we propose a smoothness objective. Finally we combine them together to form the objective energy functional so that its minimizer corresponds to an optimal tool path with respect to iso-scallop and smoothness. To the best of our knowledge, this paper is the first work where these formulas are given, through which interval between iso-level curves and their smoothness can be controlled globally. The minimizer of the iso-scallop objective can not only be exploited to plan tool path of constant scallop, but also has an interesting machining meaning, namely, the level increment of two neighbor iso-level curves equals to the square root of scallop height. In addition, the optimal scalar function can be reused to generated tool path of different scallop height tolerances. Compared with existing tool path generation methods, the proposed method solves the tool path planing problem in a global optimization manner. Besides, the proposed iso-level tool path planning method can free us from the tedious post-processing step for self-intersection and disjunction, which will be demonstrated in more details in Section 2.4. In addition, since the scalar function is defined all over the surface, the model is completely covered by the iso-level curves, i.e., there are no regions that are not machined, as opposed to the offset based methods (illustrated in Fig. 1). Our optimization framework can also be easily extended to include other objectives, such as tool wear, machine kinematics and dynamics. 
The remainder of this paper is organized as follows: Section 2 describes the optimization models for the iso-level method, including iso-scallop tool path generation (Section 2.1), smooth tool path generation (Section 2.2), and optimal tool path generation (Section 2.3), followed by a discussion on tool path topology (Section 2.4). In Section 3, we present the numerical solution to the optimization models. Section 4 summarizes the overall procedure for planning iso-level tool paths. Section 5 shows the experimental results. Finally, we conclude the paper in Section 6.

Optimal iso-level tool path

Consider a surface $S$ embedded in $\mathbb{R}^3$ and a scalar function $\varphi: S \to \mathbb{R}$ defined over it. The curves on $S$ which correspond to a set of values $\{l_i\}_{i=1}^n$ bounded by the range of the scalar function are selected as the tool path for the surface. There are two problems to address when generating tool paths following this strategy: the design of $\varphi$ and the mathematical method for determining $\{l_i\}$. In this section, we describe our solution to them and demonstrate how to plan iso-level tool paths.

Iso-scallop tool path generation

In general, a tool path is discretized as a family of curves on the surface. Scallop refers to the material that remains when the cutter sweeps along two neighbor paths, which results in deviation between the machined surface and the design surface. Generally, we use the height from the points at the ridge of the scallop to the design surface to quantify this error, as illustrated in Fig. 2(a). On one hand, the closer the two neighboring curves are, the lower the scallop height becomes. On the other hand, closer curves lead to a longer path and more time to machine the whole surface. Therefore, the iso-scallop method generates tool paths with the scallop height as high as a specified tolerance, in order to avoid redundant machining and achieve higher efficiency. The scallop height is determined by the interval between two neighbor paths, and they are related by the following formula [10]:

$$h = \frac{\kappa_s + \kappa_c}{8} w^2 + O(w^3), \qquad (1)$$

where $w$ denotes the interval $\|p_i p_{i+1}\|$, $h$ is the scallop height, $\kappa_s$ is the normal curvature along the direction normal to path $C_i$, as shown in Fig. 2(b), and $\kappa_c$ is the curvature of the cutter. Let $C_i$, $C_{i+1}$ be the iso-level curves $\{p \in S \mid \varphi(p) = l_i\}$ and $\{p \in S \mid \varphi(p) = l_{i+1}\}$, respectively. Then, by Taylor's theorem, we have

$$l_{i+1} - l_i = (\nabla\varphi)^T (p_{i+1} - p_i) + O(\|p_{i+1} - p_i\|^2). \qquad (2)$$

The gradient $\nabla\varphi$ is a vector in the tangent plane of the surface at point $p_i$, normal to the path. Therefore, the expression can be rewritten as

$$|l_{i+1} - l_i| = \|\nabla\varphi\| \cdot \|p_{i+1} - p_i\| + O(\|p_{i+1} - p_i\|^2). \qquad (3)$$

Then

$$\|\nabla\varphi\| = \lim_{\|p_{i+1} - p_i\| \to 0} \frac{|l_{i+1} - l_i|}{\|p_{i+1} - p_i\|}. \qquad (4)$$

If the level increment $|l_{i+1} - l_i|$ of the scalar function is endowed with a machining meaning by letting it equal the square root of the scallop height, Eq. (4) becomes

$$\|\nabla\varphi\| = \sqrt{\frac{\kappa_s + \kappa_c}{8}}, \qquad (5)$$

and the scallop height between two neighbor iso-level curves of $\varphi$ will be constant and equal to the square of the increment. This can be easily verified by substituting Eq. (1) into Eq. (4). Thus an iso-scallop tool path can be generated by finding a scalar function satisfying Eq. (5). We obtain such a $\varphi$ by solving the nonlinear least squares problem

$$\min_\varphi E_w(\varphi) = \int_S \left( \|\nabla\varphi\| - \sqrt{\frac{\kappa_s + \kappa_c}{8}} \right)^2 dS, \qquad (6)$$

where $\kappa_c$ is a user input and $\kappa_s$ is computed by

$$\kappa_s = \left(\frac{\nabla\varphi}{\|\nabla\varphi\|}\right)^T T \left(\frac{\nabla\varphi}{\|\nabla\varphi\|}\right), \qquad (7)$$

with $T$ denoting the curvature tensor (see [26, 27] for its definition and numerical computation).
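To make the relation between scallop height, path interval, and level increment concrete, the following Python sketch inverts Eq. (1) (dropping the $O(w^3)$ term) and checks that, under the condition of Eq. (5), the level increment accumulated over one path interval equals $\sqrt{h}$. The curvature values below are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def path_interval(kappa_s, kappa_c, h):
    # Invert Eq. (1) without the O(w^3) term: h = (kappa_s + kappa_c) / 8 * w^2.
    return np.sqrt(8.0 * h / (kappa_s + kappa_c))

def level_increment(kappa_s, kappa_c, h):
    # Under Eq. (5), |grad phi| = sqrt((kappa_s + kappa_c) / 8), so the level
    # difference accumulated over one path interval w is |grad phi| * w.
    grad_norm = np.sqrt((kappa_s + kappa_c) / 8.0)
    return grad_norm * path_interval(kappa_s, kappa_c, h)

kappa_c = 1.0 / 4.0   # curvature of a ball-end cutter of radius 4 (illustrative)
kappa_s = 0.05        # surface normal curvature at the point (illustrative)
h = 0.01              # scallop height tolerance
print(path_interval(kappa_s, kappa_c, h))            # interval between paths
print(level_increment(kappa_s, kappa_c, h), h**0.5)  # both equal sqrt(h)
```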
Finally, the iso-level curves corresponding to the level values $\{i\sqrt{h}\}_{i=1}^n$ form an iso-scallop tool path with constant height $h$; that is, the level increments between neighboring iso-level curves all equal $\sqrt{h}$. Thus, a large $\sqrt{h}$ generates a tool path for rough machining, and a small increment one for finish machining. The novelty here is that they share the same scalar function. We refer to this as the multiresolution property.

Smooth tool path generation

As explained in the introduction, a smooth tool path is preferred because a nearly constant feed rate can be maintained along it. For a curve in 3D space, its curvature measures how much it bends at a given point. This is quantified by the norm of its second derivative with respect to the arc-length parameter, which measures the rate at which the unit tangent turns along the curve [26]. It is precisely the metric for measuring the smoothness of the curve. As the tool path is embedded in the design surface, its second derivative with respect to the arc-length parameter can be decomposed into two components, one tangent to the surface and the other normal to the surface (see Fig. 3) [26]. The norms of these components are called the geodesic curvature and the normal curvature, respectively, and they are related to the curve curvature by

$$\kappa^2 = \kappa_g^2 + \kappa_n^2, \qquad (8)$$

where $\kappa$ is the curve curvature, and $\kappa_g$, $\kappa_n$ are the geodesic curvature and the normal curvature, respectively. For an iso-level curve $\varphi = \text{const}$, its normal curvature can be computed by

$$\kappa_n = \left(n \times \frac{\nabla\varphi}{\|\nabla\varphi\|}\right)^T T \left(n \times \frac{\nabla\varphi}{\|\nabla\varphi\|}\right) = \left(A \frac{\nabla\varphi}{\|\nabla\varphi\|}\right)^T T \left(A \frac{\nabla\varphi}{\|\nabla\varphi\|}\right) = \left(\frac{\nabla\varphi}{\|\nabla\varphi\|}\right)^T \tilde{T} \left(\frac{\nabla\varphi}{\|\nabla\varphi\|}\right), \qquad (9)$$

where $T$ is the curvature tensor, $n = (n_x, n_y, n_z)^T$ is the unit normal vector of the surface, and

$$A = \begin{pmatrix} 0 & -n_z & n_y \\ n_z & 0 & -n_x \\ -n_y & n_x & 0 \end{pmatrix}, \qquad \tilde{T} = A^T T A. \qquad (10)$$

The geodesic curvature can be computed by

$$\kappa_g = \operatorname{div}\!\left(\frac{\nabla\varphi}{\|\nabla\varphi\|}\right), \qquad (11)$$

where $\operatorname{div}(\cdot)$ is the divergence operator. For a planar curve, the normal curvature is zero and we have

$$\kappa = \kappa_g = \operatorname{div}\!\left(\frac{\nabla\varphi}{\|\nabla\varphi\|}\right) \neq \operatorname{div}(\nabla\varphi) = \nabla^2\varphi. \qquad (12)$$

Therefore, the Laplacian cannot ensure the smoothness of a tool path even for pocket milling. To guarantee the smoothness of all iso-level curves on the surface $S$, we define the smoothness energy as

$$E_\kappa(\varphi) = \int_S \kappa^2 \, dS = \int_S \kappa_g^2 \, dS + \int_S \kappa_n^2 \, dS. \qquad (13)$$

In Section 2.1, we employed the formula $|l_{i+1} - l_i| = \sqrt{h}$ to generate the iso-level tool path. For a smooth tool path, the following strategy is exploited instead: First, a certain number of points are sampled from the iso-level curve $C_i$. Then the level increment $|l_{i+1} - l_i|$ is computed for each point with respect to a given scallop height $h$ using Eq. (1) and Eq. (3). Finally, the smallest level increment is chosen as the level increment between $C_i$ and its next path $C_{i+1}$. This results in level increments of different values, as opposed to the iso-scallop method, while the scalar function remains unchanged, i.e., the multiresolution property still holds.

Figure 3: The curvature vector $\kappa\mathbf{n}$ of curve $C$ on $S$ has two orthogonal components: the normal curvature vector $\kappa_n \mathbf{n}_n$ and the geodesic curvature vector $\kappa_g \mathbf{n}_g$.

Optimal tool path generation

The width term Eq. (5) and the smoothness term Eq. (8) can control the interval between neighboring paths and the smoothness of the paths, respectively. Thus an optimal tool path in terms of iso-scallop and smoothness can be obtained by computing $\varphi$ through a nonlinear least squares optimization which minimizes a linear combination of the two energies Eq. (6) and Eq. (13):

$$E(\varphi) = E_w(\varphi) + \lambda E_\kappa(\varphi), \qquad (14)$$

where $\lambda$ is a positive weight controlling the trade-off between the two terms.
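As a small illustration of Eq. (9)-(10), the following Python sketch computes the normal curvature of an iso-level curve from a gradient vector, a unit surface normal, and a 3 × 3 curvature tensor expressed in ambient coordinates; all inputs here are assumed placeholders supplied by the caller.

```python
import numpy as np

def normal_curvature(grad_phi, n, T):
    # Eq. (9)-(10): kappa_n = (A g)^T T (A g), where g = grad_phi / |grad_phi|
    # and A is the cross-product matrix of the unit surface normal n,
    # so that A g = n x g is the tangent direction of the iso-level curve.
    g = grad_phi / np.linalg.norm(grad_phi)
    A = np.array([[0.0,  -n[2],  n[1]],
                  [n[2],  0.0,  -n[0]],
                  [-n[1], n[0],  0.0]])
    t = A @ g
    return float(t @ T @ t)
```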
In order to ensure that the tool path is regular (i.e., either contour parallel or direction parallel), we introduce the hard constraint $\|\nabla\varphi\| > 0$. The impact of this constraint is demonstrated in Section 2.4. The optimization problem then becomes

$$\min_\varphi \int_S \left( \|\nabla\varphi\| - \sqrt{\frac{\kappa_s + \kappa_c}{8}} \right)^2 + \lambda \left( \kappa_g^2 + \kappa_n^2 \right) dS \quad \text{s.t. } \|\nabla\varphi\| > 0. \qquad (15)$$

However, because of the smoothness energy, the optimization result may violate Eq. (5), and thus the formula $|l_{i+1} - l_i| = \sqrt{h}$ would be invalid. Therefore, we employ the method described in Section 2.2 to select iso-level curves with respect to a given scallop height tolerance. Note that we can use the same scalar function for planning tool paths of different scallop height tolerances, which again shows the multiresolution property of our approach. Since different machine tools have different feed rate capabilities, for those of good capability we can choose a lower weight on the smoothness term. Thus the freedom of choosing the weight $\lambda$ makes it possible to apply the proposed method to various machine tools.

Path topology

In this section, we show that each iso-level curve generated by the proposed method is either a closed loop or a curve segment without self-intersection and disjunction. In addition, this path topology can be exploited to quickly extract iso-level curves.

Lemma 1. For a given scalar function $\varphi$ defined over a surface $S$, if the norm of its gradient does not vanish anywhere, the endpoints of the iso-level curves (if they exist) are on the boundary.

Proof. For an interior point $p$, $\|\nabla\varphi\| \neq 0$ implies that along the two directions $p_1$, $p_2$ orthogonal to $\nabla\varphi$ we have, in a small range, the following expression:

$$\varphi(p + p_i) - \varphi(p) = (\nabla\varphi)^T p_i = 0 \quad \text{for } i = 1, 2. \qquad (16)$$

Namely, each interior point has exactly two directions sharing the same level value with it. Therefore, the endpoints can only be on the boundary.

Lemma 2. For the scalar function $\varphi$, its iso-level curves never intersect each other and do not have self-intersections.

Proof. Since each point corresponds to a unique value, iso-level curves of different values do not intersect each other. Generally, there are two types of self-intersections, as shown in Fig. 4; the difference between them is that in case (b) the self-intersection is tangential. For case (a), the two curve segments have different tangent directions at the self-intersection point. It is well known that the gradient direction at a point is orthogonal to the tangent direction of the iso-level curve. Thus the two different tangent directions at the self-intersection point lead to the contradiction that there are two different gradient directions at that point. For case (b), we view the self-intersection as two iso-level curves that have the same level value, are tangential at the intersection point, and separate from each other in its neighborhood. But $\|\nabla\varphi\| \neq 0$ along an iso-level curve implies that iso-level curves near it have different level values, which again leads to a contradiction. Note that these properties are employed in the following sections to extract each desired iso-level curve, so that a full traversal is not needed. As Lemma 1 and Lemma 2 show, every interior point of the surface has exactly two directions that share the same level value with the point.
Accordingly, we can use a "Seed Growth" like algorithm to find the iso-level curves, namely, if we want to find the iso-level curve of a given level value, say l, we can start from an initial edge on which there exists a point whose level value is l, and then search through the edge's adjacent triangles (if the point is a vertex of the mesh, i.e., the endpoint of the edge, the adjacent triangles are all the 1-ring triangles) to get exactly two edges containing level value l, repeat this procedure and finally the initial point can grow to be an iso-level curve of interest. In addition, it follows immediately from Lemma 1 and Lemma 2 that: Proposition 1. Each iso-level path generated by the proposed method is either contour parallel or direction parallel and free from self-intersection and disjunction. Numerical solution Numerically, the iso-level method described in the above section can be applied to any domain with a discrete gradient operator ∇, divergence operator div(·), and curvature tensor T . To solve the optimization models for free-from surfaces, we appeal to the Finite Element Method (FEM), i.e., in this work, we focus on triangular meshes. However, this method can be easily extended to other domains, such as point clouds. Assume that M ⊂ R 3 is a compact triangulated surface with no degenerate triangles. Let N 1 (i) be the 1-neighborhood of vertex v i , which is the index set for vertices connecting to v i . Let D 1 (i) be the 1-disk of the vertex v i , which is the index set for triangles containing v i . The dual cell of a vertex v i is part of its 1-disk which is more near to v i than its N 1 (i). Fig. 5 (a) shows the dual cell C i for an interior vertex v i , while Fig. 5 (b) shows the dual cell for a boundary vertex. A function ϕ defined over the triangulated surface M is considered to be a piecewise linear function, such that ϕ reaches value ϕ i at vertex v i and is linear within each triangle. Based on these, the energies shown in Equ. (6), Equ. (13), and Equ. (15) are computed by integrating the width term and smooth term over the whole mesh domain, while the mesh domain can be decomposed into a set of triangles or a set of dual cells. To compute the width term and the smooth term on a mesh, we need to discretize the gradient, the divergence, and the curvature tensor, which we will describe briefly, since they are basic in FEM. The gradient of ϕ over each triangle is constant as the function ϕ is linear within the triangle. The gradient in a given triangle can be expressed as where A i is the area of the face f i , N i is its unit normal, Ω i is the set of edge indices for face f i , e j is the j−th edge vector (oriented counter-clockwise), and ϕ i is the opposing value of ϕ as shown in Fig. 6. According to the Stokes' theorem, the integral of divergence over the dual cell is equal to the outward flux along the boundary of the dual cell. Thus the divergence operator associated with vertex v i is dicretized by dividing the outward flux by the dual cell area ∇ϕ(f i ) = 1 2A i j∈Ω i ϕ j (N i × e j ),(17)div(X) = 1 2C i j∈D 1 (i) cot θ 1 j (e 1 j · X j ) + cot θ 2 j (e 2 j · X j ),(18) where the sum is taken over the vertex's incident triangles f j with a vector X j , e 1 j and e 2 j are the two edge vectors of triangle f j containing vertex v i , θ 1 j and θ 2 j are the opposing angles, and C i is the dual cell area for vertex v i . 
Accordingly, the geodesic curvature of the curve $\varphi = \text{const}$ associated with the vertex $v_i$ can be computed by

$$\kappa_g^i = \frac{1}{2C_i} \sum_{j \in D_1(i)} \cot\theta_j^1 \left( e_j^1 \cdot \frac{\nabla\varphi(j)}{\|\nabla\varphi(j)\|} \right) + \cot\theta_j^2 \left( e_j^2 \cdot \frac{\nabla\varphi(j)}{\|\nabla\varphi(j)\|} \right). \qquad (19)$$

The curvature tensor (second fundamental tensor) $T$ is defined in terms of the directional derivatives of the surface normal:

$$T = \begin{pmatrix} D_u n & D_v n \end{pmatrix} = \begin{pmatrix} \frac{\partial n}{\partial u} \cdot u & \frac{\partial n}{\partial v} \cdot u \\ \frac{\partial n}{\partial u} \cdot v & \frac{\partial n}{\partial v} \cdot v \end{pmatrix}, \qquad (20)$$

where $(u, v)$ are the directions of an orthogonal coordinate system in the tangent frame (the sign convention used here yields positive curvatures for convex surfaces with outward-facing normals). Multiplying this tensor by any vector in the tangent plane gives the derivative of the normal in that direction. Although this definition holds only for smooth surfaces, we can approximate it in the discrete case using finite differences. In this work, the curvature tensor of each face is computed by the method in [27]. The whole optimization model can then be formulated as

$$\min_\varphi \sum_{j=1}^{|F|} A_j \left( \|\nabla\varphi(j)\| - \sqrt{\frac{\kappa_s^j + \kappa_c}{8}} \right)^2 + \lambda \left( \sum_{j=1}^{|F|} A_j (\kappa_n^j)^2 + \sum_{i=1}^{|V|} C_i (\kappa_g^i)^2 \right) \quad \text{s.t. } \|\nabla\varphi(j)\| > 0, \qquad (21)$$

where $|F|$ is the number of faces and $|V|$ is the number of vertices. This is a well-established nonlinear least squares optimization problem with inequality constraints, which can be solved by the interior point method [28][29][30]. The interior point solver requires the gradients of the target function and of the constraint functions. The gradient calculation boils down to computing the gradients of $\|\nabla\varphi\|$ and $\nabla\varphi/\|\nabla\varphi\|$, which we do as follows. As demonstrated previously, the gradient of a piecewise linear scalar function within a given triangle $f_k$ is a linear combination of the constant vectors $N_k \times e_i$; thus the partial derivative of $\nabla\varphi(k)$ with respect to $\varphi_j$ is

$$\frac{\partial}{\partial \varphi_j} \nabla\varphi(k) = \frac{1}{2A_k} \frac{\partial}{\partial \varphi_j} \sum_{i \in \Omega_k} \varphi_i (N_k \times e_i) = \frac{1}{2A_k} \sum_{i \in \Omega_k} \delta_{ij} (N_k \times e_i), \qquad (22)$$

where $\delta_{ij}$ is the Kronecker delta, equal to 1 if $i = j$ and 0 otherwise. As for the gradient of $\nabla\varphi/\|\nabla\varphi\|$, it is

$$\frac{\partial}{\partial \varphi_j} \frac{\nabla\varphi(k)}{\|\nabla\varphi(k)\|} = \frac{\frac{\partial \nabla\varphi(k)}{\partial \varphi_j} \|\nabla\varphi(k)\| - \nabla\varphi(k) \frac{(\nabla\varphi(k))^T \frac{\partial \nabla\varphi(k)}{\partial \varphi_j}}{\|\nabla\varphi(k)\|}}{\|\nabla\varphi(k)\|^2}. \qquad (23)$$

The final solution of the optimization problem Eq. (21) is affected by the initial value. In this work, we initialize the tool path with paths from [31].

Tool path planning algorithm

Planning a tool path means representing a surface with a series of curves subject to some error criteria (i.e., chord deviation and scallop height). We next summarize the overall process for generating such curves on a surface with the iso-level method:

1. Select an initial curve $C_0$ on the surface $S$ and fix its level value to zero, i.e., $l_0 = 0$. $C_0$ is a part of the boundary for a direction parallel tool path and the whole boundary for a contour parallel tool path.

2. Find the solution of the models Eq. (6), Eq. (13), and Eq. (15), including meshing and numerical optimization.

3. Select level values $\{l_i\}_{i=1}^n$, where $l_1 = \varphi_{\min}$ and $l_n = \varphi_{\max}$, with the method described in Section 2.1 for the iso-scallop tool path and the method in Section 2.2 for the smooth or optimal tool path. For a direction parallel tool path the last tool path corresponds to $l_n = \varphi_{\max}$, while for a contour parallel tool path the last corresponds to $l_{n-1}$. Then quickly extract the iso-level curves on the triangular mesh based on the method described in Section 2.4.

4. Convert the iso-level curves on the mesh, which are actually polygons, to the surface $S$. The vertices of an iso-level curve on the mesh are either vertices of the mesh or points on edges of the mesh. In the former case, the vertices are also on $S$.
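The two derivatives needed by the interior point solver, Eq. (22) and Eq. (23), can be sketched in a few lines of Python; here `dg` denotes the constant vector $\frac{1}{2A_k}(N_k \times e_j)$ of Eq. (22), and the inputs are assumed placeholders.

```python
import numpy as np

def d_grad_phi(A_k, N_k, e_j):
    # Eq. (22): within face k, d(grad phi)/d(phi_j) is the constant vector
    # (N_k x e_j) / (2 A_k); it does not depend on phi itself.
    return np.cross(N_k, e_j) / (2.0 * A_k)

def d_unit_grad(g, dg):
    # Eq. (23): derivative of g/|g| for g = grad phi(k) and dg = dg/d(phi_j).
    norm = np.linalg.norm(g)
    return dg / norm - g * (g @ dg) / norm**3
```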
In the latter case, a vertex is first proportionally mapped to the parameter domain with respect to the two ends of the edge it lies on, and then its corresponding point on the surface is found.

5. Greedily merge short segments of the polygons so as to approach the chord deviation tolerance as closely as possible. These reduced iso-level curves (polygons) are the desired tool path.

Experimental results

In this section, the proposed tool path planning method is applied to real data. A free-form surface and a human face are chosen to illustrate its effectiveness, as in Fig. 7. The free-form surface is used to show the generation of a direction parallel tool path. The human face was captured by a coordinate measuring machine; we use it to show the generation of a contour parallel tool path. To plan an iso-level tool path, the first step is to construct a proper scalar function over the surface. Since the Finite Element Method is employed to find the optimizer of the optimization models, meshing is needed; we choose triangular elements. Fig. 8(a) shows the meshing result of the free-form surface and Fig. 9(a) that of the human face. The optimal scalar functions are illustrated in Fig. 8(b) and Fig. 9(b) by varying color: Fig. 8(b) shows the scalar function of the free-form surface for generating a direction parallel tool path, and Fig. 9(b) shows that of the human face for generating a contour parallel tool path. The variation from blue to red represents increasing level value. Once the optimal scalar functions have been constructed for both surfaces, tool paths that are optimal with respect to iso-scallop and smoothness can be generated. A ball-end cutter with radius 4 mm is chosen to show the path generation, so that the tool orientation does not matter. The scallop height limit is 1 mm and the chord deviation is 0.01 mm. In order to show the tool paths clearly, the error criterion (i.e., the scallop height) is set much larger than in real cases. Fig. 8(c) shows the optimal direction parallel paths on the free-form surface, and Fig. 9(c) shows the corresponding contour parallel tool path on the human face. Their weights are both $\lambda = 1$. We next show some comparisons and analyses of the generated tool paths. Following the demonstration of [32], the contour parallel tool path is emphasized. Fig. 10 shows the tool paths generated by the proposed method, ranging from smooth to iso-scallop: Fig. 10(a) shows the smooth contour parallel tool path generated with $\lambda = 10$, Fig. 10(b) the optimal tool path with $\lambda = 1$, and Fig. 10(c) the iso-scallop tool path with $\lambda = 0$. As described in the sections above, the iso-scallop condition Eq. (5) characterizes the overlapping between neighbor paths. Therefore, to analyze the overlapping of the generated tool paths, we collect statistics on the relative deviation from the iso-scallop condition along the paths. It is computed as

$$\Delta = \frac{\left| \|\nabla\varphi\| - \sqrt{\frac{\kappa_s + \kappa_c}{8}} \right|}{\sqrt{\frac{\kappa_s + \kappa_c}{8}}} = \left| 1 - \frac{\|\nabla\varphi\|}{\sqrt{\frac{\kappa_s + \kappa_c}{8}}} \right|. \qquad (24)$$

The statistics are depicted in Fig. 10(d), 10(e), and 10(f). As the figures show, for the iso-scallop tool path the relative deviations are all less than 5%, centered around 1%. For the optimal tool path, the ratios shift toward larger values, as expected, and a few points are much larger than the rest; most of these points are located in the corner parts of the tool path.
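The overlapping statistic of Eq. (24) reduces to a one-line computation per face; the following hedged sketch also reports the fraction of samples exceeding a given ratio, mirroring the analysis in Fig. 10(d)-(f).

```python
import numpy as np

def relative_deviation(grad_norms, kappa_s, kappa_c):
    # Eq. (24): relative deviation from the iso-scallop condition.
    target = np.sqrt((kappa_s + kappa_c) / 8.0)
    return np.abs(1.0 - grad_norms / target)

def fraction_above(grad_norms, kappa_s, kappa_c, threshold=0.10):
    # Fraction of points whose deviation ratio exceeds, e.g., 10%.
    dev = relative_deviation(grad_norms, kappa_s, kappa_c)
    return float(np.mean(dev > threshold))
```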
For the smooth tool path, the overlapping is much more pronounced, and about 2% of the points have a ratio greater than 10%. However, relaxing the iso-scallop condition brings smoothness to the tool paths, as shown in Fig. 10(g), 10(h), and 10(i). In conclusion, the optimal tool path strikes a balance between overlapping and smoothness. We also compare the optimal tool path with the Laplacian-based one in Fig. 11. Although the Laplacian-based tool path is visibly smoother than the optimal one, the overlapping analysis figures, i.e., Fig. 11(c) and 11(d), show that its neighbor paths overlap much more severely.

Figure 10: (a) smooth tool path; (b) optimal tool path; (c) iso-scallop tool path; (d) overlapping analysis for the smooth tool path; (e) overlapping analysis for the optimal tool path; (f) overlapping analysis for the iso-scallop tool path; (g) curvature analysis for the smooth tool path; (h) curvature analysis for the optimal tool path; (i) curvature analysis for the iso-scallop tool path.

Conclusion

In this paper, a new framework for tool path planning is proposed. The novelty of our method is that it allows several objectives to be considered in a unified framework, thus making global optimization of tool paths possible. Moreover, the scalar function only has to be constructed once; it can then be used to generate tool paths for machining from rough to fine. The proposed framework is applied to find an optimal tool path that takes smoothness and iso-scallop requirements into consideration simultaneously. Eq. (5), for controlling the interval between neighboring iso-level curves, and Eq. (8), for measuring the curvature of an iso-level curve, are derived to lay the foundation for the formulation of the optimization models. This theory likely has further potential for planning other optimal tool paths, and the derived formulas can also be applied directly to level-set-based tool path planning methods, e.g., [22].
4,807
1811.07579
2901595091
We consider active learning of deep neural networks. Most active learning works in this context have focused on studying effective querying mechanisms and assumed that an appropriate network architecture is a priori known for the problem at hand. We challenge this assumption and propose a novel active strategy whereby the learning algorithm searches for effective architectures on the fly, while actively learning. We apply our strategy using three known querying techniques (softmax response, MC-dropout, and coresets) and show that the proposed approach overwhelmingly outperforms active learning using fixed architectures.
In neural architecture search (NAS), the goal is to devise algorithms that automatically optimize the neural architecture for a given problem. A number of NAS approaches have been proposed recently. In @cite_25, a reinforcement learning algorithm was used to optimize the architecture of a neural network. In @cite_8, a genetic algorithm is used to optimize the structure of two types of "blocks" (combinations of neural network layers and building components) that are used for constructing architectures. The number of blocks comprising the full architecture was manually optimized. It was observed that the optimal number of blocks mostly depends on the size of the training set. More efficient optimization techniques were proposed in @cite_13 @cite_27 @cite_1 @cite_0. In all these works, the architecture search algorithms focus on optimizing the structure of one (or two) blocks that are manually connected together to span the full architecture. The algorithm proposed in @cite_14 optimizes both the block structure and the number of blocks simultaneously.
{ "abstract": [ "We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6 on CIFAR-10 and 20.3 when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches. We also present results using random search, achieving 0.3 less top-1 accuracy on CIFAR-10 and 0.1 less on ImageNet whilst reducing the search time from 36 hours down to 1 hour.", "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset.", "The effort devoted to hand-crafting image classifiers has motivated the use of architecture search to discover them automatically. Reinforcement learning and evolution have both shown promise for this purpose. This study introduces a regularized version of a popular asynchronous evolutionary algorithm. We rigorously compare it to the non-regularized form and to a highly-successful reinforcement learning baseline. 
Using the same hardware, compute effort and neural network training code, we conduct repeated experiments side-by-side, exploring different datasets, search spaces and scales. We show regularized evolution consistently produces models with similar or higher accuracy, across a variety of contexts without need for re-tuning parameters. In addition, regularized evolution exhibits considerably better performance than reinforcement learning at early search stages, suggesting it may be the better choice when fewer compute resources are available. This constitutes the first controlled comparison of the two search algorithms in this context. Finally, we present new architectures discovered with regularized evolution that we nickname AmoebaNets. These models set a new state of the art for CIFAR-10 (mean test error = 2.13 ) and mobile-size ImageNet (top-5 accuracy = 92.1 with 5.06M parameters), and reach the current state of the art for ImageNet (top-5 accuracy = 96.2 ).", "This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.", "We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. Thanks to parameter sharing between child models, ENAS is fast: it delivers strong empirical performances using much fewer GPU-hours than all existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state-of-the-art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS designs novel architectures that achieve a test error of 2.89 , which is on par with NASNet (, 2018), whose test error is 2.65 .", "We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. 
Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet.", "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214." ], "cite_N": [ "@cite_14", "@cite_8", "@cite_1", "@cite_0", "@cite_27", "@cite_13", "@cite_25" ], "mid": [ "2767002384", "2964081807", "2785430118", "2951104886", "2785366763", "2771727678", "2553303224" ] }
Deep Active Learning with a Neural Architecture Search
Active learning allows a learning algorithm to control the learning process by actively selecting the labeled training sample from a large pool of unlabeled instances. Theoretically, active learning has huge potential, especially in cases where an exponential speedup in sample complexity can be achieved [12,29,11]. Active learning becomes particularly important when considering supervised deep neural models, which are hungry for large and costly labeled training samples. For example, when considering supervised learning of medical diagnoses based on radiology images, the labeling of images must be performed by professional radiologists whose availability is scarce and whose consultation time is costly. In this paper, we focus on active learning of image classification with deep neural models. There are only a few works on this topic and, for the most part, they concentrate on one issue: how to select the subsequent instances to be queried. They are also mostly based on the uncertainty sampling principle, whereby querying uncertain instances tends to expedite the learning process. A drawback of most of these works is their heavy use of prior knowledge regarding the neural architecture; that is, they utilize an architecture already known to be useful for the classification problem at hand. When considering active learning of a new learning task, e.g., involving medical images or remote sensing, there is no known off-the-shelf working architecture. Moreover, even if one receives from an oracle the "correct" architecture for the passive learning problem (an architecture that induces the best performance if trained over a very large labeled training sample), it is unlikely that this architecture will be effective in the early stages of an active learning session. The reason is that a large and expressive architecture will tend to overfit when trained over a small sample and, consequently, its generalization performance and the induced querying function (derived from the overfit model) can be poor (we demonstrate this phenomenon in Section 6). To overcome this challenge, we propose to perform a neural architecture search (NAS) in every active learning round. We present a new algorithm, the incremental neural architecture search (iNAS), which can be integrated together with any active querying strategy. In iNAS, we perform an incremental search for the best architecture from a restricted set of candidate architectures. The motivating intuition is that the architectural capacity should start small and grow monotonically along the active learning process. The iNAS algorithm thus only allows for small architectural increments in each active round. We implement iNAS using a flexible architecture family consisting of a changeable number of stacks, each consisting of a fluid number of Resnet blocks. The resulting active learning algorithm, which we term active-iNAS, consistently and significantly improves upon all known deep active learning algorithms. We demonstrate this advantage of active-iNAS with three known querying functions (softmax response, MC-dropout, and coresets) over three image classification datasets: CIFAR-10, CIFAR-100, and SVHN.

Problem Setting

We first define a standard supervised learning problem. Let $\mathcal{X}$ be a feature space and $\mathcal{Y}$ be a label space. Let $P(X, Y)$ be an unknown underlying distribution, where $X \in \mathcal{X}$, $Y \in \mathcal{Y}$.
Based on a labeled training set $S_m = \{(x_i, y_i)\}_{i=1}^m$ of $m$ labeled training samples, the goal is to select a prediction function $f \in \mathcal{F}$, $f: \mathcal{X} \to \mathcal{Y}$, so as to minimize the risk $R(f) = E_{(X,Y)}[\ell(f(X), Y)]$, where $\ell(\cdot, \cdot) \in \mathbb{R}^+$ is a given loss function. For any labeled set $S$ (training or validation), the empirical risk over $S$ is defined as $\hat{r}_S(f) = \frac{1}{|S|} \sum_{i=1}^{|S|} \ell(f(x_i), y_i)$. In the pool-based active learning setting, we are given a set $U = \{x_1, x_2, \ldots, x_u\}$ of unlabeled samples. Typically, the acquisition of unlabeled instances is cheap and, therefore, $U$ can be very large. The task of the active learner is to choose points from $U$ to be labeled by an annotator so as to train an accurate model for a given labeling budget (number of labeled points). The points are selected by a query function, denoted by $Q$. Query functions often select points based on information inferred from the current model $f_\theta$, the existing training set $S$, and the current pool $U$. In the mini-batch pool-based active learning setting, the points to be labeled are queried in bundles called mini-batches, such that a model is trained after each mini-batch. NAS is formulated as follows. Consider a class $\mathcal{A}$ of architectures, where each architecture $A \in \mathcal{A}$ represents a hypothesis class containing all models $f_\theta \in A$, where $\theta$ represents the parameter vector of the architecture $A$. The objective in NAS is to solve

$$A^* = \operatorname{argmin}_{A \in \mathcal{A}} \; \min_{f_\theta \in A|S} R(f_\theta). \qquad (1)$$

Since $R(f)$ depends on an unknown distribution, it is typically proxied by an empirical quantity such as $\hat{r}_S(f)$, where $S$ is a training or validation set.

Deep Active Learning with a Neural Architecture Search

In this section we define a neural architecture search space over which we apply a novel search algorithm. This search space, together with the algorithm, constitutes a new NAS technique that drives our new active algorithm.

Modular Architecture Search Space

Modern neural network architectures are often modeled as a composition of one or several basic building blocks (sometimes referred to as "cells") containing several layers [13,16,31,30,14]. Stacks are composed of several blocks connected together. The full architecture is a sequence of stacks, where usually down-sampling and depth-expansion are performed between stacks. For example, consider the Resnet-18 architecture. This network begins with two initial layers and continues with four consecutive stacks, each consisting of two Resnet basic blocks, followed by an average pooling layer and ending with a softmax layer. The Resnet basic block contains two batch-normalized 3 × 3 convolutional layers with ReLU activations and a residual connection. Between every two stacks, the feature map resolution is reduced by a factor of 2 (using a strided convolution layer), and the width (the number of feature maps in each layer, denoted as $W$) is doubled, starting from 64 in the first block. This classic architecture has several variants, which differ by the number and type of blocks in each stack. In this work, we consider "homogeneous" architectures composed of a single block type, with each stack containing the same number of blocks. We denote such an architecture by $A(B, N_{blocks}, N_{stacks})$, where $B$ is the building block, $N_{blocks}$ is the number of blocks in each stack, and $N_{stacks}$ is the number of stacks. For example, using this notation, Resnet-18 is $A(B_r, 2, 4)$, where $B_r$ is the Resnet basic block. Figure 1 depicts the proposed homogeneous architecture.
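A generic skeleton of the mini-batch pool-based setting just described might look as follows in Python; `train`, `oracle`, and the query function `Q` are placeholders for user-supplied components, not part of the paper.

```python
import numpy as np

def pool_based_active_learning(U, oracle, train, Q, k, b, budget, seed=0):
    """Label k seed points at random, then query b points per round with Q
    until the labeling budget is spent; retrain after every mini-batch."""
    rng = np.random.default_rng(seed)
    order = list(rng.permutation(len(U)))
    labeled, pool = order[:k], order[k:]
    S = [(U[i], oracle(U[i])) for i in labeled]
    model = train(S)
    while len(S) + b <= budget:
        picked = Q(model, S, [U[i] for i in pool], b)  # positions within pool
        for p in sorted(picked, reverse=True):          # pop from the back first
            i = pool.pop(p)
            S.append((U[i], oracle(U[i])))
        model = train(S)
    return model
```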
For a given block $B$, we define the modular architecture search space as $\mathcal{A} = \{A(B, i, j) : i \in \{1, 2, 3, \ldots, N_{blocks}\}, j \in \{1, 2, 3, \ldots, N_{stacks}\}\}$, which is simply all possible architectures spanned by the grid defined by the two corners $A(B, 1, 1)$ and $A(B, N_{blocks}, N_{stacks})$. Clearly, the space $\mathcal{A}$ is restricted in the sense that it contains only a limited subspace of architectures, but it nevertheless contains $N_{blocks} \times N_{stacks}$ architectures with diversity in both the number of layers and the number of parameters.

Search Space as an Acyclic Directed Graph (DAG)

The main idea in our search strategy is to start from the smallest possible architecture (in the modular search space) and iteratively search for an optimal incremental architectural expansion within the modular search space. We define the depth of an architecture to be the number of layers in the architecture. We denote the depth of $A(B, i, j)$ by $|A(B, i, j)| = ij\beta + \alpha$, where $\beta$ is the number of layers in the block $B$, and $\alpha$ is the number of layers in the initial block (all the layers appearing before the first block) plus the number of layers in the classification block (all the layers appearing after the last block). It is convenient to represent the architecture search space as a directed acyclic graph (DAG), $G = (V, E)$, whose vertices are the architectures $A(B, i, j)$, where $i \in \{1, 2, 3, \ldots, N_{blocks}\}$ is the number of blocks in each stack and $j \in \{1, 2, 3, \ldots, N_{stacks}\}$ is the number of stacks. The edge set $E$ is defined based on two incremental expansion steps. The first step increases the depth of the network without changing the number of stacks (i.e., without affecting the width), and the second step increases the depth while also increasing the number of stacks (i.e., increasing the width). Both increment steps are defined so as to perform the minimum possible architectural expansion (within the search steps). Thus, when expanding $A(B, i, j)$ using the first step, the resulting architecture is $A(B, i+1, j)$. When expanding $A(B, i, j)$ using the second step, we reduce the number of blocks in each stack to perform a minimal expansion, resulting in the architecture $A(B, \lceil \frac{ij}{j+1} \rceil + 1, j+1)$. The parameters of the latter architecture are obtained by rounding up the solution $i^*$ of the following problem:

$$i^* = \operatorname{argmin}_{i' > 0} |A(B, i', j+1)| \quad \text{s.t. } |A(B, i', j+1)| > |A(B, i, j)|.$$

We conclude that each of these steps is indeed depth-expanding. In the first step, the expansion is made only along the depth dimension, while the second step affects the number of stacks and expands the width as well. In both steps, the increment is the smallest possible within the modular search space. In Figure 2, we depict the DAG $G$ on a grid whose coordinates are $i$ (blocks) and $j$ (stacks). The modular search space in this example consists of all the architectures in the range $A(B, 1, 1)$ to $A(B, 5, 4)$. The arrows represent all the edges in $G$. In this formulation, it is evident that every path starting from any architecture can be expanded up to the largest possible architecture. Moreover, every architecture is reachable when starting from the smallest architecture $A(B, 1, 1)$. These two properties serve our search strategy well.

Incremental Neural Architecture Search

The proposed incremental neural architecture search (iNAS) procedure is described in Algorithm 1 and operates as follows. Given a small initial architecture $A(B, i_0, j_0)$, a training set $S$, and an architecture search space $\mathcal{A}$, we first randomly partition the set $S$ into training and validation subsets, $S'$ and $V$, respectively, $S = S' \cup V$.
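The depth formula and the two expansion steps translate directly into code. In the sketch below, `alpha` and `beta` follow the notation of this section; the printed example checks that Resnet-18, $A(B_r, 2, 4)$ with $\beta = 2$ and $\alpha = 2$, has depth 18, and shows the two DAG edges leaving $A(B, 2, 2)$.

```python
import math

def depth(i, j, alpha, beta):
    # |A(B, i, j)| = i * j * beta + alpha
    return i * j * beta + alpha

def expand_depth(i, j):
    # First expansion step: A(B, i, j) -> A(B, i + 1, j).
    return (i + 1, j)

def expand_width(i, j):
    # Second expansion step: A(B, i, j) -> A(B, ceil(ij / (j + 1)) + 1, j + 1).
    return (math.ceil(i * j / (j + 1)) + 1, j + 1)

print(depth(2, 4, alpha=2, beta=2))   # -> 18 (Resnet-18)
print(expand_depth(2, 2))             # -> (3, 2)
print(expand_width(2, 2))             # -> (3, 3)
```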
On iteration $t$, a set of candidate architectures is selected based on the edges of the search DAG (see Section 4.2), including the current architecture and the two connected vertices (lines 5-6). This step creates a candidate set $\mathcal{A}'$ consisting of three models, $\mathcal{A}' = \{A(B, i, j), A(B, \lceil \frac{ij}{j+1} \rceil + 1, j+1), A(B, i+1, j)\}$. In line 7, the best candidate in terms of validation performance is selected and denoted $A_t = A(B, i_t, j_t)$. The optimization problem formulated in line 7 is an approximation of the NAS objective formulated in Equation (1). The algorithm terminates whenever $A_t = A_{t-1}$, or when a predefined maximum number of iterations is reached (in which case $A_t$ is the final output).

Algorithm 1 Incremental Neural Architecture Search (iNAS)
1: function iNAS(S, A, A(B, i_0, j_0), T_iNAS)
2:   S', V <- a random partition of S
3:   for t = 1 : T_iNAS do
4:     i <- i_{t-1}; j <- j_{t-1}
5:     A' <- {A(B, i, j), A(B, ceil(ij/(j+1)) + 1, j + 1), A(B, i + 1, j)}
6:     A' <- A' ∩ A
7:     A(B, i_t, j_t) <- argmin_{A ∈ A'} r̂_V(argmin_{f_θ ∈ A} r̂_{S'}(f_θ))
8:     if A(B, i_t, j_t) = A(B, i_{t-1}, j_{t-1}) then
9:       break
10:    end if
11:  end for
12:  Return A(B, i_t, j_t)
13: end function

Active Learning with iNAS

The deep active learning with incremental neural architecture search (active-iNAS) technique is described in Algorithm 2 and works as follows. Given a pool $U$ of unlabeled instances from $\mathcal{X}$, a set of architectures $\mathcal{A}$ induced using a composition of basic building blocks $B$ as shown in Section 4.1, an initial (small) architecture $A_0 \in \mathcal{A}$, a query function $Q$, an initial (passively) labeled training set size $k$, and an active learning batch size $b$, we first sample $k$ points uniformly at random from $U$ to constitute the initial training set $S_1$. We then iterate the following three steps. First, we search for an optimal neural architecture using the iNAS algorithm over the search space $\mathcal{A}$ with the current training set $S_t$ (line 6). The initial architecture for iNAS is chosen to be the architecture selected in the previous active round ($A_{t-1}$), assuming that the architecture size is non-decreasing along the active learning process. The resulting architecture at iteration $t$ is denoted $A_t$. Next, we train a model $f_\theta \in A_t$ based on $S_t$ (line 7). Finally, if the querying budget allows, the algorithm requests $b$ new points using $Q(f_\theta, S_t, U_t, b)$ and updates $S_{t+1}$ and $U_{t+1}$ correspondingly; otherwise the algorithm returns $f_\theta$ (lines 8-14).

Algorithm 2 Active Learning with iNAS (active-iNAS)
1: function Active-iNAS(U, A, A_0, Q, k, b)
2:   S_1 <- k points sampled uniformly at random from U
3:   U_1 <- U \ S_1
4:   t <- 1
5:   while true do
6:     A_t <- iNAS(S_t, A, A_{t-1})
7:     f_θ <- argmin_{f_θ ∈ A_t} r̂_{S_t}(f_θ)
8:     if the labeling budget is exhausted then
9:       return f_θ
10:    end if
11:    S' <- Q(f_θ, S_t, U_t, b)
12:    S_{t+1} <- S_t ∪ S'
13:    U_{t+1} <- U_t \ S'
14:    t <- t + 1
15:  end while
16: end function

Motivation and Implementation Notes

The iNAS algorithm is designed to exploit the prior knowledge gleaned from samples of increasing size, which is motivated by straightforward statistical learning arguments. iNAS starts with a small capacity so as to avoid overfitting in the early stages, and then allows for capacity increments as labeled data accumulate. Moreover, iNAS preserves small capacity by considering only two small incremental steps at each iteration. Alternative approaches, such as a full grid search in each active round, would not enjoy these benefits and would be prone to overfitting (note also that a full grid search could be computationally prohibitive). Turning now to the running time of active-iNAS: when running with small active learning mini-batches, the iNAS algorithm will only require one iteration at each round, resulting in only three additional models having to be trained in each round. In our implementation of iNAS, we apply "premature evaluation" as considered in [26]; our models are evaluated after $T_{SGD}/4$ epochs, where $T_{SGD}$ is the total number of epochs in each round. Our final active-iNAS implementation thus takes only $1.75\,T_{SGD}$ epochs for each active round.
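A compact Python rendering of one iNAS iteration (lines 4-7 of Algorithm 1) is sketched below; `fit` and `val_risk` stand for training on $S'$ and evaluating $\hat{r}_V$, and are assumed to be supplied by the surrounding training code.

```python
import math

def inas_step(i, j, space, fit, val_risk):
    """One iNAS iteration: evaluate the current architecture and its two DAG
    expansions (restricted to the search space) and keep the validation best."""
    candidates = {(i, j),
                  (math.ceil(i * j / (j + 1)) + 1, j + 1),
                  (i + 1, j)}
    candidates &= set(space)                                  # line 6
    return min(candidates, key=lambda a: val_risk(fit(a)))   # line 7
```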
For example, in the CIFAR-10 experiment T SGD = 200 requires less than 2 GPU hours (on average) for an active learning round (Nvidia Titan-Xp GPU). S ← Q(f θ , S t , U t , b) 12: S t+1 ← S t ∪ S 13: U t+1 ← U t \S Experimental Design and Details Datasets CIFAR-10. The CIFAR-10 dataset [18] is an image classification dataset containing 50,000 training images and 10,000 test images that are classified into 10 categories. The image size is 32 × 32 × 3 pixels (RGB images). CIFAR-100. The CIFAR-100 dataset [18] is an image classification dataset containing 50,000 training images and 10,000 test images that are classified into 100 categories. The image size is 32 × 32 × 3 pixels (RGB images). Street View House Numbers (SVHN). The SVHN dataset [22] is an image classification dataset containing 73,257 training images and 26,032 test images classified into 10 classes representing digits. The images are digits of house numbers cropped and aligned, taken from the Google Street View service. Image size is 32 × 32 × 3 pixels. Architectures and Hyperparameters We used an architecture search space that is based on the Resnet architecture [13]. The initial block contains a convolutional layer with filter size of 3 × 3 and depth of 64, followed by a max-pooling layer having a spatial size of 3 × 3 and strides of 2. The basic block contains two convo-lutional layers of size 3 × 3 followed by a ReLU activation. A residual connection is added before the activation of the second convolutional layer, and a batch normalization [17] is used after each layer. The classification block contains an average pooling layer that reduces the spatial dimension to 1 × 1, and a fully connected classification layer followed by softmax. The search space is defined according to the formulation in Section 4.1, and spans all architectures in the range A (B r , 1, 1) to A(B r , 12, 5). As a baseline, we chose two fixed architectures. The first architecture was the one optimized for the first active round (optimized over the initial seed of labeled points), and which coincidentally happened to be A(B r , 1, 2) on all tested datasets. The second architecture was the wellknown Resnet-18, denoted as A(B r , 2, 4), which is some middle point in our search grid. We trained all models using stochastic gradient descent (SGD) with a batch size of 128 and momentum of 0.9 for 200 epochs. We used a learning rate of 0.1, with a learning rate multiplicative decay of 0.1 after epochs 100 and 150. Since we were dealing with different sizes of training sets along the active learning process, the epoch size kept changing. We fixed the size of an epoch to be 50,000 instances (by oversampling), regardless of the current size of the training set S t . A weight decay of 5e-4 was used, and standard data augmentation was applied containing horizontal flips, four pixel shifts and up to 15-degree rotations. The active learning was implemented with an initial labeled training seed (k) of 2000 instances. The active minibatch size (b) was initialized to 2000 instances and updated to 5000 after reaching 10000 labeled instances. The maximal budget was set to 50,000 for all datasets 1 . For time efficiency reasons, the iNAS algorithm was implemented with T iN AS = 1, and the training of new architectures in iNAS was early-stopped after 50 epochs, similar to what was done in [26]. Query Functions We applied the following three well known query functions. Softmax Response. 
The softmax response method (SR) simply estimates prediction confidence (the inverse of uncertainty) by the maximum softmax value of the instance. In the batch pool-based active learning setting that we consider here, we simply query labels for the least confident b points. MC-dropout [8,7]. The points in the pool are ordered based on their prediction uncertainty estimated by MCdropout and queried in that order. The MC-dropout was implemented with p = 0.5 for the dropout rate and 100 feed-forward iterations for each sample. Coreset [25,9]. The coreset method was implemented as follows. For a trained model f , we denote the output of the representation layer (the layer before the last) as φ f . For every sample x ∈ U , its coreset loss is measured as min x ∈S (d(φ f (x ), φ f (x)), where d is the l2 euclidean distance. We iteratively sampled the point with the highest coreset loss with respect to the latest set S, b times. Experimental Results We first compare active-iNAS to active learning performed with a fixed architecture over the three datasets, and apply the above three querying functions. Then we analyze the architectures learned by iNAS along the active process. We also empirically motivate the use of iNAS by showing how optimized architecture can improve the query function. Finally, we compare the resulting active learning algorithm obtained with the active-iNAS framework. Active-iNAS vs. Fixed Architecture The results of an active learning algorithm are often depicted by a learning curve measuring the trade-off between labeled points consumed (or a budget) vs. performance (accuracy in our case). For example, in Figure 3(a) we see the results obtained by active-iNAS and two fixed architectures for classifying CIFAR-10 images using the softmax response querying function. In black (solid), we see the curve for the active-iNAS method. The results of A(B r , 1, 2) and Resnet-18 (A(B r , 2, 4)) appear in (dashed) red and (dashed) blue, respectively. The X axis corresponds to the labeled points consumed, starting from k = 2000 (the initial seed size), and ending with 50,000 (the maximal budget). In each active learning curve, the tandard error of the mean over three random repetitions is shadowed. We present results for CIFAR-10, CIFAR-100 and SVHN. We first analyze the results for CIFAR-10 (Figure 3). Consider the graphs corresponding to the fixed architectures (red and blue). It is evident that for all query functions, the small architecture (red) outperforms the big one (Resnet-18 in blue) in the early stage of the active process. Later on, we see that the big and expressive Resnet-18 outperforms the small architecture. Active-iNAS, performance consistently and significantly outperforms both fixed architectures almost throughout the entire range. It is most striking that active-iNAS is better than each of the fixed architectures even when all are consuming the entire training budget. Later on we speculate about the reason for this phenomenon as well as the switch between the red and blue curves occuring roughly around 15,000 training points (in Figure 3(a)). Turning now to CIFAR-100 (Figure 4), we see qualitatively very similar behaviors and relations between the various active learners. We now see that the learning problem is considerably harder, as indicated by the smaller area under all the curves. Nevertheless, in this problem active-iNAS achieves a substantially greater advantage over the fixed architectures in all three query functions. 
Finally, in the SVHN digit classification task, which is known to be easier than both the CIFAR tasks, we again see qualitatively similar behaviors that are now much less pronounced, as all active learners are quite effective. On the other hand, in the SVHN task, active-iNAS impressively obtains almost maximal performance after consuming only 20% of the training budget. Analyzing the Learned Architectures In addition to standard performance results presented in Section 6.1, it is interesting to inspect the sequence of architectures that have been selected by iNas along the active learning process. In Figure 7 we depict this dynamics; for example, consider the CIFAR-10 dataset appearing in solid lines, where the blue curve represents the number of pa-rameters in the network and the black shows the number of layers in the architecture. Comparing CIFAR-10 (solid) and CIFAR-100 (dashed), we see that active-iNAS prefers, for CIFAR-100, deeper architectures compared to its choices for CIFAR-10. In contrast, in SVHN (dotted), active-iNAS gravitates to shallower but wider architectures, which result significantly larger numbers of parameters. The iNAS algorithm is relatively stable in the sense that in the vast majority of random repeats of the experiments, similar sequences of architectures have been learned (this result is not shown in the figure). A hypothesis that might explain the latter results is that CIFAR-100 contains a larger number of "concepts" requiring deeper hierarchy of learned CNN layers compared to CIFAR-10. The SVHN is a simpler and less noisy learning problem, and, therefore, larger architectures can play without significant risk of overfitting. Enhanced Querying with Active-iNAS In this section we argue and demonstrate that optimized architectures not only improve generalization at each step, but also enhance the query function quality 2 . In order to isolate the contribution of the query function, we normalize the active performance by the performance of a passive learner obtained with the same model. A common approach for this normalization has already been proposed in [15,2], and can be defined as follows. 3 Let the relative AUC gain be the relative reduction of area under the curve (AUC) of the 0-1 loss in the active learning curve, compared to the AUC of the passive learner (trained over the same number of random queries, at each round); namely, AUC-GAIN(P A, AC, m) = AU C m (P A) − AU C m (AC) AU C m (P A) , where AC is an active learning algorithm, P A is its passive application (with the same architecture), m is a labeling budget, and AU C m (·) is the area under the learning curve (0-1 loss) of the algorithm using m labeled examples. Clearly, high values of AUC-GAIN correspond to high performance and vice versa. In Figure 8, we used the AUC-GAIN to measure the performance of the softmax response querying function on the CIFAR-10 dataset over all training budgets up to the maximal (50,000). We compare the performance of this query function applied over two different architectures: the small architecture (A(B r , 1, 2), and Resnet-18 (A(B r , 2, 4). We note that it is unclear how to define AUC-GAIN for active-iNAS because it has a dynamically changing architecture. As can easily be seen, the small architecture dramatically outperforms Resnet-18 in the early stages. Later on, the AUC-GAIN curves switch, and Resnet-18 catches up and outperforms the small architecture. 
This result supports the intuition that improvements in the generalization power of an architecture tend to improve the effectiveness of the querying function. We hypothesize that the active-iNAS' outstanding results shown in Section 6.1 have been achieved not only by the improved generalization of every single model, but also by the effect of the optimized architecture on the querying function. Query Function Comparison In Section 6.1 We demonstrated that active-iNAS consistently outperformed direct active applications of three querying functions. Here, we compare the performance of the three active-iNAS methods, applied with those three functions: softmax response, MC-dropout and coreset. In Figure 6 we compare these three active-iNAS algorithms over the three datasets. In all three datasets, softmax response is among the top performers, whereas one or the other two querying functions is sometimes the worst. In this sense, softmax response achieves the best results. For example, on CIFAR-10 and SVHN, the MC-dropout is on par with softmax, but on CIFAR-100 MC-dropout is the worst. 3 The corresponding normalizations in those papers are defined using slightly different terminology, but are essentially identical to our definition. (Br, 2, 4)). The poor performance of MC-dropout over CIFAR-100 may be caused by the large number of classes, as pointed out by [10] in the context of selective prediction. In all cases, coreset is slightly behind the softmax response. This is in sharp contrast to the results presented by [25,9]. We conclude this section by emphasizing that our results indicate that the combination of softmax response with active-iNAS is the best active learning method. Concluding Remarks We presented active-iNAS, an algorithm that effectively integrates deep neural architecture optimization with active learning. The active algorithm performs a monotone search for the locally best architecture on the fly. Our experiments indicate that active-iNAS outperforms standard active learners that utilize suitable and commonly used fixed architecture (In the supplementary material we present comparisons to other choices of fixed architectures). In terms of absolute performance quality, to the best of our knowledge, the combination of active-iNAS and softmax response is the best active learner over the datasets we considered.
4,507
1811.07579
2901595091
We consider active learning of deep neural networks. Most active learning works in this context have focused on studying effective querying mechanisms and assumed that an appropriate network architecture is a priori known for the problem at hand. We challenge this assumption and propose a novel active strategy whereby the learning algorithm searches for effective architectures on the fly, while actively learning. We apply our strategy using three known querying techniques (softmax response, MC-dropout, and coresets) and show that the proposed approach overwhelmingly outperforms active learning using fixed architectures.
When considering NAS for fully-connected networks, @cite_30 proposed an algorithm that iteratively adds neurons to an existing layer or initiates a new layer. Their algorithm iteratively optimizes the width and depth of a network. For a comprehensive survey on NAS techniques, see @cite_19. To the best of our knowledge, no work has been done on architecture searches for active learning.
{ "abstract": [ "We present new algorithms for adaptively learning artificial neural networks. Our algorithms (AdaNet) adaptively learn both the structure of the network and its weights. They are based on a solid theoretical analysis, including data-dependent generalization guarantees that we prove and discuss in detail. We report the results of large-scale experiments with one of our algorithms on several binary classification tasks extracted from the CIFAR-10 dataset. The results demonstrate that our algorithm can automatically learn network structures with very competitive performance accuracies when compared with those achieved for neural networks found by standard approaches.", "Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect for this progress are novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize them according to three dimensions: search space, search strategy, and performance estimation strategy." ], "cite_N": [ "@cite_30", "@cite_19" ], "mid": [ "2464772092", "2885311373" ] }
Deep Active Learning with a Neural Architecture Search
Active learning allows a learning algorithm to control the learning process, by actively selecting the labeled training sample from a large pool of unlabeled instances. Theoretically, active learning has a huge potential, especially in cases where exponential speedup in sample complexity can be achieved [12,29,11]. Active learning becomes particularly important when considering supervised deep neural models, which are hungry for large and costly labeled training samples. For example, when considering supervised learning of medical diagnoses based on radiology images, the labeling of images must be performed by professional radiologists whose availability is scarce and consultation time is costly. In this paper, we focus on active learning of image classification with deep neural models. There are only a few works on this topic and, for the most part, they concentrate on one issue: how to select the subsequent instances to be queried. They are also mostly based on the uncertainty sampling principle, in which querying uncertain instances tends to expedite the learning process. A drawback of most of these works is their heavy use of prior knowledge regarding the neural architecture. That is, they utilize an architecture already known to be useful for the classification problem at hand. When considering active learning of a new learning task, e.g., involving medical images or remote sensing, there is no known off-the-shelf working architecture. Moreover, even if one receives from an oracle the "correct" architecture for the passive learning problem (an architecture that induces the best performance if trained over a very large labeled training sample), it is unlikely that this architecture will be effective in the early stages of an active learning session. The reason is that a large and expressive architecture will tend to overfit when trained over a small sample and, consequently, its generalization performance and the induced querying function (from the overfit model) can be poor (we demonstrate this phenomenon in Section 6). To overcome this challenge, we propose to perform a neural architecture search (NAS) in every active learning round. We present a new algorithm, the incremental neural architecture search (iNAS), which can be integrated together with any active querying strategy. In iNAS, we perform an incremental search for the best architecture from a restricted set of candidate architectures. The motivating intuition is that the capacity of the architectural class should start small, with limited architectural capacity, and should be monotonically non-decreasing along the active learning process. The iNAS algorithm thus only allows for small architectural increments in each active round. We implement iNAS using a flexible architecture family consisting of changeable numbers of stacks, each consisting of a fluid number of Resnet blocks. The resulting active learning algorithm, which we term active-iNAS, consistently and significantly improves upon all known deep active learning algorithms. We demonstrate this advantage of active-iNAS with the above three querying functions over three image classification datasets: CIFAR-10, CIFAR-100, and SVHN. Problem Setting We first define a standard supervised learning problem. Let X be a feature space and Y be a label space. Let P(X, Y) be an unknown underlying distribution, where X ∈ X, Y ∈ Y.
Based on a labeled training set $S_m = \{(x_i, y_i)\}_{i=1}^{m}$ of m labeled training samples, the goal is to select a prediction function f ∈ F, f : X → Y, so as to minimize the risk $R_\ell(f) = \mathbb{E}_{(X,Y)}[\ell(f(x), y)]$, where $\ell(\cdot) \in \mathbb{R}^+$ is a given loss function. For any labeled set S (training or validation), the empirical risk over S is defined as $\hat{r}_S(f) = \frac{1}{|S|}\sum_{i=1}^{|S|} \ell(f(x_i), y_i)$. In the pool-based active learning setting, we are given a set $U = \{x_1, x_2, \ldots, x_u\}$ of unlabeled samples. Typically, the acquisition of unlabeled instances is cheap and, therefore, U can be very large. The task of the active learner is to choose points from U to be labeled by an annotator so as to train an accurate model for a given labeling budget (number of labeled points). The points are selected by a query function denoted by Q. Query functions often select points based on information inferred from the current model f_θ, the existing training set S, and the current pool U. In the mini-batch pool-based active learning setting, the points to be labeled are queried in bundles called mini-batches, such that a model is trained after each mini-batch. NAS is formulated as follows. Consider a class A of architectures, where each architecture A ∈ A represents a hypothesis class containing all models f_θ ∈ A, where θ represents the parameter vector of the architecture A. The objective in NAS is to solve $A^* = \operatorname{argmin}_{A \in \mathcal{A}} \min_{f_\theta \in A|S} R_\ell(f_\theta)$ (1). Since $R_\ell(f)$ depends on an unknown distribution, it is typically proxied by an empirical quantity such as $\hat{r}_S(f)$, where S is a training or validation set. Deep Active Learning with a Neural Architecture Search In this section we define a neural architecture search space over which we apply a novel search algorithm. This search space, together with the algorithm, constitutes a new NAS technique that drives our new active algorithm. Modular Architecture Search Space Modern neural network architectures are often modeled as a composition of one or several basic building blocks (sometimes referred to as "cells") containing several layers [13,16,31,30,14]. Stacks are composed of several blocks connected together. The full architecture is a sequence of stacks, where usually down-sampling and depth-expansion are performed between stacks. For example, consider the Resnet-18 architecture. This network begins with two initial layers and continues with four consecutive stacks, each consisting of two Resnet basic blocks, followed by an average pooling and ending with a softmax layer. The Resnet basic block contains two batch-normalized 3 × 3 convolutional layers with a ReLU activation and a residual connection. Between every two stacks, the feature maps' resolution is reduced by a factor of 2 (using a strided convolution layer), and the width (the number of feature maps in each layer, denoted as W) is doubled, starting from 64 in the first block. This classic architecture has several variants, which differ by the number and type of blocks in each stack. In this work, we consider "homogeneous" architectures composed of a single block type and with each stack containing the same number of blocks. We denote such an architecture by A(B, N_blocks, N_stacks), where B is the building block, N_blocks is the number of blocks in each stack, and N_stacks is the number of stacks. For example, using this notation, Resnet-18 is A(B_r, 2, 4), where B_r is the Resnet basic block. Figure 1 depicts the proposed homogeneous architecture.
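To make the notation concrete, here is a minimal Python sketch of the empirical risk $\hat{r}_S(f)$ that proxies the true risk in Equation (1); the `model` and `loss` callables are hypothetical stand-ins, not part of the paper:

```python
import numpy as np

def empirical_risk(model, X, y, loss):
    """Empirical risk r_S(f): mean loss of a model over a labeled set S = (X, y)."""
    return float(np.mean([loss(model(x), yi) for x, yi in zip(X, y)]))
```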
For a given block B, we define a modular architecture search space as A = {A(B, i, j) : i ∈ {1, 2, 3, ..., N_blocks}, j ∈ {1, 2, 3, ..., N_stacks}}, which is simply all possible architectures spanned by the grid defined by the two corners A(B, 1, 1) and A(B, N_blocks, N_stacks). Clearly, the space A is restricted in the sense that it only contains a limited subspace of architectures; nevertheless, it contains N_blocks × N_stacks architectures with diversity in both the number of layers and the number of parameters. Search Space as a Directed Acyclic Graph (DAG) The main idea in our search strategy is to start from the smallest possible architecture (in the modular search space) and iteratively search for an optimal incremental architectural expansion within the modular search space. We define the depth of an architecture to be the number of layers in the architecture. We denote the depth of A(B, i, j) by |A(B, i, j)| = ijβ + α, where β is the number of layers in the block B and α is the number of layers in the initial block (all the layers appearing before the first block) plus the number of layers in the classification block (all the layers appearing after the last block). It is convenient to represent the architecture search space as a directed acyclic graph G = (V, E), whose vertices are the architectures A(B, i, j), where i ∈ {1, 2, 3, . . . , N_blocks} is the number of blocks in each stack, and j ∈ {1, 2, 3, . . . , N_stacks} is the number of stacks. The edge set E is defined based on two incremental expansion steps. The first step increases the depth of the network without changing the number of stacks (i.e., without affecting the width), and the second step increases the depth while also increasing the number of stacks (i.e., increasing the width). Both increment steps are defined so as to perform the minimum possible architectural expansion (within the search steps). Thus, when expanding A(B, i, j) using the first step, the resulting architecture is A(B, i + 1, j). When expanding A(B, i, j) using the second step, we reduce the number of blocks in each stack to perform a minimal expansion, resulting in the architecture A(B, ⌊ij/(j+1)⌋ + 1, j + 1). The parameters of the latter architecture are obtained by rounding up the solution i' of the following problem, $i^* = \operatorname{argmin}_{i' > 0} |A(B, i', j + 1)| \;\; \text{s.t.} \;\; |A(B, i', j + 1)| > |A(B, i, j)|$. We conclude that each of these steps is indeed depth-expanding. In the first step, the expansion is made only along the depth dimension, while the second step affects the number of stacks and expands the width as well. In both steps, the incremental step is the smallest possible within the modular search space. In Figure 2, we depict the DAG G on a grid whose coordinates are i (blocks) and j (stacks). The modular search space in this example is all the architectures in the range A(B, 1, 1) to A(B, 5, 4). The arrows represent all edges in G. In this formulation, it is evident that every path starting from any architecture can be expanded up to the largest possible architecture. Moreover, every architecture is reachable when starting from the smallest architecture A(B, 1, 1). These two properties serve our search strategy well. Incremental Neural Architecture Search The proposed incremental neural architecture search (iNAS) procedure is described in Algorithm 1 and operates as follows. Given a small initial architecture A(B, i_0, j_0), a training set S, and an architecture search space A, we first randomly partition the set S into training and validation subsets, S' and V, respectively, S = S' ∪ V.
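The two expansion steps can be captured in a few lines. The following sketch (hypothetical helper names; β and α as defined above) computes the depth |A(B, i, j)| and the two minimal expansions; the floor-plus-one in `expand_width` is exactly the rounded-up solution of the constrained minimization above, since |A(B, i', j+1)| > |A(B, i, j)| reduces to i' > ij/(j+1):

```python
from math import floor

def depth(i, j, beta, alpha):
    """Depth |A(B, i, j)| = i*j*beta + alpha (number of layers)."""
    return i * j * beta + alpha

def expand_depth(i, j):
    """First expansion step: one more block per stack, same number of stacks."""
    return i + 1, j

def expand_width(i, j):
    """Second expansion step: add a stack, shrink blocks-per-stack to the
    minimal i' with |A(B, i', j+1)| > |A(B, i, j)|, i.e. i' = floor(i*j/(j+1)) + 1."""
    return floor(i * j / (j + 1)) + 1, j + 1
```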
On iteration t, a set of candidate architectures is selected based on the edges of the search DAG (see Section 4.2), including the current architecture and the two connected vertices (lines 5-6). This step creates a candidate set, A', consisting of three models, A' = {A(B, i, j), A(B, ⌊ij/(j+1)⌋ + 1, j + 1), A(B, i + 1, j)}, intersected with the search space, A' ← A' ∩ A. In line 6, the best candidate in terms of validation performance is selected, $A(B, i_t, j_t) = \operatorname{argmin}_{A \in A'} \hat{r}_V(\operatorname{argmin}_{f_\theta \in A} \hat{r}_{S'}(f_\theta))$, and denoted A_t = A(B, i_t, j_t). The optimization problem formulated in line 6 is an approximation of the NAS objective formulated in Equation (1). The algorithm terminates whenever A_t = A_{t−1}, or when a predefined maximum number of iterations is reached (in which case A_t is the final output). Active Learning with iNAS The deep active learning with incremental neural architecture search (active-iNAS) technique is described in Algorithm 2 and works as follows. Given a pool U of unlabeled instances from X, a set of architectures A induced using a composition of basic building blocks B as shown in Section 4.1, an initial (small) architecture A_0 ∈ A, a query function Q, an initial (passively) labeled training set of size k, and an active learning batch size b, we first sample uniformly at random k points from U to constitute the initial training set S_1. We then iterate the following three steps. First, we search for an optimal neural architecture using the iNAS algorithm over the search space A with the current training set S_t (line 6). The initial architecture for iNAS is chosen to be the selected architecture from the previous active round (A_{t−1}), assuming that the architecture size is non-decreasing along the active learning process. The resulting architecture at iteration t is denoted A_t. Next, we train a model f_θ ∈ A_t based on S_t (line 7). Finally, if the querying budget allows, the algorithm requests b new points using Q(f_θ, S_t, U_t, b) and updates S_{t+1} and U_{t+1} correspondingly; otherwise, the algorithm returns f_θ (lines 8-14). Motivation and Implementation Notes The iNAS algorithm is designed to exploit the prior knowledge gleaned from samples of increasing size, which is motivated by straightforward statistical learning arguments. iNAS starts with a small capacity so as to avoid overfitting in the early stages, and then allows for capacity increments as labeled data accumulates. Moreover, iNAS preserves small capacity by considering only two small incremental steps (at each iteration). Alternative approaches, such as a full grid search on each active round, would not enjoy these benefits and would be prone to overfitting (note also that a full grid search could be computationally prohibitive). Turning now to analyze the run time of active-iNAS, when running with small active learning mini-batches, it is evident that the iNAS algorithm will only require one iteration at each round, resulting in only having to train three additional models at each round. In our implementation of iNAS, we apply "premature evaluation" as considered in [26]; our models are evaluated after T_SGD/4 epochs, where T_SGD is the total number of epochs in each round. Our final active-iNAS implementation thus only takes 1.75 T_SGD epochs for each active round (three candidate trainings at T_SGD/4 epochs each, plus the final training of T_SGD epochs).
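A compact sketch of the iNAS loop follows, reusing the `expand_depth`/`expand_width` helpers from the previous sketch; `train_fn` and `val_err` are assumed stand-ins for training a candidate on S' and measuring its validation error on V, not the authors' code:

```python
def inas(i, j, train_fn, val_err, space, max_iters=1):
    """One-step-at-a-time architecture search (a sketch of iNAS, Algorithm 1).

    train_fn(i, j) -> trained model for architecture A(B, i, j)
    val_err(model) -> validation error of a trained model
    space          -> set of admissible (i, j) pairs in the modular search space
    """
    for _ in range(max_iters):
        candidates = {(i, j), expand_depth(i, j), expand_width(i, j)}
        candidates &= space                                    # A' = A' ∩ A
        best = min(candidates, key=lambda a: val_err(train_fn(*a)))
        if best == (i, j):                                     # A_t = A_{t-1}: stop
            break
        i, j = best
    return i, j
```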
For example, in the CIFAR-10 experiment, T_SGD = 200 requires less than 2 GPU hours (on average) for an active learning round (Nvidia Titan-Xp GPU). Experimental Design and Details Datasets CIFAR-10. The CIFAR-10 dataset [18] is an image classification dataset containing 50,000 training images and 10,000 test images that are classified into 10 categories. The image size is 32 × 32 × 3 pixels (RGB images). CIFAR-100. The CIFAR-100 dataset [18] is an image classification dataset containing 50,000 training images and 10,000 test images that are classified into 100 categories. The image size is 32 × 32 × 3 pixels (RGB images). Street View House Numbers (SVHN). The SVHN dataset [22] is an image classification dataset containing 73,257 training images and 26,032 test images classified into 10 classes representing digits. The images are digits of house numbers, cropped and aligned, taken from the Google Street View service. The image size is 32 × 32 × 3 pixels. Architectures and Hyperparameters We used an architecture search space that is based on the Resnet architecture [13]. The initial block contains a convolutional layer with a filter size of 3 × 3 and depth of 64, followed by a max-pooling layer having a spatial size of 3 × 3 and strides of 2. The basic block contains two convolutional layers of size 3 × 3 followed by a ReLU activation. A residual connection is added before the activation of the second convolutional layer, and batch normalization [17] is used after each layer. The classification block contains an average pooling layer that reduces the spatial dimension to 1 × 1, and a fully connected classification layer followed by softmax. The search space is defined according to the formulation in Section 4.1, and spans all architectures in the range A(B_r, 1, 1) to A(B_r, 12, 5). As a baseline, we chose two fixed architectures. The first architecture was the one optimized for the first active round (optimized over the initial seed of labeled points), which coincidentally happened to be A(B_r, 1, 2) on all tested datasets. The second architecture was the well-known Resnet-18, denoted as A(B_r, 2, 4), which is a middle point in our search grid. We trained all models using stochastic gradient descent (SGD) with a batch size of 128 and momentum of 0.9 for 200 epochs. We used a learning rate of 0.1, with a multiplicative learning rate decay of 0.1 after epochs 100 and 150. Since we were dealing with different sizes of training sets along the active learning process, the epoch size kept changing. We fixed the size of an epoch to be 50,000 instances (by oversampling), regardless of the current size of the training set S_t. A weight decay of 5e-4 was used, and standard data augmentation was applied, containing horizontal flips, four-pixel shifts, and up to 15-degree rotations. The active learning was implemented with an initial labeled training seed (k) of 2000 instances. The active mini-batch size (b) was initialized to 2000 instances and updated to 5000 after reaching 10,000 labeled instances. The maximal budget was set to 50,000 for all datasets. For time efficiency reasons, the iNAS algorithm was implemented with T_iNAS = 1, and the training of new architectures in iNAS was early-stopped after 50 epochs, similar to what was done in [26]. Query Functions We applied the following three well-known query functions. Softmax Response.
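For illustration, here is a minimal PyTorch sketch of the homogeneous family A(B_r, i, j) described above; the exact downsampling placement and the 1×1 shortcut projections are assumptions of this sketch, not the authors' reference implementation:

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Resnet basic block B_r (beta = 2 layers): two 3x3 convs with BN and a residual add."""
    def __init__(self, cin, cout, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(cin, cout, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(cout)
        self.conv2 = nn.Conv2d(cout, cout, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(cout)
        # 1x1 projection when the shape changes (an assumption of this sketch)
        self.proj = (nn.Identity() if cin == cout and stride == 1 else
                     nn.Conv2d(cin, cout, 1, stride=stride, bias=False))

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + self.proj(x))

def make_architecture(i, j, num_classes=10, w=64):
    """A(B_r, i, j): j stacks of i blocks; width doubled and resolution
    halved between stacks; softmax is left to the loss function."""
    layers = [nn.Conv2d(3, w, 3, padding=1, bias=False),
              nn.MaxPool2d(3, stride=2, padding=1)]          # initial block
    cin = w
    for s in range(j):
        cout = w * (2 ** s)
        for b in range(i):
            stride = 2 if (b == 0 and s > 0) else 1
            layers.append(BasicBlock(cin, cout, stride))
            cin = cout
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(),
               nn.Linear(cin, num_classes)]                  # classification block
    return nn.Sequential(*layers)
```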
The softmax response method (SR) simply estimates prediction confidence (the inverse of uncertainty) by the maximum softmax value of the instance. In the batch pool-based active learning setting that we consider here, we simply query labels for the least confident b points. MC-dropout [8,7]. The points in the pool are ordered based on their prediction uncertainty estimated by MC-dropout and queried in that order. MC-dropout was implemented with p = 0.5 for the dropout rate and 100 feed-forward iterations for each sample. Coreset [25,9]. The coreset method was implemented as follows. For a trained model f, we denote the output of the representation layer (the layer before the last) as φ_f. For every sample x ∈ U, its coreset loss is measured as $\min_{x' \in S} d(\phi_f(x'), \phi_f(x))$, where d is the ℓ2 (Euclidean) distance. We iteratively sampled the point with the highest coreset loss with respect to the latest set S, b times. Experimental Results We first compare active-iNAS to active learning performed with a fixed architecture over the three datasets, applying the above three querying functions. Then we analyze the architectures learned by iNAS along the active process. We also empirically motivate the use of iNAS by showing how an optimized architecture can improve the query function. Finally, we compare the resulting active learning algorithms obtained with the active-iNAS framework. Active-iNAS vs. Fixed Architecture The results of an active learning algorithm are often depicted by a learning curve measuring the trade-off between labeled points consumed (or a budget) vs. performance (accuracy in our case). For example, in Figure 3(a) we see the results obtained by active-iNAS and two fixed architectures for classifying CIFAR-10 images using the softmax response querying function. In black (solid), we see the curve for the active-iNAS method. The results of A(B_r, 1, 2) and Resnet-18 (A(B_r, 2, 4)) appear in (dashed) red and (dashed) blue, respectively. The X axis corresponds to the labeled points consumed, starting from k = 2000 (the initial seed size) and ending with 50,000 (the maximal budget). In each active learning curve, the standard error of the mean over three random repetitions is shadowed. We present results for CIFAR-10, CIFAR-100 and SVHN. We first analyze the results for CIFAR-10 (Figure 3). Consider the graphs corresponding to the fixed architectures (red and blue). It is evident that for all query functions, the small architecture (red) outperforms the big one (Resnet-18, in blue) in the early stage of the active process. Later on, we see that the big and expressive Resnet-18 outperforms the small architecture. Active-iNAS consistently and significantly outperforms both fixed architectures almost throughout the entire range. It is most striking that active-iNAS is better than each of the fixed architectures even when all are consuming the entire training budget. Later on, we speculate about the reason for this phenomenon, as well as the switch between the red and blue curves occurring roughly around 15,000 training points (in Figure 3(a)). Turning now to CIFAR-100 (Figure 4), we see qualitatively very similar behaviors and relations between the various active learners. We now see that the learning problem is considerably harder, as indicated by the smaller area under all the curves. Nevertheless, in this problem active-iNAS achieves a substantially greater advantage over the fixed architectures in all three query functions.
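The following sketch (numpy; hypothetical array layouts) illustrates the softmax response and greedy coreset selection rules described above; the pairwise distance computation is written for clarity and would need chunking for large pools:

```python
import numpy as np

def softmax_response_query(probs_pool, b):
    """probs_pool: (n, num_classes) softmax outputs over the pool U.
    Returns the indices of the b least-confident points."""
    confidence = probs_pool.max(axis=1)
    return np.argsort(confidence)[:b]

def coreset_query(phi_pool, phi_train, b):
    """Greedy coreset selection on representation-layer features phi_f.
    phi_pool: (n, d) pool features; phi_train: (m, d) labeled-set features."""
    # coreset loss: distance of every pool point to its nearest labeled point
    d = np.linalg.norm(phi_pool[:, None, :] - phi_train[None, :, :],
                       axis=2).min(axis=1)
    chosen = []
    for _ in range(b):
        idx = int(d.argmax())        # farthest point = highest coreset loss
        chosen.append(idx)
        # the chosen point joins S: update nearest-center distances
        d = np.minimum(d, np.linalg.norm(phi_pool - phi_pool[idx], axis=1))
    return chosen
```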
Finally, in the SVHN digit classification task, which is known to be easier than both CIFAR tasks, we again see qualitatively similar behaviors that are now much less pronounced, as all active learners are quite effective. On the other hand, in the SVHN task, active-iNAS impressively obtains almost maximal performance after consuming only 20% of the training budget. Analyzing the Learned Architectures In addition to the standard performance results presented in Section 6.1, it is interesting to inspect the sequence of architectures that has been selected by iNAS along the active learning process. In Figure 7 we depict these dynamics; for example, consider the CIFAR-10 dataset appearing in solid lines, where the blue curve represents the number of parameters in the network and the black curve shows the number of layers in the architecture. Comparing CIFAR-10 (solid) and CIFAR-100 (dashed), we see that active-iNAS prefers, for CIFAR-100, deeper architectures compared to its choices for CIFAR-10. In contrast, in SVHN (dotted), active-iNAS gravitates to shallower but wider architectures, which result in significantly larger numbers of parameters. The iNAS algorithm is relatively stable in the sense that in the vast majority of random repeats of the experiments, similar sequences of architectures have been learned (this result is not shown in the figure). A hypothesis that might explain the latter results is that CIFAR-100 contains a larger number of "concepts", requiring a deeper hierarchy of learned CNN layers compared to CIFAR-10. SVHN is a simpler and less noisy learning problem and, therefore, larger architectures can be used without significant risk of overfitting. Enhanced Querying with Active-iNAS In this section we argue and demonstrate that optimized architectures not only improve generalization at each step, but also enhance the quality of the query function. In order to isolate the contribution of the query function, we normalize the active performance by the performance of a passive learner obtained with the same model. A common approach for this normalization has already been proposed in [15,2] (the corresponding normalizations in those papers are defined using slightly different terminology, but are essentially identical to our definition), and can be defined as follows. Let the relative AUC gain be the relative reduction of the area under the curve (AUC) of the 0-1 loss in the active learning curve, compared to the AUC of the passive learner (trained over the same number of random queries, at each round); namely, $\text{AUC-GAIN}(PA, AC, m) = \frac{AUC_m(PA) - AUC_m(AC)}{AUC_m(PA)}$, where AC is an active learning algorithm, PA is its passive application (with the same architecture), m is a labeling budget, and $AUC_m(\cdot)$ is the area under the learning curve (0-1 loss) of the algorithm using m labeled examples. Clearly, high values of AUC-GAIN correspond to high performance, and vice versa. In Figure 8, we used the AUC-GAIN to measure the performance of the softmax response querying function on the CIFAR-10 dataset over all training budgets up to the maximal one (50,000). We compare the performance of this query function applied over two different architectures: the small architecture A(B_r, 1, 2) and Resnet-18 (A(B_r, 2, 4)). We note that it is unclear how to define AUC-GAIN for active-iNAS because it has a dynamically changing architecture. As can easily be seen, the small architecture dramatically outperforms Resnet-18 in the early stages. Later on, the AUC-GAIN curves switch, and Resnet-18 catches up and outperforms the small architecture.
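The AUC-GAIN definition translates directly to code. A numpy sketch follows; the trapezoidal rule is an assumption, since the text does not specify how the area under the learning curve is computed:

```python
import numpy as np

def auc_gain(err_passive, err_active, budgets, m):
    """Relative AUC gain of an active learner over its passive counterpart.

    err_passive, err_active: 0-1 test errors measured at each labeled-set size;
    budgets: the labeled-set sizes (same grid for both learners); m: max budget.
    """
    budgets = np.asarray(budgets)
    mask = budgets <= m
    auc_pa = np.trapz(np.asarray(err_passive)[mask], budgets[mask])
    auc_ac = np.trapz(np.asarray(err_active)[mask], budgets[mask])
    return (auc_pa - auc_ac) / auc_pa
```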
This result supports the intuition that improvements in the generalization power of an architecture tend to improve the effectiveness of the querying function. We hypothesize that the outstanding results of active-iNAS shown in Section 6.1 have been achieved not only by the improved generalization of every single model, but also by the effect of the optimized architecture on the querying function. Query Function Comparison In Section 6.1 we demonstrated that active-iNAS consistently outperformed direct active applications of three querying functions. Here, we compare the performance of the three active-iNAS methods, applied with those three functions: softmax response, MC-dropout and coreset. In Figure 6 we compare these three active-iNAS algorithms over the three datasets. In all three datasets, softmax response is among the top performers, whereas one of the other two querying functions is sometimes the worst. In this sense, softmax response achieves the best results. For example, on CIFAR-10 and SVHN, MC-dropout is on par with softmax response, but on CIFAR-100 MC-dropout is the worst. The poor performance of MC-dropout on CIFAR-100 may be caused by the large number of classes, as pointed out by [10] in the context of selective prediction. In all cases, coreset is slightly behind softmax response. This is in sharp contrast to the results presented by [25,9]. We conclude this section by emphasizing that our results indicate that the combination of softmax response with active-iNAS is the best active learning method. Concluding Remarks We presented active-iNAS, an algorithm that effectively integrates deep neural architecture optimization with active learning. The active algorithm performs a monotone search for the locally best architecture on the fly. Our experiments indicate that active-iNAS outperforms standard active learners that utilize a suitable and commonly used fixed architecture (in the supplementary material we present comparisons to other choices of fixed architectures). In terms of absolute performance quality, to the best of our knowledge, the combination of active-iNAS and softmax response is the best active learner over the datasets we considered.
4,507
1906.10109
2950851765
In this paper we present CMRNet, a real-time approach based on a Convolutional Neural Network to localize an RGB image of a scene in a map built from LiDAR data. Our network is not trained on the working area, i.e., CMRNet does not learn the map. Instead, it learns to match an image to the map. We validate our approach on the KITTI dataset, processing each frame independently without any tracking procedure. CMRNet achieves 0.26m and 1.05deg median localization accuracy on the sequence 00 of the odometry dataset, starting from a rough pose estimate displaced up to 3.5m and 17deg. To the best of our knowledge, this is the first CNN-based approach that learns to match images from a monocular camera to a given, preexisting 3D LiDAR-map.
Camera and LiDAR-map approaches The second category of localization techniques leverages existing maps in order to solve the localization problem. In particular, two classes of approaches have been presented in the literature: geometry-based and projection-based methods. Caselitz et al. @cite_23 proposed a geometry-based method that solves the visual localization problem by comparing a set of 3D points (the point cloud reconstructed from a sequence of images) with the existing map. Wolcott et al. @cite_30 , instead, developed a projection-based method that uses meshes built from intensity data associated with the 3D points of the maps, projected into an image plane, to perform a comparison with the camera image using the Normalized Mutual Information (NMI) measure. Neubert et al. @cite_14 proposed to use the similarity between depth images generated by synthetic views and the camera image as a score function for a particle filter, in order to localize the camera in indoor scenes.
{ "abstract": [ "", "Camera-based navigation within a given three-dimensional map enables heterogeneous robotic systems to share maps and use more abstract environment models like floor plans. This paper builds upon our previous work, and addresses the problem of how to combine 3D distance information from the map and the current visual image from the robot's camera in order to navigate within this map. The underlying assumption is that features which cause depth changes are also likely to create visual gradients. Based on this assumption, the similarity of visual image and depth images that are synthesized from the 3D map can be used to evaluate pose hypothesis. This paper integrates this idea into a Monte Carlo localization approach and additionally presents its application to path following. The presented approach is evaluated on a synthetic datasets that provides perfect knowledge of the ground truth, as well as two real-world datasets acquired by a heterogeneous robotic team: a proof-of-concept dataset in a scattered indoor environment, and a challenging corridor dataset.", "Localizing a camera in a given map is essential for vision-based navigation. In contrast to common methods for visual localization that use maps acquired with cameras, we propose a novel approach, which tracks the pose of monocular camera with respect to a given 3D LiDAR map. We employ a visual odometry system based on local bundle adjustment to reconstruct a sparse set of 3D points from image features. These points are continuously matched against the map to track the camera pose in an online fashion. Our approach to visual localization has several advantages. Since it only relies on matching geometry, it is robust to changes in the photometric appearance of the environment. Utilizing panoramic LiDAR maps additionally provides viewpoint invariance. Yet low-cost and lightweight camera sensors are used for tracking. We present real-world experiments demonstrating that our method accurately estimates the 6-DoF camera pose over long trajectories and under varying conditions." ], "cite_N": [ "@cite_30", "@cite_14", "@cite_23" ], "mid": [ "2054969198", "2772641132", "2567328166" ] }
CMRNet: Camera to LiDAR-Map Registration
Over the past few years, the effectiveness of scene understanding for self-driving cars has substantially increased, both for object detection and vehicle navigation [1], [2]. Even though these improvements allowed for more advanced and sophisticated Advanced Driver Assistance Systems (ADAS) and maneuvers, the current state of the art is far from the SAE full-automation level, especially in complex scenarios such as urban areas. Most of these algorithms depend on very accurate localization estimates, which are often hard to obtain using common Global Navigation Satellite Systems (GNSSs), mainly due to non-line-of-sight (NLOS) and multipath issues. Moreover, applications that require navigation in indoor areas, e.g., valet parking in underground areas, necessarily require complementary approaches. Different options have been investigated to solve the localization problem, including approaches based on both vision and Light Detection And Ranging (LiDAR); they share the exploitation of a-priori knowledge of the environment in the localization process [3]-[5]. Localization approaches that utilize the same sensor for mapping and localization usually achieve good performances, as the map of the scene is matched to the same kind of data generated by the onboard sensor. However, their application is hampered by the need for a preliminary mapping of the working area, which represents a relevant issue in terms of effort, both for building the maps and for their maintenance. On the one hand, some approaches try to perform the localization exploiting standard cartographic maps, such as OpenStreetMap or other topological maps, leveraging the road graph [6] or high-level features such as lanes, roundabouts, and intersections [7]-[9]. On the other hand, companies in the established market of maps and related services, like, e.g., HERE or TomTom, are nowadays already developing so-called High Definition maps (HD maps), which are built using LiDAR sensors [10]. This allows other players in the autonomous cars domain to focus on the localization task. HD maps, which are specifically designed to support self-driving vehicles, provide an accurate position of high-level features such as traffic signs, lane markings, etc., as well as a representation of the environment in terms of point clouds, with a density of points usually reaching 0.1m. In the following, we denote as LiDAR-maps the point clouds generated by processing data from LiDARs. Standard approaches to exploit such maps localize the observer by matching point clouds gathered by the on-board sensor to the LiDAR-map; solutions to this problem are known as point cloud registration algorithms. Currently, these approaches are hampered by the huge cost of LiDAR devices, the de-facto standard for accurate geometric reconstruction. In contrast, we here propose a novel method for registering an image from an on-board monocular RGB camera to a LiDAR-map of the area. This allows for the exploitation of the forthcoming market of LiDAR-maps embedded into HD maps using only a cheap camera-based sensor suite on the vehicle. Fig. 1. A sketch of the proposed processing pipeline: starting from a rough camera pose estimate (e.g., from a GNSS device), CMRNet compares an RGB image and a synthesized depth image projected from a LiDAR-map into a virtual image plane (red) to regress the 6-DoF camera pose (in green); image best viewed in color. † The work of A. L. Ballardini has been funded by the European Union H2020 programme, under GA Marie Skłodowska-Curie n. 754382 Got Energy.
In particular, we propose CMRNet, a CNN-based approach that achieves camera localization with sub-meter accuracy, starting from a rough initial pose estimate. The maps and images used for localization are not necessarily those used during the training of the network. To the best of our knowledge, this is the first work to tackle the localization problem without a localized CNN, i.e., a CNN trained in the working area [11]. CMRNet does not learn the map; instead, it learns to match images to the LiDAR-map. Extensive experimental evaluations performed on the KITTI datasets [12] show the feasibility of our approach. The remainder of the paper is organized as follows: Section II gives a short review of the most similar methods and the latest achievements with DNN-based approaches. In Section III we present the details of the proposed system. In Section IV we show the effectiveness of the proposed approach, and Sections V and VI present our conclusions and future work. A. Camera-only approaches The first category of techniques deals with the 6-DoF estimate of the camera pose using a single image as input. On the one hand, traditional methods face this problem by means of a two-phase procedure that consists of a coarse localization, performed using a place recognition algorithm, followed by a second refining step that allows for a final accurate localization [13], [14]. On the other hand, the latest machine learning techniques, mainly based on deep learning approaches, face this task in a single step. These models are usually trained using a set of images taken from different points of view of the working environment, in which the system performs the localization. One of the most important approaches of this category, which inspired many subsequent works, is PoseNet [11]. It consists of a CNN trained for camera pose regression. Starting from this work, additional improvements have been proposed by introducing new geometric loss functions [15], by exploiting the uncertainty estimation of Bayesian CNNs [16], by including a data augmentation scheme based on synthetic depth information [17], or by using the relative pose between two observations in a CNN pipeline [18]. One of the many works that follow the idea presented in PoseNet is VLocNet++ [19]. Here the authors deal with the visual localization problem using a multi-task learning (MTL) approach. Specifically, they proved that training a CNN for different tasks at the same time yields better localization performances than single-task learning. As of today, the literature still sees [19] as the best performing approach on the 7Scenes dataset [20]. Clark et al. [21] developed a CNN that exploits a sequence of images in order to improve the quality of the localization in urban environments. Brachmann et al., instead, integrated a differentiable version of RANSAC within a CNN-based approach in an end-to-end fashion [22], [23]. Another camera-only localization approach is based on decision forests, which consist of a set of decision trees used for classification or regression problems. For instance, the approach proposed by Shotton et al. [20] exploits RGB-D images and regression forests to perform indoor camera localization. The aforementioned techniques, thanks to the generalization capabilities of machine learning approaches, are more robust against challenging scene conditions like lighting variations, occlusions, and repetitive patterns, in comparison with methods based on hand-crafted descriptors such as SIFT [24] or SURF [25].
However, all these methods cannot perform localization in environments that have not been exploited in the training phase; therefore, these regression models need to be retrained for every new place. B. Camera and LiDAR-map approaches The second category of localization techniques leverages existing maps in order to solve the localization problem. In particular, two classes of approaches have been presented in the literature: geometry-based and projection-based methods. Caselitz et al. [3] proposed a geometry-based method that solves the visual localization problem by comparing a set of 3D points (the point cloud reconstructed from a sequence of images) with the existing map. Wolcott et al. [4], instead, developed a projection-based method that uses meshes built from intensity data associated with the 3D points of the maps, projected into an image plane, to perform a comparison with the camera image using the Normalized Mutual Information (NMI) measure. Neubert et al. [5] proposed to use the similarity between depth images generated by synthetic views and the camera image as a score function for a particle filter, in order to localize the camera in indoor scenes. The main advantage of these techniques is that they can be used in any environment for which a 3D map is available. In this way, they avoid one of the major drawbacks of machine learning approaches for localization, i.e., the necessity to train a new model for every specific environment. Despite these remarkable properties, their localization capabilities are still not robust enough in the presence of occlusions, lighting variations, and repetitive scene structures. The work presented in this paper has been inspired by Schneider et al. [26], who used 3D scans from a LiDAR and RGB images as the input of a novel CNN, RegNet. Their goal was to provide a CNN-based method for calibrating the extrinsic parameters of a camera w.r.t. a LiDAR sensor. Taking inspiration from that work, in this paper we propose a novel approach that has the advantages of both the categories described above. Differently from the aforementioned literature contribution, which exploits data gathered from a synchronized single activation of a 3D LiDAR and a camera image, the inputs of our approach are a complete 3D LiDAR-map of the environment, together with a single image and a rough initial guess of the camera pose; the output is an accurate 6-DoF camera localization. It is worth noting that having a single LiDAR scan taken at the same time as the image implies that the observed scene is exactly the same. In our case, instead, the 3D map usually depicts a different configuration, i.e., road users are not present, making the matching more challenging. Our approach combines the generalization capabilities of CNNs with the ability to be used in any environment for which a LiDAR-map is available, without the need to retrain the network. III. PROPOSED APPROACH In this work, we aim at localizing a camera from a single image in a 3D LiDAR-map of an urban environment. We exploit recent developments in deep neural networks for both pose regression [11] and feature matching [27]. The pipeline of our approach is depicted in Fig. 1 and can be summarized as follows. First, we generate a synthesized depth image by projecting the map points into a virtual image plane, positioned at the initial guess of the camera pose. This is done using the intrinsic parameters of the camera.
From now on, we will refer to this synthesized depth image as LiDAR-image. The LiDAR-image, together with the RGB image from the camera, is fed into the proposed CMRNet, which regresses the rigid body transformation H_out between the two different points of view. From a technical perspective, applying H_out to the initial pose H_init allows us to obtain the 6-DoF camera localization. In order to represent a rigid body transformation, we use a (4, 4) homogeneous matrix: $H = \begin{bmatrix} R_{(3,3)} & T_{(3,1)} \\ 0_{(1,3)} & 1 \end{bmatrix} \in SE(3)$ (1). Here, R is a (3, 3) rotation matrix and T is a (3, 1) translation vector in Cartesian coordinates. The rotation matrix is composed of nine elements but, as it represents a rotation in space, it only has three degrees of freedom. For this reason, the output of the network in terms of rotations is expressed using quaternions lying on the 3-sphere ($S^3$) manifold. On the one hand, even though normalized quaternions have one redundant parameter, they have better properties than Euler angles, i.e., gimbal lock avoidance and a unique rotational representation (except that conjugate quaternions represent the same rotation). Moreover, they are composed of fewer elements than a rotation matrix, thus being better suited for machine learning regression approaches. The outputs of the network are then a translation vector $T \in \mathbb{R}^3$ and a rotation quaternion $q \in S^3$. For simplicity, we will refer to the output of the network as H_out, implying that we convert T and q to the corresponding homogeneous transformation matrix as necessary. A. LiDAR-Image Generation In order to generate the LiDAR-image for a given initial pose H_init, we follow a two-step procedure. Map Projection. First, we project all the 3D points in the map into a virtual image plane placed at H_init, i.e., we compute the image coordinates p of every 3D point P. This mapping is shown in Equation (2), where K is the camera projection matrix: $p_i = K \cdot H_{init} \cdot P_i$ (2). The LiDAR-image is then computed using a z-buffer approach to determine the visibility of points along the same projection line. Since Equation (2) can be computationally expensive for large maps, we perform the projection only for a sub-region cropped around H_init, also ignoring points that lie behind the virtual image plane. An example of a LiDAR-image is depicted in Figure 2a. Occlusion Filtering. The projection of a point cloud into an image plane can produce unrealistic depth images. For instance, the projection of occluded points, e.g., lying behind a wall, is still possible due to the sparse nature of point clouds. To avoid this problem, we adopt the point cloud occlusion estimation filter presented in [28]; an example of the effect of this approach is depicted in Figure 2b. For every point P_i, we can build a cone, around the projection line towards the camera, that does not intersect any other point. If the cone has an aperture larger than a certain threshold Th, the point P_i is marked as visible. From a technical perspective, for each pixel with a non-zero depth p_j in the LiDAR-image, we compute the normalized vector v from the relative 3D point P_j to the pin-hole. Then, for any 3D point P_i whose projection lies in a neighborhood (of size K×K) of p_j, we compute the vector $c = \frac{P_i - P_j}{\|P_i - P_j\|}$ and the angle between the two vectors, $\vartheta = \arccos(v \cdot c)$. This angle is used to assess the visibility of P_j. Occluded pixels are then set to zero in the LiDAR-image. More detail is available in [28].
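A minimal numpy sketch of the map-projection step with a z-buffer, as described above; the occlusion filter of [28] is omitted, H_init is assumed to be the world-to-camera transform, and the per-point loop is for clarity rather than speed:

```python
import numpy as np

def lidar_image(points, H_init, K, width, height):
    """Project map points into a virtual image plane placed at H_init.

    points: (N, 3) map points in the world frame; H_init: (4, 4) rough
    world-to-camera pose; K: (3, 3) camera intrinsics. Returns a
    z-buffered depth image (Equation (2) plus the visibility test)."""
    P = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coords
    cam = (H_init @ P.T).T[:, :3]                        # points in camera frame
    cam = cam[cam[:, 2] > 0]                             # drop points behind the plane
    uvw = (K @ cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    z = cam[:, 2]
    depth = np.zeros((height, width))
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], z[inside]):
        if depth[vi, ui] == 0 or zi < depth[vi, ui]:     # z-buffer: keep closest point
            depth[vi, ui] = zi
    return depth
```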
B. Network Architecture We used PWC-Net [27] as a baseline and then made some changes to its architecture. We chose this network because PWC-Net has been designed to predict the optical flow between a pair of images, i.e., to find matches between them. Starting from a rough camera localization estimate, our insight is to exploit the correlation layer of PWC-Net and its ability to match features from different points of view to regress the correct 6-DoF camera pose. We applied the following changes to the original architecture. • First, as our inputs are a depth and an RGB image (instead of two RGB images), we decoupled the feature pyramid extractors by removing the weight sharing. • Then, as we aim to perform pose regression, we removed the up-sampling layers, attaching the fully connected layers just after the first cost volume layer. Regarding the regression part, we added one fully connected layer with 512 neurons before the first optical flow estimation layer (conv6_4 in PWC-Net), followed by two branches for handling rotations and translations. Each branch is composed of two stacked fully connected layers, the first with 256 neurons and the second with 3 or 4 neurons, for translation and rotation respectively. Given an input pair composed of an RGB image I and a LiDAR-image D, we used the loss function in Equation (3), which combines a translation term and a rotation term: $L(I, D) = L_t(I, D) + \lambda L_q(I, D)$ (3). For the translation term $L_t$ we used a smooth L1 loss [29]. Regarding the rotation loss, since the Euclidean distance does not provide a meaningful measure of the difference between two orientations, we used the angular distance between quaternions, defined as: $L_q(I, D) = D_a(q * \mathrm{inv}(\hat{q}))$ (4), $D_a(m) = \mathrm{atan2}\left(\sqrt{b_m^2 + c_m^2 + d_m^2},\, |a_m|\right)$ (5). Here, q is the ground truth rotation, $\hat{q}$ represents the predicted normalized rotation, inv is the inverse operation for quaternions, $\{a_m, b_m, c_m, d_m\}$ are the components of the quaternion m, and * is the multiplication of two quaternions. In order to use Equation (5) as a loss function, we need to ensure that it is differentiable for every possible output of the network. Recalling that atan2(y, x) is not differentiable for y = 0 ∧ x ≤ 0, and the fact that m is a unit quaternion, we can easily verify that Equation (5) is differentiable in $S^3$.

C. Iterative refinement When the initial pose strongly deviates from the actual camera pose, the map projection produces a LiDAR-image that shares just a few correspondences with the camera image. In this case, the camera pose prediction task is hard because the CNN lacks the information required to compare the two points of view. It is therefore quite likely that the predicted camera pose is not accurate enough. Taking inspiration from [26], we propose an iterative refinement approach. In particular, we trained different CNNs by considering descending error ranges for both the translation and rotation components of the initial pose. Once a LiDAR-image is obtained for a given camera pose, both the camera image and the LiDAR-image are processed, starting with the CNN that has been trained with the largest error range. Then, a new projection of the map points is performed, and the process is repeated using a CNN trained with a reduced error range. By repeating this operation n times, it is possible to improve the accuracy of the final localization. The improvement is achieved thanks to the increasing overlap between the scene observed from the camera and the scene projected in the n-th LiDAR-image.
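As a concrete illustration, the following PyTorch sketch implements the rotation loss of Equations (4) and (5); the helper names are ours and batching details are simplified:

import torch

def quat_mult(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z), shape (..., 4).
    pw, px, py, pz = p.unbind(-1)
    qw, qx, qy, qz = q.unbind(-1)
    return torch.stack([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ], dim=-1)

def quat_inv(q):
    # The inverse of a unit quaternion is its conjugate.
    return torch.cat([q[..., :1], -q[..., 1:]], dim=-1)

def rotation_loss(q_gt, q_pred):
    # Equations (4)-(5): angular distance D_a(q * inv(q_hat)).
    q_pred = q_pred / q_pred.norm(dim=-1, keepdim=True)  # normalize output
    m = quat_mult(q_gt, quat_inv(q_pred))
    return torch.atan2(m[..., 1:].norm(dim=-1), m[..., 0].abs()).mean()

The translation term can be computed with torch.nn.functional.smooth_l1_loss and combined with this rotation term as in Equation (3).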
D. Training details We implemented CMRNet using the PyTorch library [30] and a slightly modified version of the official PWC-Net implementation. Regarding the activation function, we used a leaky ReLU (Rectified Linear Unit) with a negative slope of 0.1 as non-linearity. Finally, CMRNet was trained from scratch for 300 epochs using the ADAM optimizer with default parameters, a batch size of 24, and a learning rate of 1e-4 on a single NVIDIA GTX 1080 Ti.

IV. EXPERIMENTAL RESULTS This section describes the evaluation procedure we adopted to validate CMRNet, including the dataset used, the assessed system components, the iterative refinement, and finally the generalization capabilities. We wish to emphasize that, in order to assess the performance of CMRNet itself, in all the performed experiments each input was processed independently, i.e., without any tracking or temporal integration strategy.

A. Dataset We tested the localization accuracy of our method on the KITTI odometry dataset. Specifically, we used the sequences from 03 to 09 for training (11697 frames) and sequence 00 for validation (4541 frames). Note that the validation set is spatially separated from the training set, except for a very small sub-sequence (approx. 200 frames); thus it is fair to say that the network is tested on scenes never seen during the training phase. Since the accuracy of the provided GPS-RTK ground truth is not sufficient for our task (the resulting map is not aligned nearby loop closures), we used a LiDAR-based SLAM system to obtain consistent trajectories. The resulting poses are used to generate a down-sampled map with a resolution of 0.1m. This choice is the result of our expectations on the format of HD-maps that will soon be available from map providers [10]. Since the images from the KITTI dataset have different sizes (varying from 1224×370 to 1242×376), we padded all images to 1280×384 in order to match the CNN architecture requirement, i.e., width and height being multiples of 64. Note that we first projected the map points into the LiDAR-image and then padded both the RGB and the LiDAR-image, in order not to modify the camera projection parameters. To simulate a noisy initial pose estimate H_init, we applied, independently for each input, a random translation and rotation to the ground truth camera pose. In particular, for each component, we added uniformly distributed noise in the range of [-2m, +2m] for the translation and [-10°, +10°] for the rotation. Finally, we applied the following data augmentation scheme: first, we randomly changed the image brightness, contrast, and saturation (all in the range [0.9, 1.1]); then we randomly mirrored the image horizontally; and last we applied a random image rotation in the range [-5°, +5°] along the optical axis. The 3D point cloud was transformed accordingly. Both the data augmentation and the selection of H_init take place at run-time, leading to different LiDAR-images for the same RGB image across epochs.
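To illustrate how such a noisy initial pose can be drawn, the following sketch composes the ground truth pose with a uniformly distributed perturbation; the use of SciPy for the rotation parametrization is our choice and not necessarily that of the original implementation:

import numpy as np
from scipy.spatial.transform import Rotation

def perturb_pose(H_gt, max_t=2.0, max_r_deg=10.0, rng=None):
    # Simulate H_init by composing H_gt with uniform noise:
    # translation in [-max_t, +max_t] meters per axis,
    # rotation in [-max_r_deg, +max_r_deg] degrees per axis.
    rng = rng or np.random.default_rng()
    H_noise = np.eye(4)
    H_noise[:3, 3] = rng.uniform(-max_t, max_t, size=3)
    angles = rng.uniform(-max_r_deg, max_r_deg, size=3)
    H_noise[:3, :3] = Rotation.from_euler("xyz", angles, degrees=True).as_matrix()
    return H_noise @ H_gt  # noisy initial guess H_init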
B. System Components Evaluation We evaluated the performance of CMRNet by assessing the localization accuracy while varying different sub-components of the overall system. Among them, the most significant are shown in Table I and derive from the following operational workflow. First, we evaluated the best CNN to be used as the backbone, comparing the performance of state-of-the-art approaches, namely PWC-Net, ResNet18, and RegNet [26], [27], [31]. According to the performed experiments, PWC-Net showed a remarkable superiority over RegNet and ResNet18 and was therefore chosen as a starting point for further evaluation. Thereafter, we estimated the effects of modifying both inputs, i.e., camera images and LiDAR-images. In particular, we added random image mirroring and experimented with different parameter values influencing the effect of the occlusion filtering presented in Section III-A, i.e., the size K and the threshold Th. Finally, the effectiveness of the rotation loss proposed in Section III-B was evaluated against the commonly used L1 loss. The proposed loss function achieved a relative decrease of the rotation error of approx. 35%. The noise added to the poses in the validation set was kept fixed across all experiments, allowing for a fair comparison of the performances.

C. Iterative Refinement and Overall Assessment In order to improve the localization accuracy of our system, we tested the iterative approach explained in Section III-C. In particular, we trained three instances of CMRNet, varying the maximum error ranges of the initial camera poses. To assess the robustness of CMRNet, we repeated the localization process 10 times using different initial noises. Table II reports the median localization error at each step of the iterative refinement, averaged over 10 runs, together with the corresponding error ranges used for training each network. Moreover, in order to compare the localization performance with the state-of-the-art monocular localization in LiDAR maps [3], we calculated the mean and standard deviation of both the rotation and translation components over 10 runs on sequence 00 of the KITTI odometry dataset. Our approach shows comparable values for the translation component (0.33 ± 0.22m w.r.t. 0.30 ± 0.11m), with lower rotation errors (1.07 ± 0.77° w.r.t. 1.65 ± 0.91°). Nevertheless, it is worth noting that our approach still does not take advantage of any pose tracking procedure or multi-frame analysis. Some qualitative examples of the localization capabilities of CMRNet with the aforementioned iteration scheme are depicted in Figure 3. In Figure 4 we illustrate the probability density functions (PDFs) of the error, decomposed into the six components of the pose, for the three iterations of the aforementioned refinement. It can be noted that the PDF of even the first network iteration approximates a Gaussian distribution, and the following iterations further decrease the variance of the distributions. An analysis of the runtime performance using this configuration is shown in Table III.

D. Generalization Capabilities In order to assess the generalization effectiveness of our approach, we evaluated its localization performance using a 3D LiDAR-map generated on a different day with respect to the camera images, yet still of the same environment. This gives us a completely different arrangement of parked cars and therefore stresses the localization capabilities. Unfortunately, there is only a short overlap between the sequences of the odometry dataset (approx. 200 frames), consisting of a small stretch of road in common between sequences "00" and "07". Even though we cannot completely rely on the results of this limited set of frames, CMRNet achieved 0.57m and 0.9° median localization accuracy on this test.
Indeed, it is worth noticing that the network was trained with maps representing the exact same scene as the respective images, i.e., with cars parked in the same parking spots, and thus could not learn to ignore cluttering scene elements.

V. CONCLUSIONS In this work we have described CMRNet, a CNN-based approach for camera to LiDAR-map registration, using the KITTI dataset for both training and validation purposes. The performance of the proposed approach allows multiple specialized CMRNet instances to be stacked so as to improve the final camera localization while preserving real-time requirements. The results have shown that our proposal is able to localize the camera with a median error of less than 0.27m and 1.07°. Preliminary, unreported experiments on other datasets suggest that there is room for improvement; the reason seems to be the limited vertical field of view of the point clouds. Since our method does not learn the map but rather how to perform the registration, it is suitable for use with large-scale HD maps.

VI. FUTURE WORKS Even though our approach does not embed any information about specific maps, a dependency on the intrinsic camera calibration parameters still holds. As part of future work, we plan to increase the generalization capabilities so as not to depend directly on a specific camera calibration. Finally, since the error distributions resemble Gaussian distributions, we expect to benefit from standard filtering techniques aimed at probabilistically tackling the uncertainties over time.
4,207
1907.01046
2955907793
Detailed knowledge about the electrical power consumption in industrial production environments is a prerequisite for reducing and optimizing their power consumption. Today's industrial production sites are equipped with a variety of sensors that, inter alia, monitor electrical power consumption in detail. However, these environments often lack automated data collation and analysis. We present a system architecture that integrates different sensors and analyzes and visualizes the power consumption of devices, machines, and production plants. It is designed with a focus on scalability to support production environments of various sizes and to handle varying loads. We argue that a scalable architecture in this context must meet requirements for fault tolerance, extensibility, real-time data processing, and resource efficiency. As a solution, we propose a microservice-based architecture augmented by big data and stream processing techniques. Applying the fog computing paradigm, parts of it are deployed in an elastic, central cloud while other parts run directly and decentralized in the production environment. A prototype implementation of this architecture presents solutions for integrating different kinds of sensors and continuously aggregating their measurements. In order to make the analyzed data comprehensible, it features a single-page web application that provides different forms of data visualization. We deploy this pilot implementation in the data center of a medium-sized enterprise, where we successfully monitor the power consumption of 16 servers. Furthermore, we show the scalability of our architecture with 20,000 simulated sensors.
Shrouf and Miragliotta @cite_2 report on different approaches for energy management enabled by Internet of Things (IoT) technologies. Based on literature, expert interviews, and reports of manufacturers, they summarize different IoT architectures for power monitoring and present a general abstraction of them. The resulting architecture primarily focuses on network interconnections and the integration of other systems. As in our approach, it considers real-time data processing and the challenge of integrating data from different sensors and data formats. However, data are only processed in a cloud or local server infrastructure; the processing does not follow fog computing paradigms. The architecture represents a general approach and is therefore too abstract to offer a reference implementation.
{ "abstract": [ "Abstract In today's manufacturing scenario, rising energy prices, increasing ecological awareness, and changing consumer behaviors are driving decision-makers to prioritize green manufacturing. The Internet of Things paradigm promises to increase the visibility and awareness of energy consumption, thanks to smart sensors and smart meters at the machine and production line level. Consequently, real-time energy consumption data from manufacturing processes can be collected easily, and then analyzed, to improve energy-aware decision-making. Relying on a comprehensive literature review and on experts' insight, this paper contributes to the understanding of energy-efficient production management practices that are enhanced and enabled by the Internet of Things technology. In addition, it discusses the benefits that can be obtained thanks to adopting such management practices. Eventually, a framework is presented to support the integration of gathered energy data into a company's information technology tools and platforms. This is done with the ultimate goal of highlighting how operational and tactical decision-making processes could leverage on such data in order to improve energy efficiency, and therefore competitiveness, of manufacturing companies. With the outcomes of this paper, energy managers can approach the Internet of Things adoption in a benefit-driven manner, addressing those energy management practices that are more aligned with company maturity, measurable data and available information systems and tools." ], "cite_N": [ "@cite_2" ], "mid": [ "1968605868" ] }
A Scalable Architecture for Power Consumption Monitoring in Industrial Production Environments
Electrical power consumption is a relevant cost component for manufacturing enterprises. Besides economic motives, legal as well as self-imposed regulations such as ISO 50001 [1] also motivate enterprises to reduce and optimize their power consumption. In particular, load peaks should be reduced as those are significantly more expensive [2]. Due to the immense number of devices, machines, and production plants in such environments, a key challenge is to identify major consumers. Varying and simultaneous workloads on different machines complicate this identification. In order to discover saving potential, it is necessary to monitor all consumers and to visualize and analyze their consumption. (This research is funded by the Federal Ministry of Education and Research (BMBF, Germany) in the Titan project, https://www.industrial-devops.org, contract no. 01IS17084B.) The data should be monitored in as much detail as possible in order to intensively analyze individual consumers or the consumption at particular points in time. However, aggregated and preconfigured analyses are also necessary to make the data comprehensible and to allow for an immediate reaction. Current trends towards the Industrial Internet of Things and Industry 4.0 bring devices that are increasingly able to monitor their state and resource usage. Equipped with network capabilities, they provide these data to other hardware or software components [3]. Combining all sensors into one distributed hardware and software system promises to provide the necessary monitoring infrastructure to optimize power consumption [4]. Older devices that do not offer monitoring mechanisms can also be integrated using auxiliary devices such as monitoring power sockets. The following features are of particular relevance and should be provided by such a system: 1) Data Integration: Devices and machines in production environments usually come from different manufacturers located in different business domains. Furthermore, they are likely to differ in age and to originate from different generations of technological evolution [5]. As a consequence, the way they supply data also varies widely. Most notably, this concerns the protocols and data formats they use, but also the way they measure. Parameters such as precision, sampling rate, or measurement units may vary from domain to domain. In order to compare data of different sensors and to consider the data analysis from a higher level, the data first have to be brought into a common format. This also includes converting measurement units or splitting up multiple measurements that are sent together. Moreover, it is likely that not all measurements are of interest and only specific values have to be selected. As the amount of data may be too large to be analyzed, it is often reasonable to first aggregate measurements. 2) Data Analysis: The individual consumption values of devices are often too detailed to draw conclusions about the entire production. Instead, it may often be more reasonable to evaluate data for an entire group of devices. This is even more significant in cases where devices have more than one power supply, which are monitored individually. It is likely that in such cases only the aggregated data are of interest. 3) Data Visualization: Visualization of monitored and analyzed data allows a user to draw conclusions about the current state of the overall production. Based on this, a user should be able to make decisions about the further operation.
Contribution: In this paper, we make the following contributions: We define architectural requirements that such a monitoring infrastructure has to meet in order to be generically applicable to different kinds and sizes of production environments (Section II). We present an architecture that meets these requirements (Section III) and that allows for different ways to deploy it (Section IV). In addition, with our open source pilot implementation (https://github.com/cau-se/titan-ccp, Section V), we show how our approach can be deployed in a real production environment (Section VI) and we evaluate it in terms of scalability (Section VII). Finally, we discuss related work in Section VIII and conclude this paper in Section IX.

II. ARCHITECTURAL REQUIREMENTS Infrastructures and requirements differ significantly among enterprises and between business sectors. These may change not only from business to business but also within the same application scenario, for example, if after an initial test period additional enterprise departments should be integrated. We aim for an architecture that can be deployed in small-scale production environments as well as in arbitrarily large ones. In the following, we describe four key requirements that are of crucial importance for such an architecture. A. Data Processing in Real-Time while Scaling The data transmission, analysis, and visualization in our approach should be performed as quickly as possible in order to allow insight into the current infrastructure's status at any time. This is the only way to react to unexpected events or to evaluate the current production process. This requirement needs to be reflected in the architecture design such that, for example, batch processing techniques are not an option for the majority of the analyses. With a larger production environment, the volume of sensor data increases. This includes both the amount of data per sensor and the total number of sensors in the production. The requirement for real-time data processing should not be sacrificed if the amount of data increases. In addition, the architecture should also be able to handle varying loads during ongoing operation to avoid downtimes in which the production infrastructure would no longer be monitored. Besides an increasing load, a decreasing load should also be handled efficiently. B. Scalability and Resource Efficiency If the amount of sensor data grows, more computing power is necessary. To a certain degree, this can be achieved by providing more powerful hardware (vertical scaling). However, one quickly reaches a limit where additional computing power can only be obtained by adding further machines (horizontal scaling). According to Abbott and Fisher's Scale Cube [6], horizontal scaling can be obtained in three combinable dimensions: duplicating instances of the software system, splitting the managed and processed data, and decomposing the software by functionality. Our architecture has to be designed in a way that facilitates operation on multiple machines and, furthermore, utilizes them efficiently. The amount of data recorded by a sensor is often larger than actually needed for analyses. In order to reduce network traffic and make optimal use of the existing hardware, the sensors (or devices located close to the sensors) should already process as much data as possible.
However, those edge devices typically operate on limited hardware resources, which are usually not sufficient to execute complex analyses directly on them. Moreover, the given resource capacities are not extendable, or only to a limited degree, and thus impede scaling of the software. Therefore, an architecture design has to find a balance between optimal resource usage and respecting the resource constraints of the edge devices. C. Scalability and Fault Tolerance A horizontally scalable system is inevitably a distributed system whose components communicate via the network. This implies that parts may temporarily become unavailable or fail. Therefore, the software architecture and a corresponding implementation must be designed to tolerate faults so that they do not lead to a failure of the overall system. Supporting horizontal scaling via duplicated instances also assists fault tolerance, as failed instances can be replaced by their duplicates. D. Scalability and Extensibility As the number of sensors increases, more data formats and protocols need to be integrated. Moreover, it is likely that large production environments require support for additional metrics. This also applies to analyses and visualizations. An increasing volume of measurement data requires more complex, automatic, and therefore more domain-specific analyses to make the data understandable. Therefore, the architecture should be designed in an adaptable and extensible way.

III. MICROSERVICE ARCHITECTURE Considering the architectural requirements described above, we designed a microservice-based architecture [7] for the desired monitoring infrastructure. Microservice architectures are an approach to modularizing software. They divide software into small components, called microservices, that can be used and deployed independently of each other. The separation into microservices is based on business functions. Each service maps to its own business area and provides a complete implementation of it [8]. This makes it much easier to adapt a component to changing requirements, which typically arise from the business area. Microservices are isolated from each other. They run in separate processes and do not share any state. Thus, they can be started, stopped, or replaced independently. In particular, microservices can be independently released to production so that a new version of one microservice does not require updating the others. Furthermore, they do not share any implementation or database schema but communicate via transaction-less protocols such as REST. This also facilitates an individual choice of programming language, database system, and technology stack for each service. Loose coupling between microservices enables individual scaling of them and allows the system as a whole to scale in a more fine-grained manner [9]. This avoids wasting computing resources, as only those components are scaled for which it is necessary. Since the individual services only require normal network connections between them, they can be deployed in different contexts. This offers a lot of flexibility in the operation of the software. The main drivers for microservice adoption are, depending on the application domain, scalability and maintainability [10]. Furthermore, microservice architectures support agile architecture work [11]. Fig. 1 shows a graphical representation of our architecture.
It contains the three microservices Record Bridge, which integrates sensors, History, which aggregates and stores sensor data for the long term, and Configuration, which manages the system's state. Whereas the Record Bridge solely contains application logic, the services History and Configuration additionally contain a data storage subcomponent. The Visualization component is not a typical microservice as it does not represent a business function of its own but instead serves as an integration of different business functions. It consists of two parts, a server-side backend and a client-side frontend. The services in our architecture communicate with each other in two different ways: first, synchronously using a request-reply API paradigm such as REST to read or modify the other services' states; second, via a messaging bus or system to publish events that may be asynchronously consumed by other services. Using both communication approaches together is a common pattern when designing microservices [8]. The major task of our approach is stream processing of sensor measurement data. Fig. 2 shows the flow of measurement data among components, starting from their integration via a Record Bridge to their visualization in a web browser.

A. Record Bridge Sensors use different schemata, technologies, and transport mechanisms. Transportation can take place with high-level techniques such as HTTP but also on a low level, for example, via serial data buses. Data can be encoded in text formats such as JSON or XML but also in binary. Besides several standardized data schemata, there are also numerous proprietary ones. This requires converting sensor measurements to a common format, used throughout our whole approach, before they are further processed and analyzed. The Record Bridge fulfills this task. It receives the sensor data, transforms them, and then publishes them for other components by sending them to the messaging system. In our architecture design, it functions as a placeholder for arbitrary concrete Bridges, where each Record Bridge integrates a specific sensor type. As a sensor type, we consider a set of sensors that use the same schemata, formats, and transport mechanisms. Record Bridge services are primarily supposed to convert data from one format into another. They need no, or only little, knowledge about previous transformations. Therefore, they should be designed as stateless as possible, since stateless components enable arbitrary scaling.

B. History The History service manages past sensor data and provides access to them. This includes real sensor measurements as well as aggregated data for groups of sensors. Thus, one task this component has to fulfill is the hierarchical aggregation of data. This should be done in real time: whenever a sensor supplies a new measurement, all aggregated sensor groups containing this sensor should obtain an update as well. As for real sensors, this component creates a new record with the aggregated values and publishes it for other services via the asynchronous event exchange system (bottom of Fig. 1).
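As an illustration of this hierarchical, real-time aggregation, the following Python sketch propagates a new measurement to all groups that contain the sensor. The class and field names are hypothetical; the actual pilot implementation may structure this differently:

from collections import defaultdict

class HistoryAggregator:
    """Keeps the last known value per sensor and emits an aggregated
    record whenever one of a group's sensors reports a new value."""

    def __init__(self, groups, publish):
        # groups: mapping of group name -> set of sensor identifiers
        # publish: callback that sends a record to the messaging system
        self.groups = groups
        self.publish = publish
        self.latest = defaultdict(float)  # last known value per sensor

    def on_measurement(self, sensor_id, value, timestamp):
        self.latest[sensor_id] = value
        # Update every aggregated group that contains this sensor.
        for group, members in self.groups.items():
            if sensor_id in members:
                total = sum(self.latest[s] for s in members)
                self.publish({"sensor": group, "value": total,
                              "timestamp": timestamp, "aggregated": True})

For example, HistoryAggregator({"server-room": {"srv-1", "srv-2"}}, publish=print).on_measurement("srv-1", 120.5, 1559300000) would emit an updated aggregate for the group "server-room".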
In the History service, the application logic is separated from the actual data storage, so both parts can be scaled independently. When choosing a database management system (DBMS), it should also be considered how well it can be scaled, both in terms of accessibility and storage. Even if the data retention is segregated into a DBMS, the application logic still cannot be considered entirely stateless. This is due to the fact that multiple instances need to coordinate themselves when consuming data from the messaging system or aggregating them.

C. Configuration

The Configuration microservice manages the system-wide settings, such as a hierarchical model that specifies which sensors exist and how they can be aggregated. However, the Configuration service does not serve as a central place for all configurations of individual services. Settings that clearly belong to a specific service should be configurable directly in that service. An essential requirement of this service is the ability to handle reconfigurations during execution. In other words, no restart should be needed whenever the configuration changes, and other services will receive notifications about those changes. Therefore, the Configuration service provides an API to update or request the current configuration and propagates updates via the messaging system. Furthermore, this service contains a database to store the current configuration. It is the database's responsibility to store data in a reliable and possibly redundant manner. Separating the database from the API logic also allows both of them to be scaled independently.

D. Visualization

Besides monitoring and analysis, our approach also includes an interactive, web-based visualization. Our architecture contains a Visualization component following the Backends for Frontends pattern [7]. As the name suggests, it consists of two parts: a frontend and a backend. The frontend is a single-page application running in the user's web browser. After compilation, it is a set of static files that are interpreted by the web browser. The actual data are dynamically requested and loaded at runtime from the corresponding microservices. The backend fulfills two purposes. First, it acts as a static file server that delivers the single-page application. Second, it functions as an API gateway that provides all required interfaces for the frontend. When the frontend makes a request, it addresses it to the backend, which then forwards the request to the corresponding microservices. In this way, the backend abstracts and hides the internal division into microservices.

IV. DISTRIBUTED DEPLOYMENT

The proposed software architecture is designed to allow for individual scaling of its components. In particular, this implies that multiple instances of components can be deployed and that the load is balanced among them. In this way, we expect that our approach is feasible for production environments of different sizes and, furthermore, that we can react flexibly to changing loads and requirements. In addition to the software architecture and a corresponding implementation, however, the system must also be deployed in such a way that it can take advantage of the possibilities for scaling. For these reasons, large parts of the architecture are supposed to be deployed in a cloud environment. This does not necessarily have to be the public cloud of an external provider; a private cloud can also offer this. Cloud environments provide the infrastructure and platform demanded by the current load dynamically and as a service.
This is sensible, as hardware in the production environment is often not powerful enough and provisioning additional hardware is time-consuming and costly. Therefore, architecture components that perform intensive computations, store data, or operate on the stored data are deployed in the cloud. However, it may also be reasonable to run particular parts directly in the production environment. Applying the ideas of fog computing [12] and edge computing [13], we can already reduce the monitoring data where they are recorded. That can be achieved by using appropriate filter or aggregation functions. In our architecture design, the Record Bridge can fulfill such tasks, but the production environment may also already feature a dedicated edge controller for this. Thus, the following four deployment combinations are conceivable (see Fig. 3). One of them — deploying both an edge controller and a Record Bridge in the production environment — is the most future-oriented alternative if hardware gets more powerful and data transmission becomes the limiting factor. Depending on the edge controller, it may even be possible to execute both components on the same machine. As data are usually aggregated by the edge component, the Record Bridge then solely serves for converting the measurements into a more efficient data format. Only if the aggregation is not configurable enough and an additional filtering of data is necessary is it reasonable to perform further aggregations in the Record Bridge. These approaches can also be combined arbitrarily to adapt to the situation of the existing infrastructure, instead of adapting the production to our approach. Fig. 3 presents all four approaches within a hypothetical deployment. Containerization and orchestration techniques allow virtualizing the execution environment to flexibly assign components to machines.

V. PILOT IMPLEMENTATION

Based on the presented architecture, we developed a pilot implementation in the context of our Titan project on Industrial DevOps [14]. It covers all parts of the architecture, including implementations of the individual services as well as the selection of suitable technologies, e.g., databases. In the following, we describe the most important implementation decisions.

A. Communication between Services

Most services offer REST interfaces, which can be used by other services to request information or to execute operations on them. In particular, the visualization in the web browser requests its data via these REST interfaces. For the asynchronous communication, our implementation uses the messaging system Apache Kafka [15]. Kafka can be operated as a distributed cluster of several brokers. Kafka messages consist of a key and a value and are written to and read from topics. Topics can be partitioned, and the individual partitions are then assigned to one (or, for redundancy, more) brokers. The key of a message is used to assign the message to a partition, which means that messages with the same key are always stored and transferred by the same partition. Primarily, we use Kafka to transfer sensor measurements. While the message's value is the actual measurement record, we use the identifying name of the corresponding sensor as key. This guarantees that records for the same sensor are always processed by the same Kafka instance, which enhances the scalability of further processing of measurements. For this prototype, we restrict our implementation to integrating active power sensor data only. Active power records, which we exchange between components, are defined in a data format consisting of an identifier of the sensor, a timestamp, and the measured active power in watts. Furthermore, we allow exchanging aggregated active power records containing aggregation statistics (e.g., the sum) for a set of records. The software performance monitoring framework Kieker [16] offers a domain-specific language (DSL) [18] to define such records [17]. An associated generator creates program code and means to serialize and deserialize records for different programming languages and technologies. We apply Kieker's DSL to define the records.
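To illustrate the keying scheme, the following minimal sketch publishes one such record to the records topic with the plain Kafka producer API. The broker address and the simple string serialization are assumptions made for brevity; the actual implementation uses the serialization generated from Kieker's record definition.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class RecordPublisherSketch {
  public static void main(String[] args) {
    final Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
    props.put("key.serializer", StringSerializer.class.getName());
    props.put("value.serializer", StringSerializer.class.getName());

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      // An active power record: sensor identifier, timestamp, value in watts.
      final String sensorId = "server-01";
      final String record = sensorId + ";" + System.currentTimeMillis() + ";243.7";
      // The sensor identifier serves as the message key, so all records of one
      // sensor are assigned to the same partition of the "records" topic.
      producer.send(new ProducerRecord<>("records", sensorId, record));
    }
  }
}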
B. Integration of Physical Sensors

The Record Bridge microservices integrate physical sensors by translating the data output of the sensors into the common internal data format. Hence, the architecture envisages a separate Record Bridge microservice for each sensor type. However, the tasks that are fulfilled by those services are largely equal. They have to start the application, load configuration parameters, run continuously, and write records into Kafka topics. They only differ in the way they receive or request data and how they convert those into Kieker records. Therefore, we provide a Record Bridge framework that eliminates repetitive tasks as far as possible. The Record Bridge framework considers sensor data as continuous data streams and provides methods to filter and transform these data. A data stream and the operations on it are declaratively described in a Java-based internal domain-specific language (DSL) [18]. Using this DSL, one solely has to implement the individual steps that are specific to data formats and technologies. Internally, the stream processing declaration is mapped to a pipe-and-filter pipeline, which is interpreted and executed by the framework TeeTime [19]. Similar to other stream processing approaches and functional programming techniques, the source of a stream is a function that generates its elements. For example, this can be a web server that creates a stream element for each received HTTP message. A stream can be modified with the following higher-order functions: filter retains only specific elements, map maps each element of the stream to a new one, and flatMap maps each element to multiple new ones. Each of these functions returns a new stream, so that the functions can be concatenated as desired.
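Conceptually, a bridge declaration then boils down to a short pipeline of these functions. The following sketch imitates this style with java.util.stream on a finite list; the actual framework applies the same operations to continuous, TeeTime-backed streams, and the message and record types used here are simplified assumptions.

import java.util.List;
import java.util.stream.Stream;

public class BridgePipelineSketch {
  // Simplified stand-ins for a raw sensor message and the common record format.
  record RawMessage(String sensorId, String metric, double value, long timestamp) {}
  record ActivePowerRecord(String sensorId, long timestamp, double valueInW) {}

  public static void main(String[] args) {
    // Stand-in for a continuous source, e.g., one element per received HTTP message.
    Stream<RawMessage> source = Stream.of(
        new RawMessage("server-01", "active-power", 243.7, 1000L),
        new RawMessage("server-01", "voltage", 229.9, 1000L));

    List<ActivePowerRecord> records = source
        .filter(m -> "active-power".equals(m.metric()))                          // select relevant metrics
        .map(m -> new ActivePowerRecord(m.sensorId(), m.timestamp(), m.value())) // convert the format
        .toList();

    records.forEach(System.out::println); // the real bridge publishes to Kafka instead
  }
}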
C. Continuous Hierarchical Aggregation

The History service uses the column-oriented database Apache Cassandra [20] to store records persistently. A web server provides the required REST interface to retrieve the stored data. Besides storing and reading, we also need to aggregate measurements of different sensors. One possibility would be to do this when reading records from the database. However, this would be highly computationally intensive for frequent queries, in particular if the records are stored distributed over several nodes. Therefore, we decided to aggregate the data continuously and to store the aggregated consumption values along with the real, measured ones. In the following, we describe how the aggregation is computed and how we implemented it in a scalable manner.

1) Calculation Methodology: For an aggregated sensor $\hat{s}$ that aggregates the sensor group $S = \{s_1, \ldots, s_n\}$, its value $v_{\hat{s}}(t)$ at time $t$ is given by the sum of its child sensors' values at that time:

$$v_{\hat{s}}(t) = \sum_{s \in S} v_s(t)$$

However, since measured data are only present for discrete points in time, $v_s(t)$ for $s \in S$ is not known for many points in time. Furthermore, $v_s(t')$ with $t' > t$ is not known, since the value should be computed in real time and thus $t'$ would lie in the future. Therefore, it is not possible to perform a simple linear interpolation between the preceding and succeeding value. In effect, to compute $v_s(t)$ we can only rely on previous values. For our approach, we equate $v_s(t)$ with the latest measured value. For the interpretation of those data, this means that the time series of the single sensors are shifted towards the future, whereby the shifting interval is at most the temporal distance between measurements. If the data sources are measured frequently enough and the values do not fluctuate too much, this procedure should not influence the result notably.

2) Realization with Kafka Streams: In order to implement the calculation methodology described above, we designed a stream processing pipeline using Kafka Streams [21]. Kafka Streams is a stream processing framework built on top of Kafka. In Kafka Streams, processing steps are described in a MapReduce-like manner [22] to facilitate scalability and fault tolerance. In contrast to MapReduce, however, Kafka Streams operates on continuous data streams. Fig. 4 illustrates this pipeline and pictures the individual steps, which we describe in detail below.

The initial data source is the Kafka topic records (top left of Fig. 4). As described above, it contains key-value pairs with a normal active power record as value and the corresponding sensor identifier as key. This topic serves as an interface to the outside of this microservice, since it receives its records from other services, namely the Record Bridge services. Our Kafka Streams configuration consumes the elements of this topic and then forwards them to a flatMap processing step. In this step, every record is copied for each aggregated sensor that should consider values of this record's sensor. This means that when a new record is processed, the tree of sensor groups is traversed bottom-up and all parents of the corresponding sensor (parent, grandparent, etc.) are collected in a list. For each entry of this list, the flatMap step emits a new key-value pair with the respective parent as key and the active power record as value. Those key-value pairs are then forwarded to a groupByKey step, which groups records belonging together by serializing them to an internal Kafka topic. Thus, it ensures that all records with the same key are published to the same topic partition and, hence, are processed by the same processing instance in the following step. The subsequent aggregate step maintains an internal aggregation history for each aggregated sensor that is processed in the course of time. An aggregation history is a map belonging to an aggregated sensor that holds the last monitoring value for each of its child sensors. It only stores the values of its real child sensors, not of the aggregated ones. Whenever a record arrives with a key for which no aggregation history exists so far, a new one is created. For all successive records, the aggregation history is updated by either replacing the last value for this sensor or adding it if no value for this sensor exists so far. The updated aggregation history is, first, stored in an internal key-value store to be used in the next aggregation step and, second, forwarded to the next processing step. There, the aggregation history is transformed into an aggregated active power record in a map step. This is done by calculating different statistics, such as the average or sum, of the set of single monitoring values. These aggregated records are then written to the Kafka topic aggregated-records. This topic is again designed as an interface, such that other services can consume those data, for instance, to perform data analyses on them. Besides these steps for the hierarchical aggregation, the pipeline also contains two forEach steps that asynchronously store the records from both topics, records and aggregated-records, to the Cassandra database. Whereas we only declare the single steps of this data processing pipeline, the connections between the steps as well as the serialization to internal topics or databases are handled by Kafka Streams. If multiple instances of this application are started, Kafka Streams balances the data processing subtasks among them appropriately. A fundamental principle of Kafka Streams is that partitions are always processed by the same instance, since in this way no synchronization between reading instances is necessary. Thus, using this approach, we can create as many instances as there are partitions for the records and the aggregated-records topics. As the number of partitions is bounded by the number of different keys and the keys correspond to the connected sensors, we can start as many instances as there are sensors and aggregated sensors. This sets a very high limit, since the number of sensors will probably be much larger than the degree of parallelization with which the data are processed.
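The core of this topology can be sketched with the Kafka Streams DSL roughly as follows. This is a minimal sketch, not the actual Titan code: the record type and the parent lookup are simplified stand-ins, Serde configuration is omitted, and only the sum is computed as an aggregation statistic.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class AggregationTopologySketch {
  // Simplified stand-ins; the parent lookup reflects the configured sensor hierarchy.
  record ActivePowerRecord(String sensorId, long timestamp, double valueInW) {}
  interface SensorHierarchy { List<String> parentsOf(String sensorId); }

  static void build(StreamsBuilder builder, SensorHierarchy hierarchy) {
    KStream<String, ActivePowerRecord> records = builder.stream("records");

    // flatMap: emit one copy of each record per (transitive) parent sensor.
    KStream<String, ActivePowerRecord> perParent = records.flatMap(
        (sensorId, record) -> hierarchy.parentsOf(sensorId).stream()
            .map(parent -> KeyValue.pair(parent, record))
            .collect(Collectors.toList()));

    // groupByKey + aggregate: maintain the latest child value per parent sensor.
    KTable<String, Map<String, Double>> histories = perParent
        .groupByKey()
        .aggregate(HashMap::new, (parent, record, history) -> {
          history.put(record.sensorId(), record.valueInW()); // replace or add
          return history;
        });

    // map: turn each aggregation history into an aggregated value (here: the sum).
    histories.toStream()
        .mapValues(history -> history.values().stream().mapToDouble(Double::doubleValue).sum())
        .to("aggregated-records");
  }
}

D. Web-based Visualization

The user interface of the visualization frontend is divided into four views (dashboard, sensor details, comparison, and configuration), which can be accessed via the navigation bar on the left side. The dashboard contains various visualizations of the overall power consumption. In the upper area, it shows three arrows that indicate the trend of consumption in the last hour, 24 hours, or 7 days, respectively. Below them, a large time series chart spans the entire width. It shows the measured consumption in relation to the point in time at which it was recorded. When new data arrive, the displayed time interval automatically moves forward. The user can zoom into the chart or move the displayed interval forward and backward. Below the time series chart, a histogram shows the frequency distribution of measured values. It serves for recognizing load peaks. Next to the histogram, a pie chart shows the contribution of each subconsumer. All visualizations update themselves continuously and automatically when new data are available. The sensor details view is similar to the dashboard view but provides navigation through all consumers and consumer groups, so that their consumption can be observed in detail. The comparison view allows comparing multiple time series interactively. A user can select several time series to be displayed in one chart and, additionally, display multiple charts above each other. The configuration view provides a graphical user interface for the Configuration microservice. It allows adding, removing, or rearranging sensors in the hierarchical model via drag and drop. Research on how to efficiently visualize large data sets was conducted by Johanson et al. [23].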
In order to provide this visualization, we utilized their library CanvasPlot [24] for our time series charts, which we extended to include real-time functionality. Like our other visualizations, CanvasPlot is based on the data visualization framework D3 [25].

VI. PILOT DEPLOYMENT

In a pilot deployment, we show that our architecture can be applied to a real industrial environment. For this purpose, we deployed the described prototype in a medium-sized enterprise, where we monitored the power consumption of a part of the data center. The deployment includes all parts of our architecture and, thus, covers all aspects of our approach, involving data collection, integration, analysis, and visualization. The monitored part of the data center comprises 16 servers that are supplied with power by three power distribution units (PDUs). The PDUs have built-in control and monitoring capabilities and can be accessed via the network. Using their embedded web server, we configured them to record the power consumption of each server and push it to a Record Bridge every minute via HTTP. We developed an appropriate Record Bridge that integrates the PDU data using the presented Record Bridge framework. This Record Bridge features an embedded web server that accepts the push messages. A message is encoded in JSON and contains measurements for each PDU outlet, possibly also for several points in time. After receiving a message, the Record Bridge extracts the individual measured values and forwards them as separate records; a sketch of this extraction step follows at the end of this section. Furthermore, it filters the measurements for active power and discards others, such as voltage. An aggregation of measurements is not required in this Record Bridge, as it is already done by the PDU itself, in our deployment once per minute. We ran this deployment over a period of three weeks and were able to observe that the measurement data successfully passed through all parts of our approach, from the recording at the PDUs to the visualization in the web browser. Also, the operations on the data, such as the continuous aggregation, worked as desired. One of the monitored servers is used for desktop virtualization (VDI) for the employees. They mainly work from Monday to Friday, which means that the virtual desktops are used primarily then and remain idle during the weekend. This correlates with the server's power consumption. Every night at 3 o'clock, a virus scanner runs on the virtual desktops, which explains a nightly increase in power consumption.
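The extraction step of this PDU bridge could look roughly like the following flatMap-style function. The JSON layout (field names such as outlets, measurements, and metric) is a hypothetical example for illustration, not the actual PDU message format.

import java.util.ArrayList;
import java.util.List;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class PduMessageParserSketch {
  record ActivePowerRecord(String sensorId, long timestamp, double valueInW) {}

  // Splits one PDU push message into individual active power records.
  static List<ActivePowerRecord> parse(String json) throws Exception {
    final ObjectMapper mapper = new ObjectMapper();
    final JsonNode message = mapper.readTree(json);
    final List<ActivePowerRecord> records = new ArrayList<>();
    for (final JsonNode outlet : message.get("outlets")) {
      final String sensorId = outlet.get("id").asText();
      for (final JsonNode m : outlet.get("measurements")) {
        // Keep only active power; discard other metrics such as voltage.
        if ("active-power".equals(m.get("metric").asText())) {
          records.add(new ActivePowerRecord(sensorId,
              m.get("timestamp").asLong(), m.get("value").asDouble()));
        }
      }
    }
    return records;
  }
}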
VII. EXPERIMENTAL SCALABILITY EVALUATION

In order to evaluate whether the requirement for scalability is met, we examine whether our approach can handle an increasing amount of sensor data with more computing instances. For this scalability evaluation, we simulate a large number of sensors and process their measurements with our prototype implementation. Simultaneously, we measure the number of sensor records processed per second and test this for different numbers of instances. Thus, we determine how many records per second a certain number of service instances can process and, as a result, how many instances are necessary to process the generated load. We perform this evaluation in a Kubernetes cluster operated in a private cloud infrastructure. It consists of four node servers, each featuring 128 GB RAM and 2×8 CPU cores that provide 32 threads. The high degree of parallelism allows us to deploy numerous largely independent instances. The nodes and also the experiment are controlled by a dedicated cloud controller. We developed a simulating Record Bridge that does not integrate external sensors but generates data itself. For the evaluation, we deployed 20 instances of these Record Bridges. Each of them simulates 1,000 sensors that generate one measurement every second, so that in total 20,000 records are generated per second. Since the History service is the component that is primarily involved in real-time data processing, we focus on deploying different numbers of History service instances. In order to better test parallelization characteristics, we limit the computing capacity of each instance to half a CPU core. The Kafka and the Cassandra cluster each consist of three instances. The Kafka topic for the normal active power records contains 20 partitions. For each tested number of History service instances, we determine the average number of processed records per second, repeat this 100 times, and calculate the median as well as the interquartile range over all repetitions. Fig. 6 shows the number of processed records per second in relation to the number of processing instances. The number of processed records scales approximately linearly with the number of History service instances. When deploying 12 instances, all measurements that are generated can be processed. Note that, since we start the simulation before the processing, values greater than 20,000 are possible. Without the restriction to half a CPU core, significantly higher values would probably be possible, since records could be processed faster. During the evaluation, we periodically retrieved the CPU and memory utilization of the Kafka and Cassandra instances and verified that the load among instances is balanced evenly. Furthermore, we observed that API queries (e.g., those performed by the visualization) are evenly spread over the History service instances.

VIII. RELATED WORK

We did not find any monitoring approaches for production environments designed as a microservice architecture. However, microservice-based approaches exist for other applications of the Internet of Things, such as @cite_22 and @cite_9. As we propose in our architecture, these approaches deploy microservices decentralized for flexibility and extensibility. Moreover, they use an asynchronous messaging bus for the exchange of sensor data, as in our approach. Neither approach focuses on scalability and, therefore, neither evaluates it.

IX. CONCLUSIONS AND FUTURE WORK

Modern industrial production environments offer a number of means to measure resource consumption, such as electrical power, in detail. However, in order to gain knowledge from these data, it is necessary to integrate, analyze, and visualize the raw data of the sensors. A software and hardware system that provides this in a scalable manner must be designed to a large extent for fault tolerance, extensibility, and efficient resource usage. For useful analyses, data processing should furthermore be carried out in real time. In this paper, we presented an architecture for such a system that meets these requirements. We apply the microservice architectural pattern, which provides solutions to similar challenges in the field of Internet-scale systems. The architecture is intended for a distributed deployment, with parts deployed in a cloud environment and parts running directly in the production environment. For the pilot implementation, we use common technologies for microservices and complement them with techniques and tools for big data processing. We successfully deployed this implementation in the computing center of a medium-sized enterprise and, moreover, were able to show its scalability by simulating 20,000 sensors. As future work, we plan to supplement our architecture with further microservices. These should primarily provide further and more complex analyses and visualizations, for instance, to automatically detect anomalies in the consumption.
In order to provide deeper insights into the power consumption of individual production processes, we are also working on integrating other consumption metrics as well as production and enterprise data, which can be correlated with electrical power consumption. Furthermore, we plan to conduct extensive evaluations in which we monitor larger production environments with different kinds of devices and machines.
However, those edge devices typically operate on limited hardware resources, which are usually not sufficient to execute complex analyses directly on them. Moreover, the given resource capacities are not or only limited extendable and, thus, impede scaling of the software. Therefore, an architecture design has to find a balance between optimal resource usage and respecting resource constrains of the edge devices. C. Scalability and Fault Tolerance A horizontally scalable system is inevitably a distributed system whose components communicate via the network. This implies that parts may temporarily become unavailable or fail. Therefore, the software architecture and a corresponding implementation must be designed to tolerate faults and those do not lead to a failure of the overall system. Supporting horizontal scaling via duplicating instances also assists in fault-tolerance as failed instances can be replaced by their duplicates. D. Scalability and Extensibility As the number of sensors increases, more data formats and protocols need to be integrated. Moreover, it is likely that large production environments require support for additional metrics. This also applies to analyses and visualizations. An increasing volume of measurement data requires more complex, automatic and, therefore, more domain-specific analyses to make the data understandable. Therefore, the architecture should be designed in an adaptable and extensible way. III. MICROSERVICE ARCHITECTURE Considering the architectural requirements described above, we designed a microservice-based architecture [7] for the desired monitoring infrastructure. Microservice architectures are an approach to modularize software. It divides software into small components, called microservices, that can be used and deployed independently of each other. The separation into microservices is based on business functions. Each service maps to an own business area and provides a complete implementation of it [8]. This makes it much easier to adapt the component to changing requirements that typically arise from the business area. Microservices are isolated from each other. They run in separate processes and do not share any state. Thus, they can independently be started, stopped, or replaced. In particular, microservices can be independently released to production so that a new version of one microservice does not require to update the others. Furthermore, they do not share any implementation or database schema but communicate via transaction-less protocols such as REST. This also facilitates an individual choice of programming language, database system and technology stack for each service. Loose coupling between microservices enables individual scaling of them and allows the system as a whole to scale more fine-grained [9]. This avoids wasting computing resources as only those components need to be scaled for which it is necessary. Since the individual services only require normal network connections between them, they can be deployed in different contexts. This offers a lot of flexibility in the operation of the software. The main drivers for microservice adoption are, depending on application domain, scalability and maintainability [10]. Furthermore, microservice architectures support agile architecture work [11]. Fig. 1 shows a graphical representation of our architecture. 
It contains the three microservices Record Bridge, which integrates sensors, History, which aggregates and stores sensor data for the long term, and Configuration, which manages the system's state. Whereas the Record Bridge solely contains application logic, the services History and Configuration additionally contain a data storage subcomponent. The Visualization component is not a typical microservice as it does not represent an own business function but instead serves as an integration of different business functions. It consists of two parts, a server-sided backend and a client-sided frontend. The services in our architecture communicate with each other in two different ways: first, synchronously using a request-reply API paradigm such as REST to read or modify the other services' states; second, via a messaging bus or system to publish events that may be asynchronously consumed by other services. Using both communication approaches to-gether is a common pattern when designing microservices [8]. The major task of our approach is stream processing of sensor measurement data. Fig. 2 shows the flow of measurement data among components starting from their integration via a Record Bridge to their visualization in a web browser. A. Record Bridge Sensors use different schemata, technologies, and transport mechanisms. Transportation can take place with high-level techniques such as HTTP but also on a low-level, for example, via serial data buses. Data can be encoded in text formats such as JSON or XML but also binary. And besides several standardized data schemata, there are also numerous proprietary ones. This requires to convert sensor measurements to a common format that is used inside our whole approach, before they will be further processed and analyzed. The Record Bridge fulfills this task. It receives the sensor data, transforms them, and then publishes them for other components by sending them to the messaging system. In our architecture design, it functions as a placeholder for arbitrary concrete Bridges, where each Record Bridge integrates a specific sensor type. As a sensor type, we consider a set of sensors that use the same schemata, formats, and transport mechanisms. Record Bridge services are primarily supposed to convert data from one format into another. They do not need to have any or only little knowledge about previous transformations. Therefore, they should be designed as stateless as possible since stateless components enable an arbitrary scaling. B. History The History service manages past sensor data and provides access to them. This includes real sensor measurements as well as aggregated data for groups of sensors. Thus, one task this component has to fulfill is the hierarchical aggregation of data. This should be done in realtime, which means: Whenever a sensor supplies a new measurement, all aggregated sensor groups that contain this sensor should obtain an update as well. Like for real sensors, this component creates a new record with the aggregated values and publishes it via the asynchronous event exchange system (bottom Fig. 1) for other services. In order to access past measurements, they first need to be permanently stored. Therefore, the History service has access to a database and when it receives new records (aggregated or not) it writes them to that database. For other services, the History service provides access to the database in the form of an API, which has various endpoints that return records or statistics on them. 
The application logic is separated from the actual data storage. Thus both parts can be scaled independently. When choosing a database management system (DBMS), it should also be considered how well it can be scaled-both in terms of accessibility and storage. Even if the data retention is segregated into a DBMS, the application logic still cannot be considered entirely stateless. This is due to the fact that multiple instances need to coordinate themselves when consuming data from the messaging system or aggregating them. C. Configuration The Configuration microservice manages the system-wide settings, such as a hierarchical model that specifies what sensors exist and how they could be aggregated. However, the Configuration service does not serve as a central place for all configurations of individual services. Settings that clearly belong to a specific service should be configurable directly in that service. An essential requirement of this service is the ability to handle reconfigurations during the execution. In other words, no restart should be needed whenever the configuration changes and other services will receive notifications about those changes. Therefore, the Configuration service provides an API to update or request the current configuration and propagates updates via the messaging system. Furthermore, this service contains a database to store the current configuration. It is the database's responsibility to store data in a reliable and perhaps redundant manner. Separating the database from the API logic also allows to scale both of them independently. D. Visualization Besides monitoring and analysis, our approach also includes an interactive, web-based visualization. Our architecture contains a Visualization component following the Backends for Frontends pattern [7]. As the name suggests, it consists of two parts: a frontend and a backend. The frontend is a single-page application running in the user's web browser. After compilation, it is a set of static files that are interpreted by the web browser. The actual data are dynamically requested and loaded at runtime from the corresponding microservices. The backend fulfills two purposes. Firstly, it acts as a static file server that delivers the single-page application. Secondly, it functions as an API gateway that provides all required interfaces for the frontend. When the frontend makes a request, it addresses it to the backend, which then forwards the request to the corresponding microservices. In this way, the backend abstracts and hides the internal division into microservices. IV. DISTRIBUTED DEPLOYMENT The proposed software architecture is designed to allow for an individual scaling of its components. In particular, this implies that multiple instances of components can be deployed and that the load is balanced among them. In this way, we expect that our approach is feasible for different sized production environments and, furthermore, we can react flexibly to changing loads and requirements. In addition to the software architecture and a corresponding implementation, however, the system must also be deployed in such a way that it can take advantage of the possibilities for scaling. For these reasons, large parts of the architecture are supposed to be deployed in a cloud environment. This does not necessarily have to be a public cloud of an external provider, a private cloud can also offer this. Cloud environments provide the infrastructure and platform demanded by the current load dynamically and as a service. 
This is sensible as hardware in the production is often not powerful enough and provision of additional hardware is time-consuming and costly. Therefore, architecture components that perform intensive computations, store data, or operate on the stored data are deployed in the cloud. However, it may also be reasonable to run particular parts directly in the production environment. Applying the ideas of fog computing [12] and edge computing [13], we can already reduce the monitoring data where it is recorded. That can be achieved by using appropriate filter or aggregate functions. In our architecture design, the Record Bridge can fulfill such tasks but the production environment may also already feature a dedicated edge controller for this. Thus, the following four deployment combinations are conceivable (see Fig. 3 the most future-oriented alternative if hardware gets more powerful and data transmission becomes the limiting factor. Depending on the edge controller, it may even be possible to execute both components on the same machine. As data are usually aggregated by the edge component, the Record Bridge solely serves for converting the measurements into a more efficient data format. Only if the aggregation is not configurable enough and an additional filtering of data is necessary, it is reasonable to perform further aggregations by the Record Bridge. These approaches can also be arbitrarily combined to adapt to the situation of the existing infrastructure, instead of adapting the production to our approach. Fig. 3 presents all four approaches within a hypothetical deployment. Containerization and orchestration techniques allow to virtualize the execution environment to flexibly assign components to machines. V. PILOT IMPLEMENTATION Based on the presented architecture, we developed a pilot implementation of it in the context of our Titan project on Industrial DevOps [14]. It covers all parts of the architecture including implementations for the individual services as well as the selection of suitable technologies, e.g., databases. In the following, we describe the most important implementation decisions. A. Communication between Services Most services offer REST interfaces, which can be used by other services to request information or to execute operations on them. In particular, the visualization in the web browser requests its data via these REST interfaces. For the asynchronous communication, our implementation uses the messaging system Apache Kafka [15]. Kafka can be operated in a distributed cluster of several brokers. Kafka messages consist of a key and a value and are written and read from topics. Topics can be partitioned and the individual partitions are then assigned to one (or, for redundancy, more) brokers. The key of a message is used to assign the message to a partition, which means, messages with the same key are always stored and transferred by the same partition. Primarily, we use Kafka to transfer sensor measurements. While the message's value is the actual measurement record, we use the identifying name of the corresponding sensor as key. This guarantees that records for the same sensor are always processed by the same Kafka instance, which enhances the scalability for further processing of measurements. For this prototype, we restrict our implementation to only integrate active power sensor data. 
Active power records, which we exchange between components, are defined in a data format consisting of an identifier of the sensor, a timestamp, and the measured active power in Watts. Furthermore, we allow to exchange aggregated active power records containing aggregation statistics (e.g., the sum) for a set of records. The software performance monitoring framework Kieker [16] offers a domain-specific language (DSL) [18] to define such records [17]. An associated generator creates program code and means to serialize and deserialize records for different programming languages and technologies. We apply Kieker's DSL to define the records. B. Integration of Physical Sensors The Record Bridge microservices integrate physical sensors by translating the data output of the sensors into the common internal data format. Hence, the architecture envisages a separate Record Bridge microservice for each sensor type. However, the tasks that are fulfilled by those services are largely equal. They have to start the application, load configuration parameters, run continuously, and write records into Kafka topics. They only differ in the way how they receive or request data and how they convert those into Kieker records. Therefore, we provide a Record Bridge framework that eliminates repetitive tasks as much as possible. The Record Bridge framework considers sensor data as continuous data streams and provides methods to filter and transform these data. A data stream and the operations on it are declaratively described in a Java-based internal domainspecific language (DSL) [18]. Using this DSL, one solely has to implement the individual steps that are specific for data formats and technologies. Internally, the stream processing declaration is mapped to a Pipe-and-Filter pipeline, which is interpreted and executed by the framework TeeTime [19]. Similar to other stream processing approaches or functional programming techniques, the source of a stream is a function that generates the elements of it. For example, this can be a web server that creates a stream element for each received HTTP message. A stream can be modified with the following higher-order functions: filter retains only specific elements, map maps each element of the stream to a new one, and flatMap maps each element to multiple new ones. Each of these functions returns a new stream, so that the functions can be concatenated as desired. C. Continuous Hierarchical Aggregation The History service uses the column-oriented database Apache Cassandra [20] to store records persistently. A web server provides the required REST interface to retrieve the stored data. Besides storing and reading, we also require to aggregate measurements of different sensors. One possibility would be to do this when reading records from the database. However, this would be highly computational intensive for frequent queries, in particular, if the records are stored distributed on several nodes. Therefore, we decided to aggregate the data continuously and store the aggregated consumption value along with the real, measured ones. In the following, we describe how the aggregation is computed and how we implemented it in a scalable manner. 1) Calculation Methodology: For an aggregated sensorŝ that should aggregate the sensor group S = {s 1 , . . . 
, s n }, its value vŝ(t) at time t is given by the sum of its child sensors' values at that time: vŝ(t) = s∈S v s (t) However, since measured data are only present for discrete points in time, v s (t) for s ∈ S is not known for many points in times. Furthermore, v s (t ) with t > t is not known since the value should be computed in real-time and thus t would be in the future. Therefore, it is not possible to perform a simple linear interpolation between the precedent and successive value. This means in effect, to compute v s (t) we can only rely on previous values. For our approach, we equate v s (t) to the latest measured value. For the interpretation of those data, this means that the time series of the single sensors are shifted towards the future, whereby the shifting interval is at most the temporal distance between measurements. If the data sources are measured frequently enough and the values do not fluctuate too much, this procedure should not influence the result notably. 2) Realization with Kafka Streams: In order to implement the calculation methodology described above, we designed a stream processing pipeline using Kafka Streams [21]. Kafka Streams is a stream processing framework build on top of Kafka. In Kafka Streams, processing steps are described in a MapReduce-like manner [22] to facilitate scalability and fault tolerance. In contrast to MapReduce however, Kafka Streams operates on continuous data streams. Fig. 4 illustrates this pipeline and pictures the individual steps, which we describe in detail below. The initial data source is the Kafka topic records (top left of Fig. 4). As described above, it contains key-value pairs with a normal active power record as value and its corresponding sensor identifier as key. This topic serves as an interface to the outside of this microservice since it gets its records form other services, namely the Record Bridge services. Our Kafka Streams configuration consumes the elements of this topic and then forwards them to a flatMap processing step. In this step, every record is copied for each aggregated sensor that should consider values of this record's sensor. This means, if a new record is processed, the tree of sensor groups is traversed bottom-up and all parents of the corresponding sensor (parent, grandparent, etc.) are collected in a list. For each entry of this list, the flatMap step emits a new key-value pair with the according parent as key and the active power record as value. Those key-value pairs are then forwarded to a groupByKey step, which groups records belonging together by serializing them to an internal Kafka topic. Thus, it ensures that all records with the same key are published to the same topic partition and, hence, are processed by the same processing instance in a following step. The subsequent aggregate step maintains an internal aggregation history for each aggregated sensor that is processed in the course of time. An aggregation history is a map belonging to an aggregated sensor that holds the last monitoring value for each of its child sensors. It only stores the value for its real child sensors, not for the aggregated ones. Whenever a record arrives with a key for which no aggregation history exists so far, a new one is created. For all successive records the aggregation history is updated by either replacing the last value to this sensor or by adding it if no value for this sensor exists so far. 
Finally, it is, firstly, stored to an internal key-value store to be used in the next aggregation step and, secondly, forwarded to the next processing step. Afterwards, the aggregation history is transformed to an aggregated active power record in a map step. This is done by calculating different statistics, such as average or sum, of the set of single monitoring values. These aggregated records are then written to the Kafka topic aggregated-records. This topic is again designed as an interface such that other services can consume those data, for instance, to perform data analyses on them. Besides these steps for the hierarchical aggregation, the pipeline also contains two forEach steps that asynchronously store the records from both topics records and aggregatedrecords to the Cassandra database. Whereas we declare the single steps of this data processing pipeline, the connection between the steps as well as the serialization to internal topics or databases is handled by Kafka Streams. If multiple instances of this application are started, Kafka Streams manages to balance the data processing subtasks appropriately. A fundamental principle of Kafka Streams is that partitions are always processed by the same instance since in this way no synchronization between reading instances is necessary. Thus, using this approach, we can create as many instances as there are partitions for the records and the aggregated-records topics. As the number of partitions is bounded by the number of different keys and the keys correspond to the connected sensors, we can start as many instances as there are different sensors and aggregated sensors. This sets a very high limit since the number of sensors will probably be much larger than the degree of parallelization with which the data is processed. D. Web-based Visualization The user interface of the visualization frontend 2 is divided into four views (dashboard, sensor details, comparison, and configuration), which can be accessed via the navigation bar on the left side. The dashboard contains various visualizations of the overall power consumption. In the upper area, it shows three arrows that indicate the trend of consumption in the last hour, 24 hours, or 7 days, respectively. Below them, a large time series chart spans over the entire width. It shows the measured consumption in relation to the point in time it was recorded. When new data arrives, the displayed time interval automatically moves forwards. The user can zoom into the chart or move the displayed interval forward and back. Below the time series chart, a histogram shows the frequency distribution of measured values. It serves for recognizing load peaks. Next to the histogram, a pie chart shows the contribution of each subconsumer. All visualizations update themselves continuously and automatically when new data are available. The sensor details views is similar to the dashboard view but provides navigation through all consumer and consumer groups so that the consumption of these can be observed in detail. The comparison view allows to compare multiple time series interactively. A user can select several time series to be displayed in one chart and, additionally, display multiple charts above each other. The configuration view provides a graphical user interface for the Configuration microservice. It allows to add, remove, or rearrange sensors in the hierarchical model via drag and drop. Research on how to efficiently visualize large data sets was conducted by Johanson et al. [23]. 
In order to provide this visualization, we utilized their library CanvasPlot [24], which we extended to include real-time functionality, for our time series charts. Like our other visualizations, CanvasPlot is based on the data visualization framework D3 [25]. VI. PILOT DEPLOYMENT In a pilot deployment, we show that our architecture can be applied to a real industrial environment. For this purpose, we deployed the described prototype in a medium-sized enterprise 3, where we monitored the power consumption of a part of the data center. The deployment includes all parts of our architecture and thus covers all aspects of our approach: data collection, integration, analysis, and visualization. The monitored part of the data center comprises 16 servers that are power-supplied by three power distribution units (PDUs). The PDUs have built-in control and monitoring capabilities and can be accessed via the network. Using their embedded web servers, we configured them to record the power consumption of each server and push it to a Record Bridge every minute via HTTP. We developed an appropriate Record Bridge that integrates the PDU data using the presented Record Bridge framework. This Record Bridge features an embedded web server that accepts the push messages. A message is encoded in JSON and contains measurements for each PDU outlet, possibly also for several points in time. After receiving a message, the Record Bridge extracts the individual measured values and forwards them as separate records. Furthermore, it filters the measurements for active power and discards others, such as voltage. An aggregation of measurements is not required in this Record Bridge, as it is already performed by the PDU itself, in our deployment once per minute. We ran this deployment over a period of three weeks and were able to observe that the measurement data successfully passed through all parts of our approach, from the recording at the PDUs to the visualization in the web browser. The operations on the data, such as the continuous aggregation, also worked as desired. 3 This server is used for desktop virtualization (VDI) for the employees. They mainly work from Monday to Friday, which means that the virtual desktops are used primarily then and remain idle during the weekend. This correlates with the server's power consumption. Every night at 3 o'clock, a virus scanner runs on the virtual desktops, which explains the nightly increase in power consumption. VII. EXPERIMENTAL SCALABILITY EVALUATION In order to evaluate whether the requirement for scalability is met, we examine whether our approach can handle an increasing amount of sensor data when given more computing instances. For this scalability evaluation, we simulate a large number of sensors and process their measurements with our prototype implementation. Simultaneously, we measure the number of sensor records processed per second and test this for different numbers of instances. Thus, we determine how many records per second a certain number of service instances can process. As a result, we can determine how many instances are necessary to process the generated load. We perform this evaluation in a Kubernetes cluster operated in a private cloud infrastructure. It consists of four node servers, each featuring 128 GB RAM and 2×8 CPU cores that provide 32 threads. The high degree of parallelism allows us to deploy numerous largely independent instances. The nodes and also the experiment are controlled by a dedicated cloud controller.
We developed a simulating Record Bridge that does not integrate external sensors but generates data itself. For the evaluation, we deployed 20 instances of these Record Bridges. Each of them simulates 1,000 sensors that generate one measurement every second, so that in total 20,000 records are generated per second. Since the History service is the component that is primarily involved in real-time data processing, we focus on deploying different numbers of History service instances. In order to better test parallelization characteristics, we limit the computing capacity of each instance to half a CPU core. The Kafka and the Cassandra cluster each consist of three instances. The Kafka topic for the normal active power records contains 20 partitions. For each tested number of History service instances, we determine the average number of processed records per second, repeat this 100 times, and calculate the median as well as the interquartile range over all repetitions. Fig. 6 shows the number of processed records per second in relation to the number of processing instances. The number of processed records scales approximately linearly with the number of History service instances. When deploying 12 instances, all measurements that are generated can be processed. Note that, as we start the simulation before the processing, values greater than 20,000 are possible. Without the restriction to half a CPU core, significantly higher values would probably be possible, since records could be processed faster. During the evaluation, we periodically retrieved the CPU and memory utilization of the Kafka and Cassandra instances and verified that the load is balanced evenly among instances. Furthermore, we noticed that API queries (e.g., performed by the visualization) are evenly spread over the History services. IX. CONCLUSIONS AND FUTURE WORK Modern industrial production environments offer a number of means to measure resource consumption, such as electrical power, in detail. However, in order to gain knowledge from these data, it is necessary to integrate, analyze, and visualize the raw sensor data. A software and hardware system that provides this in a scalable manner must be designed to a large extent for fault tolerance, extensibility, and efficient resource usage. For useful analyses, data processing should furthermore be carried out in real time. In this paper, we presented an architecture for such a system that meets these requirements. We apply the microservice architectural pattern, which provides solutions to similar challenges in the field of Internet-scale systems. The architecture is intended for a distributed deployment, with parts deployed in a cloud environment and parts running directly in the production environment. For a pilot implementation, we use common technologies for microservices and complement them with techniques and tools for big data processing. We successfully deployed this implementation in the computing center of a medium-sized enterprise and, moreover, were able to show its scalability by simulating 20,000 sensors. As future work, we plan to supplement our architecture with further microservices. These should primarily provide further and more complex analyses and visualizations, for instance, to automatically detect anomalies in the consumption.
In order to provide deeper insights into the power consumption of individual production processes, we are also working on integrating other consumption metrics as well as production and enterprise data, which can be correlated with electrical power consumption. Furthermore, we plan to conduct extensive evaluations in which we monitor larger production environments with different kinds of devices and machines.
5,908
1907.00420
2953607269
In this study, we investigated multi-modal approaches using images, descriptions, and titles to categorize e-commerce products on this http URL. Specifically, we examined late fusion models, where the modalities are fused at the decision level. Products were each assigned multiple labels, and the hierarchy in the labels was flattened and filtered. For our individual baseline models, we modified a CNN architecture to classify the description and title, and modified Keras' ResNet-50 to classify the images, achieving F1 scores of 77.0%, 82.7%, and 61.0%, respectively. In comparison, our tri-modal late fusion model can classify products more accurately than single-modal models can, improving the F1 score to 88.2%. Each modality complemented the shortcomings of the other modalities, demonstrating that increasing the number of modalities can be an effective method for improving the accuracy of multi-label classification problems.
@cite_22 used a convolutional neural network (CNN) architecture based on the architecture of Kim @cite_21 to classify the titles of the products. The first layer uses random word embeddings. In addition, they used a VGG network for image classification @cite_4 . While they experimented with both early and late fusion, only late fusion resulted in an improvement in accuracy. The image and text classifiers were trained separately to achieve maximal performance individually before being combined by a policy network. The policy network that achieved the highest accuracy is a neural network with 2 fully connected layers that takes the top-3 class probabilities from the image and text CNNs as input. Their dataset contained 1.2 million images and 2,890 possible shelves. On average, each product falls in 3 shelves. Their model is considered accurate when the network correctly outputs one of the three shelves.
{ "abstract": [ "", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Classifying products into categories precisely and efficiently is a major challenge in modern e-commerce. The high traffic of new products uploaded daily and the dynamic nature of the categories raise the need for machine learning models that can reduce the cost and time of human editors. In this paper, we propose a decision level fusion approach for multi-modal product classification using text and image inputs. We train input specific state-of-the-art deep neural networks for each input source, show the potential of forging them together into a multi-modal architecture and train a novel policy network that learns to choose between them. Finally, we demonstrate that our multi-modal network improves the top-1 accuracy @math over both networks on a real-world large-scale product classification dataset that we collected from Walmart.com. While we focus on image-text fusion that characterizes e-commerce domains, our algorithms can be easily applied to other modalities such as audio, video, physical sensors, etc." ], "cite_N": [ "@cite_21", "@cite_4", "@cite_22" ], "mid": [ "", "1686810756", "2559721862" ] }
Multi-Label Product Categorization Using Multi-Modal Fusion Models
Related Works Zahavy et al. [5] used a convolutional neural network (CNN) architecture based on the architecture of Kim [6] to classify the titles of the products. The first layer uses random word embeddings. In addition, they used a VGG network for image classification [7]. While they experimented with both early and late fusion, only late fusion resulted in an improvement in accuracy. The image and text classifiers were trained separately to achieve maximal performance individually before being combined by a policy network. The policy network that achieved the highest accuracy is a neural network with 2 fully connected layers that takes the top-3 class probabilities from the image and text CNNs as input. Their dataset contained 1.2 million images and 2,890 possible shelves. On average, each product falls in 3 shelves. Their model is considered accurate when the network correctly outputs one of the three shelves. Åberg [4] is one of the first authors to use the image, title, and description of an ad/product to classify products into single categories. Åberg concatenated the title and description and used fastText (Joulin et al.) [8] as the baseline model for text classification, while using Inception V3 for image classification. Åberg also explored an implementation similar to Kim's CNN architecture [6] but could not achieve the accuracy of fastText. The dataset contained 96,806 products belonging to 193 different classes. Note that each product was assigned to exactly one class; hence, Åberg applied a softmax function in the final layer before outputting the class probabilities. Similar to Zahavy et al. [5], both late and early fusion were explored, and late fusion yielded better results. Both heuristic policies and network policies were explored. Heuristic policies refer to static rules, for example, the mean of the probabilities from the different modalities. Network policies refer to training a neural network that takes the output probabilities from the different networks and produces a new probability vector. Dataset Our dataset comprises Amazon products extracted by SNAP [9]. There are 9.4 million products in total. The class hierarchical information was not available, as the classes and subclasses were pre-flattened as given. We randomly sampled 119,073 products from this dataset, of which the first 90,000 are kept for the training set. After pre-processing, there are 122 possible classes to which a product can belong. Unlike in many previous studies, here each product can be assigned multiple labels. Each product in the dataset comes with an image, a description, a title, a price, and a co-purchasing network. Product categorization systems can be challenging to build due to the trade-off between the number of classes and accuracy. As an example, adding more classes and sub-classes to a product might make it easier to discover, but more classes also increase the likelihood of an incorrect class being applied. To address this issue, some studies [5,10] reduced the number of sub-classes. One method is to create a shelf and categorize the products based on the shelves they are in. A shelf is a group of products presented together on the same e-commerce webpage, which usually contains products under the same categories [5]. Since our dataset does not contain the webpage information necessary to form shelves, our method was to remove the classes containing fewer than 400 products. A short sketch of this label preprocessing follows.
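A minimal sketch of the label preprocessing just described, assuming the raw labels are available as one list of category names per product; the variable names and the use of scikit-learn's MultiLabelBinarizer are illustrative assumptions, while the 400-product threshold is the one stated above.

```python
from collections import Counter
from sklearn.preprocessing import MultiLabelBinarizer

# One (pre-flattened) list of category labels per sampled product.
labels_per_product = [
    ["Clothing, Shoes & Jewelry", "Accessories"],
    ["Pet Supplies", "Chew Toys"],
    # ... one entry per product
]

# Drop categories that occur in fewer than 400 products.
counts = Counter(c for labels in labels_per_product for c in labels)
kept = {c for c, n in counts.items() if n >= 400}
filtered = [[c for c in labels if c in kept] for labels in labels_per_product]

# Multi-hot encode the remaining categories (122 in our setting).
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(filtered)  # shape: (num_products, num_classes)
```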
Figure 2: The x-axis represents the number of products in a category, whereas the y-axis represents the number of categories with that number of products. On average, each product belongs to 3 categories after pre-processing. The maximum number of products in a category is 37,102 and the minimum is 558. On average, there are 2,919 products per category. In addition, we can see from Fig. 2 that the number of products per category is not evenly distributed, which could introduce bias into the model. Baseline Models In order to understand how much we benefit from fusing the different modal classifiers, we report the baseline accuracy for each modality below. We evaluate our accuracy using the F1 score (micro-averaged), which is an accepted metric for multi-label classification and imbalanced datasets [11]. During training, for all classifiers, we used Adam [12] as our optimizer and categorical cross-entropy as our loss function. To accommodate multi-labeling, the final activation of all classifiers is a sigmoid function. Although both titles and descriptions are textual data, we leverage their different use cases by treating them as different modalities, which allows us to apply the different pre-processing steps described below. Description Classifier The descriptions were pre-processed to remove stop words, excessive whitespace, digits, punctuation, and words longer than 30 characters. In addition, descriptions were truncated to 300 words. To classify the pre-processed descriptions, we slightly modified Kim's CNN architecture for sentence classification. Kim's architecture is a CNN with one layer of convolution on top of word vectors initialized using Word2Vec [6,13]. Max-pooling over time is then applied [14], which serves to capture the most important features. Finally, dropout is employed in the penultimate layer. Unlike Kim, we used GloVe as our embedding. Words not covered by GloVe were initialized randomly. For our dataset, GloVe covers only 61.0% of the vocabulary of the descriptions. Our first convolution layer uses a kernel of size 5 with 200 filters. We then performed global max pooling, followed by a fully connected layer of 170 units with ReLU activations. Our final layer is another densely connected layer of 122 units with sigmoid activation. This model achieves 77.0% on the test set. Title Classifier Although a classifier identical to the description classifier was used for the titles, the title data were pre-processed differently. For the titles, we did not remove stop words, and we limited or padded the text to 57 words. We again chose GloVe for the embedding, with uncovered words initialized randomly. GloVe covers 77.0% of the vocabulary of the titles. This model achieves 82.7% on the test set. Image Classifier We modified the ResNet-50 architecture from Keras by removing the final densely connected layer and adding a densely connected layer with 122 units to match the number of labels. In addition, we changed the final activation to be sigmoidal. ResNet-50 is based on the architecture of He et al. [15], which achieves competitive results compared to other state-of-the-art models. We also used the pre-trained weights from the ImageNet dataset [16], which contains more than 14 million images. We kept the earlier layers frozen and trained only the deeper layers [17]. We experimented with the number of trainable layers; our top model trained only the last 40 layers, achieving 61% on the test set.
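As a concrete reference, a sketch of the description classifier above in Keras, following the stated layer sizes (kernel size 5, 200 filters, a 170-unit ReLU layer, 122 sigmoid outputs). The vocabulary size, the GloVe dimensionality, and the dropout rate are not stated in the text and are assumptions here; the loss follows the paper's reported categorical cross-entropy, although binary cross-entropy is the more conventional choice for multi-label sigmoid outputs.

```python
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 50_000  # assumption: vocabulary size not stated
EMBED_DIM = 300      # assumption: GloVe dimensionality not stated
MAX_LEN = 300        # descriptions are truncated to 300 words
NUM_CLASSES = 122

# Rows covered by GloVe would hold pre-trained vectors, the rest random;
# a purely random matrix stands in here.
embedding_matrix = np.random.normal(scale=0.1, size=(VOCAB_SIZE, EMBED_DIM))

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMBED_DIM, weights=[embedding_matrix],
                     input_length=MAX_LEN),
    layers.Conv1D(200, 5, activation="relu"),  # kernel size 5, 200 filters
    layers.GlobalMaxPooling1D(),               # max-pooling over time
    layers.Dropout(0.5),                       # assumption: rate not stated
    layers.Dense(170, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="sigmoid"),  # multi-label outputs
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```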
Summary The results summarized in Table 1 underscore that the classifiers differ in discriminative power, as the title and description classifiers significantly outperform the image classifier. This result is consistent with Zahavy et al., whose results also demonstrated a significant difference between the image and title classifiers [5]. Moreover, we have shown that the description classifier also significantly outperforms the image classifier. Such results suggest that text provides more information regarding a product's categories. Error Analysis In Table 2 we can see that the top misclassified categories of each classifier generally reflect their inadequate representation in the dataset. Recall that the average number of products per category is 2,919. The Accessories category contains the most products (924) of all the misclassified categories, but it is still far below the average. In addition, we can see that the top misclassified categories seldom overlap between the modal classifiers. The categories that the image classifier classifies inaccurately are classified more accurately by the description and title classifiers, and vice versa. This suggests that we should be able to combine the classifiers so that they effectively complement each other's shortcomings for a more accurate result. Multi-Modality As Åberg and Zahavy et al. found that late fusion models were more accurate than early fusion models [5,4], here we focus our studies on improving late fusion. Predefined Policies Since both Åberg and Zahavy et al. experimented with predefined rules [5,4], we included predefined rules to compare against the non-static policies. We experimented with a max policy and a mean policy over the outputs of the classifiers. The max policy selects the highest output for each class from among the image, title, and description classifiers. This can be represented as $o_{\max} = \max(o_{\mathrm{image}}, o_{\mathrm{title}}, o_{\mathrm{description}})$, (1) where $o_{\mathrm{image}}, o_{\mathrm{title}}, o_{\mathrm{description}} \in \mathbb{R}^{122}$ represent the outputs of the classifiers. The mean policy can be represented as $o_{\mathrm{mean}} = (o_{\mathrm{image}} + o_{\mathrm{title}} + o_{\mathrm{description}}) / 3$. (2) Both the mean and the max policy resulted in lower accuracies than the top classifier, which is the title classifier: the mean policy yielded 81.7%, while the max policy yielded 78.8%. Intuitively, each classifier contributes equally to the mean policy; therefore, we would expect the average performance to be lower than that of the best performer. For the max policy, erroneous maximal outputs from the low-performing classifiers are detrimental to the final predictions. Linear Regression We trained a simple ridge linear regression model to fuse the individual classifiers into a single classifier. The model achieves 83.0% on the test set. The model can be written as $\min_w \|wX - y\|_2^2 + \alpha \|w\|_2^2$, (3) where $y$ is the true label and $X$ holds the predicted labels. Nevertheless, this simple non-static policy already outperforms the static policies above. A sketch of these fusion policies follows.
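The predefined policies and the ridge fusion above come down to a few lines of NumPy and scikit-learn. This sketch assumes the three classifiers' outputs are already available as arrays and that $X$ is formed by concatenating the three 122-dimensional output vectors per product; the decision threshold is also an assumption, as the text does not state one.

```python
import numpy as np
from sklearn.linear_model import Ridge

n, k = 1000, 122  # hypothetical number of products; 122 classes as above
rng = np.random.default_rng(0)
o_image, o_title, o_desc = (rng.uniform(size=(n, k)) for _ in range(3))
y_true = (rng.uniform(size=(n, k)) > 0.97).astype(float)  # placeholder labels

# Eq. (1): element-wise max policy; Eq. (2): mean policy.
o_max = np.maximum(np.maximum(o_image, o_title), o_desc)
o_mean = (o_image + o_title + o_desc) / 3

# Eq. (3): ridge regression over the concatenated classifier outputs.
X = np.hstack([o_image, o_title, o_desc])  # shape: (n, 3 * 122)
fusion = Ridge(alpha=1.0).fit(X, y_true)
y_pred = fusion.predict(X) > 0.5           # threshold assumed, not stated
```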
Bi-Modal Fusion The work by Zahavy et al. involved two neural networks, one for classifying images and another for classifying titles, combined using late fusion [5]. For comparison purposes, we examined models developed by fusing two of the three modal networks in this study. The first fused network combined the image classifier's output (as in Section 3.1) and the title classifier's output (as in Section 3.2); this is essentially the method of Zahavy et al. [5]. We then fused the title and description classifiers' outputs for the second fused network, and the image and description classifiers' outputs for the third. All three networks were fused in the same way, using a three-layer neural network on top of the concatenated outputs of the classifiers. The first, second, and third layers contained 200, 150, and 122 units, respectively. All activations were sigmoidal. The image-description, image-title, and description-title fused networks yielded 82.0%, 85.0%, and 87.0%, respectively (Table 3). Tri-Modal Fusion Figure 3: The proposed tri-modal fusion architecture; the outputs of the description CNN, the title CNN, and the ResNet-50 image classifier feed into a feedforward neural network that produces the prediction. The text CNNs are based on Yoon Kim's architecture [6]. Finally, we developed a tri-modal model that includes the titles, images, and descriptions. To our knowledge, we are the first to fuse three classifiers/neural networks to categorize products. We fused the three classifiers (as in Sections 3.1, 3.2, and 3.3) using a policy network, an additional neural network that takes in the output of each of the classifiers. We varied the number of layers, the activation functions, and the number of units of the policy network. Through hyperparameter optimization, we found that the top policy network consists of three layers, with sigmoidal activations on the first and last layers and a hyperbolic tangent activation on the middle layer. This fused model achieves 88.2%, beating all of the previous methods; a sketch of this policy network follows.
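A sketch of the winning policy network in Keras, given the description above (three layers with sigmoid, tanh, and sigmoid activations); the widths of the two hidden layers are not stated in the text and are assumptions, as is feeding the network the concatenation of the three 122-dimensional classifier outputs.

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 122

# Input: the three classifiers' sigmoid outputs, concatenated.
inputs = layers.Input(shape=(3 * NUM_CLASSES,))
h = layers.Dense(256, activation="sigmoid")(inputs)  # width assumed
h = layers.Dense(256, activation="tanh")(h)          # width assumed
outputs = layers.Dense(NUM_CLASSES, activation="sigmoid")(h)

policy_net = models.Model(inputs, outputs)
policy_net.compile(optimizer="adam", loss="categorical_crossentropy")
```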
Compared with Table 2, the proportion of misclassified products is reduced significantly in Table 4. Examining Accessories, Horses, and Clothing, Shoes & Jewelry, we can see that the proposed method outperforms the individual classifiers by a considerable margin. However, the proposed method fails to significantly reduce the number of misclassified products in certain categories, such as Chew Toys. According to Table 2, each of the individual classifiers performed poorly when predicting products as Chew Toys. This suggests that there remain categories that are underserved across all classifiers. To address this shortcoming, more data or further modalities could be considered in future work. On the other hand, the results also suggest that as long as one classifier performs well on a part of the task, this suffices for the overall model. For example, the number of misclassified products in Clothing, Shoes & Jewelry dropped from 384 to 256. Overall, this method improves over the top individual classifier and the top bi-modal fused network by 5.5% and 1.2%, respectively (Table 3). Conclusion We have shown that the title classifier can outperform the description classifier, and that the description classifier can outperform the image classifier. Moreover, a tri-modal fused network comprising all three modalities outperformed every bi-modal fused network. The performance improvements can be attributed to the classifiers addressing complementary portions of the task, compensating for each individual classifier's shortcomings. While this study focused on late fusion, an early fusion approach could be explored in the future. In addition, more products, including products that may not fall under the predefined categories, could be added to reduce overfitting. A better text classifier could be built with contextualized word embeddings [18]. Transformers could be considered as replacements for CNNs and RNNs for both text and images [19,20,21]. Finally, one possible extension of our work would be to build a vector representation of the products. Just as word embeddings enabled us to classify text more accurately, a product embedding could be useful for capturing the relationships between products. Such a product embedding could help discover "similar" products for recommendation purposes and serve as input to a model that predicts categories.
2,267
1907.00420
2953607269
The dataset contained 96,806 products belonging to 193 different classes. Note that each product was assigned to exactly one class; hence, Åberg applied a softmax function in the final layer before outputting the class probabilities. Similar to @cite_22 , both late and early fusion were explored, and late fusion yielded better results. Both heuristic policies and network policies were explored. Heuristic policies refer to static rules, for example, the mean of the probabilities from the different modalities. Network policies refer to training a neural network that takes the output probabilities from the different networks and produces a new probability vector.
{ "abstract": [ "Classifying products into categories precisely and efficiently is a major challenge in modern e-commerce. The high traffic of new products uploaded daily and the dynamic nature of the categories raise the need for machine learning models that can reduce the cost and time of human editors. In this paper, we propose a decision level fusion approach for multi-modal product classification using text and image inputs. We train input specific state-of-the-art deep neural networks for each input source, show the potential of forging them together into a multi-modal architecture and train a novel policy network that learns to choose between them. Finally, we demonstrate that our multi-modal network improves the top-1 accuracy @math over both networks on a real-world large-scale product classification dataset that we collected from Walmart.com. While we focus on image-text fusion that characterizes e-commerce domains, our algorithms can be easily applied to other modalities such as audio, video, physical sensors, etc." ], "cite_N": [ "@cite_22" ], "mid": [ "2559721862" ] }
1907.00678
2954440353
Machine learning techniques play a preponderant role in dealing with massive amounts of data and are employed in almost every possible domain. Building a high-quality machine learning model to be deployed in production is a challenging task for both subject matter experts and machine learning practitioners. For a broader adoption and scalability of machine learning systems, the construction and configuration of machine learning workflows need to become more automated. In the last few years, several techniques have been developed in this direction, known as AutoML. In this paper, we present a two-stage optimization process to build data pipelines and configure machine learning algorithms. First, we study the impact of data pipelines compared to algorithm configuration in order to show the importance of data preprocessing over hyperparameter tuning. The second part presents policies to efficiently allocate search time between data pipeline construction and algorithm configuration. Those policies are agnostic to the meta-optimizer. Last, we present a metric to determine whether a data pipeline is specific to or independent of the algorithm, enabling fine-grained pipeline pruning and meta-learning for the cold-start problem.
The difference between the various surrogate approaches lies in the model-space assumption and the acquisition function. Sequential Model-based Algorithm Configuration (SMAC) @cite_19 @cite_7 is based on random forests, as are frameworks built on top of it, such as @cite_29 @cite_18 or @cite_6 . @cite_15 uses a Tree-structured Parzen Estimator (TPE), while @cite_20 is based on Gaussian processes (GP).
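To make the surrogate-model idea concrete, here is a minimal sequential model-based optimization loop in the spirit of these frameworks: a random forest surrogate whose per-tree spread approximates predictive uncertainty, combined with an expected-improvement acquisition. It is an illustrative sketch over a placeholder one-dimensional search space, not the implementation of any cited system.

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor

def objective(x):
    """Placeholder for an expensive evaluation, e.g. a validation error."""
    return (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=5).reshape(-1, 1)  # initial random configurations
y = np.array([objective(x[0]) for x in X])

for _ in range(20):
    forest = RandomForestRegressor(n_estimators=100).fit(X, y)
    cand = rng.uniform(0, 1, size=(500, 1))   # candidate configurations
    preds = np.stack([tree.predict(cand) for tree in forest.estimators_])
    mu, sigma = preds.mean(axis=0), preds.std(axis=0) + 1e-9
    z = (y.min() - mu) / sigma
    ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = cand[np.argmax(ei)]              # most promising candidate
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best configuration:", X[np.argmin(y)], "value:", y.min())
```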
{ "abstract": [ "", "", "Many different machine learning algorithms exist; taking into account each algorithm's hyperparameters, there is a staggeringly large number of possible alternatives overall. We consider the problem of simultaneously selecting a learning algorithm and setting its hyperparameters, going beyond previous work that attacks these issues separately. We show that this problem can be addressed by a fully automated approach, leveraging recent innovations in Bayesian optimization. Specifically, we consider a wide range of feature selection techniques (combining 3 search and 8 evaluator methods) and all classification approaches implemented in WEKA's standard distribution, spanning 2 ensemble methods, 10 meta-methods, 27 base classifiers, and hyperparameter settings for each classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, we show classification performance often much better than using standard selection and hyperparameter optimization methods. We hope that our approach will help non-expert users to more effectively identify machine learning algorithms and hyperparameter settings appropriate to their applications, and hence to achieve improved performance.", "The success of machine learning in a broad range of applications has led to an ever-growing demand for machine learning systems that can be used off the shelf by non-experts. To be effective in practice, such systems need to automatically choose a good algorithm and feature preprocessing steps for a new dataset at hand, and also set their respective hyperparameters. Recent work has started to tackle this automated machine learning (AutoML) problem with the help of efficient Bayesian optimization methods. Building on this, we introduce a robust new AutoML system based on scikit-learn (using 15 classifiers, 14 feature preprocessing methods, and 4 data preprocessing methods, giving rise to a structured hypothesis space with 110 hyperparameters). This system, which we dub AUTO-SKLEARN, improves on existing AutoML methods by automatically taking into account past performance on similar datasets, and by constructing ensembles from the models evaluated during the optimization. Our system won the first phase of the ongoing ChaLearn AutoML challenge, and our comprehensive analysis on over 100 diverse datasets shows that it substantially outperforms the previous state of the art in AutoML. We also demonstrate the performance gains due to each of our contributions and derive insights into the effectiveness of the individual components of AUTO-SKLEARN.", "State-of-the-art algorithms for hard computational problems often expose many parameters that can be modified to improve empirical performance. However, manually exploring the resulting combinatorial space of parameter settings is tedious and tends to lead to unsatisfactory outcomes. Recently, automated approaches for solving this algorithm configuration problem have led to substantial improvements in the state of the art for solving various problems. One promising approach constructs explicit regression models to describe the dependence of target algorithm performance on parameter settings; however, this approach has so far been limited to the optimization of few numerical algorithm parameters on single instances. In this paper, we extend this paradigm for the first time to general algorithm configuration problems, allowing many categorical parameters and optimization for sets of instances. 
We experimentally validate our new algorithm configuration procedure by optimizing a local search and a tree search solver for the propositional satisfiability problem (SAT), as well as the commercial mixed integer programming (MIP) solver CPLEX. In these experiments, our procedure yielded state-of-the-art performance, and in many cases outperformed the previous best configuration approach.", "Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.", "The use of machine learning algorithms frequently involves careful tuning of learning parameters and model hyperparameters. Unfortunately, this tuning is often a \"black art\" requiring expert experience, rules of thumb, or sometimes brute-force search. There is therefore great appeal for automatic approaches that can optimize the performance of any given learning algorithm to the problem at hand. In this work, we consider this problem through the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). We show that certain choices for the nature of the GP, such as the type of kernel and the treatment of its hyperparameters, can play a crucial role in obtaining a good optimizer that can achieve expertlevel performance. We describe new algorithms that take into account the variable cost (duration) of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks." ], "cite_N": [ "@cite_18", "@cite_7", "@cite_29", "@cite_6", "@cite_19", "@cite_15", "@cite_20" ], "mid": [ "", "", "2102539288", "2182361439", "60686164", "1437335841", "2131241448" ] }
0
1907.00678
2954440353
An acquisition function is used to determine the next configuration to be sampled. Most of those functions are based on Bayesian optimization @cite_32 @cite_16 . One popular strategy is to select @math such that it maximizes the expected improvement @cite_28 .
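For reference, the expected improvement criterion mentioned above has a standard closed form that the text does not spell out; this is textbook material rather than content of the cited works. With $f^*$ the best value observed so far and a Gaussian posterior with mean $\mu(x)$ and standard deviation $\sigma(x)$ (minimization setting):

$$\mathrm{EI}(x) = \mathbb{E}\big[\max(f^* - f(x),\, 0)\big] = \big(f^* - \mu(x)\big)\,\Phi(z) + \sigma(x)\,\varphi(z), \qquad z = \frac{f^* - \mu(x)}{\sigma(x)},$$

where $\Phi$ and $\varphi$ denote the standard normal CDF and PDF; the next configuration is then chosen as $\arg\max_x \mathrm{EI}(x)$.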
{ "abstract": [ "Many well known methods for seeking the extremum had been developed on the basis of quadratic approximation.", "Bayesian optimization is an approach to optimizing objective functions that take a long time (minutes or hours) to evaluate. It is best-suited for optimization over continuous domains of less than 20 dimensions, and tolerates stochastic noise in function evaluations. It builds a surrogate for the objective and quantifies the uncertainty in that surrogate using a Bayesian machine learning technique, Gaussian process regression, and then uses an acquisition function defined from this surrogate to decide where to sample. In this tutorial, we describe how Bayesian optimization works, including Gaussian process regression and three common acquisition functions: expected improvement, entropy search, and knowledge gradient. We then discuss more advanced techniques, including running multiple function evaluations in parallel, multi-fidelity and multi-information source optimization, expensive-to-evaluate constraints, random environmental conditions, multi-task Bayesian optimization, and the inclusion of derivative information. We conclude with a discussion of Bayesian optimization software and future research directions in the field. Within our tutorial material we provide a generalization of expected improvement to noisy evaluations, beyond the noise-free setting where it is more commonly applied. This generalization is justified by a formal decision-theoretic argument, standing in contrast to previous ad hoc modifications.", "Bayesian optimization is a sample-efficient approach for global optimization and relies on acquisition functions to guide the search process. Maximizing these functions is inherently complicated, especially in the parallel setting, where acquisition functions are routinely non-convex, high-dimensional and intractable. We present two modern approaches for maximizing acquisition functions and show that 1) sample-path derivatives can be used to optimize acquisition functions and 2) parallel formulations of many acquisition functions are submodular and can therefore be efficiently maximized in greedy fashion with guaranteed near-optimality." ], "cite_N": [ "@cite_28", "@cite_16", "@cite_32" ], "mid": [ "1529817821", "2873705236", "2803167060" ] }
0
1907.00678
2954440353
As an alternative to Bayesian optimization, @cite_26 proposes to use Monte-Carlo Tree Search to iteratively explore a tree-structured search space while pruning the less promising configurations.
{ "abstract": [ "The sensitivity of machine learning (ML) algorithms w.r.t. their hyper-parameters and the difficulty of finding the ML algorithm and hyper-parameter setting best suited to a given dataset has led to the rapidly developing field of automated machine learning (AutoML), at the crossroad of meta-learning and structured optimization. Several international AutoML challenges have been organized since 2015, motivating the development of the Bayesian optimization-based approach Auto-Sklearn (, 2015) and the Bandit-based approach Hyperband (, 2016). In this paper, a new approach, called Monte Carlo Tree Search for Algorithm Configuration (Mosaic), is presented, fully exploiting the tree structure of the algorithm portfolio and hyper-parameter search space. Experiments (on 133 datasets of the OpenML repository) show that Mosaic performances match that of Auto-Sklearn." ], "cite_N": [ "@cite_26" ], "mid": [ "2908381384" ] }
0
1907.00678
2954440353
In @cite_9 , the authors use a genetic algorithm to sample a subset of representative input vectors in order to speed up model training while improving model performance. Genetic algorithms are also used to search for the whole pipeline, as in @cite_36 or @cite_2 .
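As a concrete illustration of this family of methods, the following toy sketch evolves full scikit-learn pipelines with selection, crossover, and mutation. The operator pool, population size, and number of generations are arbitrary assumptions for the example; TPOT (@cite_36) and Autostacker (@cite_2) are far richer systems.

```python
# A toy genetic search over full pipelines (in the spirit of TPOT).
import random
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

PREPROC = [None, StandardScaler, MinMaxScaler, PCA]
MODELS = [LogisticRegression, RandomForestClassifier]
X, y = load_breast_cancer(return_X_y=True)

def random_individual():
    # An individual is a (preprocessor, model) pair of operator classes.
    return (random.choice(PREPROC), random.choice(MODELS))

def fitness(ind):
    prep, model = ind
    steps = ([prep()] if prep else []) + [model()]
    return cross_val_score(make_pipeline(*steps), X, y, cv=3).mean()

population = [random_individual() for _ in range(8)]
for generation in range(5):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:4]                      # selection: keep the best half
    children = [(random.choice(parents)[0],   # crossover: mix parent genes
                 random.choice(parents)[1]) for _ in range(3)]
    mutant = random_individual()              # mutation: a fresh random pipeline
    population = parents + children + [mutant]

best = max(population, key=fitness)
print("best pipeline:", best)
```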
{ "abstract": [ "As the field of data science continues to grow, there will be an ever-increasing demand for tools that make machine learning accessible to non-experts. In this paper, we introduce the concept of tree-based pipeline optimization for automating one of the most tedious parts of machine learning--pipeline design. We implement an open source Tree-based Pipeline Optimization Tool (TPOT) in Python and demonstrate its effectiveness on a series of simulated and real-world benchmark data sets. In particular, we show that TPOT can design machine learning pipelines that provide a significant improvement over a basic machine learning analysis while requiring little to no input nor prior knowledge from the user. We also address the tendency for TPOT to design overly complex pipelines by integrating Pareto optimization, which produces compact pipelines without sacrificing classification accuracy. As such, this work represents an important step toward fully automating machine learning pipeline design.", "Creating high-quality training sets is the first step in designing robust classifiers. However, it is fairly difficult in practice when the data quality is questionable (data is heterogeneous, noisy and or massively large). In this paper, we show how to apply a genetic algorithm for evolving training sets from data corpora, and exploit it for artificial neural networks (ANNs) alongside other state-of-the-art models. ANNs have been proved very successful in tackling a wide range of pattern recognition tasks. However, they suffer from several drawbacks, with selection of appropriate network topology and training sets being one of the most challenging in practice, especially when ANNs are trained using time-consuming back-propagation. Our experimental study (coupled with statistical tests), performed for both real-life and benchmark datasets, proved the applicability of a genetic algorithm to select training data for various classifiers which then generalize well to unseen data.", "In this work, an automatic machine learning (AutoML) modeling architecture called Autostacker is introduced. Autostacker combines an innovative hierarchical stacking architecture and an evolutionary algorithm (EA) to perform efficient parameter search without the need for prior domain knowledge about the data or feature preprocessing. Using EA, Autostacker quickly evolves candidate pipelines with high predictive accuracy. These pipelines can be used in their given form, or serve as a starting point for further augmentation and refinement by human experts. Autostacker finds innovative machine learning model combinations and structures, rather than selecting a single model and optimizing its hyperparameters. When its performance on fifteen datasets is compared with that of other AutoML systems, Autostacker produces superior or competitive results in terms of both test accuracy and time cost." ], "cite_N": [ "@cite_36", "@cite_9", "@cite_2" ], "mid": [ "2309832917", "2889423986", "2963855998" ] }
0
1907.00678
2954440353
The intrinsic difficulty of building a machine learning pipeline lies in the nature of the search space. First, the objective is non-separable, i.e., the marginal performance of an operator @math depends on all the operators on all the paths leading to @math from the source. Second, within the configuration space of a specific operator @math , there may be dependencies between the hyperparameters (e.g., for neural networks, the coefficients @math , @math and @math only make sense for the Adam solver @cite_10 ). Therefore, building a machine learning pipeline mixes selecting a proper sequence of operations with, for each operation, selecting a proper configuration in a structured and conditional space. In contrast, most systems handle the problem by aggregating the whole search space, losing its sequential aspect. A notable exception is @cite_26 , which explores the search space in terms of actions on operators (insertion, deletion, etc.).
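A small sketch of what such a conditional configuration space looks like in practice. The parameter names mirror scikit-learn's MLPClassifier, but the ranges and the sampler itself are illustrative assumptions, not a library API.

```python
# Structured, conditional configuration space: some hyperparameters only
# exist under a given parent choice (e.g. Adam's beta coefficients are
# only meaningful when solver='adam').
import random

def sample_mlp_config():
    config = {"solver": random.choice(["adam", "sgd", "lbfgs"])}
    if config["solver"] == "adam":
        # Conditional branch: these keys only make sense for the Adam solver.
        config["beta_1"] = random.uniform(0.8, 0.999)
        config["beta_2"] = random.uniform(0.9, 0.9999)
        config["epsilon"] = 10 ** random.uniform(-10, -6)
    elif config["solver"] == "sgd":
        config["momentum"] = random.uniform(0.0, 0.99)
    # 'lbfgs' has no extra keys here: the space is tree-shaped, not flat.
    return config

print(sample_mlp_config())
```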
{ "abstract": [ "The sensitivity of machine learning (ML) algorithms w.r.t. their hyper-parameters and the difficulty of finding the ML algorithm and hyper-parameter setting best suited to a given dataset has led to the rapidly developing field of automated machine learning (AutoML), at the crossroad of meta-learning and structured optimization. Several international AutoML challenges have been organized since 2015, motivating the development of the Bayesian optimization-based approach Auto-Sklearn (, 2015) and the Bandit-based approach Hyperband (, 2016). In this paper, a new approach, called Monte Carlo Tree Search for Algorithm Configuration (Mosaic), is presented, fully exploiting the tree structure of the algorithm portfolio and hyper-parameter search space. Experiments (on 133 datasets of the OpenML repository) show that Mosaic performances match that of Auto-Sklearn.", "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm." ], "cite_N": [ "@cite_26", "@cite_10" ], "mid": [ "2908381384", "1522301498" ] }
0
1907.00678
2954440353
To the best of our knowledge, the only approach that uses a non-predetermined sequence of operators is @cite_36 , but it does not allow adding further constraints.
{ "abstract": [ "As the field of data science continues to grow, there will be an ever-increasing demand for tools that make machine learning accessible to non-experts. In this paper, we introduce the concept of tree-based pipeline optimization for automating one of the most tedious parts of machine learning--pipeline design. We implement an open source Tree-based Pipeline Optimization Tool (TPOT) in Python and demonstrate its effectiveness on a series of simulated and real-world benchmark data sets. In particular, we show that TPOT can design machine learning pipelines that provide a significant improvement over a basic machine learning analysis while requiring little to no input nor prior knowledge from the user. We also address the tendency for TPOT to design overly complex pipelines by integrating Pareto optimization, which produces compact pipelines without sacrificing classification accuracy. As such, this work represents an important step toward fully automating machine learning pipeline design." ], "cite_N": [ "@cite_36" ], "mid": [ "2309832917" ] }
0
1907.00462
2946521116
We take interest in the early assessment of risk for depression in social media users. We focus on the eRisk 2018 dataset, which represents users as sequences of their written online contributions. We implement four RNN-based systems to classify the users and explore several aggregation methods to combine predictions on individual posts. Our best model reads through all writings of a user in parallel but uses an attention mechanism to prioritize the most important ones at each timestep.
@cite_1 used a more classical approach to classify Twitter users as being at risk of depression or not. They first manually crafted features that describe users' online behavior and characterize their speech. The measures were computed daily, so a user is represented as a time series of features. Training and prediction were then done with an SVM, using PCA for dimensionality reduction.
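For illustration, a rough sketch of such a setup with scikit-learn: synthetic stand-in data, daily features flattened into one vector per user, then PCA for dimensionality reduction and an SVM for classification. The feature counts, data, and labels are made up for the example; this is not the code of @cite_1.

```python
# PCA + SVM over users represented as time series of daily features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users, n_days, n_features = 200, 30, 5   # e.g. posts/day, negative affect, ...
X = rng.normal(size=(n_users, n_days * n_features))  # flattened time series
y = rng.integers(0, 2, size=n_users)                 # at-risk label (synthetic)

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```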
{ "abstract": [ "Major depression constitutes a serious challenge in personal and public health. Tens of millions of people each year suffer from depression and only a fraction receives adequate treatment. We explore the potential to use social media to detect and diagnose major depressive disorder in individuals. We first employ crowdsourcing to compile a set of Twitter users who report being diagnosed with clinical depression, based on a standard psychometric instrument. Through their social media postings over a year preceding the onset of depression, we measure behavioral attributes relating to social engagement, emotion, language and linguistic styles, ego network, and mentions of antidepressant medications. We leverage these behavioral cues, to build a statistical classifier that provides estimates of the risk of depression, before the reported onset. We find that social media contains useful signals for characterizing the onset of depression in individuals, as measured through decrease in social activity, raised negative affect, highly clustered egonetworks, heightened relational and medicinal concerns, and greater expression of religious involvement. We believe our findings and methods may be useful in developing tools for identifying the onset of major depression, for use by healthcare agencies; or on behalf of individuals, enabling those suffering from depression to be more proactive about their mental health." ], "cite_N": [ "@cite_1" ], "mid": [ "2402700" ] }
Inter and Intra Document Attention for Depression Risk Assessment
In 2015, 4.9 million Canadians aged 15 and over experienced a need for mental health care; 1.6 million felt their needs were partially met or unmet [7]. In 2017, over a third of Ontario students, grades 7 to 12, reported having wanted to talk to someone about their mental health concerns but did not know who to turn to [6]. These numbers highlight a concerning but all too familiar notion: although highly prevalent, mental health concerns often go unheard. Nonetheless, mental disorders can shorten life expectancy by 7-24 years [9]. In particular, depression is a major cause of morbidity worldwide. Although prevalence varies widely, in most countries, the number of persons that would suffer from depression in their lifetime falls between 8 and 12% [15]. Access to proper diagnosis and care is overall lacking because of a variety of reasons, from the stigma surrounding seeking treatment [23] to a high rate of misdiagnosis [25]. These obstacles could be mitigated in some way among social media users by analyzing their output on these platforms to assess their risk of depression or other mental health afflictions. The analysis of user-generated content could give valuable insights into the users' mental health, identify risks, and help provide them with better support [3,11]. To promote such analyses that could lead to the development of tools supporting mental health practitioners and forum moderators, the research community has put forward shared tasks like CLPsych [2] and the CLEF eRisk pilot task [1,18]. Participants must identify users at risk of mental health issues, such as imminent risk of depression, post-traumatic stress disorder, or anorexia. These tasks provide participants with annotated data and a framework for testing the performance of their approaches. In this paper, we present a neural approach to identify social media users at risk of depression from their writings in a subreddit forum, in the context of the eRisk 2018 pilot task. From a technical standpoint, the principal interest of this investigation is the use of different aggregation methods for predictions on groups of documents. Using the power of Recurrent Neural Networks (RNNs) for the sequential treatment of documents, we explore several manners in which to combine predictions on documents to make a prediction on their author.

Dataset

The dataset from the eRisk 2018 shared task [18] consists of the written production of reddit [22] English-speaking users. The dataset was built using the writings of 887 users, and was provided in whole at the beginning of the task. Users in the RISK class have admitted to having been diagnosed with depression; NO RISK users have not. It should be noted that the users' writings, or posts, may originate from separate discussions on the website. The individual writings, however, are not labelled; only the user as a whole is labelled as RISK or NO RISK. The two classes of users are highly imbalanced in the training set, with the positive class counting only 135 users to 752 in the negative class. Table 1 presents some statistics on the task dataset. We use this dataset but consider a simple classification task, as opposed to the early-risk detection that was the object of the shared task.

Models

We represent users as sets of writings rather than sequences of writings. This is partly due to the intuition that the order of writings would not be significant in the context of forums, generally speaking.
It is also due to the fact that treating writings sequentially would be cumbersome, especially if we consider training on all ten chunks. However, we do consider writings as sequences of words, as this is the main strength of RNNs. We therefore write a user $u$ as the set of his $m$ writings, $u = \{x^{(1)}, \ldots, x^{(m)}\}$. A given writing $x^{(j)}$ is then a sequence of words, $x^{(j)} = x^{(j)}_1, \ldots, x^{(j)}_\tau$, with $\tau$ being the index of the last word. Thus, $x^{(j)}_t$ is the $t$-th word of the $j$-th post for a given user.

Aggregating predictions on writings

Late Inter-Document Averaging. We set out to put together an approach that aggregates predictions made individually and sequentially on the writings of a user. That is, we read the different writings of a user in parallel and take the average prediction on them. This is our first model, Late Inter-Document Averaging (LIDA). Using the RNN architecture of our choice, we read each word of a post and update its hidden state,

$$h^{(j)}_t = f(x^{(j)}_t, h^{(j)}_{t-1}; \theta_{post}), \quad (1)$$

where $f$ is the transition function of the chosen RNN architecture, $\theta_{post}$ is the set of parameters of our particular RNN model, and the initial state is set to zero, $h_0 = 0$. In practice, however, we take but a sample of users' writings and trim overlong writings (see Sec. 5). LIDA averages over the final state of the RNN, $h^{(j)}_\tau$, across writings,

$$a = \frac{1}{m} \sum_{j=1}^{m} h^{(j)}_\tau. \quad (2)$$

This average is then projected into a binary prediction for the user,

$$p = \sigma(u^{\top} a), \quad (3)$$

using $\sigma$, the standard logistic sigmoid function, to normalize the output, and a vector of parameters $u$. By averaging over all writings, rather than taking the sum, we ensure that the number of writings does not influence the decision. However, we suspect that regularizing on the hidden state alone will not suffice, as the problem remains essentially the same: gradient correction information will have to travel the entire length of the writings regardless of the corrections made as a result of other writings.

Continual Inter-Document Averaging. Our second model, Continual Inter-Document Averaging (CIDA), therefore aggregates the hidden state across writings at every time step, as opposed to only the final one. A first RNN, represented by its hidden state $h_t$, reads the writings as in Eq. (1). The resulting hidden states are averaged across writings and then fed as the input to a second RNN, represented by $g_t$:

$$a_t = \frac{1}{m} \sum_{j=1}^{m} h^{(j)}_t, \quad (4)$$

$$g_t = f(a_t, g_{t-1}; \theta_{user}). \quad (5)$$

$g_\tau$ is used to make a prediction similarly to Eq. (3).

Inter-document attention. It stands to reason that averaging over the ongoing summary of each document would help in classifying a group of documents. Nonetheless, one would suspect that some documents would be more interesting than others to our task. Even if all documents were equally interesting, their interesting parts might not align well. Because we are reading them in parallel, we should try to prioritize the documents that are interesting at the current time step. CIDA does not offer this possibility, as no weighting of terms is put in place in Eq. (4). Consequently, we turn to the attention mechanism [4] to provide this information. While several manners of both applying and computing the attention mechanism exist [19,8,26], we compute the variant known as general attention [19], which is both learned and content-dependent. In applying it, we introduce Inter-Document Attention (IDA), which will provide a weighted average to our previous model.
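To make the two aggregation schemes above concrete before the attention variants are introduced, here is a small numpy sketch of Eqs. (2)-(4) with random stand-in hidden states. The shapes are assumptions taken from the setup described later (30 posts of 66 words, hidden size 80); this is not the authors' code.

```python
# LIDA averages only the final post states; CIDA averages at every step
# and feeds the result to a second, user-level RNN.
import numpy as np

rng = np.random.default_rng(0)
m, tau, d = 30, 66, 80                # posts, words per post, hidden size
H = rng.normal(size=(m, tau, d))      # h_t^{(j)} for all posts and steps
u = rng.normal(size=d)                # projection vector of Eq. (3)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# LIDA: average of the final hidden states, then a single projection.
a = H[:, -1, :].mean(axis=0)          # Eq. (2)
p_lida = sigmoid(u @ a)               # Eq. (3)

# CIDA: per-step averages become the input sequence of the user-level RNN.
A = H.mean(axis=0)                    # Eq. (4): one averaged vector per step t
print(p_lida, A.shape)                # a probability, and (66, 80)
```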
The computation of $h^{(j)}_t$, the post-level hidden state, remains the same, i.e. Eq. (1). However, these values are compared against the previous user-level hidden state to compute the relevant energy between them,

$$\tilde{\alpha}^{(j)}_t = g_{t-1}^{\top} W_{att}\, h^{(j)}_t, \quad (6)$$

where $W_{att}$ is a matrix of parameters that learns the compatibility between the hidden states of the two RNNs. The resulting energy scalars $\tilde{\alpha}^{(j)}_t$ are mapped to probabilities by way of softmax normalization,

$$\alpha^{(j)}_t = \frac{e^{\tilde{\alpha}^{(j)}_t}}{\sum_{k=1}^{m} e^{\tilde{\alpha}^{(k)}_t}}. \quad (7)$$

This probability is then used to weight the appropriate $h_t$,

$$a_t = \sum_{j=1}^{m} \alpha^{(j)}_t h^{(j)}_t, \quad (8)$$

and $g_t$ is given by Eq. (5). Through the use of this probability weighting, we can understand $a_t$ as an expected document summary at position $t$ when grouping documents together. As in the previous model, a prediction on the user is made from $g_\tau$.

Intra-document attention. We extend our use of the attention mechanism in the aggregation to the parsing of individual documents. Similarly to our weighting of documents in aggregation dependent on the current aggregation state, we compare the current input to past inputs to evince a context for it. This is known in the literature as self-attention [8]. We therefore modify the computation of $h_t$ from Eq. (1) by adding a context vector $c_t$, corresponding to the ongoing context in document $j$ at time $t$:

$$h^{(j)}_t = f(x^{(j)}_t, c^{(j)}_t, h^{(j)}_{t-1}; \theta_{post}). \quad (9)$$

This context vector is computed by comparing past inputs to the present document-level hidden state,

$$\tilde{\alpha}^{(j)}_{t,t'} = h^{(j)\top}_t W_{intra}\, x^{(j)}_{t'}. \quad (10)$$

This weighting is normalized by softmax and used in adding the previous inputs together. We refer to this model as Inter- and Intra-Document Attention (InIDA). This last attention mechanism arises from practical difficulties in learning long-range dependencies by RNNs. While RNNs are theoretically capable of summarizing sequences of arbitrary complexity in their hidden state, numerical considerations make learning this process through gradient descent difficult when the sequences are long or the state is too small [5]. This can be addressed in different manners, such as gating mechanisms [13,10] and the introduction of multiplicative interactions [24]. Self-attention is one such mechanism, where the context vector acts as a reminder of past inputs in the form of a learned expected context. It can be combined with other mechanisms with minimal parameter load.

Preprocessing

As previously mentioned, documents are broken into words. The representation of these words is learned from the entirety of the training documents, all chunks included, using the skip-gram algorithm [20]. All words were turned to lowercase, and only the 40k most frequent words were kept. The embedded representation learned is of size 40, using a window of size five. The embeddings are shared by all models. Documents are trimmed at the end at a length of 66 words, which is longer than 90% of the posts in the dataset. The number of documents varies greatly across user classes. We take small random samples without replacement of 30 documents per user at every iteration (epoch). We contend that sampling the user's writings at every iteration allows us to train for longer, as it is harder for the models to overfit when the components that make up each instance keep changing.

Model configurations

We use the Multiplicative Long Short-Term Memory (mLSTM) [17] architecture as the post-level and user-level RNN, where applicable.
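The inter-document attention step (Eqs. 6-8) can be summarized in a few lines of numpy. This is an illustrative re-implementation with random tensors, not the paper's actual model; the dimensions match the configuration described here (30 posts per user, hidden size 80).

```python
# Inter-document attention: the previous user-level state g_{t-1} scores
# each post-level state h_t^{(j)} through a learned matrix W_att; the
# softmax-weighted average a_t is the "expected document summary" fed to
# the user-level RNN.
import numpy as np

rng = np.random.default_rng(0)
m, d = 30, 80                      # posts per user, hidden size
h_t = rng.normal(size=(m, d))      # post-level hidden states at step t
g_prev = rng.normal(size=d)        # user-level hidden state g_{t-1}
W_att = rng.normal(size=(d, d)) * 0.01  # learned compatibility matrix

energies = h_t @ W_att.T @ g_prev            # Eq. (6): one scalar per post
alphas = np.exp(energies - energies.max())   # Eq. (7): stabilised softmax
alphas /= alphas.sum()
a_t = alphas @ h_t                           # Eq. (8): weighted average summary
print(a_t.shape, alphas.sum())               # (80,) and 1.0
```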
The flexibility of the transition function in mLSTM has been shown to be capable of arriving at highly abstract features on its own, achieving competitive results in sentiment analysis [21]. Due to the limited number of examples, smaller models are required to avoid overfitting. We therefore set the embedded representation at 20 and the size of the hidden state of both RNNs to 80. Parameter counts are shown in Table 2.

Training

For our experiments, we reshuffle the original eRisk 2018 dataset, as the training and test sets do not have the same proportions among labels. To provide our models with more training examples, we divide the dataset 9:1, stratifying across labels, and use 10% of the training set as validation. We train the models using the Adam optimizer [16]. Having posited random intra-user sampling as a means of training longer, we set the training time to 30 epochs, taking the best model on validation over all epochs. As noted, the two classes are highly imbalanced; we use inverse class weighting to counteract this.

Evaluation

Given the nature of the task, which is to prioritize finding positive users, and the class imbalance in the dataset, we use the f1-score as a first metric in validation and in the final testing phase. The f1-score is useful to assess the quality of classification between two unbalanced classes, one of which is designated as the positive class. It is defined as the harmonic mean of precision and recall:

$$\text{precision} = \frac{TP}{TP + FP}, \quad (15)$$

$$\text{recall} = \frac{TP}{TP + FN}, \quad (16)$$

$$\text{f1-score} = 2 \times \frac{\text{precision} \times \text{recall}}{\text{precision} + \text{recall}}. \quad (17)$$

We evaluate our models on the best result on a validation set of 10% of the training data. These best results are selected over 30 epochs.

Results

Our preliminary results in validation are in accordance with our hypotheses: continual aggregation surpasses late aggregation but falls short of the more sophisticated attention model. Moreover, the noticeable difference in performance comes at little to no cost in terms of parameter count.

Conclusion

In this paper, we have put forward four RNN-based models that aggregate documents to make a prediction on their author. We applied these models to the eRisk 2018 dataset, which associates a user, as a sequence of online forum posts, with a binary label that identifies them as being at risk for depression or not. With the goal of using RNNs to read the individual documents, we tested four methods of combining the resulting predictions: LIDA, CIDA, IDA, and InIDA. We also introduced the inter-document attention mechanism. Our preliminary results show promise and confirm the parameter efficiency of the attention mechanism. Future work could involve the use of the dot product alone, which, despite adding no parameters, has been found to be more effective for global attention [19]. An investigation into using late attention aggregation for all hidden states produced across all documents is also necessary.
2,301
1907.00462
2946521116
More similarly to our approach, @cite_4 used Hierarchical Attention Networks @cite_18 to represent user-generated documents. Sentence representations are learned using an RNN with an attention mechanism and are then used to learn the document's representation with the same network architecture. The computation of the attention weights they use is different from ours, as it is non-parametric; their equivalent of Eq. (6) would be $\tilde{\alpha}^{(j)}_t = g_{t-1}^{\top} h^{(j)}_t$. This means that the RNNs learn the attention weights along with the representations of the sequences themselves. This attention function was introduced in @cite_21 under the name of dot attention.
{ "abstract": [ "We propose a hierarchical attention network for document classification. Our model has two distinctive characteristics: (i) it has a hierarchical structure that mirrors the hierarchical structure of documents; (ii) it has two levels of attention mechanisms applied at the wordand sentence-level, enabling it to attend differentially to more and less important content when constructing the document representation. Experiments conducted on six large scale text classification tasks demonstrate that the proposed architecture outperform previous methods by a substantial margin. Visualization of the attention layers illustrates that the model selects qualitatively informative words and sentences.", "An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.", "" ], "cite_N": [ "@cite_18", "@cite_21", "@cite_4" ], "mid": [ "2470673105", "2949335953", "2805917671" ] }
1907.00462
2946521116
The location-based attention @cite_21 is a simpler version of the general attention that we used, as it only takes the target hidden state into account. It is stated as $a_t = \mathrm{softmax}(W_a h_t)$, where $h_t$ is the target hidden state.
{ "abstract": [ "An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker." ], "cite_N": [ "@cite_21" ], "mid": [ "2949335953" ] }
1907.00462
2946521116
We take interest in the early assessment of risk for depression in social media users. We focus on the eRisk 2018 dataset, which represents users as a sequence of their written online contributions. We implement four RNN-based systems to classify the users. We explore several aggregation methods to combine predictions on individual posts. Our best model reads through all writings of a user in parallel but uses an attention mechanism to prioritize the most important ones at each time step.
The attention function introduced in @cite_2 has been improved in @cite_21 , whose authors use a concatenation layer to combine the information of the hidden state and the context vector.
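For illustration, a tiny sketch of the concatenation layer this excerpt refers to, as we read it: the attention context vector is concatenated with the hidden state and passed through a learned projection with a tanh non-linearity. Names and shapes are our assumptions, not taken from the cited papers.

```python
import numpy as np

def combine_concat(h, c, W_c):
    """Concatenation layer: combine the hidden state h (d,) and the
    attention context vector c (d,) into an attentional hidden state (d,).
    W_c is a learned (d, 2d) projection matrix."""
    return np.tanh(W_c @ np.concatenate([c, h]))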
{ "abstract": [ "An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.", "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition." ], "cite_N": [ "@cite_21", "@cite_2" ], "mid": [ "2949335953", "2133564696" ] }
Inter and Intra Document Attention for Depression Risk Assessment
In 2015, 4.9 million Canadians aged 15 and over experienced a need for mental health care; 1.6 million felt their needs were partially met or unmet [7]. In 2017, over a third of Ontario students, grades 7 to 12, reported having wanted to talk to someone about their mental health concerns but did not know who to turn to [6]. These numbers highlight a concerning but all too familiar notion: although highly prevalent, mental health concerns often go unheard. Nonetheless, mental disorders can shorten life expectancy by 7-24 years [9]. In particular, depression is a major cause of morbidity worldwide. Although prevalence varies widely, in most countries, the number of persons that would suffer from depression in their lifetime falls between 8 and 12% [15]. Access to proper diagnosis and care is overall lacking for a variety of reasons, from the stigma surrounding seeking treatment [23] to a high rate of misdiagnosis [25]. These obstacles could be mitigated in some way among social media users by analyzing their output on these platforms to assess their risk of depression or other mental health afflictions. The analysis of user-generated content could give valuable insights into the users' mental health, identify risks, and help provide them with better support [3,11]. To promote such analyses that could lead to the development of tools supporting mental health practitioners and forum moderators, the research community has put forward shared tasks like CLPsych [2] and the CLEF eRisk pilot task [1,18]. Participants must identify users at risk of mental health issues, such as imminent risk of depression, post-traumatic stress disorder, or anorexia. These tasks provide participants with annotated data and a framework for testing the performance of their approaches. In this paper, we present a neural approach to identify social media users at risk of depression from their writings in a subreddit forum, in the context of the eRisk 2018 pilot task. From a technical standpoint, the principal interest of this investigation is the use of different aggregation methods for predictions on groups of documents. Using the power of Recurrent Neural Networks (RNNs) for the sequential treatment of documents, we explore several manners in which to combine predictions on documents to make a prediction on their author.

Dataset The dataset from the eRisk 2018 shared task [18] consists of the written production of reddit [22] English-speaking users. The dataset was built using the writings of 887 users, and was provided in whole at the beginning of the task. Users in the RISK class have admitted to having been diagnosed with depression; NO RISK users have not. It should be noted that the users' writings, or posts, may originate from separate discussions on the website. The individual writings, however, are not labelled. Only the user as a whole is labelled as RISK or NO RISK. The two classes of users are highly imbalanced in the training set, with the positive class counting only 135 users to 752 in the negative class. Table 1 presents some statistics on the task dataset. We use this dataset but consider a simple classification task, as opposed to the early-risk detection that was the object of the shared task.

Models We represent users as sets of writings rather than sequences of writings. This is partly due to the intuition that the order of writings would not be significant in the context of forums, generally speaking.
It is also due to the fact that treating writings sequentially would be cumbersome, especially if we consider training on all ten chunks. However, we do consider writings as sequences of words, as this is the main strength of RNNs. We therefore write a user $u$ as the set of his $m$ writings, $u = \{x^{(1)}, \ldots, x^{(m)}\}$. A given writing $x^{(j)}$ is then a sequence of words, $x^{(j)} = x_1^{(j)}, \ldots, x_\tau^{(j)}$, with $\tau$ being the index of the last word. Thus, $x_t^{(j)}$ is the $t$-th word of the $j$-th post for a given user.

Aggregating predictions on writings

Late Inter-Document Averaging We set out to put together an approach that aggregates predictions made individually and sequentially on the writings of a user. That is, we read the different writings of a user in parallel and take the average prediction on them. This is our first model, Late Inter-Document Averaging (LIDA). Using the RNN architecture of our choice, we read each word of a post and update its hidden state,

$h_t^{(j)} = f(x_t^{(j)}, h_{t-1}^{(j)}; \theta_{post})$. (1)

$f$ is the transition function of the chosen RNN architecture, $\theta_{post}$ is the set of parameters of our particular RNN model, and the initial state is set to zero, $h_0 = 0$. In practice, however, we take but a sample of users' writings and trim overlong writings (see Sec. 5). LIDA averages over the final state of the RNN, $h_\tau^{(j)}$, across writings,

$a = \frac{1}{m} \sum_{j=1}^{m} h_\tau^{(j)}$. (2)

This average is then projected into a binary prediction for the user,

$p = \sigma(u^\top a)$, (3)

using $\sigma$, the standard logistic sigmoid function, to normalize the output, and a vector of parameters, $u$. By averaging over all writings, rather than taking the sum, we ensure that the number of writings does not influence the decision. However, we suspect that regularizing on the hidden state alone will not suffice, as the problem remains essentially the same: gradient correction information will have to travel the entire length of the writings regardless of the corrections made as a result of other writings.

Continual Inter-Document Averaging Our second model, Continual Inter-Document Averaging (CIDA), therefore aggregates the hidden state across writings at every time step, as opposed to only the final one. A first RNN, represented by its hidden state $h_t$, reads the writings as in Eq. 1. The resulting hidden states are averaged across writings and then fed as the input to a second RNN, represented by $g_t$,

$a_t = \frac{1}{m} \sum_{j=1}^{m} h_t^{(j)}$, (4)

$g_t = f(a_t, g_{t-1}; \theta_{user})$. (5)

$g_\tau$ is used to make a prediction similarly to Eq. 3 (a small sketch of these two averaging schemes follows below).

Inter-document attention It stands to reason that averaging over the ongoing summary of each document would help in classifying a group of documents. Nonetheless, one would suspect that some documents would be more interesting than others to our task. Even if all documents were equally interesting, their interesting parts might not align well. Because we are reading them in parallel, we should try to prioritize the documents that are interesting at the current time step. CIDA does not offer this possibility, as no weighting of terms is put in place in Eq. 4. Consequently, we turn to the attention mechanism [4] to provide this information. While several manners of both applying and computing the attention mechanism exist [19,8,26], we compute the variant known as general attention [19], which is both learned and content-dependent. In applying it, we introduce Inter-Document Attention (IDA), which will provide a weighted average to our previous model.
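As referenced above, here is a minimal NumPy sketch of the two averaging schemes (Eqs. 2-5). It is illustrative only; the user-level RNN transition is passed in as an assumed callable rather than implemented.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lida_predict(H_final, u):
    """LIDA: average the final post-level states (Eq. 2) and project them
    to a probability (Eq. 3). H_final is (m, d); u is the (d,) output vector."""
    a = H_final.mean(axis=0)   # Eq. 2: mean of final hidden states
    return sigmoid(u @ a)      # Eq. 3: user-level probability

def cida_step(H_t, g_prev, user_rnn_step):
    """CIDA: average the post-level states at time t (Eq. 4) and feed the
    result to the user-level RNN (Eq. 5). user_rnn_step is any RNN
    transition function g_t = f(a_t, g_{t-1})."""
    a_t = H_t.mean(axis=0)             # Eq. 4
    return user_rnn_step(a_t, g_prev)  # Eq. 5
```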
2,301
1907.00462
2946521116
We take interest in the early assessment of risk for depression in social media users. We focus on the eRisk 2018 dataset, which represents users as a sequence of their written online contributions. We implement four RNN-based systems to classify the users. We explore several aggregation methods to combine predictions on individual posts. Our best model reads through all writings of a user in parallel but uses an attention mechanism to prioritize the most important ones at each time step.
Such an attention mechanism was developed as part of Neural Turing Machines @cite_23 , where the attention is focused on inputs that are similar to the values in memory.
{ "abstract": [ "We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples." ], "cite_N": [ "@cite_23" ], "mid": [ "2950527759" ] }
Inter and Intra Document Attention for Depression Risk Assessment
2,301
1907.00322
2770651296
Scalability is a major issue for Internet of Things (IoT) as the total amount of traffic data collected and/or the number of sensors deployed grow. In some IoT applications such as healthcare, power consumption is also a key design factor for the IoT devices. In this paper, a multi-signal compression and encoding method based on Analog Joint Source Channel Coding (AJSCC) is proposed that works fully in the analog domain without the need for power-hungry Analog-to-Digital Converters (ADCs). Compression is achieved by quantizing all the input signals but one. While saving power, this method can also reduce the number of devices by combining one or more sensing functionalities into a single device (called 'AJSCC device'). Apart from analog encoding, AJSCC devices communicate to an aggregator node (FPMM receiver) using a novel Frequency Position Modulation and Multiplexing (FPMM) technique. Such a joint modulation and multiplexing technique presents three major advantages: it is robust to interference at particular frequency bands, it protects against eavesdropping, and it consumes low power due to a very low Signal-to-Noise Ratio (SNR) operating region at the receiver. Performance of the proposed multi-signal compression method and FPMM technique is evaluated via simulations in terms of Mean Square Error (MSE) and Miss Detection Rate (MDR), respectively.
In the IoT healthcare and BAN domain, all of the existing solutions do sensing and communication in the digital domain (using ADCs/DACs/microprocessors), which needs more power than analog sensing and communication. For example, Yang's group @cite_18 developed a miniaturized node that incorporates wireless communication, on-board processing, nine-axis motion tracking, and other sensors @cite_0 . It also developed an e-AR sensor @cite_27 , a small device to be worn behind the ear that captures information about the balance of the wearer such as gait, posture, skeletal joint shock-wave transmission, and activity of the individual. Yuce's group developed techniques based on Ultra Wide Band (UWB) wireless technology to reduce the power consumption of body-worn sensors @cite_11 @cite_8 . Another important example is the activity recognition of the user using various body sensors, viz. accelerometer and gyroscope data. The data from the sensors is digitally processed and stored in a wrist-band device that syncs the data with the mobile phone using Bluetooth technology @cite_19 . Unlike these approaches, we adopt an entirely different approach based on analog sensing and communication that does not use any power-hungry ADCs (see Table ).
{ "abstract": [ "Some patients, especially patients with chronic diseases such as heart disease, require continuous monitoring of their condition. Wearable devices for patient monitoring were introduced many years ago – for instance, wearable ambulatory ECG (electrocardiogram) recorders commonly known as Holter monitors (Figure 1) are used for monitoring cardiac patients. However, these monitors are quite bulky and can only record the signal for a limited time. Patients are often asked to wear a Holter monitor for a few days and then return to the clinic for diagnosis. This often overlooks transient but life-threatening events. In addition, we don’t know under what condition the signals are acquired, and this often leads to false alarms. For example, a sudden rise in heart rate may be caused by emotion, such as watching a horror movie, or by exercise, rather than by a heart condition. Body sensing To address these issues, the concept of Body Sensor Networks (BSN) was first proposed in 2002 by Prof. Guang-Zhong Yang from Imperial College London. The aim of the BSN is to provide a truly personalised monitoring platform that is pervasive, intelligent, and Figure 1 A patient wearing a Holter monitor", "", "Objective: This paper discusses the evolution of pervasive healthcare from its inception for activity recognition using wearable sensors to the future of sensing implant deployment and data processing. Methods: We provide an overview of some of the past milestones and recent developments, categorized into different generations of pervasive sensing applications for health monitoring. This is followed by a review on recent technological advances that have allowed unobtrusive continuous sensing combined with diverse technologies to reshape the clinical workflow for both acute and chronic disease management. We discuss the opportunities of pervasive health monitoring through data linkages with other health informatics systems including the mining of health records, clinical trial databases, multiomics data integration, and social media. Conclusion: Technical advances have supported the evolution of the pervasive health paradigm toward preventative, predictive, personalized, and participatory medicine. Significance: The sensing technologies discussed in this paper and their future evolution will play a key role in realizing the goal of sustainable healthcare systems.", "Body Sensor Networks aim to capture the state of the user and its environment by utilizing from information heterogeneous sensors, and allow continuous monitoring of numerous physiological signals, where these sensors are attached to the subject's body. This can be immensely useful in activity recognition for identity verification, health and ageing and sport and exercise monitoring applications. In this paper, the application of body sensor networks for automatic and intelligent daily activity monitoring for elderly people, using wireless body sensors and smartphone inertial sensors has been reported. The scheme uses information theory-based feature ranking algorithms and classifiers based on random forests, ensemble learning and lazy learning. 
Extensive experiments using different publicly available datasets of human activity show that the proposed approach can assist in the development of intelligent and automatic real time human activity monitoring technology for eHealth application scenarios for elderly, disabled and people with special needs.", "This paper analyses gait patterns of patients with Parkinson;s Disease (PD) based on the acceleration data given by an e-AR sensor. Ten PD patients wearing the e-AR sensor walked along a 7m walkway and each session contained 16 repeated trials. An iterative algorithm has been proposed to produce robust estimations in the case of measurement noise and short-duration of gait signals. Step-frequency as a gait parameter derived from the estimated heel-contacts is calculated and validated using the CODA motion-capture system. Intersession variability of step-frequency for each patient and the overall variability across patients demonstrate a good agreement between estimations from the e-AR and CODA systems.", "The basic requirement of wireless healthcare monitoring systems is to send physiological signals acquired from implantable or on-body sensor nodes to a remote location. Low-power consumption is required for wireless healthcare monitoring systems since most medical sensor nodes are battery powered. The emergence of new technologies in measuring physiological signals has increased the demand for high data rate transmission systems. Ultra-wide band (UWB) is a suitable wireless technology to achieve high data rates while keeping power consumption and form factors small. Although UWB transmitters are designed based on simple techniques, UWB receivers require complex hardware and consume comparatively higher power. In order to achieve reliable low power two-way communication, a sensor node can be constructed using a UWB transmitter and a narrow band receiver. This paper proposes a new medium access control (MAC) protocol based on a dual-band physical layer technology. Co-simulation models based on MATLAB and OPNET have been developed to analyze the performance of the proposed MAC protocol. We analyzed the performance of the MAC protocol for a realistic scenario where both implantable and wearable sensor nodes are involved in the data transmission. Priority-based packet transmission techniques have been used in the MAC protocol to serve different sensors according to their QoS requirements. Analysis is done with regard to important network parameters, such as packet loss ratio, packet delay, percentage throughput, and power consumption." ], "cite_N": [ "@cite_18", "@cite_8", "@cite_0", "@cite_19", "@cite_27", "@cite_11" ], "mid": [ "95821903", "", "2143221170", "2520465429", "1940704660", "2042956874" ] }
Analog Signal Compression and Multiplexing Techniques for Healthcare Internet of Things
0
1907.00322
2770651296
Scalability is a major issue for Internet of Things (IoT) as the total amount of traffic data collected and/or the number of sensors deployed grow. In some IoT applications such as healthcare, power consumption is also a key design factor for the IoT devices. In this paper, a multi-signal compression and encoding method based on Analog Joint Source Channel Coding (AJSCC) is proposed that works fully in the analog domain without the need for power-hungry Analog-to-Digital Converters (ADCs). Compression is achieved by quantizing all the input signals but one. While saving power, this method can also reduce the number of devices by combining one or more sensing functionalities into a single device (called 'AJSCC device'). Apart from analog encoding, AJSCC devices communicate to an aggregator node (FPMM receiver) using a novel Frequency Position Modulation and Multiplexing (FPMM) technique. Such a joint modulation and multiplexing technique presents three major advantages: it is robust to interference at particular frequency bands, it protects against eavesdropping, and it consumes low power due to a very low Signal-to-Noise Ratio (SNR) operating region at the receiver. Performance of the proposed multi-signal compression method and FPMM technique is evaluated via simulations in terms of Mean Square Error (MSE) and Miss Detection Rate (MDR), respectively.
Shannon mapping has been applied in a number of applications such as Software-Defined Radio (SDR) systems @cite_6 , optical digital communications @cite_5 , Compressive Sensing (CS) @cite_16 , and digital video transmissions @cite_20 . All these applications use power-hungry ADCs and other digital components, making such implementations unsuitable for healthcare and other low-power IoT solutions. Some works have studied the N:1 spiral-type mapping @cite_15 . The advantage of considering the rectangular-type Shannon mapping is that there are existing low-power, all-analog circuit realizations for rectangular-type mapping (our previous work @cite_7 ). Using this approach, sensors can be designed using all-analog circuits that can compress multiple signals into one signal, thereby consuming less power. The signals from multiple sensors are multiplexed at different frequency locations in an interleaved pattern (a toy sketch of one such placement follows below). A similar pattern has been studied for the topic of pilot placement @cite_12 @cite_23 .
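As referenced above, here is a toy illustration of what an interleaved frequency placement could look like. The assignment rule below is entirely our assumption for illustration; the exact FPMM placement may differ.

```python
def interleaved_frequencies(n_sensors, n_levels, f0, delta_f):
    """Hypothetical interleaved placement: level l of sensor k is assigned
    the frequency f0 + (l * n_sensors + k) * delta_f, so that consecutive
    frequency slots rotate among the sensors."""
    return {k: [f0 + (l * n_sensors + k) * delta_f for l in range(n_levels)]
            for k in range(n_sensors)}

# Example (illustrative values): 3 sensors, 4 levels each,
# base frequency 900 MHz, 10 kHz slot spacing.
plan = interleaved_frequencies(3, 4, 900e6, 10e3)
```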
{ "abstract": [ "A low-complexity and low-power all-analog circuit is proposed to perform efficiently Analog Joint Source Channel Coding (AJSCC). The proposed idea is to adopt Voltage Controlled Voltage Source (VCVS) to realize the rectangular-type mapping in AJSCC. The proposal is verified by Spice simulations as well as via breadboard and Printed Circuit Board (PCB) implementations. Field testing results indicate that the design is feasible for low-complexity and low-power systems such as wireless sensor networks for environmental monitoring.", "Recently, analog joint source-channel coding has been proposed as a means of achieving near-optimum performance for high data rates with a very low complexity. However, no experimental evaluation showing the practical feasibility of this scheme has been performed to date. In this paper, we describe a software-defined radio implementation of an analog joint source-channel coded wireless transmission system. Experimental evaluation carried out in an indoor environment making use of a wireless testbed show that the performance perfectly matches that originally reported by simulations in additive white Gaussian noise channels for signal-to-noise ratio values below 20 dB.", "", "An analog joint source channel coding (JSCC) system is developed for wireless optical communications. Source symbols are mapped directly onto channel symbols using space filling curves and then a non-linear stretching function is used to reduce distortion. Different from digital systems, the proposed scheme does not require long block lengths to achieve good performance reducing the complexity of the decoder significantly. This paper focuses on intensity-modulated direct-detection (IM DD) optical wireless systems. First, a theoretical analysis of the IM DD wireless optical channel is presented and the prototype communication system designed to transmit data using analog JSCC is introduced. The nonlinearities of the real channel are studied and characterized. A novel technique to mitigate the channel nonlinearities is presented. The performance of the real system follows the simulations and closely approximates the theoretical limits. The proposed system is then used for image transmission by first taking samples of a set of images using compressive sensing and then encoding the measurements using analog JSCC. Both simulation and experimental results are shown.", "We consider the use of spatial diversity to improve the performance of analog joint source-channel coding in wireless fading channels. The communication system analyzed in this paper consists of discrete-time all-analog-processing joint source-channel coding where Maximum Likelihood (ML) and Minimum Mean Square Error (MMSE) detection are employed. By assuming a fast-fading Rayleigh channel, we show that MMSE performs much better than ML at high Channel Signal-to-Noise Ratios (CSNR) in single-antenna wireless systems. However, such performance gap can be significantly reduced by using multiple receive antennas, thus making low complexity ML decoding very attractive in the case of receive diversity. Moreover, we show that the analog scheme can be considerably robust to imperfect channel estimation. 
In addition, as an alternative to multiple antennas, we also consider spatial diversity through cooperative communications, and show that the application of the Amplify-and-Forward (AF) protocol with single antenna nodes leads to similar results than when two antennas are available at the receiver and Maximal Ratio Combining (MRC) is applied. Finally, we show that the MMSE implementation of the analog scheme performs very close to the unconstrained capacity of digital schemes using scalar quantization, while its complexity is much lower than that of capacity-approaching digital systems.", "We propose a low delay and low complexity sensor system based on the combination of Shannon-Kotel'nikov mapping and compressed sensing (CS). The proposed system uses nonlinear analog mappings on the CS measurements to increase their immunity against channel noise. Numerical results show that the proposed purely-analog system outperforms the state-of-the-art purely CS systems in terms of signal-to-distortion ratio. In addition to sparsity knowledge, we use a statistical characterization of the observed signal to further improve system performance.", "This paper describes a channel estimation scheme which utilizes interlaced transmit pilots to estimate multiple input multiple output (MIMO) channel responses in orthogonal frequency division multiplexing (OFDM) systems. This least square (LS) based channel estimation scheme adopts discrete Fourier transform (DFT) to improve the estimation accuracy. The constraint condition of the estimation scheme is obtained using frequency-domain sampling theorem. Simulation shows that the proposed scheme shows good stability when the constraint condition fails, which is much better than other two commonly-used MIMO-OFDM channel estimation schemes.", "Disclosed is a method, circuit and system for communicating data. A data value to be transmitted from a data source transmitter or transceiver to a downstream receiver or transceiver may be Shannon mapped, by functionally associated processing mapping logic, to a point on a shape within a higher dimensional plane. Different portions of the shape, for example branches of a spiral, may be designated by a portion or branch number. Coordinates of the Shannon mapping, or another descriptors, of the Shannon mapped point may be transmitted using analog transmission methods. A set of data values may be Shannon mapped and transmitted to a downstream receiver transceiver in series. For each set of mapped and transmitted data values, processing logic may calculate a branch ambiguity resolution factor. The branch ambiguity resolution factor for each set of values may be transmitted to the downstream receiver transceiver before, after or with the data values. Decoding logic associated with the downstream receiver transceiver may then use the branch ambiguity resolution factor to convert decode received coordinates associated with the set of values into the data values." ], "cite_N": [ "@cite_7", "@cite_6", "@cite_23", "@cite_5", "@cite_15", "@cite_16", "@cite_12", "@cite_20" ], "mid": [ "2509630996", "2156496710", "", "2102751195", "2031083855", "1966102612", "2126573333", "1539066163" ] }
Analog Signal Compression and Multiplexing Techniques for Healthcare Internet of Things
0
1907.00322
2770651296
Scalability is a major issue for Internet of Things (IoT) as the total amount of traffic data collected and/or the number of sensors deployed grow. In some IoT applications such as healthcare, power consumption is also a key design factor for the IoT devices. In this paper, a multi-signal compression and encoding method based on Analog Joint Source Channel Coding (AJSCC) is proposed that works fully in the analog domain without the need for power-hungry Analog-to-Digital Converters (ADCs). Compression is achieved by quantizing all the input signals but one. While saving power, this method can also reduce the number of devices by combining one or more sensing functionalities into a single device (called 'AJSCC device'). Apart from analog encoding, AJSCC devices communicate to an aggregator node (FPMM receiver) using a novel Frequency Position Modulation and Multiplexing (FPMM) technique. Such a joint modulation and multiplexing technique presents three major advantages: it is robust to interference at particular frequency bands, it protects against eavesdropping, and it consumes low power due to a very low Signal-to-Noise Ratio (SNR) operating region at the receiver. Performance of the proposed multi-signal compression method and FPMM technique is evaluated via simulations in terms of Mean Square Error (MSE) and Miss Detection Rate (MDR), respectively.
This paper is the first work to propose this structure for multiplexing the data of AJSCC sensors. The table compares the power numbers of our circuit @cite_7 with state-of-the-art wireless sensor motes (all of which are digital). We can notice that @math is possible with our circuit, which is essential in low-power applications. The existing circuit realizations of spiral-type mapping are also all based on digital circuits and systems @cite_6 . In this scenario, it is worth noting that Hybrid Digital-Analog (HDA) coding can also perform signal compression @cite_26 @cite_17 . However, the digital part still needs digitization of the signals. Contrary to all these approaches, we propose signal compression and encoding in the analog domain with no need for ADCs/DACs/microprocessors. To show the feasibility of our vision (analog compression and encoding), we previously developed a novel circuit to compress two signals @cite_7 and verified its applicability to two pathological signals (molecular biomarkers and a physiological signal) @cite_21 . In this paper, we extend the theory to N-dimensional signal compression and propose novel multiplexing techniques that address the above-mentioned challenges of scalability and power in the context of healthcare IoT as one of the key applications.
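To ground the discussion, here is a minimal sketch of a 2:1 rectangular-type AJSCC mapping as we understand it from the description above: one signal is quantized, the other stays analog, and the pair is encoded as a position along a rectangular space-filling curve. This is our reading, not the circuit of @cite_7; the N:1 extension would quantize all signals but one in the same spirit.

```python
def ajscc_encode(x, y, L):
    """2:1 rectangular-type AJSCC sketch. x, y are assumed normalized to
    [0, 1]; y is quantized into L levels, x stays analog."""
    level = min(int(y * L), L - 1)          # quantize y into one of L levels
    leg = x if level % 2 == 0 else 1.0 - x  # alternate direction so the curve is continuous
    return level + leg                      # single analog value in [0, L)

def ajscc_decode(z, L):
    """Invert the sketch above: recover the analog x and the quantized y."""
    level = min(int(z), L - 1)
    leg = z - level
    x = leg if level % 2 == 0 else 1.0 - leg
    y = (level + 0.5) / L                   # mid-point of the quantization cell
    return x, y
```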
{ "abstract": [ "In this paper, we study transmission of a memoryless Laplacian source over three types of channels: additive white Laplacian noise (AWLN), additive white Gaussian noise (AWGN), and slow flat-fading Rayleigh channels under both bandwidth compression and bandwidth expansion. For this purpose, we analyze two well-known hybrid digital-analog (HDA) joint source-channel coding schemes for bandwidth compression and one for bandwidth expansion. Then we obtain achievable (absolute-error) distortion regions of the HDA schemes for the matched signal-to-noise ratio (SNR) case as well as the mismatched SNR scenario. Using numerical examples, it is shown that these schemes can achieve a distortion very close to the provided lower bound (for the AWLN channel) and to the optimum performance theoretically attainable bound (for AWGN and Rayleigh fading channels) on mean-absolute error distortion under matched SNR conditions. In addition, a non-linear analog coding scheme is analyzed, and its performance is compared to the HDA schemes for bandwidth compression under both matched and mismatched SNR scenarios. The results show that the HDA schemes outperform the non-linear analog coding over the whole CSNR region.", "A low-complexity and low-power all-analog circuit is proposed to perform efficiently Analog Joint Source Channel Coding (AJSCC). The proposed idea is to adopt Voltage Controlled Voltage Source (VCVS) to realize the rectangular-type mapping in AJSCC. The proposal is verified by Spice simulations as well as via breadboard and Printed Circuit Board (PCB) implementations. Field testing results indicate that the design is feasible for low-complexity and low-power systems such as wireless sensor networks for environmental monitoring.", "A low-power wearable wireless sensor measuring both molecular biomarkers and physiological signals is proposed, where the former are measured by a microfluidic biosensing system while the latter are measured electrically. The low-power consumption of the sensor is achieved by an all-analog circuit implementing Analog Joint Source-Channel Coding (AJSCC) compression. The sensor is applicable to a wide range of biomedical applications that require real-time concurrent molecular biomarker and physiological signal monitoring.", "Recently, analog joint source-channel coding has been proposed as a means of achieving near-optimum performance for high data rates with a very low complexity. However, no experimental evaluation showing the practical feasibility of this scheme has been performed to date. In this paper, we describe a software-defined radio implementation of an analog joint source-channel coded wireless transmission system. Experimental evaluation carried out in an indoor environment making use of a wireless testbed show that the performance perfectly matches that originally reported by simulations in additive white Gaussian noise channels for signal-to-noise ratio values below 20 dB.", "We consider the problem of sending a bivariate Gaussian source S=(S1,S2) across a power-limited two-user Gaussian broadcast channel. User i (i=1,2) observes the transmitted signal corrupted by Gaussian noise with power σi2 and desires to estimate Si. We study hybrid digital-analog (HDA) joint source-channel coding schemes and analyze the region of (squared-error) distortion pairs that are simultaneously achievable. Two cases are considered: 1) broadcasting with bandwidth compression, and 2) broadcasting with bandwidth expansion. 
We modify and adapt HDA schemes of and , originally proposed for broadcasting a single common Gaussian source, in order to provide achievable distortion regions for broadcasting correlated Gaussian sources. For comparison, we also extend the outer bound of from the matched source-channel bandwidth case to the bandwidth mismatch case." ], "cite_N": [ "@cite_26", "@cite_7", "@cite_21", "@cite_6", "@cite_17" ], "mid": [ "2118907934", "2509630996", "2759591066", "2156496710", "2095565099" ] }
Analog Signal Compression and Multiplexing Techniques for Healthcare Internet of Things
0
1907.00069
2956033525
In this paper, a 1D convolutional neural network is designed for classification tasks of leaves with the centroid contour distance curve (CCDC) as the single feature. With this classifier, a simple feature such as the CCDC shows more discriminating power than previously thought. The same architecture can also be applied to classifying 1-dimensional time series with little change. Experiments on some benchmark datasets show that this architecture can provide classification accuracies that are higher than some existing methods. Code for the paper is available at this https URL Project.
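Since the abstract's single feature is the CCDC, here is a short sketch of how such a curve might be computed from a leaf contour. The resampling and normalization choices are our assumptions for illustration, not necessarily those of the paper.

```python
import numpy as np

def ccdc(contour, n_samples=128):
    """Centroid contour distance curve: distance from the shape centroid
    to ordered boundary points, resampled to a fixed length and
    scale-normalized. contour: (N, 2) array of boundary coordinates in
    traversal order."""
    centroid = contour.mean(axis=0)
    d = np.linalg.norm(contour - centroid, axis=1)   # distance per boundary point
    idx = np.linspace(0, len(d) - 1, n_samples).astype(int)  # fixed-length resample
    curve = d[idx]
    return curve / curve.max()   # divide by max distance for scale invariance
```

The resulting fixed-length vector can then be fed directly to a 1D convolutional classifier.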
On the side of shape features, they can be extracted based on botanical characteristics @cite_20 @cite_11 . These features may include: Aspect Ratio, Rectangularity, Convex Area Ratio, Convex Perimeter Ratio, Sphericity, Circularity, Eccentricity, Form Factor, etc. @cite_1 discussed some other features applied to leaf shapes and introduced two new multiscale triangle representations. There is also a lot of other work with more in-depth designs aimed at general shapes rather than just leaves. @cite_25 defines the inner distance of shape contours to build shape descriptors. @cite_12 develops the visual descriptor called CENTRIST (CENsus TRansform hISTogram) for scene recognition; it achieves good performance when applied to leaf images. The authors of @cite_21 use the transformation from shape contours to 1-dimensional time series and present the shapelet method for shape recognition. @cite_24 describes a hierarchical representation for two-dimensional objects that captures shape information at multiple levels of resolution for matching deformable shapes. Features coming from different methods can be stacked together; these bagged features can usually help provide better performance, as discussed in @cite_3 .
{ "abstract": [ "Classification of time series has been attracting great interest over the past decade. Recent empirical evidence has strongly suggested that the simple nearest neighbor algorithm is very difficult to beat for most time series problems. While this may be considered good news, given the simplicity of implementing the nearest neighbor algorithm, there are some negative consequences of this. First, the nearest neighbor algorithm requires storing and searching the entire dataset, resulting in a time and space complexity that limits its applicability, especially on resource-limited sensors. Second, beyond mere classification accuracy, we often wish to gain some insight into the data. In this work we introduce a new time series primitive, time series shapelets, which addresses these limitations. Informally, shapelets are time series subsequences which are in some sense maximally representative of a class. As we shall show with extensive empirical evaluations in diverse domains, algorithms based on the time series shapelet primitives can be interpretable, more accurate and significantly faster than state-of-the-art classifiers.", "In this paper we introduce a new multiscale shape-based approach for leaf image retrieval. The leaf is represented by local descriptors associated with margin sample points. Within this local description, we study four multiscale triangle representations: the well known triangle area representation (TAR), the triangle side lengths representation (TSL) and two new representations that we denote triangle oriented angles (TOA) and triangle side lengths and angle representation (TSLA). Unlike existing TAR approaches, where a global matching is performed, the similarity measure is based on a locality sensitive hashing of local descriptors. The proposed approach is invariant under translation, rotation and scale and robust under partial occlusion. Evaluations made on four public leaf datasets show that our shape-based approach achieves a high retrieval accuracy w.r.t. state-of-art methods.", "Plant species classification using leaf samples is a challenging and important problem to solve. This paper introduces a new data set of sixteen samples each of one-hundred plant species; and describes a method designed to work in conditions of small training set size and possibly incomplete extraction of features. This motivates a separate processing of three feature types: shape, texture, and margin; combined using a probabilistic framework. The texture and margin features use histogram accumulation, while a normalised description of contour is used for the shape. Two previously published methods are used to generate separate posterior probability vectors for each feature, using data associated with the k-Nearest Neighbour apparatus. The combined posterior estimates produce the final classification (where missing features could be omitted). We show that both density estimators achieved a 96 mean accuracy of classification when combining the three features in this way (training on 15 samples with unseen cross validation). In addition, the framework can provide an upper bound on the Bayes Risk of the classification problem, and thereby assess the accuracy of the density estimators. Lastly, the high performance of the method is demonstrated for small training set sizes: 91 accuracy is observed with only four training samples.", "We describe a new hierarchical representation for two-dimensional objects that captures shape information at multiple levels of resolution. 
This representation is based on a hierarchical description of an object's boundary and can be used in an elastic matching framework, both for comparing pairs of objects and for detecting objects in cluttered images. In contrast to classical elastic models, our representation explicitly captures global shape information. This leads to richer geometric models and more accurate recognition results. Our experiments demonstrate classification results that are significantly better than the current state-of-the-art in several shape datasets. We also show initial experiments in matching shapes to cluttered images.", "In this paper, an effective shape-based leaf image retrieval system is presented. A new contour descriptor is defined which reduces the number of points for the shape representation considerably. This shape representation is based on the curvature of the leaf contour and it deals with the scale factor in a novel and compact way. A two-step algorithm for retrieval is used. In a first step, the database is reduced using some geometrical features. Then a similarity measure between the contour representations is used to rank conveniently leaf images on the database. We implemented a prototype system based on these features and performed several experiments to show its effectiveness for plant species identification.", "Part structure and articulation are of fundamental importance in computer and human vision. We propose using the inner-distance to build shape descriptors that are robust to articulation and capture part structure. The inner-distance is defined as the length of the shortest path between landmark points within the shape silhouette. We show that it is articulation insensitive and more effective at capturing part structures than the Euclidean distance. This suggests that the inner-distance can be used as a replacement for the Euclidean distance to build more accurate descriptors for complex shapes, especially for those with articulated parts. In addition, texture information along the shortest path can be used to further improve shape classification. With this idea, we propose three approaches to using the inner-distance. The first method combines the inner-distance and multidimensional scaling (MDS) to build articulation invariant signatures for articulated shapes. The second method uses the inner-distance to build a new shape descriptor based on shape contexts. The third one extends the second one by considering the texture information along shortest paths. The proposed approaches have been tested on a variety of shape databases, including an articulated shape data set, MPEG7 CE-Shape-1, Kimia silhouettes, the ETH-80 data set, two leaf data sets, and a human motion silhouette data set. In all the experiments, our methods demonstrate effective performance compared with other algorithms", "CENsus TRansform hISTogram (CENTRIST), a new visual descriptor for recognizing topological places or scene categories, is introduced in this paper. We show that place and scene recognition, especially for indoor environments, require its visual descriptor to possess properties that are different from other vision domains (e.g., object recognition). CENTRIST satisfies these properties and suits the place and scene recognition task. It is a holistic representation and has strong generalizability for category recognition. CENTRIST mainly encodes the structural properties within an image and suppresses detailed textural information. 
Our experiments demonstrate that CENTRIST outperforms the current state of the art in several place and scene recognition data sets, compared with other descriptors such as SIFT and Gist. Besides, it is easy to implement and evaluates extremely fast.", "Plant has plenty use in foodstuff, medicine and industry. And it is also vitally important for environmental protection. However, it is an important and difficult task to recognize plant species on earth. Designing a convenient and automatic recognition system of plants is necessary and useful since it can facilitate fast classifying plants, and understanding and managing them. In this paper, a leaf database from different plants is firstly constructed. Then, a new classification method, referred to as move median centers (MMC) hypersphere classifier, for the leaf database based on digital morphological feature is proposed. The proposed method is more robust than the one based on contour features since those significant curvature points are hard to find. Finally, the efficiency and effectiveness of the proposed method in recognizing different plants is demonstrated by experiments." ], "cite_N": [ "@cite_21", "@cite_1", "@cite_3", "@cite_24", "@cite_20", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "2029438113", "1976814955", "2329838488", "2125310690", "2106950129", "2165414070", "2113855951", "2146364193" ] }
A 1d convolutional network for leaf and time series classification
A vast number of plant species exist on Earth; according to [1,2], there are about 220,000 to 420,000 different species for flowering plants alone. The large number of plant species, together with large in-species variations and small cross-species variations, makes identifying them a difficult and tedious task for humans, particularly for non-experts. With the fast development of machine learning and deep learning methodologies, as well as the growing power of computation, automatic recognition of these species becomes a more and more natural solution. From a descriptive point of view, plant identification is traditionally based on observations of a plant's organs, such as flowers, leaves, seeds, etc. A large portion of species information is contained in leaves, which are also present for a considerable part of a plant's life cycle; this is a benefit for database construction. Traditionally, features from leaves can be roughly divided into three categories: shape, color, and texture. Shape descriptors (especially the contour) are usually more robust than the other two: for a single leaf, color descriptors may vary depending on lighting conditions, image format, etc., and texture descriptors can vary if there are worm holes on the leaf. Another advantage of a shape descriptor is that features like the centroid contour distance curve (CCDC) can be converted to time series [3], hence techniques in time series classification such as dynamic time warping (DTW) [4] can be applied. On the other hand, techniques that are suitable for leaf classification with this kind of shape descriptor can easily be modified for general time series classification tasks, which results in a broader field of applications. Regardless of the features used, traditional classifiers in applications usually include support vector machines (SVM), k nearest neighbors (kNN), and random forests. Artificial neural networks, especially convolutional neural networks (CNN) [5], are not commonly seen in this field, though they have proven to be very effective tools in computer vision and pattern recognition. In this paper, the discussion focuses on features based on leaf shapes and argues that a simple shape feature actually contains more discriminating power than people usually think, if an effective classifier such as a convolutional neural network is used. The rest of the paper is organized as follows: Section 2 reviews related work using shape features for classification. Section 3 presents the design of a 1d convolutional network as a classifier that can also be applied directly to classifying 1-dimensional time series. Section 4 tests the performance of this classifier on some benchmark data sets.
Classifier Design
In order to classify properly, it is important that the classifier can learn features at different scales together and combine them for classification. Though this can be done by designing complicated hand-crafted features, applying convolutional kernels with different sizes and strides is one good option for this purpose. In a typical 1d convolutional mechanism, information flows to the next layer first through a convolution operation and is then processed by an activation function: $Y = f(W * X + B)$, where $*$ denotes the discrete convolution between the incoming signal $X$ and a kernel $W$.
A convolutional layer contains several different kernels; it computes the convolution between the input and each kernel and then stacks the results as its output. Figure 2 gives an illustration of this: the convolutional layer contains several kernels of length 3. During convolution, a sliding window of the same size slides through the input with a certain stride. At each position of the window, it computes the inner product between the examined portion of the input and the kernel itself. For example, when using kernel (3, -1, 0) with stride 2 and no bias, the first output is 3 × 3 + 2 × (−1) + 4 × 0 = 7 and the second output is 4 × 3 + 1 × (−1) + 0 × 0 = 11 (a code sketch reproducing this computation appears right after this passage). Based on this idea, the basic architecture used for classification is designed as in Figure 3. It looks like a naive module from Google's Inception network [18] but is built for 1-dimensional input. The input is first processed by convolutional blocks of different configurations, which respond to features at different scales. Their outputs are then concatenated together with the original input before being fed into later layers for classification. In the following experiment section, this network is used in two ways. The first approach is to use it as a classifier, letting information flow from the CCDC feature to the species label directly. The other is to use it as an automatic feature extractor in a "pretrain-retrain" style. During the training phase, the network is first pre-trained to a certain extent with early stopping or a checkpoint at the best validation performance. In the testing phase, the model weights are frozen, the top layer is taken off, and its input, as pretrained features, is fed to a nonlinear classifier such as an SVM or a kNN classifier for the final classification. This resembles a transfer learning design; the difference is that in transfer learning the model is not trained on the same dataset. The idea comes from the heuristic that a nonlinear classifier may perform better than the original linear classification performed by the top layer. Experiments in the next sections show that this approach (referred to as 1dConvNet+SVM) usually contributes a little more accuracy to the classification.
Experiment Results
Swedish Leaf
The Swedish leaf data set [20] contains leaves from 15 species, with 75 samples provided per species. It is a challenging classification task due to its high inter-species similarity [8]. Table 1 lists some existing methods that use leaf contours for classification. All listed methods use leaf contours in a non-trivial way that involves more in-depth feature extraction than CCDC.
Table 1. Performance of different existing methods on leaf contours: Söderkvist [21] 82.40%; SC + DP [9] 88.12%; IDSC + DP [9] 94.13%; Spatial PACT [10] 90.61%; Shape-Tree [11] 96.28%; TSLA [8] 96.53%.
While [8,9,10,11,21] use 25 samples randomly selected from each species as the training set and the rest as the test set, the author decided to use 10-fold cross validation to evaluate the proposed model in a more robust way. The other reason for this is that the convolutional architecture may not be trained sufficiently with only 25 samples per species as the training set. The mean performance and the corresponding standard deviation are summarized in Table 2. The actual parameters used are: convolutional layers {conv1d(16, 8, 4) 1 , conv1d(24, 12, 6), conv1d(32, 16, 8)}; maxpooling layers (MP) with window size 2 and stride 2; two fully connected layers with 512 and 128 units, respectively.
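As a concrete check of the sliding-window arithmetic described above (kernel (3, −1, 0), stride 2, outputs 7 and 11), here is a minimal NumPy sketch. The input values are an assumption chosen to be consistent with those two outputs; the text itself only gives the kernel and the results.

```python
import numpy as np

def conv1d_valid(x, w, stride=1, bias=0.0):
    """Strided discrete 1d convolution (cross-correlation, as in conv layers)."""
    k = len(w)
    out = []
    for start in range(0, len(x) - k + 1, stride):
        # Inner product between the current window and the kernel.
        out.append(float(np.dot(x[start:start + k], w)) + bias)
    return np.array(out)

# Worked example from the text: kernel (3, -1, 0), stride 2, no bias.
x = np.array([3.0, 2.0, 4.0, 1.0, 0.0])   # assumed input matching the outputs
w = np.array([3.0, -1.0, 0.0])
print(conv1d_valid(x, w, stride=2))       # [ 7. 11.]
```

The first window (3, 2, 4) yields 7 and the second window (4, 1, 0), two positions later, yields 11, matching the computation in the text.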
ReLU activations [22] are used in the convolutional layers and PReLU [23] activations are used in the fully connected layers. To prevent overfitting, Gaussian noise layers (mean 0, std 0.01) are placed before each convolutional layer and a dropout layer [24] with rate 0.5 is inserted before the classification layer. The whole model is trained using stochastic gradient descent with batch size 32, learning rate 0.005, and $10^{-6}$ as the decay rate. 25 principal components of the pretrained features are used when the top classification layer is an SVM. For other details, please check the actual code at [25]. The proposed network provides comparable accuracy with these methods. The UEA & UCR Time Series Classification Repository [26] provides an explicit training/test split of this dataset and a list of performances of different time series classification methods, which allows a more direct comparison with the proposed 1d convolutional network. Table 3 lists the best performance reported on the website and the results obtained by the proposed 1d ConvNet. The result is obtained by averaging the test accuracy over 5 independent runs with different random states; 20% of the training samples are used as validation for stopping the training process 2 . As seen in both comparisons, with the top layer replaced by an SVM, the accuracy can be further improved. The reason may be that if the network is already trained properly, the information that flows into the top layer is almost linearly separable, hence a nonlinear classifier built on top helps increase the accuracy by correcting some mistakes made by the linear classifier. Figure 5 shows the t-SNE embedding [28] of the network's outputs before the last classification layer for the whole dataset. As one can see in this 2-dimensional feature projection, the 15 classes are almost separable.
UCI's 100 Leaf
UCI's 100 leaf dataset [29] was first used in [12] in support of the authors' probabilistic integration of shape, texture, and margin features. It has 100 different species with 16 samples per species 3 . As for the feature vector, a 64-element vector is given per leaf sample. These vectors are taken as contiguous descriptors (for shape) or histograms (for texture and margin). A mean accuracy of 62.13% (with PROP) and 61.88% (with WPROP) was reported using only the shape feature (CCDC) in a 16-fold validation (10% of the training data are held out as validation). The mean accuracy rose to 96.81% and 96.69% when all three types of features were combined. Following the 16-fold validation protocol, the performance of the 1d ConvNet is summarized in Table 4. For the results combining the 3 features, the author simply concatenates them to form a 192-dimensional feature vector per sample. Again, the proposed network works better on both kinds of features. The 3-NN with pretrained features from the network did not perform better than the original network; part of the reason may be that a kNN classifier is more sensitive to changes in the data, and 3 may not be a good choice for k in this dataset, which has 99 different classes.
On Some Time Series Classification
The classifier not only achieves good performance in classifying different leaves on the single CCDC feature, it can also be used directly for classifying 1-dimensional time series data end to end. To demonstrate this, the author selects four different data sets from the UEA & UCR Time Series Classification Repository [26] for the test: ChlorineConcentration, InsectWingbeatSound, DistalPhalanxTW, and ElectricDevices 4 .
These data sets come from different backgrounds, with different data sizes and feature vector lengths. A good classification strategy usually requires some prior knowledge; with the help of the convolutional architecture, the proposed network reduces the need for such human prior knowledge, since this kind of prior knowledge is "learned" by the network during training. The current best performance reported on the website and the performance achieved by this 1d convolutional net are compared in Table 5. For all four datasets, the network's architecture and hyperparameters are the same as in the previous experiments, with no extra hyperparameter tuning 5 . As summarized in Table 5, the proposed network outperforms the reported best methods in terms of mean accuracy.
Conclusion
This paper presents a simple 1-dimensional convolutional network architecture that allows classification of plant leaves on the single CCDC feature instead of extracting more complicated features. The same architecture is directly applicable to classifying 1-dimensional time series, allowing end-to-end training without complicated preprocessing of the input data. Experiments with this classifier on some benchmark datasets show performance comparable to or better than existing methods.
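The paper's actual code is at [25]; the following is only a minimal PyTorch sketch of the described multi-branch design, under the assumptions that conv1d(a, b, c) denotes (output channels, kernel size, stride), that the input is a single-channel CCDC series, and that the reported decay rate maps to the optimizer's weight decay.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianNoise(nn.Module):
    """Additive Gaussian noise regularizer, active only during training."""
    def __init__(self, std=0.01):
        super().__init__()
        self.std = std
    def forward(self, x):
        return x + self.std * torch.randn_like(x) if self.training else x

class Branch(nn.Module):
    """Noise -> Conv1d -> ReLU -> MaxPool: one scale of the multi-branch block."""
    def __init__(self, channels, kernel, stride):
        super().__init__()
        self.noise = GaussianNoise(0.01)
        self.conv = nn.Conv1d(1, channels, kernel, stride=stride)
        self.pool = nn.MaxPool1d(2, stride=2)
    def forward(self, x):
        return self.pool(F.relu(self.conv(self.noise(x))))

class ConvNet1d(nn.Module):
    def __init__(self, input_len, n_classes):
        super().__init__()
        self.branches = nn.ModuleList(
            [Branch(c, k, s) for c, k, s in [(16, 8, 4), (24, 12, 6), (32, 16, 8)]])
        # Probe once to infer the concatenated feature size (branches + raw input).
        with torch.no_grad():
            probe = torch.zeros(1, 1, input_len)
            feat = torch.cat([b(probe).flatten(1) for b in self.branches]
                             + [probe.flatten(1)], dim=1)
        self.fc = nn.Sequential(
            nn.Linear(feat.shape[1], 512), nn.PReLU(),
            nn.Linear(512, 128), nn.PReLU(),
            nn.Dropout(0.5),                 # dropout before the classification layer
            nn.Linear(128, n_classes))
    def forward(self, x):                    # x: (batch, 1, input_len)
        feats = [b(x).flatten(1) for b in self.branches] + [x.flatten(1)]
        return self.fc(torch.cat(feats, dim=1))

model = ConvNet1d(input_len=128, n_classes=15)
opt = torch.optim.SGD(model.parameters(), lr=0.005, weight_decay=1e-6)
```

For the "1dConvNet+SVM" variant described above, one would train this model, freeze it, drop the final linear layer, and fit an SVM on the 128-dimensional penultimate features (the paper additionally reduces them to 25 principal components).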
1,878
1907.00069
2956033525
Compared with the methods mentioned above, which tackle the difficulty of classification by designing complicated hand-crafted deep features, convolutional neural networks (CNN) @cite_0 can take simple features as input and automatically abstract useful features through their early convolutional blocks for the later classification task @cite_8 . In this way, the difficulty is transferred into heavy computation, for which modern hardware can now provide sufficient support. It would be more straightforward to apply a CNN directly to leaf images, combining the feature extraction and classification tasks, but this makes for an unnecessarily large model with many parameters; such models usually require a lot of data and time to be trained well and carry a higher risk of overfitting the data at hand. The key idea of this paper is to take advantage of the convolutional architecture, but to apply it to the extracted single 1d CCDC feature to reduce the computational cost.
{ "abstract": [ "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets." ], "cite_N": [ "@cite_0", "@cite_8" ], "mid": [ "2163605009", "1849277567" ] }
A 1d convolutional network for leaf and time series classification
Vast amount of plant species exists on earth, according to [1,2], there are about 220,000 to 420,000 different species just for flowering plants alone. The large number of plant species, together with the fact that large in-species variations and small cross-species variations make it a difficult and tedious work for identifying them by human, particularly for non-experts. As with the fast development in techniques of machine learning and deep learning methodologies as well as the growing power of computation, automatic recognition with these species become a more and more natural solution. From a descriptive point of view, plant identification are traditionally based on observations of its organs, such as flowers, leaves, seeds, etc. A large portion of species information is contained in leaves. It also appears for a considerable amount of time during plants' life cycle. This brings benefits for database construction. Traditionally, features from leaves can be roughly divided into three categories: shape, color and texture. Shape descriptors (especially the the contour) usually are more robust compared to the other two. For a single leaf, color descriptors may vary depending on lighting conditions, image format, etc. Texture descriptors can vary if there are worm holes on the leaf... Another advantage of a shape descriptor is that features like centroid center contour curve (CCDC) can be converted to time series [3], hence techniques in time series classification such as dynamic time warping (DTW) [4] can be applied. On the other hand, techniques that are suitable for leaf classification with this kind of shape descriptor can be easily modified to general time series classification tasks, which will result in a broader field of applications. Despite the differences of features, traditional classifiers in applications usually includes: support vector machines (SVM), k nearest neighbors (kNN), random forest ... Artificial neural networks, especially convolutional neural networks (CNN) [5] are not commonly seen in the field, though they have proven to be very effective tools in the field of computer vision and pattern recognition. In this paper, discussions are focused on features that are based on leaf shapes and argues that simple shape feature actually contains more discriminating power than people usually think, if an effective classifier such convolutional neural networks are used. The rest of the paper is organized as below: Section 2 gives some related work using shape features for classification. Section 3 presents the design of a 1d convolutional network as a classifier that can also be directly applied to tasks of classifying 1 dimensional time series. Section 4 tests the performance of this classifier on some benchmark data sets. Classifier Design In order to make proper classification, it is important that the classifier can learn features at different scales together and combine them into classification. Though this can be done by designing complicated hand-crafted features, applying convolutional kernels with different sizes and strides serves as one good option for this purpose. For a typical 1d convolutional mechanism, information flows to the next layer first by a convolutional operation and then processed by an activation function: Y = f (W * X + B), where * denotes the discrete convolution operation between the incoming signal X and a kernel W . 
A convolutional layer contains several different kernels, computes the convolution between the input and each kernel and then stack their result as its output. Figure 2 gives an illustration of this, the convolutional layer contains several kernels of length 3. During convolution, a sliding window of the same size will slide through the input with certain stride. During each stay of the window, it computes the inner product between the examined portion of input and the kernel itself. For example, when using kernel (3,-1,0) with stride 2 and no bias, the first output is 3 × 3 + 2 × (−1) + 4 × 0 = 7 and the second output is 4 × 3 + 1 × (−1) + 0 × 0 = 11. Based on this thought, a basic architecture used for classification is designed as in Figure 3. It looks like a naive module from Google's inception network [18] but is built for 1 dimensional input. The input is first processed by convolutional blocks of different configurations which responses to features of different scales. Their outputs are then concatenated together with original input before being fed into latter layers for classification. In the following experiment section, this network is used in two ways. The first approach is to use it as a classifier allowing informations flow from CCDC feature to species label directly. The other way is to use it as an automatic feature extractor in a "pretrain-retrain" style. During the training phase, the network is first pre-trained to certain extent with earlystopping or a checkpoint at best validating performance. In the testing phase, the model weights are frozen, the top layer is then taken off and its input as pretrained features are fed to a nonlinear classifier such as a SVM or a kNN classifier for final classification. It is like a transfer learning design, but the difference is in transfer learning, the model is not trained on the same dataset. The idea is from heuristic that a nonlinear classifier may performance better than the original linear classification performed by the top layer. Experiments done in the next sections shows this (referred as 1dConvNet+SVM) usually will help contribute a little more accuracy to the classification. Experiment Results Swedish Leaf Swedish leaf data set [20] contains leaves that are from 15 species. Within each species, 75 samples are provided. It is an challenging classification task due to its high inter-species similarity [8]. Table 1 lists some existing methods that uses leaf contours for classification. All listed methods in the table use leaf contours in a non-trivial way that involves more in-depth feature extraction than CCDC. Method Accuracy Method Accuarcy Söderkvist [21] 82.40% Spatial PACT [10] 90.61% SC + DP [9] 88.12% Shape-Tree [11] 96.28% IDSC + DP [9] 94.13% TSLA [8] 96.53% Table 1. Performance of different existing methods on leaf contours. While [8,9,10,11,21] uses 25 samples randomly selected from each species as the training set and the rest as test. The author decided to use a 10-fold cross validation to evaluate the proposed model in a more robust way. The other reason for this is the convoluational architecture may not be trained sufficiently with 25 samples per species as the training set. The mean performance and the corresponding standard deviation is summarized in Table 2. The actual parameters used are: Convolutional layers {conv1d(16, 8, 4) 1 , conv1d(24, 12, 6), conv1d(32, 16,8)}, Maxpooling layers (MP) are with window size 2 and stride 2, two fully connected layers are of unit 512 and 128, respectively. 
Relu activations [22] are used in convolutional layers and PRelu [23] activations are used for fully connected layers. To prevent overfitting, Gaussian noise (mean: 0, std: 0.01) layers are placed before each convolutional layer and a dropout layer [24] of intensity 0.5 is inserted before the classification layer. The whole model is trained using stochastic gradient descent algorithm with batch size 32, learning rate 0.005 and 10 −6 as the decay rate. 25 principal components from pretrained features are used if the top classification layer is a SVM. For other details, please check the actual code at [25]. The proposed network provides comparable accuracy with provides an explicit split of training/test set of this dataset and a list of performances from different time series classification methods, which allows a more direct comparison with the proposed 1d convolutional network. Table 3 lists the best performance reported on the website and results obtained by the proposed 1d ConvNet. The result is obtained by averaging the test accuracy among 5 independent runs with different random states. 20% of the training samples are used as validation for stopping the training process 2 . As seen in both comparisons, with top layers replaced by a SVM, the accuracy can be further improved. The reason may be the fact that if the network is already trained properly, information that flows into the top layer is almost linearly separable, hence a nonlinear classifier built on top will help increase the accuracy by correcting some mistakes made by a linear classifier. Figure 5 shows the TSNE embedding [28] with the outputs of the network before the last classification layer from the whole dataset. As one can see in this 2 dimensional feature projection, the 15 classes are almost separable. UCI's 100 leaf UCI's 100 leaf dataset [29] was first used in [12] in support of authors' probabilistic integration of shape, texture and margin features. It has 100 different species with 16 samples per species 3 . As for the feature vector, a 64 element vector is given per sample of leaf. These vectors are taken as a contigous descriptors (for shape) or histograms (for texture and margin). An mean accuracy of 62.13% (with PROP) and 61.88% (with WPROP) was reported by only using the shape feature(CCDC) from a 16-fold validation (10% of training data are hold as validation). The mean accuracy raised up to 96.81% and 96.69% if both three types of features are combined. Following the evaluation of 16-fold validation, the performance of using the 1d ConvNet is summarized in Table 4. For results by combing the 3 features, the author simply concatenates them together to form a 192 dimensional feature vector per sample. Again, the proposed network works better on both kinds of features. The 3-NN with pretrained features from the network did not perform better than the original network. Part of the reason may be because kNN classifier is more sensitive to changes in data and 3 may not be a good choice for k in this dataset which has 99 different classes. On some time series Classification The classifier does not only achieve good performance in classifying different leaves on single CCDC feature, it can also be directly used for classifying 1 dimensional time series data from end to end. In order to demonstrate this, the author selects four different data sets from UEA & UCR Time Series Classification Repository [26]: ChlorineConcentration, InsectWingbeatSound, Distal-PhalanXTW and ElectricDevices 4 for test. 
These data sets comes from different backgrounds with different data sizes and length of feature vectors. A good classification strategy usually requires some prior knowledge. With the help of convolutional architecture, the proposed network is able to help reduce such prior knowledge from human. This kind of prior knowledge is "learned" by the network during training. The current best performance reported on the website and performance achieved by this 1d convolutional net are compared in Tabel 5. For all the four datasets, the network's architecture and hyperparameters are the same as previous experiments with no extra hyperparameter tuning 5 . As summarized in Table 5, the proposed network outperforms the reported best methods in terms of mean accuracy. Conclusion This paper presents a simple 1 dimensional convolutional network architecture that allows classification tasks of plant leaves on single CCDC feature instead of further extracting more complicated features. The same architecture is directly applicable to classify 1 dimensional time series allowing an end-to-end training without complicated preprocessing of input data. Experiments of this classifier on some benchmark datasets show comparable or better performance than other existing methods.
1,878
1812.00819
2902948489
Millimeter-wave (mmWave) communications rely on directional transmissions to overcome severe path loss. Nevertheless, the use of narrow beams complicates the initial access procedure and increases t ...
The need to design a new initial cell-search phase for mmWave communication has attracted great research interest recently. In the IEEE 802.11ad standard, a coarse-grained sector matching is followed by a second beam training stage that provides a further refinement of the beamforming vectors @cite_20 @cite_32 . The authors in @cite_34 proposed a similar hierarchical design with a multi-resolution codebook based on ideas from compressive sensing. Reference @cite_6 provided a framework to evaluate the performance of mmWave IA using 3GPP new radio (NR) scenario configurations. In @cite_19 , the authors analyzed various design options for IA given different scanning and signaling procedures; specifically, synchronization and cell search consist of sending a series of directional pilots so that time, frequency, and spatial synchronization occur jointly. @cite_16 and @cite_11 investigated initial cell search based on context information, which uses an external localization service to obtain positioning information. In @cite_0 , the performance of these schemes is summarized in terms of cell detection failure probability and latency. In contrast to the aforementioned link-level studies, @cite_14 and @cite_2 provide a system-level analysis of IA protocols in terms of cell-search latency under different user equipment (UE) states.
{ "abstract": [ "Initial access is the process which allows a mobile user to first connect to a cellular network. It consists of two main steps: cell search (CS) on the downlink and random access (RA) on the uplink. Millimeter wave (mm-wave) cellular systems typically must rely on directional beamforming (BF) in order to create a viable connection. The BF direction must, therefore, be learned—as well as used—in the initial access process for mm-wave cellular networks. This paper considers four simple but representative initial access protocols that use various combinations of directional BF and omnidirectional transmission and reception at the mobile and the BS, during the CS and RA phases. We provide a system-level analysis of the success probability for CS and RA for each one, as well as of the initial access delay and user-perceived downlink throughput (UPT). For a baseline exhaustive search protocol, we find the optimal BS beamwidth and observe that in terms of initial access delay it is decreasing as blockage becomes more severe, but is relatively constant (about @math ) for UPT. Of the considered protocols, the best tradeoff between initial access delay and UPT is achieved under a fast CS protocol.", "", "The millimeter wave (mmWave) frequencies offer the availability of huge bandwidths to provide unprecedented data rates to next-generation cellular mobile terminals. However, mmWave links are highly susceptible to rapid channel variations and suffer from severe free-space pathloss and atmospheric absorption. To address these challenges, the base stations and the mobile terminals will use highly directional antennas to achieve sufficient link budget in wide area networks. The consequence is the need for precise alignment of the transmitter and the receiver beams, an operation which may increase the latency of establishing a link, and has important implications for control layer procedures, such as initial access, handover and beam tracking. This tutorial provides an overview of recently proposed measurement techniques for beam and mobility management in mmWave cellular networks, and gives insights into the design of accurate, reactive and robust control schemes suitable for a 3GPP NR cellular network. We will illustrate that the best strategy depends on the specific environment in which the nodes are deployed, and give guidelines to inform the optimal choice as a function of the system parameters.", "Millimeter wave (mmWave) communication is envisioned as a cornerstone to fulfill the data rate requirements for fifth generation (5G) cellular networks. In mmWave communication, beamforming is considered as a key technology to combat the high path-loss, and unlike in conventional microwave communication, beamforming may be necessary even during initial access cell search. Among the proposed beamforming schemes for initial cell search, analog beamforming is a power efficient approach but suffers from its inherent search delay during initial access. In this work, we argue that analog beamforming can still be a viable choice when context information about mmWave base stations (BS) is available at the mobile station (MS). We then study how the performance of analog beamforming degrades in case of angular errors in the available context information. 
Finally, we present an analog beamforming receiver architecture that uses multiple arrays of Phase Shifters and a single RF chain to combat the effect of angular errors, showing that it can achieve the same performance as hybrid beamforming.", "The massive amounts of bandwidth available at millimeter-wave frequencies (above 10 GHz) have the potential to greatly increase the capacity of fifth generation cellular wireless systems. However, to overcome the high isotropic propagation loss experienced at these frequencies, highly directional antennas will be required at both the base station and the mobile terminal to achieve sufficient link budget in wide area networks. This reliance on directionality has important implications for control layer procedures. In particular, initial access can be significantly delayed due to the need for the base station and the user to find the proper alignment for directional transmission and reception. This article provides a survey of several recently proposed techniques for this purpose. A coverage and delay analysis is performed to compare various techniques including exhaustive and iterative search, and context-information-based algorithms. We show that the best strategy depends on the target SNR regime, and provide guidelines to characterize the optimal choice as a function of the system parameters.", "Millimeter wave (mmWave) bands have attracted considerable recent interest for next-generation cellular systems due to the massive available spectrum at these frequencies. However, a key challenge in designing mmWave cellular systems is initial access—the procedure by which a mobile device establishes an initial link-layer connection to a cell. MmWave communication relies on highly directional transmissions and the initial access procedure must thus provide a mechanism by which initial transmission directions can be searched in a potentially large angular space. Design options are compared considering different scanning and signaling procedures to evaluate access delay and system overhead. The channel structure and multiple access issues are also considered. The results of our analysis demonstrate significant benefits of low-resolution fully digital architectures in comparison with single stream analog beamforming.", "Cell search is the process for a user to detect its neighboring base stations (BSs) and make a cell selection decision. Due to the importance of beamforming in 5G cellular networks including both the millimeter wave and sub-6 GHz networks, there is a need for a better understanding of the directional cell search delay performance. A cellular network with fixed BS and user locations is considered, so as to take into account the strong temporal correlations that exist for the SINR experienced by each BS and user in this context. For Poisson cellular networks with Rayleigh fading channels, a closed-form expression for the spatially averaged mean cell search delay of all users is derived. This mean cell search delay for a noise-limited network is proved to be infinite whenever the non-line-of-sight path loss exponent is larger than two. For interference-limited networks, a phase transition for the mean cell search delay is shown to exist in terms of the number of BS beams @math : the mean cell search delay is infinite when @math is smaller than a threshold and finite otherwise. 
Beam-sweeping is also demonstrated to be effective in decreasing the cell search delay, especially for cell edge users.", "Cellular systems were designed for carrier frequencies in the microwave band (below 3 GHz) but will soon be operating in frequency bands up to 6 GHz. To meet the ever increasing demands for data, deployments in bands above 6 GHz, and as high as 75 GHz, are envisioned. However, as these systems migrate beyond the microwave band, certain channel characteristics can impact their deployment, especially the coverage range. To increase coverage, beamforming can be used but this role of beamforming is different than in current cellular systems, where its primary role is to improve data throughput. Because cellular procedures enable beamforming after a user establishes access with the system, new procedures are needed to enable beamforming during cell discovery and acquisition. This paper discusses several issues that must be resolved in order to use beamforming for access at millimeter wave (mmWave) frequencies, and presents solutions for initial access. Several approaches are verified by computer simulations, and it is shown that reliable network access and satisfactory coverage can be achieved in mmWave frequencies.", "With the ratification of the IEEE 802.11ad amendment to the 802.11 standard in December 2012, a major step has been taken to bring consumer wireless communication to the millimeter wave band. However, multi-gigabit-per-second throughput and small interference footprint come at the price of adverse signal propagation characteristics, and require a fundamental rethinking of Wi-Fi communication principles. This article describes the design assumptions taken into consideration for the IEEE 802.11ad standard and the novel techniques defined to overcome the challenges of mm-Wave communication. In particular, we study the transition from omnidirectional to highly directional communication and its impact on the design of IEEE 802.11ad.", "The exploitation of the mm-wave bands is one of the most promising solutions for 5G mobile radio networks. However, the use of mm-wave technologies in cellular networks is not straightforward due to mm-wave severe propagation conditions that limit access availability. In order to overcome this obstacle, hybrid network architectures are being considered where mm-wave small cells can exploit an overlay coverage layer based on legacy technology. The additional mm-wave layer can also take advantage of a functional split between control and user plane, that allows to delegate most of the signaling functions to legacy base stations and to gather context information from users for resource optimization. However, mm-wave technology requires multiple antennas and highly directional transmissions to compensate for high path loss and limited power. Directional transmissions must be also used for the cell discovery and synchronization process, and this can lead to a non negligible delay due to need to scan the cell area with multiple transmissions in different angles. In this paper, we propose to exploit the context information related to user position, provided by the separated control plane, to improve the cell discovery procedure and minimize delay. We investigate the fundamental trade-offs of the cell discovery process with directional antennas and the effects of the context information accuracy on its performance. Numerical results are provided to validate our observations." 
], "cite_N": [ "@cite_14", "@cite_32", "@cite_6", "@cite_16", "@cite_0", "@cite_19", "@cite_2", "@cite_34", "@cite_20", "@cite_11" ], "mid": [ "2963363746", "", "2796047239", "2345466534", "2278865433", "2490299299", "2963954457", "1987804395", "2074011868", "2964273971" ] }
Fast and Reliable Initial Access with Random Beamforming for mmWave Networks
Millimeter-wave (mmWave) technology is one of the essential components of future wireless networks to support extremely high data rate services [1]-[3]. The mmWave frequency bands provide orders of magnitude more spectrum than the currently congested bands below 6 GHz; they can offer much higher data rates and create fertile ground for developing various new products and services [4]. However, mmWave communications are subject to high path loss, noise power, and penetration loss [5]. To address these challenges, mmWave systems rely on directional transmissions using large antenna arrays both at the transmitter and at the receiver [6]. Such directional transmission, although it reduces the interference footprint and simplifies the scheduling task, complicates the initial synchronization and cell-search procedure, which is a prerequisite to establishing any connection in all cellular systems [7], [8]. The cell search in the initial access (IA) procedure of conventional cellular networks, e.g., LTE [9], is performed using omnidirectional antennas in the low frequency bands. However, this is not applicable to mmWave communications due to the severe path loss and the resulting mismatch between control-plane and data-plane ranges [10]. Consequently, it is essential to develop directional cell search and IA.
B. Contributions
In this paper, we provide a system-level framework to analyze the performance of IA based on random beamforming in a multi-cell mmWave network. We substantially extend our initial results [24] by considering the 3GPP NR framework, non-line-of-sight (NLOS) communications, non-zero antenna sidelobes, data plane performance, and extended numerical comparisons to alternative approaches. The main contributions of this paper are:
• An analytical framework for IA performance under random beamforming: leveraging tools from stochastic geometry, we derive exact expressions for the detection failure probability and the expected latency of initial access. Different from previous works, we carry out a system-level analysis incorporating both the sidelobe effect and NLOS paths. The analysis is validated by extensive Monte Carlo simulations.
• A detailed evaluation of random beamforming for IA: we investigate the effect of BS density, environmental blockage, and antenna beamwidth on IA performance. Meanwhile, we characterize the tradeoff between failure probability and expected latency. Through this, for any BS density, we find an optimal beamwidth that minimizes the expected latency subject to a detection failure constraint.
• A comparison to the widely used exhaustive search and iterative search schemes: we show the superior IA latency performance of random beamforming compared to exhaustive search, especially in dense mmWave networks. Furthermore, the overall control plane and data plane latency is investigated. The proposed scheme outperforms the existing ones, and the performance gain becomes more prominent with lighter traffic or shorter packet sizes.
C. Paper Organization
The rest of the paper is organized as follows. In Section II, we describe the system model. In Section III, we present our initial access framework based on random beamforming. The analysis of detection failure probability and latency is presented in Section IV. Simulation results and comparisons to other schemes are provided in Section V. Section VI concludes the paper.
II. SYSTEM MODEL
In this section, we present the network, channel, and antenna models used to evaluate performance in this paper.
Table 1 summarizes the main notation used throughout the paper.
A. Network Model
We consider a large-scale downlink mmWave cellular network where the BSs are distributed according to a two-dimensional Poisson point process (PPP) $\Phi = \{x_i\}$ with density $\lambda$. In [25], it was shown that the PPP assumption can be viewed as incorporating BS locations and shadowing with sufficiently large variance; therefore, we ignore the effect of shadowing, as in [26], [27]. The UEs follow another independent PPP, from which the typical UE, located at the origin, is our focus according to Slivnyak's theorem [28]. In this paper, we focus on outdoor BSs and UEs by assuming the independence of the outdoor and indoor devices and invoking the thinning theorem [28].
B. Channel Model
To the typical UE, each BS, independently from the others, is characterized by either LOS or NLOS propagation. Define the LOS probability function $p_{\mathrm{LOS}}(r)$ as the probability that a link of distance $r$ is in the LOS condition. We apply a stochastic exponential blockage model in which the obstacles are modeled by rectangular Boolean objects [29]. In that case, $p_{\mathrm{LOS}}(r) = \exp(-\beta r)$, where $\beta$ is a parameter determined by the density and the average size of the obstacles, and $\beta^{-1}$ represents the average length of a LOS link. For tractability of the analysis, we further assume independent LOS events among different links [29], [30] and among different time slots [19]. Given the LOS probability $p_{\mathrm{LOS}}(r)$, the path loss for a link of distance $r$ is given by
$$\ell(x_i) = \begin{cases} (C(\|x_i\|))^{\alpha_L}, & \text{if LOS;} \\ (C(\|x_i\|))^{\alpha_N}, & \text{if NLOS (blocked),} \end{cases} \tag{1}$$
where $C(\|x_i\|) \triangleq c/(4\pi \|x_i\| f_c)$, $c$ is the speed of light, $f_c$ is the operating frequency, $\alpha_L$ and $\alpha_N$ are the path loss exponents for LOS and NLOS links, and $\|x_i\|$ is the Euclidean distance between $x_i \in \Phi$ and the origin $o$. To ignore the possibility of communication in NLOS conditions, we can set $\alpha_N = \infty$ and $\alpha_L = \alpha$ for simplicity. We assume that BSs and UEs are equipped with electronically-steered antenna arrays of $M_{BS}$ and $M_{UE}$ antennas, respectively. Since mmWave channels are expected to have a limited number of scatterers [31], we employ a geometric channel model with a single path between the typical UE and each BS for better analytical tractability; the single-path assumption was implicitly adopted and verified in [27], [32]. The channel matrix between BS $b$ and UE $u$ is given by
$$\mathbf{H}_{ub} = \ell(x_i)\, h_{ub}\, \mathbf{a}(M_{UE}, \theta_{ub})\, \mathbf{a}^H(M_{BS}, \phi_{ub}), \tag{2}$$
where $h_{ub}$ is the small-scale fading between BS $x_i$ and the typical UE. We assume $h_{ub}$ follows a unit-mean Rayleigh distribution; compared to more realistic models for LOS paths such as Nakagami fading, Rayleigh fading provides very similar design insights while leading to more tractable results [19]. $\theta_{ub} \in [0, 2\pi)$ and $\phi_{ub} \in [0, 2\pi)$ are the angle of arrival (AoA) and the angle of departure (AoD) at UE $u$ and BS $b$, respectively, and $(\cdot)^H$ is the conjugate transpose operator. Finally, $\mathbf{a}(k, \theta) \in \mathbb{C}^k$ is the unit-norm response vector of the transmitter's and receiver's antenna arrays to the AoAs and AoDs, given in (4).
C. Antenna Model
We consider analog beamforming for the initial cell search, because digital or hybrid beamforming is not suitable given the large number of antenna elements and the lack of prior channel knowledge, which translates into the need for costly pilot transmission schemes. Two antenna models are applied in this work.
For analytical simplicity, we first model the actual antenna patterns by a sectorized beam pattern (SBP), as in [10]. We also consider the uniform linear array (ULA) antenna model in the numerical evaluations for two reasons: 1) to verify the analytical insights and performance trends obtained with the SBP model, and 2) to obtain the SINR in the data plane with beam refinement, as shown in Section III-C.
1) Sectorized Beam Pattern: We consider half-power beamwidths of $\theta_{BS}$ and $\theta_{UE}$ at the BSs and UEs, respectively, with corresponding antenna gains $G_{BS}$ and $G_{UE}$. In an ideal sectorized antenna pattern, the antenna gain $G_x$, $x \in \{BS, UE\}$, as a function of the beamwidth $\theta_x$ is a constant in the main lobe and a smaller constant in the side lobe:
$$G_x(\theta_x) = \begin{cases} \dfrac{2\pi - (2\pi - \theta_x)\epsilon}{\theta_x}, & \text{in the main lobe,} \\ \epsilon, & \text{in the side lobe,} \end{cases} \tag{3}$$
where typically $\epsilon \ll 1$. For given $\theta_{BS}$ and $\theta_{UE}$, which are non-increasing functions of the number of antenna elements, the BSs and UEs sweep the entire angular space with $N_{BS} = \lceil 2\pi/\theta_{BS} \rceil$ and $N_{UE} = \lceil 2\pi/\theta_{UE} \rceil$ beamforming vectors, respectively. Without loss of generality of the main conclusions, we assume that $2\pi/\theta_{BS}$ and $2\pi/\theta_{UE}$ are integers and drop the $\lceil \cdot \rceil$ operator. It is worth noting that we neglect the sidelobe gain at the UE side, as in [32], for mathematical tractability.
2) Uniform Linear Array: The array response vector can be expressed as
$$\mathbf{a}(k, \theta) = \frac{1}{\sqrt{k}} \left[ 1, \; e^{j\pi \sin(\theta)}, \; \ldots, \; e^{j(k-1)\pi \sin(\theta)} \right]^H. \tag{4}$$
The parameters of the channel model depend both on the carrier frequency and on being in LOS or NLOS conditions, and are given in [33, Table I]. To design the beamforming vectors (precoding at the BSs and combining at the UEs), we define $\mathbf{f}(k, K, \theta)$ as
$$\mathbf{f}(k, K, \theta) := \frac{1}{\sqrt{k}} \left[ 1, \; e^{j\pi \sin(\theta)}, \; \ldots, \; e^{j(k-1)\pi \sin(\theta)}, \; \mathbf{0}_{1 \times (K-k)} \right]^H, \tag{5}$$
for integers $k$ and $K$ such that $0 < k \leq K$, where $\mathbf{0}_x$ is an all-zero vector of size $x$. Let $\mathbf{v}_b^c$ and $\mathbf{w}_u^c$ be the precoding vector of BS $b \in \mathcal{B}$ and the combining vector of UE $u \in \mathcal{U}$ in mini-slot $c$ of the cell-search phase. We define
$$\mathbf{v}_b^c = \mathbf{f}(k_b^c, N_{BS}, \phi_b^c), \tag{6a}$$
$$\mathbf{w}_u^c = \mathbf{f}(k_u^c, N_{UE}, \theta_u^c). \tag{6b}$$
The BSs and UEs can control the antenna boresight by changing $\phi_b^c$ and $\theta_u^c$, and control the antenna beamwidth by changing $k_b^c$ and $k_u^c$. At each BS $b$, we keep a local codebook $\mathcal{V}_b^c$ that contains $M_{BS}$ precoding vectors. Each vector $\mathbf{v}_b^c \in \mathcal{V}_b^c$ is of the form (6a), such that the codebook collectively spans the entire angular space. The cardinality of the codebook is based on the half-power beamwidth and determines the antenna gain and the sidelobe interference caused by every beam.
D. 3GPP NR Frame Model
The 3GPP technical specification for NR introduces the concepts of synchronization signal (SS) block and SS burst. An SS block spans four orthogonal frequency division multiplexing (OFDM) symbols in time and 240 subcarriers in frequency; see [14] and references therein. Each SS block is mapped to a certain angular direction. An SS burst is a group of several SS blocks, and the interval between consecutive SS bursts $T_{SS}$ can take values in $\{5, 10, 20, 40, 80, 160\}$ ms; higher values correspond to lower synchronization overhead. Within one NR frame, there can be several pilots, called channel-state information reference signals (CSI-RS), to enable optimal beamforming design for the data plane.
III. IA FRAMEWORK UNDER RANDOM BEAMFORMING AND PERFORMANCE METRICS
In this section, we introduce the initial access protocol and the performance metrics.
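To make the array response and the beamwidth-controllable codebook vectors of (4)-(6) concrete before describing the protocol, here is a small NumPy sketch. The array sizes and angles are illustrative assumptions, not values from the paper.

```python
import numpy as np

def array_response(k, theta):
    """ULA response vector a(k, theta) of Eq. (4), unit norm."""
    return np.exp(1j * np.pi * np.sin(theta) * np.arange(k)) / np.sqrt(k)

def beam_vector(k, K, theta):
    """Codebook vector f(k, K, theta) of Eq. (5): k active elements
    steered to theta, padded with K - k zeros."""
    v = np.zeros(K, dtype=complex)
    v[:k] = np.exp(1j * np.pi * np.sin(theta) * np.arange(k))
    return v / np.sqrt(k)

# Illustrative numbers: a 64-element array steering a 16-element beam
# to 30 degrees; the beamforming gain peaks at the steering direction.
K, k = 64, 16
f = beam_vector(k, K, np.deg2rad(30))
for deg in (0, 15, 30, 45):
    a = array_response(K, np.deg2rad(deg))
    print(f"{deg:2d} deg: |f^H a|^2 = {abs(np.vdot(f, a))**2:.3f}")
```

Using $k < K$ active elements widens the beam at the cost of gain, which is precisely the beamwidth control performed through $k_b^c$ and $k_u^c$ in (6a)-(6b).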
We assume the system time is divided into two phases within each coherence interval: 1) an initial access period comprising a cell-search phase and a random access phase, and 2) a data transmission period with beam alignment. The whole transmission frame is illustrated in Fig. 1.

A. Cell Search Phase

The cell-search period spans several mini-slots. In each mini-slot, every BS independently and uniformly at random picks one direction out of $N_{\mathrm{BS}}$. We define a scan cycle as the period within which every BS sends cell-search pilots in all $N_{\mathrm{BS}}$ directions; see Fig. 1. In each scan cycle, the UE antenna points to a random direction out of $N_{\mathrm{UE}}$, while each BS covers all $N_{\mathrm{BS}}$ non-overlapping directions. Unlike exhaustive search and iterative search, in which the BS and UE need to cover all $N_{\mathrm{BS}} N_{\mathrm{UE}}$ possible direction pairs [19], the cell-search period of random beamforming can be dynamically adjusted. Once the UE receives a pilot signal that meets a predefined SINR threshold, it associates with the corresponding BS. This may not be the final association of that UE, but once the UE is registered to the network it can establish the data plane, and the reassociation phase (to the best BS) can be executed smoothly without service interruption [10].

B. Random Access Phase

After a successful cell-search phase, the UE knows the direction in which it receives the strongest signal. In the following random access phase, the UE initiates the connection to its desired serving BS by transmitting random access preambles in that direction. The BS scans for the presence of the random access preamble and thereby also learns the beamforming direction at the BS side. If cell search fails, the UE skips the random access phase and repeats the cell search in the next frame. In real systems, the UE picks the random access preamble from a set of orthogonal preamble sequences. The success of the random access phase depends on: 1) the absence of preamble collisions among multiple UEs transmitting to the same BS, and 2) the SINR of the random access preamble exceeding a threshold. Since the main focus of this paper is the random-beamforming-based cell-search phase, the impact of random access performance is left for future work. We therefore make the following assumptions:

Assumption 1. There are no random access preamble collisions, i.e., the BS can detect all the preambles.

Assumption 2. The random access phase and the cell-search phase share the same SINR threshold.

Assumption 1 holds in general, as the probability that multiple UEs pick the same random access preamble to access the same BS on the same spatial channel is very small, thanks to the low interference footprint of mmWave networks. Moreover, since cell search and random access occur within the same coherence interval, Assumption 2 implies that the SINR in random access exceeds the threshold whenever the cell-search phase succeeds. Hence, under Assumptions 1 and 2, cell search and random access succeed or fail together.

C. Data Plane Analysis

After the random access phase, the connection is established and data transmission starts. To achieve a more accurate performance evaluation, our data plane analysis is based on the ULA model. Let $\mathbf{v}_{ub}^{d_t}$ and $\mathbf{w}_u^{d_t}$ be the precoding vector of BS $b$ when serving UE $u$ and the combining vector of UE $u$ in the data transmission phase of coherence interval $t$, respectively.
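The beam refinement formulated next in (7) reduces, for analog codebooks, to an exhaustive search over precoder-combiner pairs. Below is a minimal sketch of that search, with an assumed DFT-like codebook and a toy single-path channel in the spirit of (2); it is an illustration, not the authors' implementation.

```python
import numpy as np

def a(K, theta):
    """Unit-norm ULA steering vector (Eq. (4); conjugation convention dropped)."""
    return np.exp(1j * np.pi * np.arange(K) * np.sin(theta)) / np.sqrt(K)

# Codebooks of full-array beams on a sin(theta) grid (an assumed design choice).
M_BS, M_UE = 64, 16
V = [a(M_BS, th) for th in np.arcsin(np.linspace(-1, 1, M_BS, endpoint=False))]
W = [a(M_UE, th) for th in np.arcsin(np.linspace(-1, 1, M_UE, endpoint=False))]

def refine(H):
    """Exhaustive search maximising |w^H H v|^2, i.e. problem (7)."""
    return max((abs(np.conj(w) @ H @ v)**2, i, j)
               for i, v in enumerate(V) for j, w in enumerate(W))

# Toy single-path channel H = g * a_UE(aoa) a_BS(aod)^H; AoA/AoD values assumed.
aoa, aod, g = 0.3, -0.7, 1.0
H = g * np.outer(a(M_UE, aoa), np.conj(a(M_BS, aod)))
gain, i_best, j_best = refine(H)
print(f"best precoder {i_best}, combiner {j_best}, gain {gain:.3f}")
```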
We assume that after the initial access phase, UE $u$ and its serving BS $b$ exchange a series of directional pilots, exploiting the directional information available from the initial access phase, to establish the data plane with maximum link budget:

$$\begin{aligned} \underset{\mathbf{v}_{ub}^{d_t},\, \mathbf{w}_u^{d_t}}{\text{maximize}} \quad & \left| (\mathbf{w}_u^{d_t})^H \mathbf{H}_{ub}^t \mathbf{v}_{ub}^{d_t} \right|^2, & \text{(7a)} \\ \text{subject to} \quad & \mathbf{v}_{ub}^{d_t} \in \mathcal{V}_b^d, & \text{(7b)} \\ & \mathbf{w}_u^{d_t} \in \mathcal{W}_u^d, & \text{(7c)} \end{aligned}$$

where $\mathcal{V}_b^d$ and $\mathcal{W}_u^d$ are the sets of feasible precoding and combining vectors: unit-norm, of identical modulus, and of the form (5). Afterward, the SINR of UE $u$ is

$$\mathrm{SINR}_{ub}^{d_t} = \frac{\left| (\mathbf{w}_u^{d_t})^H \mathbf{H}_{ub}^t \mathbf{v}_{ub}^{d_t} \right|^2}{\sum_{i \in \mathcal{B} \setminus \{b\}} \left| (\mathbf{w}_u^{d_t})^H \mathbf{H}_{ui}^t \mathbf{v}_{ui}^{d_t} \right|^2 + \Delta_d N_0 / p_{\mathrm{BS}}^d}, \tag{8}$$

and the achievable rate is $R_T = \Delta_d \log\left(1 + \mathrm{SINR}_{ub}^{d_t}\right)$, where $p_{\mathrm{BS}}^d$ and $\Delta_d$ are the BS transmit power and the signal bandwidth of the data transmission phase, respectively. Note that the use of analog beamforming for initial access does not constrain the beamforming architecture of the data transmission phase; hybrid or digital beamforming can still be used for the data plane. In that case, the SINR expression and achievable rates would differ slightly, but the tradeoffs and design insights of this paper (which focus on the performance of the control plane and IA) remain valid.

D. Performance Metrics

Denoting by $p_{\mathrm{BS}}$ the BS transmit power and by $W$ the thermal noise, the signal-to-interference-plus-noise ratio (SINR) when the typical UE receives from BS $x_i$ is given by

$$\mathrm{SINR}_i = \frac{G_i^{\mathrm{BS}} |h_i|^2 \ell(x_i) S_i}{\sum_{x_j \in \Phi \setminus x_i} G_j^{\mathrm{BS}} |h_j|^2 \ell(x_j) S_j + \sigma^2}, \tag{9}$$

where $\sigma^2 = W (p_{\mathrm{BS}} G_{\mathrm{UE}})^{-1}$ is the normalized noise power and $S_i$ is the LOS condition indicator. We say that the typical UE successfully detects a cell if the strongest signal it receives in any mini-slot, from one of the directions, achieves a minimum SINR threshold $T$; that is, the detection success event is $\mathbb{1}\{\max_{x_i \in \Phi} \mathrm{SINR}_i \ge T\}$. The total transmission period, in turn, is the time during which the UE successfully registers with a BS and completes a packet transmission.

IV. DETECTION FAILURE PROBABILITY AND LATENCY ANALYSIS

In this section, we present our analytical results and some insights on the performance of IA using random beamforming. Proofs are provided in the Appendix.

A. Detection Failure Probability

To gain insight into the problem, we start from a simple model in which only mainlobe gains and LOS links are considered, i.e., $\epsilon = 0$ and $\alpha_N = \infty$. These assumptions are widely used in mmWave network analysis owing to their simplicity and the resulting acceptable accuracy [10]. Nonetheless, we discuss the effects of NLOS paths and sidelobes later in this section. In the following proposition, we derive the detection failure probability of initial cell search for the typical user under this simple model.

Proposition 1. The detection failure probability $P_f(N_c)$ of the typical UE when $\epsilon = 0$ and $\alpha_N = \infty$ is given by

$$P_f = (1 - P_s)^{N_c}, \tag{10}$$

where $P_s$ is the successful detection probability in one mini-slot, given by

$$P_s = \frac{2\pi \lambda}{N_{\mathrm{BS}} N_{\mathrm{UE}}} \int_0^\infty e^{-T r^\alpha \sigma^2} \exp\left(-\frac{2\pi \lambda}{N_{\mathrm{BS}} N_{\mathrm{UE}}} \int_0^\infty \frac{T r^\alpha e^{-\beta v}}{v^\alpha + T r^\alpha}\, v\, dv\right) e^{-\beta r}\, r\, dr. \tag{11}$$

It is worth noting that the failure probability does not fall to zero as $N_c \to \infty$: due to blockage, there is always a non-zero probability that all BSs are invisible (i.e., blocked) to the typical UE, in which case initial access cannot succeed when $\alpha_N = \infty$. In all scenarios of practical interest, however, $N_c$ cannot take very large values because of the corresponding overhead and latency. Next, we incorporate NLOS paths into the analysis, i.e.,
$\alpha_L < \alpha_N < \infty$. The NLOS terms now enter both the signal and the interference. We first characterize the Laplace transform of the interference, and then derive the detection failure probability in Proposition 2.

Lemma 1. The Laplace transform of the interference when $\epsilon = 0$ and $\alpha_L \le \alpha_N < \infty$ is given by

$$\mathcal{L}_I^N(s) = \exp\left(-\frac{2\pi \lambda}{N_{\mathrm{BS}} N_{\mathrm{UE}}} \int_0^\infty \left(1 - \frac{e^{-\beta v} v^{\alpha_L}}{v^{\alpha_L} + s k_1} - \frac{(1 - e^{-\beta v})\, v^{\alpha_N}}{v^{\alpha_N} + s k_2}\right) v\, dv\right), \tag{12}$$

where $k_1 = \left(\frac{c}{4\pi f_c}\right)^{\alpha_L}$ and $k_2 = \left(\frac{c}{4\pi f_c}\right)^{\alpha_N}$.

Proposition 2. The detection failure probability $P_f^N(N_c)$ of the typical UE when $\epsilon = 0$ and $\alpha_L \le \alpha_N < \infty$ is

$$P_f^N = (1 - P_s^N)^{N_c}, \tag{13}$$

where $P_s^N$ is the successful detection probability in one mini-slot, given by

$$P_s^N = \frac{2\pi \lambda}{N_{\mathrm{BS}} N_{\mathrm{UE}}} \int_0^\infty (\kappa_L + \kappa_N)\, r\, dr, \tag{14}$$

with

$$\kappa_L = e^{-T C(r)^{-\alpha_L} \sigma^2}\, \mathcal{L}_I\!\left(T C(r)^{-\alpha_L}\right) e^{-\beta r}, \qquad \kappa_N = e^{-T C(r)^{-\alpha_N} \sigma^2}\, \mathcal{L}_I\!\left(T C(r)^{-\alpha_N}\right) \left(1 - e^{-\beta r}\right).$$

Although Proposition 2 appears unwieldy, we can gain insight by decomposing its terms: $\kappa_L$ and $\kappa_N$ correspond to the LOS and NLOS contributions to coverage, respectively. When $r$ is small, the LOS probability is higher, i.e., $e^{-\beta r} > 1 - e^{-\beta r}$; meanwhile, the LOS signal strength also dominates that of NLOS, so overall $\kappa_L \gg \kappa_N$. As $r$ grows, although the NLOS probability approaches 1, $\kappa_N$ remains very small, since the signal strength of a far-away NLOS BS is comparable to, or even smaller than, the noise power. These observations indicate that NLOS signals have limited impact on the performance, as in [30], [34], which is also validated by our numerical results in Section V.

Finally, we consider the detection failure probability with sidelobe gain $0 \le \epsilon \le 1$. When taking the sidelobe gain into account, we cannot treat each mini-slot independently as in Propositions 1 and 2, owing to the correlation introduced by the sidelobes. Instead, we model the BSs pointing to the typical UE with the mainlobe and with the sidelobe as two independent tiers of PPP-distributed BSs, as in [35], and divide the analysis into mainlobe and sidelobe failure; the accuracy of this approximation is validated in Section V.

Lemma 2. The probability of successful detection via the mainlobe in one mini-slot, when $0 \le \epsilon \le 1$ and $\alpha_N = \infty$, is

$$P_{sm} = \frac{\theta_{\mathrm{UE}}\, \theta_{\mathrm{BS}}}{2\pi} \lambda \int_0^\infty \mathcal{L}_{I_{x_i}}^m\!\left(\frac{T r^\alpha}{G_i^{\mathrm{BS}}}\right) e^{-T r^\alpha \sigma^2} e^{-\beta r}\, r\, dr, \tag{15}$$

where $\mathcal{L}_{I_{x_i}}^m(s)$ is the Laplace transform of the interference given in (25).

Lemma 3. The probability of successful detection via the sidelobe over $n < N_{\mathrm{BS}}$ mini-slots, when $0 \le \epsilon \le 1$ and $\alpha_N = \infty$, is

$$P_{ss} = \theta_{\mathrm{UE}}\, \frac{2\pi - \theta_{\mathrm{BS}}}{2\pi} \lambda \int_0^\infty Q_{x_i}^n(T)\, e^{-\beta r}\, r\, dr, \tag{16}$$

where $Q_{x_i}^n(T)$ represents the probability of at least one successful detection during the $n$ mini-slots for a given BS $x_i$, given in (31).

Proposition 3. The detection failure probability $P_f^S(N_c)$ of the typical UE when $0 < \epsilon < 1$ and $\alpha_N = \infty$ can be approximated as

$$P_f^S = (1 - P_{ss})\,(1 - P_{sm})^{N_c}, \tag{17}$$

where $P_{sm}$ and $P_{ss}$ are the successful detection probabilities via the mainlobe within one mini-slot and via the sidelobe over $N_c$ mini-slots, respectively, given in Lemmas 2 and 3.

It is common in the literature to neglect antenna sidelobes, owing to their large gap to the mainlobe and for analytical simplicity [20]. However, sidelobes may play an important role as the BSs and obstacles become denser: in Proposition 3, $P_{ss}$ grows with the BS density, thereby contributing to successful detection. Propositions 1 to 3 give the expressions for the detection failure probability under the three different scenarios.
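For concreteness, (10)-(11) can be evaluated directly by numerical integration. The sketch below is ours; all parameter values (threshold, density, blockage, normalized noise, beam counts) are assumed for illustration only.

```python
import numpy as np
from scipy.integrate import quad

# Numerical evaluation of P_s in Eq. (11); all parameter values are hypothetical.
lam, beta, alpha = 1e-4, 1 / 141.4, 2.5      # density, blockage, LOS exponent
sigma2, T = 1e-9, 10 ** (-0.4)               # normalized noise, threshold (-4 dB)
N_BS, N_UE = 16, 4
A = 2 * np.pi * lam / (N_BS * N_UE)          # common prefactor in (11)

def interference_exponent(r):
    # Inner integral of (11): the exponent of the interference Laplace transform.
    f = lambda v: T * r**alpha * np.exp(-beta * v) / (v**alpha + T * r**alpha) * v
    return quad(f, 0, np.inf, limit=200)[0]

def integrand(r):
    return (np.exp(-T * r**alpha * sigma2)
            * np.exp(-A * interference_exponent(r))
            * np.exp(-beta * r) * r)

P_s = A * quad(integrand, 0, np.inf, limit=200)[0]
P_f = (1 - P_s) ** N_BS                      # Proposition 1 with N_c = N_BS
print(f"P_s per mini-slot = {P_s:.4f}, P_f over one scan cycle = {P_f:.4f}")
```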
We observe that the failure probability depends on the per-mini-slot success probability $P_s$ and on the total time budget $N_c$. From the proofs, $P_s$ is further characterized by the BS density, the beamwidths, and the blockage. Among these parameters, increasing the BS density and the time budget reduces the failure probability by enlarging the search space. The effects of beamwidth and blockage are less straightforward: narrowing the beamwidth enhances the beamforming gain but leaves fewer detectable BSs, while lighter blockage leaves more LOS BSs available yet creates stronger interference. More details are discussed in Section V.

B. Delay Analysis

Next, we derive the expected latency of IA and data transmission. Recall the notations $T_f$, $T_{cs}$, $T_{ra}$, and Definitions 2 and 3. We have the following proposition.

Proposition 4. The expected IA latency and total transmission latency are

$$\mathbb{E}[D_I(N_c)] = \left(\frac{1}{1 - P_f(N_c)} - 1\right) T_f + T_{cs} + T_{ra}, \tag{18}$$

$$\mathbb{E}[D_T(N_c)] = \left(\frac{1}{1 - P_f(N_c)} - 1\right) T_f + \frac{L}{R_T (T_f - T_{cs} - T_{ra})} (T_{cs} + T_{ra}) + \frac{L}{R_T}, \tag{19}$$

where $L$ is the packet size for transmission and $R_T$ is the achievable rate given in Section III-C.

Given these characterizations of the detection failure probability and IA latency, we aim to design the BS beamwidth ($\theta_{\mathrm{BS}}$, or equivalently $N_{\mathrm{BS}}$) to minimize the IA latency $D_I(N_c)$ subject to a failure probability constraint $P_f^{\max} \in [0, 1]$. To formulate the problem, we set the cell-search time budget to $k$ scan cycles for all beamwidths, i.e., $N_c = k N_{\mathrm{BS}}$ mini-slots. The optimal number of sectors for $k \in \mathbb{N}$ scan cycles solves

$$\begin{aligned} \min_{N_{\mathrm{BS}}} \quad & \mathbb{E}[D_I(k N_{\mathrm{BS}})] & \text{(20a)} \\ \text{s.t.} \quad & P_f(k N_{\mathrm{BS}}) \le P_f^{\max}, & \text{(20b)} \\ & N_{\mathrm{BS}} \in \mathbb{N}, & \text{(20c)} \end{aligned}$$

where $P_f^{\max}$ and $k$ are inputs to this optimization problem. In (18), $T_{cs}$ and $T_{ra}$ increase linearly with $N_{\mathrm{BS}}$, while $\left(\frac{1}{1 - P_f(N_c)} - 1\right) T_f$ decreases in $N_{\mathrm{BS}}$ with a diminishing slope. Consequently, the latency first decreases with $N_{\mathrm{BS}}$ and increases beyond some point, so the optimal solution of (20) can always be found by searching over fairly small values of $N_{\mathrm{BS}}$; a brute-force search sketch is given below. Problem (20) can be utilized for system design in mmWave networks: with knowledge of the network deployment (BS density), the BS antenna configuration can be set to meet the requirements of applications with different reliability and/or latency constraints. An example of the solutions to this problem can be observed in the figures of the next section.

V. NUMERICAL RESULTS

In this section, we present numerical results for IA based on random beamforming. The simulation parameters are summarized in Table I. In the figures, "RB", "IS", and "ES" stand for random beamforming, iterative search, and exhaustive search, respectively. We focus on $N_c = N_{\mathrm{BS}}$, i.e., a random cell search lasting one scan cycle; the effect of multiple cycles is presented in our previous work [24]. We compare our scheme with exhaustive search and iterative search in a 3GPP NR framework. For a fair comparison, we set the number of SS blocks in one SS burst to 16, 32, and 64 for random beamforming, iterative search, and exhaustive search, respectively, so that every scheme can complete one scan cycle within one transmission frame. The beamwidth in the first stage of the iterative search is set to $\theta_{\mathrm{BS1}} = 90°$. As mentioned in the Introduction, other cell-search algorithms either work at the link level or are designed for single-cell scenarios, and are therefore not fairly comparable with our system-level multi-cell scenario; comparisons with them are left for future work. Fig. 2 shows the cell-search performance against the BS density $\lambda$.
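The brute-force search mentioned above for design problem (20) can be sketched as follows. The timing constants and the toy per-slot success probability are assumed stand-ins; in practice, the quad-based $P_s$ from the earlier sketch would replace the toy form.

```python
import numpy as np

# Brute-force sketch of (20): minimise E[D_I] over N_BS subject to P_f <= P_max,
# with N_c = k * N_BS. All constants below are hypothetical.
T_f, t_ss, t_ra, k, P_max = 20e-3, 0.25e-3, 0.05e-3, 1, 0.05

def detection_failure(N_BS):
    P_s = 0.5 / np.sqrt(N_BS)          # toy stand-in: per-slot success shrinks
    return (1 - P_s) ** (k * N_BS)     # as beams narrow, cf. Proposition 1

candidates = [((1 / (1 - detection_failure(N)) - 1) * T_f   # wasted frames, Eq. (18)
               + k * N * t_ss + N * t_ra, N)                # T_cs, T_ra grow with N_BS
              for N in range(1, 129) if detection_failure(N) <= P_max]
D_I, N_opt = min(candidates)
print(f"optimal N_BS = {N_opt}, E[D_I] ≈ {D_I * 1e3:.2f} ms")
```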
We observe a good match between our theoretical analysis and the Monte Carlo simulations with the SBP model. Moreover, the difference between the curves using the SBP and ULA models is rather small. The effect of NLOS paths on the failure probability is negligible, as evidenced by the overlap of curves (1) and (2). The sidelobe gain, however, plays an important role and causes a notable decrease in the detection failure probability. From Fig. 2, the detection failure probability decreases with the BS density for all schemes. As the BS density grows, the gap between random beamforming and the other schemes diminishes rapidly; in the dense regime, where the BS density approaches $10^{-3}$ /m², all failure probabilities gradually converge to 0. In the low-density regime, the IA latency of exhaustive search is the shortest even though its cell-search period is the longest, because it consumes the fewest overhead frames before transmission thanks to its low failure probability. However, the IA latencies of the different schemes tend to converge in the dense regime.

Altogether, growing obstacle density first reduces the interference, thereby improving the performance initially; beyond a critical obstacle density, the failure probability starts to increase, as in the former scenario. We also observe that the failure probabilities of all beamwidths converge to a similar value in densely blocked scenarios, such as an office room. Thus, wider beams may even be utilized in mmWave, since the effective transmission distance is very short.

Finally, we present the total data transmission delay in Fig. 7. When the BS density is not large enough ($\lambda = 10^{-4}$), the other schemes always outperform random beamforming due to its high failure probability and low achievable rate. In the dense regime ($\lambda = 10^{-3}$), however, as more BSs become available, IA under random beamforming achieves a lower total delay for relatively short packet sizes. The most favorable scenario for IA under random beamforming is thus the transmission of short packets in dense networks.

VI. CONCLUSION AND FUTURE WORK

In this paper, we investigated the performance of random beamforming in the initial cell search of mmWave networks. We developed an analytical framework leveraging stochastic geometry to evaluate the detection failure and latency performance, and we compared our method with two more sophisticated schemes in both the control and data planes.

APPENDIX

A. Proposition 1

Given the antenna directivity, within one mini-slot we can consider the BSs pointing to the typical UE as a thinned process $\Phi(\lambda')$ of the original process $\Phi(\lambda)$, with effective density $\lambda' = \frac{\theta_{\mathrm{BS}}}{2\pi} \lambda$. Furthermore, since we consider BS beamwidths smaller than $\frac{\pi}{2}$, the BS and UE beams are aligned during only one mini-slot per scan cycle. The process $\Phi(\lambda')$ can therefore be divided into $N_{\mathrm{BS}}$ independent PPPs $\Phi_m(\lambda')$, $m = 1, \ldots, N_{\mathrm{BS}}$, where $\Phi_m(\lambda')$ consists of the BSs pointing to the typical UE with the mainlobe in the $m$-th mini-slot of the scan cycle. We can thus treat each mini-slot independently and write the detection failure probability after $N_c$ mini-slots as $(1 - P_s)^{N_c}$, where $P_s$ denotes the detection success probability of one mini-slot. Let BS $i$ denote the BS located at $x_i$, $r = \|x_i\|$, and $I_{x_i} \triangleq \sum_{x_j \in \Phi \setminus x_i} h_j\, \ell(\|x_j\|)\, S_j$.
Then, the successful detection probability in one mini-slot under strongest-BS association can be derived as follows:

$$\begin{aligned} P_s &= \Pr\left(\max_{x_i \in \Phi} \mathrm{SINR}_{x_i} \ge T\right) = \Pr\left(\bigcup_{x_i \in \Phi} \mathrm{SINR}_i \ge T\right) \\ &\overset{(a)}{=} \mathbb{E}\left[\sum_{x_i \in \Phi} \mathbb{1}(\mathrm{SINR}_i \ge T)\right] \overset{(b)}{=} \frac{\theta_{\mathrm{BS}}}{2\pi} \lambda \int_{\mathbb{R}^2} \Pr(\mathrm{SINR}_i \ge T \mid r)\, dx \\ &= \theta_{\mathrm{UE}}\, \frac{\theta_{\mathrm{BS}}}{2\pi} \lambda \int_0^\infty \Pr(\mathrm{SINR}_i \ge T \mid S_i = 1, r)\, \Pr(S_i = 1 \mid r)\, r\, dr \\ &\overset{(c)}{=} \frac{2\pi \lambda}{N_{\mathrm{BS}} N_{\mathrm{UE}}} \int_0^\infty \mathcal{L}_{I_{x_i}}(T r^\alpha)\, e^{-T r^\alpha \sigma^2} e^{-\beta r}\, r\, dr, \end{aligned} \tag{21}$$

where (a) follows from Lemma 1 in [35], (b) follows from the Campbell-Mecke theorem [36], and (c) follows from the Rayleigh fading assumption; the factors $\theta_{\mathrm{UE}}$ and $\theta_{\mathrm{BS}} \lambda / 2\pi$ are due to the BS and UE beamwidths. Here, $\mathcal{L}_{I_{x_i}}(T r^\alpha)$ is the Laplace transform of the interference $I_{x_i}$. Letting $R_j$ denote the distance from the $j$-th interfering BS to the typical UE, it can be expressed as:

$$\begin{aligned} \mathcal{L}_{I_{x_i}}(T r^\alpha) &= \mathbb{E}_{\Phi, h}\left[\exp\left(-T r^\alpha \sum_{x_j \in \Phi \setminus x_i} \ell(R_j)\, h_j\, S_j\right)\right] \\ &\overset{(a)}{=} \mathbb{E}\left[\prod_{x_j \in \Phi \setminus x_i} \left( \mathbb{E}_{h_j}\left[\exp(-T r^\alpha R_j^{-\alpha} h_j)\right] e^{-\beta R_j} + 1 - e^{-\beta R_j} \right)\right] \\ &\overset{(b)}{=} \mathbb{E}\left[\prod_{x_j \in \Phi \setminus x_i} \left(1 - \frac{T r^\alpha e^{-\beta R_j}}{R_j^\alpha + T r^\alpha}\right)\right] \\ &\overset{(c)}{=} \exp\left(-\theta_{\mathrm{UE}}\, \frac{\theta_{\mathrm{BS}}}{2\pi} \lambda \int_0^\infty \frac{T r^\alpha e^{-\beta v}}{v^\alpha + T r^\alpha}\, v\, dv\right), \end{aligned} \tag{22}$$

where (a) follows since $S_j$ is a Bernoulli random variable with parameter $e^{-\beta R_j}$, (b) follows since $h_j$ is an exponential random variable, and (c) is derived from the probability generating functional (PGFL) of the PPP. Substituting (22) into (21) yields the successful detection probability in one mini-slot.

B. Lemma 1

When $\alpha_N < \infty$, the interference comprises a LOS part and an NLOS part. As in (22), let $R_j$ denote the distance from the $j$-th interfering BS to the typical UE; then $\mathcal{L}_I^N(s)$ can be expressed as:

$$\begin{aligned} \mathcal{L}_I^N(s) &= \mathbb{E}_{I_{x_i}}\left[e^{-s I_{x_i}}\right] = \mathbb{E}_{\Phi, h}\left[\exp\left(-s \sum_{x_j \in \Phi \setminus x_i} \ell(R_j)\, h_j\, S_j\right)\right] \\ &\overset{(a)}{=} \mathbb{E}\left[\prod_{x_j \in \Phi \setminus x_i} \left( \mathbb{E}_{h_j}\left[e^{-s (k R_j)^{-\alpha_L} h_j}\right] e^{-\beta R_j} + \mathbb{E}_{h_j}\left[e^{-s (k R_j)^{-\alpha_N} h_j}\right] (1 - e^{-\beta R_j}) \right)\right] \\ &\overset{(b)}{=} \mathbb{E}\left[\prod_{x_j \in \Phi \setminus x_i} \left( \frac{R_j^{\alpha_L} e^{-\beta R_j}}{R_j^{\alpha_L} + s k_1} + \frac{R_j^{\alpha_N} (1 - e^{-\beta R_j})}{R_j^{\alpha_N} + s k_2} \right)\right] \\ &\overset{(c)}{=} \exp\left(-\frac{2\pi \lambda}{N_{\mathrm{BS}} N_{\mathrm{UE}}} \int_0^\infty \left(1 - \frac{e^{-\beta v} v^{\alpha_L}}{v^{\alpha_L} + s k_1} - \frac{(1 - e^{-\beta v})\, v^{\alpha_N}}{v^{\alpha_N} + s k_2}\right) v\, dv\right), \end{aligned} \tag{23}$$

where (a) follows from the LOS/NLOS conditions with probability $e^{-\beta R_j}$, (b) follows since $h_j$ is an exponential random variable, and (c) is derived from the PGFL of the PPP.

C. Proposition 2

Similar to Proposition 1, the successful detection probability in one mini-slot under strongest-BS association can be derived as follows:

$$\begin{aligned} P_s^N &= \frac{\theta_{\mathrm{BS}}}{2\pi} \lambda \int_{\mathbb{R}^2} \Pr(\mathrm{SINR}_i \ge T \mid r)\, dx \\ &= \theta_{\mathrm{UE}}\, \frac{\theta_{\mathrm{BS}}}{2\pi} \lambda \int_0^\infty \big( \Pr(\mathrm{SINR}_i \ge T \mid S_i = 1, r)\, \Pr(S_i = 1 \mid r) + \Pr(\mathrm{SINR}_i \ge T \mid S_i = 0, r)\, \Pr(S_i = 0 \mid r) \big)\, r\, dr \\ &= \frac{2\pi \lambda}{N_{\mathrm{BS}} N_{\mathrm{UE}}} \int_0^\infty \left( e^{-T C(r)^{-\alpha_L} \sigma^2}\, \mathcal{L}_{I_r}(T C(r)^{-\alpha_L})\, e^{-\beta r} + e^{-T C(r)^{-\alpha_N} \sigma^2}\, \mathcal{L}_{I_r}(T C(r)^{-\alpha_N})\, (1 - e^{-\beta r}) \right) r\, dr, \end{aligned} \tag{24}$$

where the steps follow from the Campbell-Mecke theorem and the Rayleigh fading assumption, as in Proposition 1.

D. Lemma 2

We start by deriving the Laplace transform of the interference, which comprises a mainlobe tier and a sidelobe tier:

$$\begin{aligned} \mathcal{L}_{I_{x_i}}^m(s) &= \prod_{j=1}^{2} \mathbb{E}_\Phi\left[\prod_{x_j \in \Phi_j \setminus x_i} \mathbb{E}_{h_j}\left[\exp(-s G_j^{\mathrm{BS}} R_j^{-\alpha} h_j)\right]\right] \\ &\overset{(a)}{=} \prod_{j=1}^{2} \mathbb{E}\left[\prod_{x_j \in \Phi \setminus x_i} \left( \mathbb{E}_{h_j}\left[\exp(-s G_j^{\mathrm{BS}} R_j^{-\alpha} h_j)\right] e^{-\beta R_j} + 1 - e^{-\beta R_j} \right)\right] \\ &\overset{(b)}{=} \prod_{j=1}^{2} \mathbb{E}\left[\prod_{x_j \in \Phi \setminus x_i} \left(1 - \frac{s G_j^{\mathrm{BS}} R_j^{-\alpha} e^{-\beta R_j}}{1 + s G_j^{\mathrm{BS}} R_j^{-\alpha}}\right)\right] \\ &= \prod_{j=1}^{2} \exp\left(-\theta_{\mathrm{UE}}\, \lambda_j \int_0^\infty \frac{s G_j^{\mathrm{BS}} e^{-\beta v}}{v^{\alpha} + s G_j^{\mathrm{BS}}}\, v\, dv\right), \end{aligned} \tag{25}$$

where (a) follows from the blockage model with parameter $e^{-\beta R_j}$, (b) follows since $h_j$ is an exponential random variable, and $\lambda_j \in \left\{\frac{\theta_{\mathrm{BS}}}{2\pi} \lambda,\ \frac{2\pi - \theta_{\mathrm{BS}}}{2\pi} \lambda\right\}$.
Setting $s = \frac{T r^\alpha}{G_i^{\mathrm{BS}}}$, we have

$$\mathcal{L}_{I_{x_i}}^m\!\left(\frac{T r^\alpha}{G_i^{\mathrm{BS}}}\right) = \prod_{j=1}^{2} \exp\left(-\theta_{\mathrm{UE}}\, \lambda_j \int_0^\infty \frac{T r^\alpha G_j^{\mathrm{BS}} e^{-\beta v}}{v^{\alpha} G_i^{\mathrm{BS}} + T r^\alpha G_j^{\mathrm{BS}}}\, v\, dv\right). \tag{26}$$

Then, as in Proposition 1, the successful detection probability in one mini-slot under strongest-BS association can be derived as follows:

$$\begin{aligned} P_{sm} &= \Pr\left(\bigcup_{x_i \in \Phi_m} \mathrm{SINR}_{x_i} \ge T\right) = \lambda_i \int_{\mathbb{R}^2} \Pr(\mathrm{SINR}_{x_i} \ge T \mid r)\, dx \\ &= \theta_{\mathrm{UE}}\, \lambda_i \int_0^\infty \Pr\left(\frac{G_i^{\mathrm{BS}} h_i\, \ell(r)}{I_{x_i} + \sigma^2} \ge T \,\middle|\, S_i = 1, r\right) \Pr(S_i = 1 \mid r)\, r\, dr \\ &= \theta_{\mathrm{UE}}\, \frac{\theta_{\mathrm{BS}}}{2\pi} \lambda \int_0^\infty \mathcal{L}_{I_{x_i}}^m\!\left(\frac{T r^\alpha}{G_i^{\mathrm{BS}}}\right) e^{-T r^\alpha \sigma^2} e^{-\beta r}\, r\, dr. \end{aligned} \tag{27}$$

E. Lemma 3

Define $Z_k$ as the event that the SINR via the sidelobe exceeds $T$ in mini-slot $k$, given distance $r = \|x_i\|$: $Z_k \triangleq \{\mathrm{SINR}_{x_i}^k > T\}$. Unlike service via the mainlobe, each BS has $N_c - 1$ mini-slots in which it sends pilots to the UE via the sidelobe. Therefore, for a given serving BS, both the desired signal and part of the interference (the sidelobe part) originate from the same locations during the $N_c - 1$ mini-slots, which makes the events $Z_k$ and $Z_j$ dependent. We can view these $N_c - 1$ mini-slots as a single-input multiple-output (SIMO) system, with each mini-slot acting as one receive antenna. To handle the correlated interference, we decompose the interference during the $N_c - 1$ mini-slots into two parts: correlated sidelobe interference with gain $\epsilon$, and independent mainlobe interference with gain $G_m^* = G_m - \epsilon$, where $G_m$ denotes the mainlobe gain of the antenna.

We focus first on the probability $P_n(T, r)$ of the joint occurrence of the $Z_k$ over $n$ mini-slots. Letting $\delta(r) = T r^\alpha$ and $r = \|x_i\|$, we have:

$$\begin{aligned} P_n(T, r) &= \Pr\left(\bigcap_{k=1}^{n} Z_k\right) = \Pr\left(\mathrm{SINR}_{x_i}^1 > T, \ldots, \mathrm{SINR}_{x_i}^n > T\right) \\ &= \Pr\left(h_1 > \delta(r)(I_1 + \sigma^2), \ldots, h_n > \delta(r)(I_n + \sigma^2)\right) \\ &= \mathbb{E}\left[ e^{-\delta(r)(I_1 + \sigma^2)} \cdots e^{-\delta(r)(I_n + \sigma^2)} \right] \overset{(a)}{=} \mathbb{E}\left[ \prod_{k=1}^{n} e^{-\delta(r)(I_{ks} + I_{km} + \sigma^2)} \right], \end{aligned} \tag{28}$$

where (a) follows from the decomposition of the interference into the sidelobe part $I_{ks}$ and the mainlobe part $I_{km}$. The interference caused by the sidelobes is correlated across mini-slots through the common randomness $\Phi$, while the mainlobe interference is independent across mini-slots. For the mainlobe part we thus obtain

$$\prod_{k=1}^{n} \mathbb{E}\left[ \prod_{x_j} \left( 1 - \frac{\delta(r) G_m^* R_j^{-\alpha} e^{-\beta R_j}}{1 + \delta(r) G_m^* R_j^{-\alpha}} \right) \right] = \prod_{k=1}^{n} \exp\left( -\theta_{\mathrm{UE}}\, \frac{\theta_{\mathrm{BS}}}{2\pi} \lambda \int_0^\infty \frac{T G_m^* r^\alpha e^{-\beta t}}{t^\alpha + T G_m^* r^\alpha}\, t\, dt \right) \overset{(b)}{=} \exp\left( -n\, \theta_{\mathrm{UE}}\, \frac{\theta_{\mathrm{BS}}}{2\pi} \lambda \int_0^\infty \frac{T G_m^* r^\alpha e^{-\beta t}}{t^\alpha + T G_m^* r^\alpha}\, t\, dt \right), \tag{30}$$

where (b) follows from the independence of the mainlobe interference among mini-slots.

In the next step, we need to derive the probability $Q_{x_i}^n(T)$ that the SINR exceeds the threshold in at least one of the mini-slots. This can be viewed as selection combining in a SIMO system, where a successful transmission occurs if $\max_{k \in [n]} \mathrm{SINR}_k > T$; this yields the expression for $Q_{x_i}^n(T)$ in (31).

Finally, the probability $P_{ss}$ of successful detection via the sidelobe over $n$ mini-slots is the selection combining over $\Phi_s$. Defining the event $V_{x_i} \triangleq \{Q_{x_i}^n(T)\}$, $P_{ss}$ can be expressed as:

$$P_{ss} = \Pr\left(\bigcup_{x_i \in \Phi_s} V_{x_i}\right) = \frac{2\pi - \theta_{\mathrm{BS}}}{2\pi} \lambda \int_{\mathbb{R}^2} Q_{x_i}^n(T)\, dx_i = \theta_{\mathrm{UE}}\, \frac{2\pi - \theta_{\mathrm{BS}}}{2\pi} \lambda \int_0^\infty Q_{x_i}^n(T)\, e^{-\beta r}\, r\, dr. \tag{32}$$

F. Proposition 3

Since we assume the mainlobe and sidelobe detections are independent, their failures are independent as well. Thus, the detection failure probability can be written as:

$$P_f^S = (1 - P_{ss})(1 - P_{sm})^{N_c}. \tag{33}$$

G. Proposition 4

We assume that the UE-BS geometry is independent across frames, for two reasons: first, in mobility scenarios the UEs or obstacles move between frames, as in [19]; second, the UE may not search all potential directions in one frame and turns to another direction in the next frame, so even a static UE can be considered to point to a different BS in each frame.
Therefore, we define $M \in \mathbb{N}^+$ as the number of frames the typical UE needs to detect a BS; $M$ follows a geometric distribution with parameter $1 - P_f$. The IA latency is then the sum of the failed frames and one IA period:

$$\mathbb{E}[D_I(N_c)] = \left(\frac{1}{1 - P_f(N_c)} - 1\right) T_f + T_{cs} + T_{ra}. \tag{34}$$

Finally, the total transmission latency comprises three parts: the failed frame durations, the IA period, and the data transmission period, which yields (19).
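A quick Monte Carlo check of (34) under assumed timing values confirms the geometric-distribution argument:

```python
import numpy as np

# M ~ Geometric(1 - P_f): number of frames until the first successful detection.
# IA delay = (M - 1) failed frames + one cell-search + random-access period.
rng = np.random.default_rng(2)
P_f, T_f, T_cs, T_ra = 0.3, 20e-3, 2e-3, 1e-3      # hypothetical values
M = rng.geometric(1 - P_f, size=100_000)
print(np.mean((M - 1) * T_f + T_cs + T_ra))        # Monte Carlo estimate
print((1 / (1 - P_f) - 1) * T_f + T_cs + T_ra)     # closed form, Eq. (34)
```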
Online communities provide a unique way for individuals to access information from those in similar circumstances, which can be critical for health conditions that require daily and personalized management. As these groups and topics often arise organically, identifying the types of topics discussed is necessary to understand their needs. As well, these communities and the people in them can be quite diverse, and existing community detection methods have not been extended towards evaluating these heterogeneities. Progress has been limited because community detection methodologies have not exploited the semantic relations between textual features of the user-generated content. Thus, here we develop an approach, NeuroCom, that optimally finds dense groups of users as communities in a latent space inferred by neural representation of users' published content. By embedding words and messages, we show that NeuroCom demonstrates improved clustering and identifies more nuanced discussion topics in contrast to other common unsupervised learning approaches.
Methods for identifying relevant groups are an active area of research; a great deal of this work targets graph-based data such as social or information networks. Community detection is then based on choosing an objective function that captures the intuition of a community as a set of nodes with better internal connectivity than external connectivity @cite_9 . Briefly summarizing this rich area, there is work across spectral algorithms @cite_16 , measures of centrality @cite_0 , and matrix factorization @cite_11 .
{ "abstract": [ "", "Detecting clusters or communities in large real-world graphs such as large social or information networks is a problem of considerable interest. In practice, one typically chooses an objective function that captures the intuition of a network cluster as set of nodes with better internal connectivity than external connectivity, and then one applies approximation algorithms or heuristics to extract sets of nodes that are related to the objective function and that \"look like\" good communities for the application of interest. In this paper, we explore a range of network community detection methods in order to compare them and to understand their relative performance and the systematic biases in the clusters they identify. We evaluate several common objective functions that are used to formalize the notion of a network community, and we examine several different classes of approximation algorithms that aim to optimize such objective functions. In addition, rather than simply fixing an objective and asking for an approximation to the best cluster of any size, we consider a size-resolved version of the optimization problem. Considering community quality as a function of its size provides a much finer lens with which to examine community detection algorithms, since objective functions and approximation algorithms often have non-obvious size-dependent behavior.", "We propose a new measure for assessing the quality of a clustering. A simple heuristic is shown to give worst-case guarantees under the new measure. Then we present two results regarding the quality of the clustering found by a popular spectral algorithm. One proffers worst case guarantees whilst the other shows that if there exists a \"good\" clustering then the spectral algorithm will find one close to it.", "Recently community detection has attracted much interest in social media to understand the collective behaviours of users and allow individuals to be modeled in the context of the group. Most existing approaches for community detection exploit either users' social links or their published content, aiming at discovering groups of densely connected or highly similar users. They often fail to find effective communities due to excessive noise in content, sparsity in links, and heterogenous behaviours of users in social media. Further, they are unable to provide insights and rationales behind the formation of the group and the collective behaviours of the users. To tackle these challenges, we propose to discover communities in a low- dimensional latent space in which we simultaneously learn the representation of users and communities. In particular, we integrated different social views of the network into a low-dimensional latent space in which we sought dense clusters of users as communities. By imposing a Laplacian regularizer into affiliation matrix, we further incorporated prior knowledge into the community discovery process. Finally community profiles were computed by a linear operator integrating the profiles of members. Taking the wellness domain as an example, we conducted experiments on a large scale real world dataset of users tweeting about diabetes and its related concepts, which demonstrate the effectiveness of our approach in discovering and profiling user communities." ], "cite_N": [ "@cite_0", "@cite_9", "@cite_16", "@cite_11" ], "mid": [ "", "2951938759", "2130891992", "2583348919" ] }
From the User to the Medium: Neural Profiling Across Web Communities
Online communities are places where individuals have found support and venues to exchange customized, disease-specific information (Frost and Massagli 2008). For non-communicable diseases such as diabetes, social platforms have become very relevant places where individuals connect to learn about their condition outside of clinical settings. This is important as diabetes manifests in an evolving and heterogeneous manner, shifting in concert with population-wide alterations in behavioral and lifestyle factors and disease management strategies (Weitzman et al. 2011). Decades of studies have shown that the risk of diabetes can differ by ethnic or gender group, and that the efficacy of interventions can vary by population subgroup (Sarkar, Fisher, and Schillinger 2006). Thus, individuals find the personal experiences of others managing the same conditions useful for learning about new efficacious interventions or other day-to-day strategies. As the data in social media and web groups are generated by individuals in unstructured formats and venues, and are constantly updated and changing, there is a need for methods to distill and extract the types of topics and groups being discussed. Thus, in this paper we harness the increasing use of neural representations and statistical natural language processing to demonstrate an approach for embedding the content of users' posts and discovering communities in the same space, based on the content of online posts. Specifically, we adopt neural text representation to model the semantic links among words in a lower-dimensional space, and then perform community detection and profiling in that space, which maintains a level of topicality in the discovered user communities, messages, and profiles. We make the following contributions:

• We demonstrate a neural approach (NeuroCom) for learning a low-dimensional latent space from the embedding of users' posts, which enables community detection.

• We combine the neural framework with inference and qualitative methods to demonstrate how it can be used to learn and compare the substantive topics within and between diabetes communities from several large-scale real-world datasets.

Methodology

In this paper, we aim to learn the representation of user posts from web and social platforms using an effective neural model. Our framework, in contrast to conventional models that only embed users in a latent space, learns the representation of users, messages, and communities in the same latent space, as shown in Figure 1. This helps in the identification of communities that may be under-represented, and thus missed, in a single user's messages.

Distributed Representation of Social Messages

Our model is derived from a neural model, continuous BoW (C-BoW) (Mikolov et al. 2013), with a few innovations to learn the embedding of social messages. The C-BoW model is a simplified neural model without any non-linear hidden layers, which learns a distributed representation for each word $w_t$ while capturing the semantic similarity of words. More specifically, given a large training corpus represented as a sequence of words $w_1, \ldots, w_N$, the objective of the embedding is to maximize the log-likelihood $\sum_{t=1}^{N} \sum_{c \in C_t} \log p(w_t \mid w_c)$, where $C_t$, the context, is the set of words surrounding the target word $w_t$.
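One concrete way to obtain such word embeddings is sketched below. The paper trains its own C-BoW variant with n-grams and negative sampling; gensim's Word2Vec with sg=0 is the standard C-BoW and is used here only as a stand-in, on an assumed toy corpus.

```python
from gensim.models import Word2Vec

# Toy corpus of tokenised posts (assumed); in the paper these are user messages.
posts = [["insulin", "pump", "basal", "rate"],
         ["metformin", "dose", "after", "dinner"],
         ["cgm", "sensor", "reading", "high"]] * 200

# sg=0 selects C-BoW; negative=5 enables negative sampling as in the paper.
model = Word2Vec(posts, vector_size=50, window=5, sg=0, negative=5, min_count=1)
word_vecs = {w: model.wv[w] for w in model.wv.index_to_key}
```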
We extend this model to learn the embedding of social messages from the embeddings of their compositional components. Formally, we base our model on the assumption that the message embedding is the average of the embeddings of its n-gram constituent elements, so the embedding of a social message $m$ can be stated as:

$$v_m = \frac{1}{|E(m)|}\, V I_{E(m)} = \frac{1}{|E(m)|} \sum_{w \in E(m)} v_w, \tag{1}$$

where $V \in \mathbb{R}^{N \times v}$ is a matrix collecting the learned word embeddings, $E(m)$ denotes the list of n-grams in message $m$, and $I_{E(m)} \in \{0, 1\}^N$ is an indicator vector of the compositional elements of $m$. We adopt the negative sampling framework for learning this model.

Discovering Topical Communities of Messages

To discover the topical focuses of users, we leverage density-based clustering of social messages in the latent space, as it does not require a predefined number of clusters and can form clusters of arbitrary shapes (Ertöz, Steinbach, and Kumar 2003). More specifically, we use DBSCAN, a popular density-based clustering algorithm.

Topical Profiling

To profile topics, we compute the embedding of each community by averaging the embeddings of its member messages, and the affiliation of each user to a community from the user's contribution to the community's messages. More specifically, the clusters discovered in the previous section are treated as topical communities of users, and the user affiliation matrix is defined as

$$H_{i,j} = \frac{U_{i,j}}{\sum_{j=1}^{K} U_{i,j}},$$

where $H_{i,j}$ denotes the membership affiliation of the $i$-th user to the $j$-th community and $U_{i,j}$ the number of messages belonging to the $i$-th user in the $j$-th community.

Demographic Profiling

In order to better understand these communities and the users who participate in them, we infer basic demographics of participants in both the social media and online forum groups. The epidemiological literature indicates that any endeavor is incomplete without an understanding of the target population (Chunara, Wisk, and Weitzman 2017), and this becomes more pertinent when working with observational and Internet-based datasets such as those used here. To accomplish this, we use inference methods that have been used in other social media studies (Huang et al. 2018). For age, we follow the approach in (Sloan et al. 2015; Huang et al. 2018) and classify users as under 30, or 30 and older, a split that has been used in diabetes research to qualify "young age at onset". For gender, look-up tables based on user handles have been used (Mullen 2015); however, since more than half of the names could not be linked to a gender and we did not want to introduce uncertainty, we did not assess gender composition.

Experiments

Datasets

For evaluation, we chose three datasets that we anticipated would differ somewhat in their content and included users. The statistics of our datasets are shown in Table 1.

Diabetes Support Group: This dataset is collected from posts of users who follow and participate in diabetes support groups such as "diabeteslife" or "diabetesconnect" on Twitter. To construct the dataset, we first gathered a set of users who followed these diabetes support groups on Twitter and then crawled their Twitter timelines. We selected 5 different support groups to avoid bias from any specific support group.¹

BGnow Dataset: Another dataset derived from diabetic users who actively share their wellness data on Twitter.
These users not only post about their lifestyle and activities, such as their diet, but also share health information in the form of medical events and measurements, such as their blood glucose values (Akbari et al. 2016). Users in this dataset are predominantly type 1 diabetic, and they used the "#Bgnow" hashtag to report their blood glucose values on Twitter.

TuDiabetes Forum: We also collected a dataset from the TuDiabetes forum, a popular diabetes community operated by the Diabetes Hands Foundation. It provides a rich community experience for people with an interest in diabetes, including a social network, personal pages, and blogs. The TuDiabetes forums are very popular and have been active for years, thus encompassing many topics.

¹The following support groups were selected as seed accounts on Twitter: "diabeteslife", "diabetesconnect", "American Diabetes Association", "DiabetesHealth", and "Diabetes Hands Foundation".

Baselines and Metrics

We compare the proposed method with the following baselines: KMeans (a widely used clustering method in social networks (G.-J. Qi and Huang 2012), with Tf-IDF representations for users and the cosine measure for similarity computation), KMeans-Lat (the same approach, but with clustering performed in the latent space derived by Eq. (1)), Biterm-LDA (Latent Dirichlet Allocation (LDA) topic modeling tuned for short messages), and RBM (a deep model for community detection (Abdelbary, ElKorany, and Bahgat 2014)). We utilize the silhouette (sil) and normalized mutual information (NMI) metrics for benchmarking. Table 2 shows the clustering results of the different methods in terms of sil and NMI. We followed previous research to tune the parameters of all baseline methods. For Biterm-LDA, as proposed by (Yan et al. 2013), the parameters α and β were fixed to 50/K and 0.01, respectively, where K is the number of clusters/topics, computed using grid search. For the RBM model, we set the number of hidden units to 250, examined 25, 50, 75, and 100 community detection units, and report the best results, as suggested in (Abdelbary, ElKorany, and Bahgat 2014).

On Quantitative Comparisons of the Model

From the table, the following points can be observed: (1) KMeans and KMeans-Lat achieve the lowest performance in terms of both quality and consensus metrics, mainly because KMeans with Tf-IDF features fails to capture similarities between words/terms in posts. (2) RBM and Biterm-LDA outperform KMeans-Lat. (3) NeuroCom achieves the highest performance in terms of the sil and NMI metrics, which shows that our model can detect communities with focused topics. This is attributed to the fact that the communities are detected in the same space in which the messages are embedded, which preserves semantic similarities between messages.

On Qualitative Comparison of the Model

It is also instructive to examine the resulting community profiles to better understand the output of the NeuroCom model and to compare it with existing methods. First, in terms of the number of resulting communities, NeuroCom extracts more communities, 36, compared to 22, 19, 28, and 25 for KMeans, KMeans-Lat, Biterm-LDA, and RBM, respectively. Next, we examined the topics of the resulting communities for each method.
Qualitatively, we find that the topics extracted by Biterm-LDA concern general themes around diabetes, whereas the communities extracted by NeuroCom are much more focused, each dealing with a specific aspect of the disease. For example, Biterm-LDA fails to find small communities such as the one related to the drug "Afrezza", while NeuroCom identifies specific communities such as "Afrezza", "Metformin", and "Insulin". These differences are attributed to the fact that traditional approaches work on word co-occurrences, and for such small communities they fail to find significant co-occurrence and semantic information. We highlight that, as expected, many users are part of multiple communities. In BGnow, 12.6% of users were in one community, 41.2% in 2, and 22.7% in 3 (the rest were in more). For TuDiabetes, 11.6% were in 1, 8.4% in 2, and 26.5% in 3. Finally, in the support groups, 28.3% were in 1, 27.2% in 2, and 21.3% in 3.

Medium Comparisons

To compare across mediums, we examine the inferred demographic profiles of the mediums as well as the topics of the distilled communities. The Twitter groups were much more skewed, with statistically significantly more users inferred to be in the younger age category than in the online forum (p < 0.05). We can also compare the topics of the communities from TuDiabetes with the existing forum categories. All group discussions on TuDiabetes are categorized into the following 14 topics: Community, Type 1 and LADA, TuDiabetes Website, Gestational Diabetes, Weight, Type 2, Diabetes Advocacy, Diabetes Complications and Other Conditions, Mental and Emotional Wellness, Healthy Living, Diabetes Technology, Food, Treatment, and Diabetes and Pregnancy. While interpretation of the topics of the communities identified by NeuroCom is qualitative, we report the correspondence we found in Table 3. Given that our method is more organic, and that the number of communities identified by NeuroCom, i.e., 36, is larger, some topics overlap with or augment the website categories.

Community Comparisons Within Mediums

Assuming that community topics are homogeneous across users may be erroneous (e.g., one topic may be more common among, say, women than men). Further, if community detection is used for recommendation purposes, knowing how various communities are patronized by different types of users can improve recommendation performance. Thus, we attempt to profile the resulting communities within each medium. From a demographic perspective, we found that specific types of communities in TuDiabetes (topics related to insulin management, with keywords such as eat, cgm, pancreas, and doctors) had a statistically significantly higher proportion of users in the older category than the mean proportion (p < 0.05). As well, of the user profile names that could be linked to gender, more were linked to male (though we caution against over-interpreting this result, given the large number of user profiles that could not be linked to female or male names). We can also describe some qualitative findings from the Twitter datasets. For example, in the support groups, we found that the cluster on current events, with keywords such as blackhawks, Hillary Clinton, and primary, also had a statistically significantly higher proportion of users in the older category than the mean proportion (p < 0.05).

Conclusion

NeuroCom outperformed existing unsupervised methods based on common cluster evaluation metrics.
Community detection results showed that the topics of the distilled communities are interpretable and follow intuition regarding the span of discussion in the Support Group dataset versus the BGnow dataset and the TuDiabetes forum. Through inferred age categories, we also showed that the online forum had a statistically significantly higher proportion of people in the < 30 age category. As well, although demographic inference has limitations, there were significantly different proportions of people across age categories in different communities. While these demographic results are mainly qualitative, we found results that match intuition and that can be used in the future to improve recommendation approaches or to identify the concerns of diabetes patients in a more precise and personalized manner. Finally, we compared the identified topics to forum categories where available (TuDiabetes), and found that the communities identified by NeuroCom overlap with and transcend these existing forum categories.
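For reference, the full pipeline of the Methodology section, i.e. Eq. (1) message embedding, DBSCAN clustering, and the affiliation matrix H, can be sketched end to end as follows. This is our illustration: the random word vectors stand in for trained C-BoW embeddings (e.g., from the gensim sketch earlier), and the messages and DBSCAN parameters are assumed toy inputs.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

# Toy inputs (assumed): random vectors stand in for trained C-BoW embeddings.
rng = np.random.default_rng(0)
vocab = ["insulin", "pump", "cgm", "diet", "basal", "metformin"]
word_vecs = {w: rng.normal(size=50) for w in vocab}
messages = [(int(u), rng.choice(vocab, size=4).tolist())   # (user id, tokens)
            for u in rng.integers(0, 20, size=300)]

# Eq. (1): a message embedding is the mean of its constituents' vectors.
X = np.stack([np.mean([word_vecs[w] for w in toks], axis=0) for _, toks in messages])
labels = DBSCAN(eps=0.3, min_samples=5, metric="cosine").fit_predict(X)  # eps assumed

# Affiliation matrix H: row-normalised per-user message counts per community.
users = sorted({u for u, _ in messages})
K = max(labels.max() + 1, 1)
U = np.zeros((len(users), K))
for (u, _), c in zip(messages, labels):
    if c >= 0:                                   # label -1 is DBSCAN noise
        U[users.index(u), c] += 1
H = U / np.maximum(U.sum(axis=1, keepdims=True), 1)

if labels.max() >= 1:                            # silhouette needs >= 2 clusters
    m = labels >= 0
    print("sil:", silhouette_score(X[m], labels[m], metric="cosine"))
```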
Neural representation has been implemented for social media short messages, albeit in ways different from what we propose @cite_1 . Gaussian Restricted Boltzmann Machines (RBMs) have been used to model users' posts within a social network, identify their topics of interest, and finally construct communities @cite_1 . However, that parametric approach requires specifying the number of clusters/communities in advance. Beyond not requiring such initialization, our approach is conceptually different in that prior work directly maps individuals to communities, whereas we map the content of their posts, which may better capture heterogeneous community memberships. Further, content-based density approaches, as proposed here, could learn a more organic number of communities than parametric ones. Given this gap, and the fact that content-based community detection (as opposed to graph-based) may be more pertinent in health-related communities, we explore content-based clustering of health communities.
{ "abstract": [ "Online social networks have been wildly spread in recent years. They enable users to identify other users with common interests, exchange their opinions, and expertise. Discovering user communities from social networks have become one of the major challenges which help its members to interact with relevant people who have similar interests. Community detection approaches fall into two categories: the first one considers user' networks while the other utilizes usergenerated content. In this paper, a multi-layer community detection model based on identifying topics of interest from user published content is presented. This model applies Gaussian Restricted Boltzmann Machine for modeling user's posts within a social network which yields to identify their topics of interest, and finally construct communities. The effectiveness of the proposed multi-layer model is measured using KL divergence which measures similarity between users of the same community. Experiments on the real Twitter dataset show that the proposed deep model outperforms traditional community detection models that directly maps users into corresponding communities using several baseline techniques." ], "cite_N": [ "@cite_1" ], "mid": [ "2078525157" ] }
From the User to the Medium: Neural Profiling Across Web Communities
Online communities are places where individuals have found support and places to exchange customized diseasespecific information (Frost and Massagli 2008). For noncommunicable diseases such as diabetes, social platforms have become very relevant places where individuals connect to learn about their condition outside of clinical settings. This is important as diabetes manifests in an evolving and heterogeneous manner, shifting in concert with population-wide alterations in behavioral and lifestyle factors and disease management strategies (Weitzman et al. 2011). Decades of studies have shown that risk of diabetes can differ by ethnic or gender groups and as well efficacy of interventions can vary by population subgroups (Sarkar, Fisher, and Schillinger 2006). Thus individuals find the personal experiences of others managing their same conditions useful for learning about new efficacious interventions, or other day-to-day strategies. As the data in social media and web groups is generated by individuals in unstructured formats and venues, and is constantly updated and changing, there is a need for methods to distill and extract the types of topics and groups being discussed. Thus in this paper, we harness the increasing use of neural Copyright c 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. representations and statistical natural language processing to demonstrate an approach for embedding content of users' posts and discovering communities in the same space based on the content of online posts. Specifically, we adopt neural text representation to model the semantic link amongst words in a lower dimensional space and then perform community detection and profiling in that space which maintains a level of topicality in discovered user communities, messages and profiles. We make the following contributions: • We demonstrate a neural approach (NeuroCom) for learning a low-dimensional latent space from the embedding of users' posts, which enables community detection. • We combine the neural framework with inference and qualitative methods to demonstrate how it can be used to learn and compare the substantive topics within and between diabetes communities from several large scale realworld datasets. Methodology In this paper, we aim at learning the representation of user posts from web and social platforms using an effective neural model. Our framework, in contrast to conventional models which just embed users in a latent space, learns the representation of users, messages and communities in the same latent space, as shown in Figure 1. This helps in the identification of communities that may be under-represented and thus missed in a single user's messages. Distributed Representation of Social Messages Our model is derived from a neural model, continuous BoW (C-BoW) (Mikolov et al. 2013), with a few innovations to learn the embedding of social messages. The C-BoW model is a simplified neural model without any non-linear hidden layers, which learns a distributed representation for each word w t while taking care of the semantic similarity of words. More specifically, given a large training corpus represented as a sequence of words w 1 , ..., w N , the objective of embedding is to maximize the log-likelihood, i.e. N t=1 c∈Ct log p(w t |w c ), where C t , referred as context word, is the set of words surrounding w t , i.e., target word. 
We extend this model to learn the embedding of social messages from the embedding of their compositional components. Formally, we based our model on the assumption that the message embedding is the average of embeddings of its n-gram constitutional elements. Formally, the embedding for a social message m can be stated as: vm = 1 |E(m)| VI E(m) = 1 |E(m)| w∈E(m) vw,(1) where V ∈ R N ×v is a matrix collecting the learned embeddings for words, E(m) denotes the list of n-grams in the message m and I m ∈ {0, 1} N is an indicator vector representing the compositional elements of m. We adopted negative sampling framework for learning this model. Discovering Topical Communities of Messages To discover topical focuses of users, we leverage densitybased clustering of social messages in latent space as it does not require the input of a predefined number of clusters, and can form clusters with arbitrary shapes (Ertöz, Steinbach, and Kumar 2003). More specifically, we used DBSCAN which is a popular density-based clustering algorithm. Topical Profiling To profile topics, we can compute the embedding of each community by averaging the embeddings of its involving messages and the affiliation of each user to the community by his contribution in the community messages. More specifically, the discovered clusters in the prior section are considered topical communities of users and the affiliation matrix of users is defined as, H i,j = Ui,j K j=1 Ui,j , where H i,j denotes membership affiliation of the i-th user to the j-th community and U i,j the message number belonging to i-th user in the j-th community. Demographic Profiling In order to better understand these communities and the users who participate in them, we infer basic demographics of participants in both the social media and on-line forum groups. The epidemiological literature indicates that any endeavor is incomplete without an understanding of the target population (Chunara, Wisk, and Weitzman 2017), and this becomes more pertinent when working with observational and Internet-based datasets such as the work here. In order to accomplish this, we use inference methods which have been used in other social media-based studies (Huang et al. 2018). For age, we follow the approach in (Sloan et al. 2015;Huang et al. 2018). We classify users as under 30, or 30 and older, which, in diabetes research, has been used to qualify "young age at onset". For gender, look-up tables have been used based on user handles (Mullen 2015). However, as we did not want to introduce uncertainty and more than half of the names were not linked to a gender, we did not assess any gender composition. Experiments Datasets For evaluation, we chose three datasets that we anticipate will be slightly different in their content and included users. The statistics of our datasets are shown in Table 1. Diabetes Support Group: This dataset is collected from posts of users who follow and participate in diabetes support groups like "diabeteslife" or "diabetesconnect" on Twitter. To construct the dataset, we first gathered a set of users who followed these diabetes support groups in Twitter. We then crawled the Twitter timelines of these users. We selected 5 different support groups to avoid the bias coming from a specific support group 1 . BGnow Dataset: Another dataset derived from diabetic users who actively share their wellness data on Twitter. 
These users not only post about their lifestyle and activities such as their diet, but also share their health information in terms of medical events and measurements like their blood glucose value (Akbari et al. 2016). Users in this dataset are majority diabetic type I and they used "#Bgnow" hashtag to report their blood glucose value on Twitter. TuDiabetes Forum: We aslo collected a dataset from the TuDiabetes forum, a popular diabetes community operated by the Diabetes Hands Foundation. It provides a rich community experience for people with interest in diabetes, including a social network, personal pages and blogs. The Tu-Diabetes forums are very popular, and have been active for years thus encompassing many topics. Baselines and Metrics We compare the proposed method with following baselines: KMeans (a widely used clustering method in social networks (G.-J. Qi and Huang 2012) with Tf-IDF representation for users and cosine measure for similarity computation), KMeans-Lat (similar to the KMeans approach, however clustering is performed in a latent space derived by Eq.(1)), Biterm-LDA (Latent Dirichlet Allocation (LDA) topic modeling tuned for short messages), and RBM (a deep model for community detection in (Abdelbary, ElKorany, and Bahgat 2014)). We utilized Silhouette (sil) and normalized mutual information (nmi) metrics for benchmarking. Table 2 shows the clustering results of different methods in terms of sil and nmi. We followed previous research studies to tune the parameters for all baseline methods. For Biterm-LDA, as proposed by (Yan et al. 2013), the parameters α and β have been fixed to 50/K and 0.01, respectively, where K is the number of clusters/topics computed using grid-search. In the RBM model, we tuned the number of hidden units to 250 and we then examined different number of community detection units 25, 50, 75, 100 and reported the best results, as suggested in (Abdelbary, ElKorany, and Bahgat 2014). On Quantitative Comparisons of the Model From the table, the following points can be observed: (1) Kmeans and Kmeans-Lat achieve the lowest performance in terms of both quality and consensus metrics. This is mainly attributed to the fact that KMeans with Tf-Idf features fails to capture similarities between words/terms in posts. (2) RBM and BiTerm-LDA outperform KMeans-Lat. (3) NeuroCom achieves the highest performance in 1 The following support groups were selected as seed accounts in Twitter: "diabeteslife", "diabetesconnect", "American Diabetes Association", "DiabetesHealth", and "Diabetes Hand Foundations" terms of sil and nmi metrics, which shows our model can detect communities with focused topics. This is attributed to the fact that the communities were detected in the same space as the messages were embedded, which preserves semantic similarities between messages. On Qualitative Comparison of the Model It is also intuitive to examine the resulting communities profiles to better understand the output of the NeuroCom model in community profiling and in comparison to existing methods. First, in terms of the number of resulting communities, NeuroCom extracts a higher number of communities, 36, compared to 22, 19, 28, and 25 for KMeans, KMeans-Lat, Biterm-LDA, and RBM, respectively. Next, we examined the topics of resulting communities in each method. 
Next, we examined the topics of the resulting communities under each method. Qualitatively, we find that the topics extracted by Biterm-LDA concern general themes around diabetes, while NeuroCom extracts communities that are much more focused, each dealing with a specific aspect of the disease. For example, Biterm-LDA fails to find small communities such as the one related to the drug "Afrezza", while NeuroCom identifies specific communities such as "Afrezza", "Metformin", and "Insulin". These differences are attributed to the fact that traditional approaches rely on word co-occurrences, and for such small communities they fail to find significant co-occurrence and semantic information. We highlight that, as expected, many users are part of multiple communities. In BGnow, 12.6% of users were in one community, 41.2% in two, and 22.7% in three (the rest were in more). For TuDiabetes, 11.6% were in one, 8.4% in two, and 26.5% in three. Finally, in the support groups, 28.3% were in one, 27.2% in two, and 21.3% in three.

Medium Comparisons
To compare across mediums, we can examine the inferred demographic profiles of the mediums as well as the topics of the distilled communities. The Twitter groups were much more skewed, with statistically significantly more users inferred to be in the younger age category than in the online forum (p < 0.05). We can also compare the topics of the communities from TuDiabetes with the existing forum categories. All group discussions on TuDiabetes are categorized into the following 14 topics: Community, Type 1 and LADA, TuDiabetes Website, Gestational Diabetes, Weight, Type 2, Diabetes Advocacy, Diabetes Complications and other Conditions, Mental and Emotional Wellness, Healthy Living, Diabetes Technology, Food, Treatment, and Diabetes and Pregnancy. While interpretation of the topics of the communities identified by our method NeuroCom is qualitative, we report the topics we found in Table 3. Given that our method is more organic and that the number of communities identified by NeuroCom (36) is larger, some topics overlap with or augment the website categories.

Community Comparisons Within Mediums
Assuming that community topics are homogeneous across users may be erroneous (e.g., one topic may be more common among, say, women than men). Further, if community detection is used for recommendation purposes, knowing how various communities are patronized by different types of users can improve recommendation performance. Thus, here we attempt to profile the resulting communities within each medium. From a demographic perspective, we found that specific types of communities in TuDiabetes (topics related to insulin management, with keywords such as eat, cgm, pancreas, and doctors) had a statistically significantly higher proportion of users in the older category than the mean proportion (p < 0.05). As well, of the user profile names that could be linked to a gender, more were linked to male names (though we caution against over-interpreting this result due to the large number of user profiles that could not be linked to female or male names). We can also describe some qualitative findings from the Twitter datasets. For example, in the support groups, we found that the cluster topic of current events, with keywords such as blackhawks, Hillary Clinton, and primary, also had a statistically significantly higher proportion of users in the older category than the mean proportion (p < 0.05).
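The text does not specify which significance test underlies the p < 0.05 comparisons; as a hypothetical illustration, a two-proportion z-test over inferred age categories could look as follows (all counts invented):

```python
# Hypothetical two-proportion z-test; counts are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

count = [120, 45]   # users inferred to be under 30 in each medium
nobs = [400, 300]   # total users with an inferred age in each medium
stat, pval = proportions_ztest(count, nobs)
print(f"z = {stat:.2f}, p = {pval:.4f}")   # compare against the 0.05 threshold
```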
Conclusion
NeuroCom outperformed existing unsupervised methods on common cluster-evaluation metrics. The community detection results showed that the topics of the distilled communities are interpretable and follow intuition regarding the span of discussion in the Support Group dataset versus the BGnow dataset and the TuDiabetes forum. Through inferred age categories, we also showed that the online forum had a statistically significantly higher proportion of people in the under-30 age category. As well, though demographic inference has limitations, there were significantly different proportions of people across age categories in different communities. While these demographic results are mainly qualitative, we found results that match intuition and that can be used in the future to improve recommendation approaches or to identify the concerns of diabetes patients in a more precise and personalized manner. Finally, we compared the identified topics to forum categories where available (TuDiabetes) and found that the communities identified by NeuroCom overlap with and transcend these existing forum categories.
2,265
1812.00967
2902347647
De novo protein structure prediction from amino acid sequence is one of the most challenging problems in computational biology. As one of the extensively explored mathematical models for protein folding, Hydrophobic-Polar (HP) model enables thorough investigation of protein structure formation and evolution. Although HP model discretizes the conformational space and simplifies the folding energy function, it has been proven to be an NP-complete problem. In this paper, we propose a novel protein folding framework FoldingZero, self-folding a de novo protein 2D HP structure from scratch based on deep reinforcement learning. FoldingZero features the coupled approach of a two-head (policy and value heads) deep convolutional neural network (HPNet) and a regularized Upper Confidence Bounds for Trees (R-UCT). It is trained solely by a reinforcement learning algorithm, which improves HPNet and R-UCT iteratively through repeated policy optimization. Without any supervision and domain knowledge, FoldingZero not only achieves comparable results, but also learns the latent folding knowledge to stabilize the structure. Without exponential computation, FoldingZero shows promising potential to be adopted for real-world protein properties prediction.
Approximation algorithms offer rigorous mathematical tools and fold a protein structure within polynomial time. However, they may have weak approximation ratios, yielding a structure far from the optimal solution. Hart and Istrail proposed an approximation algorithm with a ratio of 3/8 of the optimal score for the 3D cubic lattice structure @cite_16 . An improved approximation algorithm with a 2/5 performance guarantee was further developed by the same authors @cite_1 . For the 2D square lattice, an approximation algorithm @cite_7 can achieve an approximation ratio of 1/3.
{ "abstract": [ "", "ABSTRACT This paper considers the protein energy minimization problem for lattice and off-lattice protein folding models that explicitly represent side chains. Lattice models of proteins have proven useful tools for reasoning about protein folding in unrestricted continuous space through analogy. This paper provides the first illustration of how rigorous algorithmic analyses of lattice models can lead to rigorous algorithmic analyses of off-lattice models. We consider two side chain models: a lattice model that generalizes the HP model (Dill, 1985) to explicitly represent side chains on the cubic lattice and a new off-lattice model, the HP Tangent Spheres Side Chain model (HP-TSSC), that generalizes this model further by representing the backbone and side chains of proteins with tangent spheres. We describe algorithms with mathematically guaranteed error bounds for both of these models. In particular, we describe a linear time performance guaranteed approximation algorithm for the HP side chain model that...", "We consider the problem of protein folding in the HP model on the two-dimensional square lattice. This problem is combinatorially equivalent to folding a string of 0's and 1's so that the string forms a self-avoiding walk on the lattice and the number of adjacent pairs of 1's is maximized. We present a linear-time 1 3-approximation algorithm for this problem, improving on the previous best approximation factor of 1 4. The approximation guarantee of this algorithm is based on an upper bound presented by Hart and Istrail [6] and used in all previous papers that address this problem. We show that this upper bound cannot be used to obtain an approximation factor better than 1 2." ], "cite_N": [ "@cite_16", "@cite_1", "@cite_7" ], "mid": [ "1987923072", "1975297018", "2026211446" ] }
FoldingZero: Protein Folding from Scratch in Hydrophobic-Polar Model
Proteins are complex biological macromolecules that play critical roles in the body. Under normal conditions, proteins naturally fold into the same unique 3-dimensional structures, known as their native conformations. Based on the thermodynamic hypothesis of Christian Anfinsen [15], the native conformation is determined solely by the amino acid sequence and is formed via a physical process named protein folding. Devising a computer algorithm to predict protein structures from sequences is one of the most challenging and fundamental problems in computational biology, molecular biology, and theoretical chemistry. It has attracted substantial research attention for its significant impact on applications such as disease prediction [24] and protein design [17].
The Hydrophobic-Polar (HP) model proposed by Dill [8,16] is one of the most extensively studied mathematical models of protein folding. In the HP model, the 20 types of amino acids are classified as hydrophobic (H) or polar (P) by their degree of hydrophobicity. The model simplifies the protein sequence based on the fact that the hydrophobic interaction is a significant factor in the folding process: hydrophobic amino acids are predominantly located in the folded protein's core, where they have less contact with water, whereas polar ones are more commonly on the surface [9]. The sequence is "folded" as a self-avoiding walk on a 2D or 3D lattice, such that each lattice vertex is occupied by at most one amino acid and adjacent amino acids in the protein sequence occupy adjacent lattice vertices. The 2D square lattice is usually used as a benchmark for evaluating algorithms. The HP model considers an interaction between two amino acids only if the pairwise residues are nearest neighbors on the lattice but not adjacent in the chain. It assigns an energy of -1 to a contact between adjacent, non-covalently bound H-H residues, and zero to P-H and P-P contacts. The goal of a folding algorithm is to discover the native conformation with the lowest energy, which is equivalent to maximizing the number of H-H contacts on the lattice. Although the HP model discretizes the conformational space and simplifies the folding energy function, it has been proven to be NP-complete [27,2,6]. It is therefore computationally intractable to reach the globally optimal solution in the HP model, especially as the protein sequence length grows.
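To make the energy function concrete, here is a small self-contained sketch that scores a 2D conformation by counting H-H contacts (the free energy is the negative of this count); the sequence and coordinates are toy values.

```python
# Count H-H pairs that are lattice neighbors but not adjacent in the chain;
# each such contact contributes -1 to the HP free energy.
def hh_contacts(seq, coords):
    """seq: string over {'H','P'}; coords: one (x, y) lattice position per
    residue, forming a self-avoiding walk."""
    pos = {c: i for i, c in enumerate(coords)}
    contacts = 0
    for i, (x, y) in enumerate(coords):
        if seq[i] != "H":
            continue
        for nb in ((x + 1, y), (x, y + 1)):  # each undirected pair counted once
            j = pos.get(nb)
            if j is not None and seq[j] == "H" and abs(i - j) > 1:
                contacts += 1
    return contacts

# Example: HPHH folded into a 2x2 square; residues 0 and 3 form one H-H contact,
# so the free energy of this conformation is -1.
print(hh_contacts("HPHH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> 1
```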
In this paper, we propose a novel and efficient framework, FoldingZero, that self-folds protein 2D HP structures based on deep reinforcement learning. To the best of our knowledge, it is the first from-scratch folding solution to this open grand challenge of the 21st century. Such a solution is ultimately needed, with potentially transformative impact, because a huge amount of sequenced protein data lacks structure annotations, which are fundamentally critical for understanding protein function, gene defects, and disease detection and treatment. The key contributions of this work are as follows.
• To the best of our knowledge, this is the first work that uses deep reinforcement learning to solve the challenging protein folding problem. We attempt to usher in this high-impact artificial intelligence tool to empower fundamental life science research.
• We propose a novel protein folding framework, FoldingZero, which self-folds de novo protein 2D HP structures from scratch based on the coupled approach of a two-head deep convolutional neural network (HPNet) and a regularized Upper Confidence Bounds for Trees (R-UCT) search.
• Although the folding scenario discussed in this paper focuses on the HP model, the FoldingZero approach can be generalized to more complicated protein models and can meet more real-world needs in computational biology and chemistry.
• Without any domain knowledge, FoldingZero learns from scratch and eventually achieves comparable results on the benchmark dataset. It also learns latent folding knowledge that stabilizes the protein structure.

Methods
The proposed FoldingZero architecture consists of two components: an HP folding environment and a self-folding agent, as illustrated in Figure 1. Based on the current folding state given by the environment, the agent sequentially self-folds each amino acid along the protein sequence. For example, the agent randomly places the first amino acid in the environment; based on the state of the first amino acid, the agent uses its trained model to place the second amino acid next to the first one. This process continues until the agent folds the last amino acid in the sequence. At the end of the self-folding process, the final H-H contact score is given by the environment and is used as a reward to evaluate each folding action.
[Figure 1: The FoldingZero framework, showing the interaction between the environment (left) and the folding agent (right). Starting from an initial state given by the environment, the agent carries out simulations, including selection, evaluation, expansion, and backpropagation. The most promising folding position is selected to self-fold the next residue. When folding terminates, states, rewards, and policies are stored in memory to train the HPNet.]

HP folding environment
A protein primary sequence is typically notated as a string of letters. In the environment, each amino acid in the sequence is first translated to type H or P according to its chemical properties. For example, given a primary sequence ACRCDH, its HP representation is HPHHPH. Starting from the first one, each amino acid is self-folded by the agent on the 2D grid. The environment defines the folding state at time-step t as s_t and the corresponding legal actions as a ∈ A(s). The action space of each state contains at most 3 possible moves (forward, left, and right) because of self-avoidance: only lattice vertices can be occupied, and neighboring residues in the protein sequence must occupy adjacent vertices. Every folding action leads to a new folding state s_{t+1}, which contains the positions on the 2D lattice of all amino acids folded so far. When folding is complete for all residues along the sequence, the number of final H-H contacts is calculated as r and fed back to the agent as the self-folding reward.
In FoldingZero, the lattice is represented as a 3D tensor with a 2D grid (height and width) and a channel dimension (analogous to the RGB channels of images). Each grid point in the tensor corresponds to either a vertex or an edge of the 2D lattice. A vertex can be occupied by either of the two amino acid types, H or P. We also define two connection types on edges: one denotes the "primary connect" between adjacent residues in the primary sequence, and the other denotes the "H-H contact" between pairwise H residues that are nearest neighbors on the lattice but not adjacent in the primary sequence. Thus, 4 binary channels with values 0 or 1 represent each grid point; only one channel can be active, and the others are all zero.
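A minimal sketch of such an environment and state encoding, under stated assumptions: relative forward/left/right moves with a self-avoidance check, and a tensor in which even grid coordinates hold vertices and the midpoints in between hold edges. The grid size, indexing, and all names are illustrative, not the paper's implementation (the H-H contact channel is left unfilled in this toy).

```python
import numpy as np

# Relative moves on the square lattice: left/right of the current heading.
LEFT_OF = {(1, 0): (0, 1), (0, 1): (-1, 0), (-1, 0): (0, -1), (0, -1): (1, 0)}
RIGHT_OF = {v: k for k, v in LEFT_OF.items()}

def step(coords, action):
    """Place the next residue with a forward/left/right move; return the new
    coordinate list, or None if the target vertex is occupied (self-avoidance)."""
    (px, py), (x, y) = coords[-2], coords[-1]
    heading = (x - px, y - py)
    d = {"F": heading, "L": LEFT_OF[heading], "R": RIGHT_OF[heading]}[action]
    nxt = (x + d[0], y + d[1])
    return None if nxt in coords else coords + [nxt]

def encode(seq, coords, n=9):
    """(n, n, 4) binary tensor. Channels: 0=H vertex, 1=P vertex, 2=primary
    connect, 3=H-H contact (unfilled here). Residue (x, y) maps to grid point
    (2x, 2y), so edge midpoints land on their own grid points."""
    g = np.zeros((n, n, 4), dtype=np.uint8)
    off = n // 2  # shift so this toy walk fits inside the grid
    for i, (x, y) in enumerate(coords):
        g[2 * x + off, 2 * y + off, 0 if seq[i] == "H" else 1] = 1
        if i > 0:
            px, py = coords[i - 1]
            g[x + px + off, y + py + off, 2] = 1  # midpoint of the chain edge
    return g

coords = [(0, 0), (1, 0)]            # first two residues placed by convention
for a in "LL":                       # fold two more residues
    coords = step(coords, a)
print(coords)                        # [(0, 0), (1, 0), (1, 1), (0, 1)]
print(encode("HPHH", coords).shape)  # (9, 9, 4)
```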
Self-folding mechanism
The agent in FoldingZero incorporates two interacting components, HPNet and R-UCT. HPNet takes the folding state s as its input. Stacked residual blocks with convolutional layers extract abstract features. At the top, HPNet extends into two output heads, policy and value. The policy head outputs a vector P with three values, representing the probabilities of selecting the three possible folding actions for the next residue. The value head outputs a scalar v, estimating the number of H-H contacts for the whole protein sequence given the current folding results.
R-UCT is the search algorithm used in FoldingZero to find promising folding positions. It incrementally grows a search tree during the self-folding process. Each child node corresponds to one possible folding action and stores related statistics, such as the visit count, total reward, mean reward, and prior probability. These statistics are updated over multiple rounds of Monte Carlo tree search, each consisting of selection, expansion, evaluation, and backpropagation. When the number of search rounds reaches the configured upper limit, the next action is selected based on these statistics. Unlike standard algorithms, R-UCT in FoldingZero does not use a Monte Carlo rollout policy in each search round: in the HP model the search space inflates exponentially with protein length, so a rollout policy would incur overwhelming computational and memory costs. As an efficient replacement, FoldingZero leverages HPNet to expand and evaluate the unexplored leaf nodes in the search tree. The policy head output is attached to each new child node as its prior probability, and the value head output is used to update the total and mean reward values of every node along the search path during backpropagation. Heuristically guided by HPNet, R-UCT can effectively conduct lookahead search and node evaluation. To ensure the validity of the folding results, a self-avoidance restriction is applied to the R-UCT; beyond this basic constraint, no other heuristics or prior knowledge augment the R-UCT. When the tree search simulation completes, R-UCT provides a normalized probability vector π over the currently valid actions. According to these probabilities, the agent in FoldingZero selects the most promising self-folding action for the next amino acid. FoldingZero repeats this tree search process for each residue until the whole protein sequence is traversed.
Reinforcement learning
To improve the quality of the self-folding results, FoldingZero leverages a reinforcement learning algorithm inspired by AlphaGo Zero [23], designed to improve HPNet and R-UCT iteratively through repeated policy iteration. HPNet is trained in a supervised manner to closely match the R-UCT search results. The action probability π in R-UCT is computed from the raw network output P and multiple rounds of tree search, so π is typically much stronger than P. As a policy improvement operator, π serves as the label for the policy head of HPNet. On the other hand, the number of final H-H contacts serves as a positive reward to evaluate the quality of the self-folding trajectory; the algorithm is designed to maximize this reward to obtain the most stable protein conformation. As a policy evaluation operator, the final reward serves as the label for the value head.
The training samples are generated during the self-folding process. At time-step t, the current folding state s_t and its corresponding action probability π_t in R-UCT are immediately available. When a whole protein sequence is folded, the number of final H-H contacts is applied to each intermediate self-folding time-step as its reward r. A protein sequence of length L thus generates L − 1 training samples (s_t, π_t, r), all of which are stored in a database. We keep training HPNet until the configured iteration limit is reached. To ensure that the best HPNet always guides the R-UCT, we introduce a competitive mechanism: over a test dataset, two FoldingZero agents compete with each other, one using the latest HPNet parameters and the other the previous best model. If the former wins with more folded H-H contacts, the updated HPNet replaces the previous best model for future self-folding and serves as the baseline for subsequent competitions. Heuristically guided and evaluated by the updated HPNet, the R-UCT tree search may also become more powerful. By repeating this policy iteration, both HPNet and R-UCT keep improving. The next two subsections describe the tree search steps in R-UCT and the HPNet architecture.

R-UCT
Selection: In each simulation round, actions are selected by maximizing the sum of an exploitation term and an exploration term,

$a_t = \arg\max_{a} \big( Q^{*}(s_t, a) + U(s_t, a) \big), \qquad (1)$

where

$Q^{*}(s_t, a) = \frac{Q(s_t, a)}{R_{\mathrm{upper}}(seq)}, \qquad (2)$

$U(s_t, a) = c_{\alpha}\, P(s_t, a)\, \frac{\sqrt{\sum_i N(s_t, a_i)}}{1 + N(s_t, a)}, \qquad (3)$

and a ∈ A(s) ranges over all available actions leading to candidate nodes. In (1), the first term Q*(s_t, a) is the exploitation component, which prefers nodes with high folded H-H contact scores. The second term U(s_t, a) is the exploration component, which favors nodes that have been relatively rarely visited. c_α is a hyperparameter that balances exploitation and exploration. According to the proof by Hart and Istrail [13,14], for a given protein sequence seq, the optimal number of H-H contacts Opt(seq) in the 2D HP model has a theoretical upper bound R_upper(seq). Dividing by this bound scales Q(s_t, a) to Q*(s_t, a), putting it on the same magnitude as U(s_t, a). To calculate the upper bound R_upper(seq), residues are indexed by their positions in the primary sequence in ascending order 1, 2, 3, ..., n, where n is the protein length. Denoting the numbers of hydrophobic residues located at odd and even positions by O(seq) and E(seq), respectively, we have

$Opt(seq) \le R_{\mathrm{upper}}(seq), \qquad (4)$

with

$R_{\mathrm{upper}}(seq) = 2 \times \min\{O(seq), E(seq)\}. \qquad (5)$
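A small sketch of the upper bound and the selection rule, assuming the standard AlphaGo-Zero-style PUCT form for Eq. (3); c_α and the toy node statistics are illustrative values.

```python
# Sketch of R_upper (Eqs. 4-5) and node selection (Eqs. 1-3); values are toys.
import math

def r_upper(seq):
    """2 * min(#H at odd positions, #H at even positions), 1-indexed."""
    odd = sum(1 for i, c in enumerate(seq, 1) if c == "H" and i % 2 == 1)
    even = sum(1 for i, c in enumerate(seq, 1) if c == "H" and i % 2 == 0)
    return 2 * min(odd, even)

def select(children, bound, c_alpha=1.0):
    """children: dicts with visit count N, mean reward Q, prior P; bound > 0."""
    total = sum(ch["N"] for ch in children)
    def score(ch):
        q_star = ch["Q"] / bound                                   # Eq. (2)
        u = c_alpha * ch["P"] * math.sqrt(total) / (1 + ch["N"])   # Eq. (3)
        return q_star + u
    return max(range(len(children)), key=lambda i: score(children[i]))

children = [{"N": 10, "Q": 3.0, "P": 0.5},
            {"N": 2, "Q": 2.0, "P": 0.3},
            {"N": 0, "Q": 0.0, "P": 0.2}]
print(r_upper("HPHHPH"))                    # 2 odd H's, 2 even H's -> 4
print(select(children, r_upper("HPHHPH")))  # index of the chosen child
```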
Expansion and evaluation: When a leaf node s_L is reached, HPNet evaluates its state and outputs the estimated reward v_L and prior probability vector P_L. The leaf node is then expanded into the search tree, and each valid child node s_i is initialized to N(s_i, a_i) = 0, W(s_i, a_i) = 0, Q(s_i, a_i) = 0, and P(s_i, a_i) = p_i, p_i ∈ P_L.
Backpropagation: The statistics stored in the nodes are updated backward after each simulation round. The visit count is accumulated as N(s_t, a_t) ← N(s_t, a_t) + 1, and the total reward W(s_t, a_t) and mean reward Q(s_t, a_t) are updated by equations (6) and (7):

$W(s_t, a_t) \leftarrow W(s_t, a_t) + v_t, \qquad (6)$

$Q(s_t, a_t) = \frac{W(s_t, a_t)}{N(s_t, a_t)}. \qquad (7)$

When all simulation rounds end, the self-folding probability $\pi(a_i \mid s) = \frac{N(s, a_i)}{\sum_j N(s, a_j)}$ is returned. Based on π, the folding agent selects the most promising action to self-fold the residue.

HPNet architecture
The input to the neural network is X_t, an N × N × M image stack with grid size N × N and M binary channels. The current folding state s_t is represented as the concatenation of four binary feature planes [H_t, P_t, C_t, B_t], corresponding to H-type residues, P-type residues, "primary connect", and "H-H contact", respectively. For example, $H_t^g = 1$ only if the grid point g is occupied by an H-type residue. To incorporate sequence-folding information, we use 3 steps of history states stacked together with the current state. An extra feature plane, E_t, represents the next residue type to be folded: it is set to 1 if the residue is H-type and 0 if it is P-type. The final X_t is the concatenation of all 17 planes, X_t = [s_t, s_{t−1}, s_{t−2}, s_{t−3}, E_t].
The HPNet architecture is illustrated in Figure 2. Latent spatial information is extracted from the raw lattice input by 20 stacked residual blocks with 3×3 filters. Each residual block comprises two convolutional layers with ReLU activations, two batch normalization layers, and a skip connection. At the top, HPNet splits into two output heads, policy and value. The policy head outputs a vector P representing the prior probability of each folding action; the value head outputs a scalar v estimating the H-H contact score for the whole protein sequence. To train HPNet, we use a cross-entropy loss on the policy head to maximize the similarity of the estimated prior probability P to the search probabilities π, and a mean squared error on the value head to minimize the error between the predicted value v and the self-folded reward r. The loss function for HPNet is thus

$l = (r - v)^2 - \pi^{\top} \log P + \beta \lVert \theta \rVert^2, \qquad (8)$

where θ denotes the weights of HPNet and β is a hyperparameter that controls the L2 regularization to prevent overfitting.
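A hedged PyTorch sketch of the objective in Eq. (8); the tensors below stand in for a batch of HPNet outputs and R-UCT targets, and β is a placeholder value.

```python
# Toy computation of Eq. (8): value MSE + policy cross-entropy + L2 penalty.
import torch
import torch.nn.functional as F

def hpnet_loss(policy_logits, value, pi, r, weights, beta=1e-4):
    """l = (r - v)^2 - pi . log P + beta * ||theta||^2, averaged over a batch."""
    log_p = F.log_softmax(policy_logits, dim=1)
    value_err = ((r - value) ** 2).mean()
    policy_ce = -(pi * log_p).sum(dim=1).mean()
    l2 = sum((w ** 2).sum() for w in weights)
    return value_err + policy_ce + beta * l2

# Stand-ins for a batch of 4 states with 3 actions (F, L, R).
logits = torch.randn(4, 3, requires_grad=True)
v = torch.randn(4, requires_grad=True)
pi = torch.softmax(torch.randn(4, 3), dim=1)   # R-UCT search probabilities
r = torch.randint(0, 5, (4,)).float()          # final H-H contact rewards
loss = hpnet_loss(logits, v, pi, r, [logits, v])
loss.backward()                                 # gradients flow to the stand-ins
print(float(loss))
```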
Experiments and analysis
Experimental setting of FoldingZero
Self-folding: We collect around 9000 non-redundant protein sequences from the public PDB dataset (https://www.rcsb.org/), in which any two proteins share less than 25% sequence identity. FoldingZero uses the current best HPNet model and R-UCT to sequentially self-fold each protein sequence. A folding action is selected after 300 simulation rounds of the R-UCT. To enlarge the exploration space, Dirichlet noise is added to the prior probabilities of the parent nodes, with $P(n, a) = (1 - \epsilon) p_a + \epsilon \lambda_a$, where $\lambda \sim \mathrm{Dir}(0.03)$ and $\epsilon = 0.25$.
Training: We store the most recent 60,000 self-folding results in memory. In every iteration, 256 results are sampled uniformly from the memory slots to train HPNet. We use SGD (stochastic gradient descent) with momentum 0.9, an initial learning rate of 0.001, and a weight decay of 4e-5.
Evaluation: To ensure that the updated HPNet model generates higher-quality predictions, we use 500 unseen protein sequences for evaluation. Every 2000 training steps with batch size 32, we save a new checkpoint and evaluate it. If it performs better than the previous best model, it is used for self-folding and becomes the new baseline for competition in the next round.

Evaluation
After training FoldingZero for around two days, we evaluate it on the well-known 2D HP model benchmark dataset (http://www.brown.edu/Research/Istrail_Lab/hp2dbenchmarks.html). First, we compare FoldingZero with a pure UCT-based approach in terms of the H-H contact score. The UCT approach employs a rollout strategy with information similar to that used by FoldingZero, except the prior probability from HPNet. We fix the number of simulation rounds to 300 in FoldingZero and vary it in the controlled approach. As shown in Figure 3, as the number of rounds increases, the performance of the UCT algorithm improves slightly, because it can explore more of the state space before finalizing a selection. However, with the exponential growth of the search space, it becomes difficult to further improve performance by adding simulation rounds. In contrast, even with far fewer simulation rounds, FoldingZero outperforms the UCT method, and the advantage is more noticeable when folding long sequences. This demonstrates that the trained HPNet can effectively guide high-quality tree search simulations.
Second, we compare FoldingZero with other state-of-the-art heuristic approaches using a conventional metric, the free energy score, defined as the negative of the H-H contact number. EMC [18] and ENLS [11] were developed based on genetic algorithms, and Ant-Q [10] combines an evolutionary algorithm with reinforcement learning. Table 1 shows that FoldingZero achieves comparable results, with folded free energy scores approaching the optimal ones. It is also worth noting that EMC is based on time-consuming simulation, ENLS uses memory structures to store intermediate results, and Ant-Q learns an independent Q-table for each specific sequence. Thus, when the number of possible solutions grows inordinately large, the simulation rounds or memory requirements of these approaches become prohibitive for longer sequences. In contrast, the efficiency of FoldingZero does not depend exponentially on the sequence length; even for long sequences, it gives decent folding results in a reasonable time period.
[Figure 4: Folding results for the representative sequences in Table 2. The "S" vertex denotes the first (starting) residue in the sequence and "E" the last (ending) one.]

Result analysis
From the benchmark dataset, we select several representative protein sequences, listed in Table 2, and visualize their folding results in Figure 4. We observe that FoldingZero successfully forms compact H-H cores by congregating the hydrophobic residues in the center of the structure and placing polar ones on the boundary. This demonstrates that, through deep reinforcement learning with extensive experience, FoldingZero learns the latent knowledge that hydrophobic residues are predominantly located in a protein's core, whereas polar ones are more commonly located on the surface. We also evaluate FoldingZero on some long protein sequences that are not available in the benchmark dataset due to its limited scalability. As shown in Figure 5a, the folded structure also exhibits the H-H core pattern. During the evaluation, we also notice an interesting folding result for Seq3, shown in Figure 5b: for the penultimate residue of the sequence, FoldingZero still attempts to place it on the boundary, because the residue type is polar.
However, this folding action prevents the last hydrophobic residue from forming a potential H-H contact. One possible reason is that HPNet in the folding agent is not given global information about the whole sequence, so the R-UCT may be misguided by its predictions. In future work, we plan to embed global sequence information into the input of HPNet to further improve its capacity.

Conclusion
We proposed an intelligent protein folding framework, FoldingZero, that self-folds de novo protein 2D HP structures from scratch. HPNet and R-UCT are effectively integrated into FoldingZero to select promising folding actions, and a reinforcement learning algorithm improves HPNet and R-UCT iteratively through repeated policy iteration. Without any supervision or domain knowledge, FoldingZero achieves high-quality folding results comparable to other heuristic approaches. Without time-consuming search and computation, FoldingZero is much more scalable and shows great potential for real-world protein property prediction. We hope that this work inspires future research on protein structure prediction with deep reinforcement learning techniques.
3,651
1812.00967
2902347647
Heuristic algorithms cannot guarantee the optimal solution, but they usually obtain an approximate solution in a reasonable time frame. Beutler and Dill introduced a Core-directed chain Growth method (CG), using a heuristic bias function to help assemble a hydrophobic core @cite_27 . Ant colony optimization based algorithms were developed by Shmygelska @cite_25 and Thalheim @cite_3 . A new Monte Carlo method, fragment regrowth via energy-guided sequential sampling, was proposed in @cite_19 . Other techniques, such as simulated annealing @cite_26 , quantum annealing @cite_9 , genetic algorithms @cite_22 and reinforcement learning @cite_5 , were also applied to the HP model with limited success and scalability.
{ "abstract": [ "Background The protein folding problem remains one of the most challenging open problems in computational biology. Simplified models in terms of lattice structure and energy function have been proposed to ease the computational hardness of this optimization problem. Heuristic search algorithms and constraint programming are two common techniques to approach this problem. The present study introduces a novel hybrid approach to simulate the protein folding problem using constraint programming technique integrated within local search.", "Abstract Genetic algorithms methods utilize the same optimization procedures as natural genetic evolution, in which a population is gradually improved by selection. We have developed a genetic algorithm search procedure suitable for use in protein folding simulations. A population of conformations of the polypeptide chain is maintained, and conformations are changed by mutation, in the form of conventional Monte Carlo steps, and crossovers in which parts of the polypeptide chain are interchanged between conformations. For folding on a simple two-dimensional lattice it is found that the genetic algorithm is dramatically superior to conventional Monte Carlo methods.", "Lattice protein folding models are a cornerstone of computational biophysics. Although these models are a coarse grained representation, they provide useful insight into the energy landscape of natural proteins. Finding low-energy threedimensional structures is an intractable problem even in the simplest model, the Hydrophobic-Polar (HP) model. Description of protein-like properties are more accurately described by generalized models, such as the one proposed by Miyazawa and Jernigan (MJ), which explicitly take into account the unique interactions among all 20 amino acids. There is theoretical and experimental evidence of the advantage of solving classical optimization problems using quantum annealing over its classical analogue (simulated annealing). In this report, we present a benchmark implementation of quantum annealing for lattice protein folding problems (six different experiments up to 81 superconducting quantum bits). This first implementation of a biophysical problem paves the way towards studying optimization problems in biophysics and statistical mechanics using quantum devices.", "", "An efficient exploration of the configuration space of a biopolymer is essential for its structure modeling and prediction. In this study, the authors propose a new Monte Carlo method, fragment regrowth via energy-guided sequential sampling (FRESS), which incorporates the idea of multigrid Monte Carlo into the framework of configurational-bias Monte Carlo and is suitable for chain polymer simulations. As a by-product, the authors also found a novel extension of the Metropolis Monte Carlo framework applicable to all Monte Carlo computations. They tested FRESS on hydrophobic-hydrophilic (HP) protein folding models in both two and three dimensions. For the benchmark sequences, FRESS not only found all the minimum energies obtained by previous studies with substantially less computation time but also found new lower energies for all the three-dimensional HP models with sequence length longer than 80 residues.", "", "", "Background The protein folding problem is a fundamental problems in computational molecular biology and biochemical physics. 
Various optimisation methods have been applied to formulations of the ab-initio folding problem that are based on reduced models of protein structure, including Monte Carlo methods, Evolutionary Algorithms, Tabu Search and hybrid approaches. In our work, we have introduced an ant colony optimisation (ACO) algorithm to address the non-deterministic polynomial-time hard (NP-hard) combinatorial problem of predicting a protein's conformation from its amino acid sequence under a widely studied, conceptually simple model – the 2-dimensional (2D) and 3-dimensional (3D) hydrophobic-polar (HP) model." ], "cite_N": [ "@cite_26", "@cite_22", "@cite_9", "@cite_3", "@cite_19", "@cite_27", "@cite_5", "@cite_25" ], "mid": [ "2063087637", "2101206889", "2118518847", "", "2078621233", "", "", "1640378004" ] }
FoldingZero: Protein Folding from Scratch in Hydrophobic-Polar Model
Proteins are complex biological macromolecules that play critical roles in the body. In standard terms, proteins always naturally fold to the same unique 3-dimensional structures, which are known as their native conformation. Based on the thermodynamic hypothesis of Christian Anfinsen [15], the native conformation is only determined by the sequence of amino acid and formed via a physical process named protein folding. How to devise a computer algorithm to predict the protein structures from the sequences is one of the most challenging and fundamental problems in computational biology, molecular biology, and theoretical chemistry. It attracts lots of research attention for its significant impacts and applications in disease prediction [24], protein design [17] and so on. The Hydrophobic-Polar (HP) model proposed by Dill [8,16] is one of the extensively studied mathematical models for protein folding. In the HP model, 20 different types of amino acids are classified as hydrophobic (H) or polar (P) by the degree of their hydrophobicity. It simplifies the protein sequence based on the fact that the hydrophobic interaction is a significant factor in the folding process. The hydrophobic amino acids are predominantly located in the folded protein's core because they must have less contact with water, whereas the polar ones are more commonly on the surface [9]. The sequence is "folded" as a self-avoiding walk on a 2D or 3D lattice, such that the vertices of the lattice can be occupied by at most one amino acid, and the adjacent amino acids in the protein sequence must also occupy adjacent lattice vertices. 2D square based lattice is usually utilized as a benchmark for evaluating the algorithm. The HP model considers the interaction between two amino acids only if the pairwise residues are closest neighbors on lattices but not adjacent in the chain. It assigns a negative one energy value to a contact between adjacent, non-covalently bound H-H residues, and zero value to P-H and P-P contacts. The target of folding algorithm is to discover the protein native conformation with the lowest energy value, which equals to maximize the number of H-H contacts on the lattice. Although the HP model discretizes the conformational space and simplifies the folding energy function, it has been proven as a NP-complete problem [27,2,6]. Therefore, it is computationally intractable to reach the globally optimal solution in the HP model, especially with the increase of the protein sequence length. In this paper, we propose a novel and efficient framework FoldingZero to self-fold protein 2D HP structure based on deep reinforcement learning. It's the first folding from scratch solution in this one of the 21st century open grand challenges. This is ultimately needed with transformative impacts because we have a huge amount of sequenced protein data without structure annotations, which are fundamentally critical for protein functions, gene defect, disease detection and remedy. The key contributions of this work are multifold. • Within our knowledge, this is the first work that uses deep reinforcement learning technique to solve the challenging protein folding problem. We attempt to usher in the high-impact artificial intelligence tool to empower fundamental life science research. 
• We propose a novel protein folding framework FoldingZero to self-fold the de novo protein 2D HP structure from scratch based on the coupled approach of a two-head deep convolutional neural network (HPNet) and a regularized Upper ConfidenceBounds for Trees (R-UCT). • Although the folding scenario discussed in the paper focuses on the HP model, the FoldingZero approach can be generalized to more complicated protein models and meet more real-world needs in computational biology and chemistry. • Without any domain knowledge, FoldingZero learns from scratch and eventually achieve the comparable results on the benchmark dataset. It also learns latent folding knowledge to stabilize the protein structure. Methods The proposed FoldingZero architecture consists of two components: a HP folding environment and a self-folding agent as illustrated in Figure 1. Based on the current folding state given by the environment, the agent sequentially self-folds the amino acid along the protein sequence. For example, the agent randomly places the first amino acid in the environment. Based on the state of the first Figure 1: FoldingZero framework describes the interaction between the environment (left one), and the folding agent (right one). Starting from an initial state given by the environment, the agent carries out simulations, including selection, evaluation, expansion and backpropagation processes. The most promising folding position is selected to self-fold the next residue. When the folding terminates, states, rewards and policies will be stored into the memory to train the HPNet. amino acid, the agent uses its trained model to place the second amino acid next to the first one. This process continues until the agent folds the last amino acid in the sequence. With the self-folding process, the final H-H contact score is given by the environment. The score is utilized as a reward to evaluate each folding action. HP folding environment Protein primary sequence is typically notated as a string of letters. In the environment, each amino acid in the sequence is firstly translated to H or P type according to its chemical properties. For example, given a primary sequence ACRCDH, its HP representation is HPHHPH. Starting from the first one, each amino acid will be self-folded by the agent on the 2D grid. The environment defines the folding state at time-step t as s t and corresponding legal action as a ∈ A(s). Action space of each state contains at most 3 possible moves (forward, left and right) because of self-avoiding. Only the vertex of the lattice can be occupied. The neighboring residues in the protein sequence must also occupy adjacent vertices. Every folding action leads to a new folding state s t+1 , which contains all so far folded amino acids' positions on the 2D lattice. When the folding is done for all the residues along the sequence, the amount of final H-H contact is calculated as r, which will be fed back to the agent as self-folding reward. In FoldingZero, the lattice is represented as a 3D tensor with 2D grid (height and width) and 1D channel (analogous to RGB channels in images). Each grid point in the tensor corresponds to either vertex or edge of the 2D lattice. Vertex can be occupied by two types of amino acids, such as H or P. We also define two connection types on edge. 
One denotes the "primary connect" between adjacent residues on the primary sequence and the other denotes the "H-H contact" between pairwise H residues which are the closest neighbors on lattices but not adjacent on the primary sequence. Thus, 4 binary channels with value 0 or 1 are utilized to represent one grid point; only one channel can be activated, and the others are all zeroes. Self-folding mechanism The agent in FoldingZero incorporates two interactive components, HPNet and R-UCT. HPNet takes the folding state s as its input. Stacked residual blocks with convolutional layers are utilized to extract abstract features. At the top, HPNet extends to two output heads, namely policy and value. The policy head outputs a vector P with three values, which represents the probabilities of selecting three possible folding actions for the next residue. The value head outputs a scalar v, estimating the amount of H-H contact for the whole protein sequence based on the current folding results. R-UCT is the search algorithm utilized in FoldingZero for promising folding positions. It incrementally grows to a search tree during the self-folding process. Each child node corresponds to one possible folding action, and stores the related statistics information, such as visit count, total reward, mean reward and prior probability. These parameters are updated during multiple rounds of Monte Carlo tree search, which consists of selection, expansion, evaluation and backpropagation. When the amount of search round reaches to the configured upper limit, next action will be selected based on these statistic information. R-UCT in FoldingZero does not use Monte Carlo rollouts policy in each search round, compared with standard algorithms. Because in the HP model, the search space will inflate exponentially with the increase in protein length, rollout policy will result in overwhelming computational and memory cost. As an efficient replacement, FoldingZero leverages the HPNet to expand and evaluate the unexplored leaf nodes in the search tree. The output of policy head is directly appended to the new child node as its prior probability. The value head output is utilized to update the total and mean reward values of every node that locates along the search path during the backpropagation. Heuristically guided by the HPNet, R-UCT can effectively conduct the lookahead search and node-evaluation. To ensure validity of the folding results, self-avoid restriction is applied to the R-UCT. Except this basic policy, no other heuristics or prior knowledge is utilized to augment the R-UCT. When the tree search simulation completes, R-UCT provides a normalized probability vector π over all of the current valid actions. According to the probabilities, the agent in FoldingZero selects the most promising self-folding action for the next amino acid. FoldingZero repeats the above tree search process for each residual, until the whole protein sequence is traversed. Reinforcement learning To improve the quality of self-folding results, FoldingZero leverages a reinforcement learning algorithm, which is inspired by AlphaGo Zero [23]. It is designed to improve HPNet and R-UCT iteratively in the repeated policy procedures. In FoldingZero, HPNet is trained in a supervised manner to match the R-UCT search results closely. Action probability π in the R-UCT is calculated based on the raw network output P and multiple rounds of tree search, so π may be much stronger than P . 
As a policy improvement operator, π serves as the label for the policy head of HPNet. On the other head, the amount of final H-H contact is utilized as a positive reward to evaluate the quality of the self-folding trajectory. The algorithm is designed to maximize the reward to obtain the most stable protein conformation. As a policy evaluation operator, the final reward works as the label for the value head. The training samples are generated during the self-folding process. At time-step t, the current folding state s t , and its corresponding action probability π t in R-UCT can be immediately obtained. When a whole protein sequence is folded, the amount of final H-H contact is applied to each intermediate self-folding time-step as its reward r. For one protein sequence with length L, eventually it can generate L − 1 training samples (s t , π t , r), and we store all of them into a database. We keep training the HPNet until the configured iteration limit is reached. To ensure that we can always utilize the best HPNet to guide the R-UCT, we introduce a competitive mechanism. Over a test dataset, two FoldingZero agents compete with each other; one utilizes the latest HPNet parameters, and the other is based on the previous best model. If the former one wins with more folded H-H contacts, the updated HPNet will replace the previous best model to adopt in the future self-folding, and also serve as a baseline for the following agent competition. Heuristically guided and evaluated by the updated HPNet, the tree search in R-UCT may also become more powerful. By repeating this policy procedures, both HPNet and R-UCT can keep improving iteratively. In the next two subsections, we describe the tree search steps in R-UCT and the HPNet architecture. R-UCT where Q * (s t , a) = Q(s t , a) R upper (seq) (2) U (s t , a) = c α P (s t , a) i N (s t , a i ) 1 + N (s t , a)(3) where a ∈ A(s) represents all available actions that lead to corresponding candidate nodes. In (1), the first term Q * (s t , a) represents the exploitation component, which prefers the nodes with high folded H-H contact score. The second term U (s t , a) is the exploration component, which favors the nodes that have been relatively rarely visited. c α is a hyperparameter to balance exploitation and exploration. According to the proof by Hart-Istrail [13,14], given a protein sequence seq, the optimal number of H-H contacts Opt(seq) in the 2D HP model exists a theoretical upper bound R upper (seq). Divided by this upper bound, Q(s t , a) is scaled to Q * (s t , a) with the same magnitude as U (s t , a). To calculate the upper bound R upper (seq), residues are indexed by their positions in the primary sequence, using the ascending order 1, 2, 3, ...n, where n is the protein length. Denoting the numbers of hydrophobic residues located at odd and even positions as O(seq) and E(seq), respectively, we have Opt(seq) ≤ R upper (seq)(4) such that R upper (seq) = 2 × min{O(seq), E(seq)} Expansion and evaluation When reaching a leaf node s L , the HPNet is utilized to evaluate its state and output the estimated reward v L and prior probability vector P L . Then, the leaf node can be expanded to the search tree and its valid child node s i is initialized to N (s i , a i ) = 0, W (s i , a i ) = 0, Q(s i , a i ) = 0 and P (s i , a i ) = p i , p i ∈ P L . Backpropagation The statistics stored in nodes are updated backward after each simulation round. The visit count is accumulated with N (s t , a t ) = N (s t , a t ) + 1. 
The total reward W (s t , a t ) and mean reward Q(s t , a t ) are also updated by the equation (6) and (7). W (s t , a t ) = W (s t , a t ) + v t (6) Q(s t , a t ) = W (s t , a t ) N (s t , a t )(7) Self-folding probability π(a i |s) = N (s,ai) j N (s,aj ) is returned, when all the simulation rounds end. Based on π, the folding agent will select the most promising action to self-fold the residue. HPNet architecture The input to the neural network is defined as X t , a N × N × M image stack with grid size N × N and the number of binary channels M . The current folding state s t is represented as the concatenation of four binary value feature planes [H t , P t , C t , B t ]. They respectively correspond to H type residue, P type residue, "primary connect" and "H-H connect". For example, H g t = 1 only if the grid point g is occupied by the H type residue. To incorporate the sequence-folding information, we utilize 3 steps of history states and stack them together with the current state. An extra feature plane, E t is used to represent the next residue type to be folded. It will be set as 1 if the residue is H type, or 0 if the residue is P type. The final X t is a concatenation of all these 17 planes with X t = [s t , s t−1 , s t−2 , s t−3 , E t ]. HPNet architecture is illustrated in Figure 2. The latent spatial information is extracted from the raw lattice input by 20 stacked residual blocks with 3×3 filters. Each residual block is comprised of two convolutional layers with ReLU activation function, two batch normalization layers, and a skip connection. At the top, the HPNet is split into two output heads, namely policy and value. The policy head outputs a vector P , representing the prior probability of each folding action. The value head outputs a scalar v, estimating the H-H contact score for the whole protein sequence. To train the HPNet, we use a cross-entropy loss for the policy head to maximize the similarity of the estimated prior probability P to search probabilities π. A mean squared error is adopted to the value head to minimize the error between the predicted value v and the self-folded reward r. Thus, the loss function for HPNet is given by: l = (r − v) 2 − π log P + β θ 2 (8) where θ represents weights of HPNet and β is a hyperparameter that controls the L2 regularization to prevent overfitting. Experiments and analysis Experimental setting of FoldingZero Self-folding We collect around 9000 non-redundant protein sequences from the public PDB dataset (https://www.rcsb.org/), in which any two proteins share less than 25% sequence identity. FoldingZero utilizes the current best HPNet model and R-UCT to sequentially self-fold each protein sequence. A folding action is selected after 300 simulation rounds of the R-UCT. To increase the exploration spaces, a Dirichlet noise is added to the prior probabilities of the parent nodes, with P (n, a) = (1 − )p a + λ a , where λ ∼ Dir(0.03) and = 0.25. Training We store the most recent 60,000 self-folding results into the memory. In every iteration, 256 results are sampled uniformly from the memory slots to train the HPNet. We use SGD (Stochastic Gradient Descent) with momentum 0.9 as the optimization approach, and set the initial learning rate to 0.001 and the weight decay to 4e-5. Evaluation To ensure that the updated HPNet model can generate higher quality prediction, we use 500 unseen protein sequences for evaluation. For every 2000 training steps with 32 batch-size, we save a new checkpoint and evaluate it. 
If it performs better than the previous best model, it will be used to self-fold and become a new baseline for competition in the next round. Evaluation After training FoldingZero in around two days, we evaluate it on the well-known 2D HP model benchmark dataset (http://www.brown.edu/Research/Istrail_Lab/hp2dbenchmarks.html). First, we compare FoldingZero with a pure UCT based approach regarding the H-H contact score. The UCT approach employs the rollout strategy with similar information utilized by FoldingZero, except the prior probability from the HPNet. We fix the number of simulation round to 300 in FoldingZero, and adjust it in the controlled approach. As shown in Figure 3, with the increase in round number, the performance of the UCT algorithm slightly improves, because it can explore more state space before finalizing the selection. However, with the exponential growth of the search space, it becomes difficult to further improve performance by increasing simulation rounds. In contrast, even with much fewer simulation rounds, FoldingZero outperforms the UCT method, and the advantage is more noticeable when folding long sequences. It demonstrates that the trained HPNet can effectively guide the high-quality tree search simulations. Second, we compare FoldingZero with other state-of-the-art heuristic approaches. A conventional metric, free energy score is utilized to measure their performance. It is defined as the opposite of the H-H contact number. EMC [18] and ENLS [11] were developed based on the genetic algorithm, and Ant-Q [10] is a combined approach with evolutionary algorithm and reinforcement learning. Table 1 shows that FoldingZero achieves the comparable results and the folded free energy scores approach to the optimal ones. It is also worth noting that EMC is based on time-consuming simulation, ENLS uses memory structures to store intermediate results, and Ant-Q learns an independent Q-table for each specific sequence. Thus, when there tends to be an inordinately large number of possible solutions, the simulation rounds or memory requirements of these approaches tend to be prohibitive for longer sequences. In contrast, the efficiency of FoldingZero does not exponentially depend on the sequence length. Even for the long sequences, it can give the decent folding results in a reasonable time period. Table 2. The "S" vertex denotes the first starting residue in the sequence and "E" denotes the last ending one. Result analysis From the benchmark dataset, we select several representative protein sequences listed in Table 2, and visualize their folding results in Figure 4. We observe that FoldingZero successfully forms compact H-H cores by congregating the hydrophobic residues in the structure center and placing polar ones on the boundary. It demonstrates that FoldingZero learns the latent knowledge that hydrophobic residues are predominantly located in protein's core, whereas polar ones are more commonly located on the surface, through DRL with extensive experiences. We also evaluate FoldingZero with some long protein sequences, which are not available in the benchmark dataset due to the limited scalability. As shown in Figure 5a, the folded structure also exhibits the H-H core pattern. Table 2. During the evaluation, we also notice an interesting folding result of Seq3, shown in Figure 5b. For the penultimate residue of the sequence, FoldingZero still attempts to place it on the boundary, because the residue type is polar. 
Table 2 caption: the "S" vertex denotes the first (starting) residue in the sequence and "E" the last (ending) one.

Result analysis

From the benchmark dataset, we select several representative protein sequences, listed in Table 2, and visualize their folding results in Figure 4. We observe that FoldingZero successfully forms compact H-H cores by congregating the hydrophobic residues in the center of the structure and placing polar ones on the boundary. This demonstrates that, through DRL with extensive experience, FoldingZero learns the latent knowledge that hydrophobic residues are predominantly located in a protein's core, whereas polar ones are more commonly located on the surface. We also evaluate FoldingZero on some long protein sequences, which are not available in the benchmark dataset due to the limited scalability of existing approaches. As shown in Figure 5a, the folded structure also exhibits the H-H core pattern. During the evaluation, we also notice an interesting folding result for Seq3 of Table 2, shown in Figure 5b. For the penultimate residue of the sequence, FoldingZero still attempts to place it on the boundary, because the residue type is polar. However, this folding action prevents the last hydrophobic residue from forming the potential H-H contact. One possible reason is that HPNet in the folding agent is not given global information about the whole sequence, so R-UCT may be misguided by its predictions. In future work, we plan to embed global sequence information into the input of HPNet to further improve its capacity.

Conclusion

We proposed FoldingZero, an intelligent protein folding framework that self-folds de novo protein 2D HP structures from scratch. HPNet and R-UCT are effectively integrated in FoldingZero to select promising folding actions. A reinforcement learning algorithm improves HPNet and R-UCT iteratively in repeated policy iteration procedures. Without any supervision or domain knowledge, FoldingZero achieves high-quality folding results comparable to other heuristic approaches. Without time-consuming search and computation, FoldingZero is much more scalable and shows great potential for real-world protein property prediction. We hope this work will inspire future research on protein structure prediction with deep reinforcement learning techniques.
3,651
1812.00804
2950959563
Given a set of observations generated by an optimization process, the goal of inverse optimization is to determine likely parameters of that process. We cast inverse optimization as a form of deep learning. Our method, called deep inverse optimization, is to unroll an iterative optimization process and then use backpropagation to learn parameters that generate the observations. We demonstrate that by backpropagating through the interior point algorithm we can learn the coefficients determining the cost vector and the constraints, independently or jointly, for both non-parametric and parametric linear programs, starting from one or multiple observations. With this approach, inverse optimization can leverage concepts and algorithms from deep learning.
In the parametric optimization setting, @cite_16 develop an optimization model that encodes KKT optimality conditions for imputing objective function coefficients of a convex optimization problem. @cite_28 focus on the same problem under the assumption of noisy measurements, developing a bilevel problem and two algorithms which are shown to maintain statistical consistency. Saez-Gallego and Morales @cite_20 address the case of learning @math and @math jointly in a parametric setting where the @math vector is assumed to be an affine function of a regressor. The general case of learning the weights of a parametric linear optimization problem where @math, @math and @math are functions of @math (Figure 1 (iii)) has not been addressed in the literature.
{ "abstract": [ "Inverse optimization refers to the inference of unknown parameters of an optimization problem based on knowledge of its optimal solutions. This paper considers inverse optimization in the setting where measurements of the optimal solutions of a convex optimization problem are corrupted by noise. We first provide a formulation for inverse optimization and prove it to be NP-hard. In contrast to existing methods, we show that the parameter estimates produced by our formulation are statistically consistent. Our approach involves combining a new duality-based reformulation for bilevel programs with a regularization scheme that smooths discontinuities in the formulation. Using epi-convergence theory, we show the regularization parameter can be adjusted to approximate the original inverse optimization problem to arbitrary accuracy, which we use to prove our consistency results. Next, we propose two solution algorithms based on our duality-based formulation. The first is an enumeration algorithm that is applicabl...", "We consider an optimizing process (or parametric optimization problem), i.e., an optimization problem that depends on some parameters. We present a method for imputing or estimating the objective function, based on observations of optimal or nearly optimal choices of the variable for several values of the parameter, and prior knowledge (or assumptions) about the objective. Applications include estimation of consumer utility functions from purchasing choices, estimation of value functions in control problems, given observations of an optimal (or just good) controller, and estimation of cost functions in a flow network.", "We consider the problem of forecasting the aggregate demand of a pool of price-responsive consumers of electricity. The response of the aggregate load to price is modeled by an optimization problem that is characterized by a set of marginal utility curves and minimum and maximum power consumption limits. The task of estimating these parameters is addressed using a generalized inverse optimization scheme that, in turn, requires solving a nonconvex mathematical program. We introduce a solution method that overcomes the nonconvexities by solving instead two linear problems with a penalty term, which is statistically adjusted by using a cross-validation algorithm. The proposed methodology is data-driven and leverages information from regressors, such as time and weather variables, to account for changes in the parameter estimates. The power load of a group of heating, ventilation, and air conditioning systems in buildings is simulated, and the results show that the aggregate demand of the group can be successfully captured by the proposed model, making it suitable for short-term forecasting purposes." ], "cite_N": [ "@cite_28", "@cite_16", "@cite_20" ], "mid": [ "2963474773", "2107308405", "2964052424" ] }
Deep Inverse Optimization
The potential for synergy between optimization and machine learning is well recognized [6], with recent examples including [8,18,26]. Our work uses machine learning for inverse optimization. Consider a parametric linear optimization problem, PLP(u, w):

minimize_x  c(u, w)^T x
subject to  A(u, w) x ≤ b(u, w),    (1)

where x ∈ R^d, and c(u, w) ∈ R^d, A(u, w) ∈ R^{m×d} and b(u, w) ∈ R^m are all functions of features u and weights w. Let x^n_tru be an optimal solution to PLP(u^n, w_tru). Given a set of observed optimal solutions, {x^1_tru, x^2_tru, ..., x^N_tru}, for observed conditions {u^1, u^2, ..., u^N}, the goal of inverse optimization (IO) is to determine values of the optimization process parameters w that generated the observed optimal solutions. Applications of IO range from medicine (e.g., imputing the importance of treatment sub-objectives from clinically approved radiotherapy plans [11]) to energy (e.g., predicting the behaviour of price-responsive customers [31]).

Fundamentally, IO problems are learning problems: each u^n is a feature vector and x^n_tru is its corresponding target; the goal is to learn model parameters w that minimize some loss function. In this paper, we cast inverse optimization as a form of deep learning. Our method, called deep inverse optimization, is to unroll an iterative optimization process and then use backpropagation to learn model parameters that generate the observations/targets.

[Figure 1: the true, initial, and learned parametric LPs, with cost vector c(u, w_tru), for the three learning tasks.]

Figure 1 shows the actual result of applying our deep IO method to three inverse optimization learning tasks. The top panel illustrates the non-parametric, single-point variant of model (1), i.e., the case when exactly one x_tru is given, a classical problem in IO (see [1,12]). In Figure 1 (i), only c needs to be learned: starting from an initial cost vector c_ini, our method finds c_lrn, which makes x_tru an optimal solution of the LP, by minimizing ‖x_tru − x_lrn‖^2. In Figure 1 (ii), starting from c_ini, A_ini and b_ini, our approach finds c_lrn, A_lrn and b_lrn, which make x_tru an optimal solution of the learned LP, through minimizing ‖x_tru − x_lrn‖^2. Figure 1 (iii) shows learning w = [w_0, w_1] for the parametric problem instance

minimize_x  cos(w_0 + w_1 u) x_1 + sin(w_0 + w_1 u) x_2
subject to  −x_1 ≤ 0.2 w_0 u,
            −x_2 ≤ −0.2 w_1 u,
            w_0 x_1 + (1 + (1/3) w_1 u) x_2 ≤ w_0 + 0.1 u.    (2)

Starting from w_ini = [0.2, 0.4] with a loss (mean squared error) of 0.45, our method is able to find w_lrn = [1.0, 1.0] with a loss of zero, thereby making the x^n_tru optimal solutions of (2) for u values {−1.5, −0.5, 0.5, 1.5}. Given newly observed u values, in this example w_lrn would predict correct decisions; in other words, the learned model generalizes well.
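To make the dependence of the LP data on (u, w) concrete, the toy instance (2) can be assembled as follows (a NumPy sketch; the function name is ours):

import numpy as np

def plp_instance(u, w):
    """Coefficients (c, A, b) of the parametric LP in Eq. (2)."""
    w0, w1 = w
    c = np.array([np.cos(w0 + w1 * u), np.sin(w0 + w1 * u)])
    A = np.array([[-1.0, 0.0],                    # -x1 <= 0.2*w0*u
                  [0.0, -1.0],                    # -x2 <= -0.2*w1*u
                  [w0, 1.0 + w1 * u / 3.0]])      # third constraint of (2)
    b = np.array([0.2 * w0 * u, -0.2 * w1 * u, w0 + 0.1 * u])
    return c, A, b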
The contributions of this paper are as follows. We propose a general framework for inverse optimization based on deep learning. This framework is applicable to learning coefficients of the objective function and constraints, individually or jointly; minimizing a general loss function; learning from a single observation or multiple observations; and solving both non-parametric and parametric problems. As a proof of concept, we demonstrate that our method obtains effectively zero loss on many randomly generated linear programs for all three types of learning tasks shown in Figure 1, and always improves the loss significantly. Such a numerical study on randomly generated non-parametric and parametric linear programs with multiple learnable parameters has not previously been published for any IO method in the literature. Finally, to the best of our knowledge, we are the first to use unrolling and backpropagation for constrained inverse optimization. We explain how our approach differs from methods in inverse optimization and machine learning in Section 2. We present our deep IO framework in Section 3 and our experimental results in Section 4. Section 5 discusses both the generality and the limitations of our work, and Section 6 concludes the paper.

Deep Learning Framework for Inverse Optimization

The problems studied in inverse optimization are learning problems: given features u^n and corresponding targets x^n_tru, the goal is to learn parameters of a forward optimization model that generate x^n_tru as its optimal solutions. A complementary view is that inverse optimization is a learning technique specialized to the case when the observed data come from an optimization process. Given this perspective on inverse optimization, and motivated by the success of deep learning for a variety of learning tasks in recent years (see [23]), this paper develops a deep learning framework for inverse optimization problems.

Deep learning is a set of techniques for training the parameters of a sequence of transformations (layers) chained together; the more intermediate layers, the 'deeper' the architecture. We refer the reader to the textbook by Goodfellow, Bengio and Courville [19] for additional details about deep learning. The parameters of the intermediate layers can be trained through backpropagation, an automatic differentiation technique that computes the gradient of an output with respect to its input through the layers of a neural network, starting from the final layer all the way back to the initial one. This method efficiently computes an update to the weights of the model [30]. Importantly, current machine learning libraries such as PyTorch provide built-in backpropagation capabilities [28] that allow for wider use of deep learning. Thus, our deep inverse optimization framework iterates between solving the forward optimization problem using an iterative optimization algorithm and backpropagating through the steps (layers) of that algorithm to improve the estimates of the learnable parameters (weights) of the forward process.

Algorithm 1: Deep inverse optimization framework.
Input: w_ini; (u^n, x^n_tru) for n = 1, ..., N. Output: w_lrn.
 1: w ← w_ini
 2: for s in 1 ... max_steps do
 3:   Δw ← 0
 4:   for n in 1 ... N do
 5:     x ← FO(u^n, w)                 # solve forward problem
 6:     ℓ ← L(x, x^n_tru)              # compute loss
 7:     Δw ← Δw + ∂ℓ/∂w                # accumulate gradient by backprop
 8:   end for
 9:   β ← line_search(w, α · Δw/N)     # find safe step size
10:   w ← w − β α · Δw/N               # update weights
11: end for
12: return w

Our approach, shown in Algorithm 1, takes the pairs (u^n, x^n_tru), n = 1, ..., N, as input and starts by initializing w = w_ini. For each n, the forward optimization problem (FO) is solved with the current weights (line 5), and the loss between the resulting optimal solution x and x^n_tru is computed (line 6). The gradient of the loss function with respect to w is computed by backpropagation through the layers of the forward process (line 7).
In line 9, a line search is used to determine the step size β for updating the weights: β is halved whenever infeasibility or unboundedness is encountered, until either a value is found that leads to a loss reduction or β < 10^−8, in which case early termination of the algorithm is triggered. Finally, in line 10, the weights are updated using the average gradient, the step size β, and α, a vector of component-wise learning rates for w.

[Figure 2: (i) the IPM forward process: for the current u and w, which define the feasible region via A(u, w), b(u, w) and the cost c(u, w), the central path runs from the starting point x^(1) through x^(2), ... to x(u, w); (ii) deep inverse optimization through IPM: the loss between x(u, w) and the target x(u, w_tru) is backpropagated through all IPM steps.]

Importantly, our framework is applicable in the context of any differentiable, iterative forward optimization procedure. In principle, parameter gradients are automatically computable even with non-linear constraints or non-linear objectives, so long as they can be expressed through standard differentiable primitives. Our particular implementation uses the barrier interior point method (IPM), as described by Boyd and Vandenberghe [9], as the forward optimization solver. The IPM forward process is illustrated in Figure 2 (i): the central path taken by IPM is shown for the current u and w, which define both the current feasible region and the current c(u, w). As shown in Figure 2 (ii), backpropagation starts from the computation of the loss function between a (near) optimal forward optimization solution x(u, w) and the target x(u, w_tru) and proceeds backward through all the steps of IPM, i.e., from x(u, w) to x^(1), the starting point of IPM, then to the forward instance parameters, and finally to w to compute Δw. In practice, backpropagating all the way to x^(1) may not be necessary for computing accurate gradients; see Section 5.

The framework requires setting three main hyperparameters: w_ini, the initial weight vector; max_steps, the total number of steps allotted to training; and α, the learning rates for the different components of w. The number of additional hyperparameters depends on the forward optimization process.

Experimental Results

In this section, we demonstrate the application of our framework on randomly generated LPs for the three types of problems shown in Figure 1: learning c in the non-parametric case; learning c, A and b together in the non-parametric case; and learning w in the parametric case.

Implementation

Our framework is implemented in Python, using PyTorch version 0.4.1 and its built-in backpropagation capabilities [28]. All numerical operations are carried out with PyTorch tensors and standard PyTorch primitives, including the matrix inversion at the heart of the Newton step.

Hyperparameters

We limit learning to max_steps = 200 in all experiments. Four additional hyperparameters are set in each experiment:
- ε, which controls the precision and termination of IPM;
- t^(0), the initial value of the barrier IPM sharpness parameter t;
- µ, the factor by which t is increased along the IPM central path;
- α, the vector of per-parameter learning rates, which in some experiments is broken down into α_c and α_Ab.
In all experiments, the hyperparameter ε is either a constant 10^−5 or decays exponentially from 0.1 to 10^−5 during learning. The decay is a form of graduated optimization [7] and tends to help performance when using the MSE loss.
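The following is a minimal, self-contained sketch (ours, not the paper's implementation) of Algorithm 1 with a log-barrier IPM as the forward solver, learning only c on a toy 2-D box LP; a plain SGD update stands in for the line-searched step of lines 9-10, and the arguments t0 and mu play the roles of t^(0) and µ above:

import torch

def ipm_forward(c, A, b, x0, t0=0.5, mu=3.0, outer=5, inner=15):
    """Differentiable log-barrier IPM for min c^T x s.t. Ax <= b.
    x0 must be strictly feasible; every Newton step stays on the autograd
    tape, so gradients of the (near-)optimal x w.r.t. c, A, b are available."""
    x, t = x0, t0
    I = 1e-8 * torch.eye(x0.numel())
    for _ in range(outer):
        for _ in range(inner):
            s = b - A @ x                            # slacks, kept > 0
            grad = t * c + A.T @ (1.0 / s)           # barrier gradient
            H = A.T @ torch.diag(1.0 / s**2) @ A + I # barrier Hessian
            dx = torch.linalg.solve(H, -grad)        # Newton direction
            step = 1.0                               # backtrack to stay feasible
            while torch.any(b - A @ (x + step * dx) <= 0):
                step *= 0.5
            x = x + step * dx
        t *= mu                                      # sharpen the barrier
    return x

# Toy task: unit box in 2-D; the target vertex (1, 1) is optimal for c = (-1, -1).
A = torch.tensor([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
b = torch.tensor([1., 1., 0., 0.])
x0 = torch.tensor([0.5, 0.5])
x_tru = torch.tensor([1., 1.])

c = torch.tensor([1., 1.], requires_grad=True)       # learnable cost vector
opt = torch.optim.SGD([c], lr=0.5)
for _ in range(100):
    x = ipm_forward(c, A, b, x0)
    loss = torch.sum((x - x_tru) ** 2)               # squared-error loss
    opt.zero_grad()
    loss.backward()                                  # backprop through the IPM
    opt.step()

Because every Newton step stays on the autograd tape, loss.backward() differentiates through the entire central path, which is exactly the unrolling depicted in Figure 2 (ii).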
Baseline LPs

To generate problem instances, we first create a set of baseline LPs with d variables and m constraints by sampling at least d random points from N(0, 1) and then constructing their convex hull via the scipy.spatial.ConvexHull package [29]. We generate 50 LP instances for each of the following six problem sizes: d = 2 with m ∈ {4, 8, 16}, and d = 10 with m ∈ {20, 36, 80}. Our experiments focus on inequality constraints. We observed that our method can work for equality-constrained instances, but we did not systematically evaluate equality constraints and leave that for future work.

Non-Parametric

We first demonstrate the performance of our method for learning c only, and for learning c, A and b jointly, on the single-point variant of model (1), i.e., when a single optimal target x_tru is given, a classical problem in IO [1]. We use two loss functions, absolute duality gap (ADG) and squared error (SE), defined as follows:

ADG = |c_lrn^T (x_tru − x_lrn)|,    (3)
SE = ‖x_tru − x_lrn‖_2^2,    (4)

the first of which is a classical performance metric in IO [11] and the second a standard metric in machine learning.

Learning c only

To complete instance generation for this experiment, we randomly select one vertex of the convex hull to be x_tru for each of the 50 baseline LP instances and for each of the six (m, d) combinations. Initialization is done by sampling each component of c_ini from N(0, 1). We implement a randomized grid search by sampling 20 random combinations of the following three hyperparameter sets: t^(0) ∈ {0.5, 1, 5, 10}, µ ∈ {1.5, 2, 5, 10, 20}, and α_c ∈ {1, 10, 100, 1000}. As in other applications of deep learning, it is not clear in advance which hyperparameters will work best for a particular problem instance. For each instance we run our algorithm with the same 20 hyperparameter combinations, reporting the best final error values.

Figure 3 (i) shows the results of this experiment for the ADG and SE losses. In both cases, our method is able to reliably learn c: in fact, for all instances, the final error is under 10^−4, while the majority of initial errors are above 10^−1. There is no clear pattern in the performance of the method as m and d change for ADG; for SE, the final loss is slightly larger for higher d.

Learning c, A, b jointly

Our approach to instance generation here is to start with each baseline LP and generate a strictly feasible or infeasible target within some reasonable proximity of an existing vertex. The algorithm is then forced to learn new c, A, b that generate the target, which is not an optimum of the initial LP. To make this task more challenging, we also perturb c so that it is not initialized too close to the optimal direction. For each of the 50 baseline LP feasible regions, we generate a c ∼ N(0, 1) and compute its optimal solution x*. To generate an infeasible target we set x_tru = x* + η, where η ∼ U[−0.2, 0.2]. We similarly generate a challenging c_ini by corrupting c with noise from U[−0.2, 0.2]. To generate a strictly feasible target near x*, we set x_tru = 0.9 x* + 0.1 x, where x is a uniformly random point within the feasible region, generated as a Dirichlet-weighted combination of all vertices; this method was used because adding noise in 10 dimensions almost always results in an infeasible target. In summary, we generate new LP instances with the same feasible region as the baseline LPs but a corrupted c_ini and one feasible and one infeasible target.
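This instance-generation recipe can be sketched as follows (illustrative; function names are ours). ConvexHull's facet equations provide A and b directly, and the two kinds of targets are produced as just described; note that the number of facets m emerges from the sampled points rather than being set directly:

import numpy as np
from scipy.spatial import ConvexHull

def baseline_lp(d, n_points, rng):
    """Feasible region Ax <= b from the convex hull of N(0,1) points."""
    pts = rng.standard_normal((n_points, d))
    hull = ConvexHull(pts)
    # Each row of hull.equations is [a, a0] with a . x + a0 <= 0 inside the hull.
    A, b = hull.equations[:, :-1], -hull.equations[:, -1]
    return A, b, pts[hull.vertices]

def make_targets(x_star, vertices, rng):
    """Infeasible target: vertex plus uniform noise (in low dimension this can
    occasionally remain feasible). Feasible target: pull x* 10% toward a
    Dirichlet-weighted interior point."""
    x_infeasible = x_star + rng.uniform(-0.2, 0.2, size=x_star.shape)
    lam = rng.dirichlet(np.ones(len(vertices)))
    x_feasible = 0.9 * x_star + 0.1 * (lam @ vertices)
    return x_feasible, x_infeasible

rng = np.random.default_rng(0)
A, b, verts = baseline_lp(d=2, n_points=16, rng=rng)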
The goal is to demonstrate the ability of our algorithm to detect the change and to move the constraints and objective so that the feasible/infeasible target becomes a vertex optimum. For each of the six problem sizes, we randomly split the 50 instances into two subsets, one with feasible and the other with infeasible targets. For the ADG loss we set ε = 10^−5, and for SE we use the decay strategy; in practice, this decay strategy is similar to putting emphasis on learning c in the initial iterations and ending with emphasis on constraint learning. The values of hyperparameters α_c and α_Ab are independently selected from {0.1, 1, 10} and concatenated into one learning-rate vector α. We generate 20 different hyperparameter combinations, run our algorithm on each instance with all of them, and record the value of the best trial.

Figure 3 (ii) shows the results of this experiment for the ADG and SE losses. In both cases, our method is able to learn model parameters that result in a median loss under 10^−4. For ADG, our method performs equally well for all problem sizes, and there is not much difference in the final loss between feasible and infeasible targets. For SE, however, the final loss is larger for higher d but decreases as m increases. Furthermore, there is a visible difference in the performance of the method on feasible and infeasible points for 10-dimensional instances: learning from infeasible targets becomes a more difficult task.

Parametric

Several aspects of the experiment for parametric LPs differ from the non-parametric case. First, we train by minimizing MSE(w), defined as

MSE(w) = (1/N) Σ_{n=1}^{N} ‖x(u^n, w_tru) − x(u^n, w)‖_2^2.    (5)

We chose the mean of the SE loss instead of the mean of the ADG loss for the parametric experiments because it is zero only if the targets are all feasible, which is not necessarily required for ADG to be zero. This makes the SE loss more difficult from a learning point of view, but also leads to a more intuitive notion of success; see Section 5 for discussion. In the parametric case, we also assess how well the learned PLP generalizes, by evaluating its MSE(w_lrn) on a held-out test set.

To generate parametric problem instances, we again started from the baseline LP feasible regions. To generate a true PLP, we used six weights to define linear functions of u for all elements of c, all elements of b, and one random element in each row of A. For example, for 2-dimensional problems with four constraints, our instances have the following form:

minimize_x  (c_1 + w_1 + w_2 u) x_1 + (c_2 + w_1 + w_2 u) x_2

subject to

  [ a_11                 a_12 + w_3 + w_4 u ]        [ b_1 + w_5 + w_6 u ]
  [ a_21                 a_22 + w_3 + w_4 u ]  x  ≤  [ b_2 + w_5 + w_6 u ]    (6)
  [ a_31 + w_3 + w_4 u   a_32               ]        [ b_3 + w_5 + w_6 u ]
  [ a_41                 a_42 + w_3 + w_4 u ]        [ b_4 + w_5 + w_6 u ]

Specifically, the "true PLP" instances are generated by setting w_1, w_3, w_5 = 0 and giving w_2, w_4, w_6 nonzero values.
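A sketch of how the six weights enter (6) (our notation; a_pert holds the index of the randomly chosen perturbed column in each row of A):

import numpy as np

def parametric_lp(u, w, c0, A0, b0, a_pert):
    """Coefficients of Eq. (6): every entry of c and b, and one chosen
    entry per row of A, is shifted by an affine function of u."""
    w1, w2, w3, w4, w5, w6 = w
    c = c0 + w1 + w2 * u
    A = A0.copy()
    rows = np.arange(A0.shape[0])
    A[rows, a_pert] += w3 + w4 * u      # perturb one element per row
    b = b0 + w5 + w6 * u
    return c, A, b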
We also sample 20 test points u uniformly from [u_min, u_max]. We then initialize learning from a corrupted PLP by setting w_ini = w_tru + η, where each element of η ∼ U[−0.2, 0.2]. Hyperparameters are sampled as t^(0) ∈ {0.5, 1, 5, 10}, µ ∈ {1.5, 2, 5, 10, 20} and α_Ab ∈ {1, 10}, and α_c is then chosen to be a factor of {0.01, 1, 100} times α_Ab, i.e., a relative learning rate. Here, α_c and α_Ab control the learning rates of the parameters within w that determine c and (A, b), respectively. In total, we generate 20 different hyperparameter combinations, run our algorithm on each instance with all of them, and record the best final error value. A constant value of ε = 10^−5 is used.

We demonstrate the performance of our method on learning parametric LPs of the form shown in (6) with d = 2, m = 8 and with d = 10, m = 36. In Figure 4, we report two metrics evaluated on the training set, namely MSE(w_ini) and MSE(w_lrn), and one metric for the test set, MSE(w_lrn). Figure 4 (iii) shows an example instance with d = 2, m = 8 from the training set. We see that, overall, our deep learning method works well on 2-dimensional problems, with the training and testing errors both much smaller than the initial error. In the vast majority of cases the test error is also comparable to the training error, though there are a few cases where it is worse, which indicates a failure to generalize well. For 10-dimensional instances, the algorithm significantly improves MSE(w_lrn) over the initialization MSE(w_ini), but in most cases fails to drive the loss to zero, either due to local minima or slow convergence. Again, performance on the test set is similar to that on the training set.

Discussion

The conceptual message that we wish to reinforce is that inverse optimization should be viewed as a form of deep learning, and that unrolling gives easy access to the gradients of any parameter used directly or indirectly in the forward optimization process. There are many aspects of this view that merit further exploration. What kinds of forward optimization processes can be inversely optimized this way? Which ideas and algorithms from the deep learning community will help? Are there aspects of IO that make gradient-based learning more challenging than in deep learning at large? Conclusive answers are beyond the scope of this paper, but we discuss these and other questions below.

Generality and applicability. As a proof of concept, this paper uses linear programming for the forward problems and the barrier-method IPM as the forward optimization process. In principle, the framework is applicable to any forward process to which automatic differentiation can be applied. This observation does not mean that ours is the best approach for a specialized IO problem, such as learning c from a single point [12] or from multiple points within the same feasible region [14], but it provides a new strategy. The practical message of our paper is that, when faced with novel classes or novel parameterizations of IO problems, the unrolling strategy provides convenient access to a suite of general-purpose gradient-based algorithms for solving the IO problem at hand. This strategy is made especially easy by deep learning libraries that support dynamic 'computation graphs', such as PyTorch. Researchers working within this framework can rapidly apply IO to many differentiable forward optimization processes, without having to derive the algorithm for each case. Automatic differentiation and backpropagation have enabled a new level of productivity for deep learning research, and they may do the same for inverse optimization research. Applying deep inverse optimization does not require expertise in deep learning itself.

We chose IPM as the forward process because the inner Newton step is differentiable and because we expected the barrier temperature parameter t to have a stabilizing effect on the gradient. For non-differentiable optimization processes, it may still be possible to develop differentiable versions.
In deep learning, many advances have been made by developing differentiable versions of traditionally discrete operations, such as memory addressing [20] or sampling from a discrete distribution [25]. We believe the scope of differentiable forward optimization processes may similarly expand over time.

Limitations and possible improvements. Deep IO inherits the limitations of most gradient-based methods. If learning is initialized in the right "basin of attraction", it can proceed to a global optimum. Even then, the choice of learning algorithm may be crucial. When implemented within a steepest-descent framework, as we have done here, the learning procedure can get trapped in local minima or exhibit very slow convergence. Such effects are why most instances in Figure 4 (ii) failed to achieve zero loss. In deep learning with neural networks, poor local minima become exponentially rare as the dimension of the learning problem increases [15,33]. A typical strategy for training neural networks is therefore to over-parameterize (use a high search dimension) and then use regularization to avoid over-fitting to the data. In deep IO, natural parameterizations of the forward process may not permit an increase in dimension, or there may not be enough observations for regularization to compensate, so local minima remain a potential obstacle. We believe training and regularization methods specialized to low-dimensional learning problems, such as that of Sahoo et al. [32], may be applicable here.

We expect that other techniques from deep learning, and from gradient-based optimization in general, will translate to deep IO. For example, optimization techniques with second-order aspects such as momentum [35] and L-BFGS [10] are readily available in deep learning frameworks. Other deep learning 'tricks' may be applicable to stabilizing deep IO. For example, we observe that, when c is normal to a constraint, the gradient with respect to c can suddenly grow very large. We stabilized this behaviour with line search, but a similar 'exploding gradient' phenomenon exists when training deep recurrent neural networks, and gradient clipping [27] is a popular way to stabilize training. A detailed investigation of applicable deep learning techniques is outside the scope of this paper. Deep IO may be more successful when the loss with respect to the forward process can be annealed or 'smoothed' in a manner akin to graduated non-convexity [7]; our ε-decay strategy is an example of this, as discussed below. Finally, it may be possible to develop hybrid approaches, combining gradient-based learning with closed-form solutions or combinatorial algorithms.

Loss function and metric of success. One advantage of the deep inverse optimization approach is that it can accommodate various loss functions, or combinations of loss functions, without special development or analysis. For example, one could substitute other p-norms, or losses that are robust to outliers, and the gradient would be automatically available. This flexibility may be valuable. Special loss functions have been important in machine learning, especially for structured output problems [21], and the decision variables of optimization processes are likewise a form of structured output. In this study we chose two classical loss functions: absolute duality gap and squared error. The behaviour of our algorithm varied depending on the loss function used. Looking at Figure 3 (ii), it appears that deep IO performs better with the ADG loss than with the SE loss when learning c, A, b jointly.
However, this performance is due to the theoretical property that ADG can be zero even when the observed target point is arbitrarily infeasible [12]. With ADG, all the IO solver needs to do is adjust c, A, b so that x_lrn − x_tru is orthogonal to c, which in no way requires the learned model to be capable of generating x_tru as an optimum. In other words, ADG is meaningful mainly when the true feasible region is known, as in Figure 3 (i). When the true region is unknown, SE prioritizes solutions that directly generate the observations x^n_tru, and may therefore be a more meaningful loss function. That is why we used it for the parametric experiments depicted in Figure 4.

Minimizing the SE loss also appears to be more challenging for steepest descent. To get a sense of the characteristics of ADG versus SE from the point of view of varying c, consider Figure 5, which depicts the loss for the IO problem in Figure 1 (i) using both high precision (ε = 10^−5) and low precision (ε = 0.1, 0.01) for IPM. Because the ADG loss depends directly on c, the loss varies smoothly even while the corresponding optimum x* stays fixed. The SE loss, in contrast, is piecewise constant; an infinitesimal perturbation of c will almost never change the SE loss in the limit ε → 0. Note that the gradients derived by implicit differentiation [2] give ∂ℓ/∂c = 0 everywhere in the linear case, which would mean c cannot be learned by gradient descent. IPM can learn c nonetheless because the barrier sharpness parameter t smooths the loss, especially at low values. The precision parameter ε limits the maximal sharpness during forward optimization, and so the gradient ∂ℓ/∂c is not zero in practice, especially when ε is weak. Notice that the SE loss surface becomes qualitatively smoother, whereas ADG is not fundamentally changed. Also notice that when c is normal to a constraint (when the optimal point is about to transition from one vertex to another), the gradient ∂ℓ/∂c explodes even when the problem is smoothed.

Computational efficiency. Our paper is conceptual and focuses on flexibility and the likelihood of success rather than on computational efficiency. Many applications of IO are not real-time, so we expect methods with running times on the order of seconds or minutes to be of practical use. Still, we believe the framework can be both flexible and fast. Deep learning frameworks are GPU-accelerated and scale well with the size of an individual forward problem, so large instances are not a concern. A bigger issue for GPUs is solving many small or moderate instances efficiently; Amos and Kolter [2] developed a batch-mode GPU forward solver to address this. What is more concerning for the unrolling strategy is that forward optimization processes can be very deep, with hundreds or thousands of iterations. Backpropagation requires keeping all the intermediate values of the forward pass resident in memory for later use in the backward pass. The computational cost of backpropagation is comparable to that of the forward process, so there is no asymptotic advantage to skipping the backward pass. Although memory usage was small in our instances, if memory usage is linear in depth, then at some depth the unrolling strategy will cease to be practical compared to Amos and Kolter's [2] implicit differentiation approach. However, we observed that for IPM most of the gradient contribution comes from the final ten Newton steps before termination.
In other words, there is a vanishing gradient with depth, which means the gradient can be well approximated in practice with truncated backpropagation through time (see [34] for a review), which uses a small, constant pool of memory regardless of depth. In practice, we suggest that the unrolling approach is convenient during the development and exploration phase of IO research. Once an IO model is proven to work, it can potentially be made more efficient by deriving the implicit gradients [2], using the unrolled implementation as a reference for comparison. Still, more important than improving any of these constants is to use the asymptotically faster learning algorithms actively being developed in the deep learning community.

Conclusion

We developed a deep learning framework for inverse optimization based on backpropagation through an iterative forward optimization process. We illustrate the potential of this framework via an implementation where the forward process is the interior point barrier method. Our results on non-parametric and parametric linear problems show promising performance. To the best of our knowledge, this paper is the first to explicitly connect deep learning and inverse optimization.
4,982
1812.00804
2950959563
Given a set of observations generated by an optimization process, the goal of inverse optimization is to determine likely parameters of that process. We cast inverse optimization as a form of deep learning. Our method, called deep inverse optimization, is to unroll an iterative optimization process and then use backpropagation to learn parameters that generate the observations. We demonstrate that by backpropagating through the interior point algorithm we can learn the coefficients determining the cost vector and the constraints, independently or jointly, for both non-parametric and parametric linear programs, starting from one or multiple observations. With this approach, inverse optimization can leverage concepts and algorithms from deep learning.
Recent work in machine learning @cite_27 @cite_23 @cite_33 views inverse optimization through the lens of online learning, where new observations appear over time rather than as one batch. Our approach may be applicable in online settings, but we focus on generality in the batch setting and do not investigate real-time cases.
{ "abstract": [ "In this paper, we demonstrate how to learn the objective function of a decision-maker while only observing the problem input data and the decision-maker's corresponding decisions over multiple rounds. Our approach is based on online learning and works for linear objectives over arbitrary feasible sets for which we have a linear optimization oracle. As such, it generalizes previous approaches based on KKT-system decomposition and dualization. The two exact algorithms we present -- based on multiplicative weights updates and online gradient descent respectively -- converge at a rate of O(1 sqrt(T)) and thus allow taking decisions which are essentially as good as those of the observed decision-maker already after relatively few observations. We also discuss several useful generalizations, such as the approximate learning of non-linear objective functions and the case of suboptimal observations. Finally, we show the effectiveness and possible applications of our methods in a broad computational study.", "Inverse optimization is a powerful paradigm for learning preferences and restrictions that explain the behavior of a decision maker, based on a set of external signal and the corresponding decision pairs. However, most inverse optimization algorithms are designed specifically in batch setting, where all the data is available in advance. As a consequence, there has been rare use of these methods in an online setting suitable for real-time applications. In this paper, we propose a general framework for inverse optimization through online learning. Specifically, we develop an online learning algorithm that uses an implicit update rule which can handle noisy data. Moreover, under additional regularity assumptions in terms of the data and the model, we prove that our algorithm converges at a rate of @math and is statistically consistent. In our experiments, we show the online learning approach can learn the parameters with great accuracy and is very robust to noises, and achieves a dramatic improvement in computational efficacy over the batch learning approach.", "" ], "cite_N": [ "@cite_27", "@cite_33", "@cite_23" ], "mid": [ "2899173741", "2892189299", "2740573360" ] }
Deep Inverse Optimization
In other words, there is a vanishing gradient with depth, which means the gradient can be well-approximated in practice with truncated backpropagation through time (see [34] for review), which uses a small constant pool of memory regardless of depth. In practice, we suggest that the unrolling approach is convenient during the development and exploration phase of IO research. Once an IO model is proven to work, it can potentially be made more efficient by deriving the implicit gradients [2] and comparing them to the unrolled implementation as a reference. Still, more important than improving any of these constants is to use asymptotically faster learning algorithms actively being developed in the deep learning community. Conclusion We developed a deep learning framework for inverse optimization based on backpropagation through an iterative forward optimization process. We illustrate the potential of this framework via an implementation where the forward process is the interior point barrier method. Our results on linear non-parametric and parametric problems show promising performance. To the best of our knowledge, this paper is the first to explicitly connect deep learning and inverse optimization.
4,982
1812.00804
2950959563
Given a set of observations generated by an optimization process, the goal of inverse optimization is to determine likely parameters of that process. We cast inverse optimization as a form of deep learning. Our method, called deep inverse optimization, is to unroll an iterative optimization process and then use backpropagation to learn parameters that generate the observations. We demonstrate that by backpropagating through the interior point algorithm we can learn the coefficients determining the cost vector and the constraints, independently or jointly, for both non-parametric and parametric linear programs, starting from one or multiple observations. With this approach, inverse optimization can leverage concepts and algorithms from deep learning.
Methodologically, our unrolling strategy is similar to @cite_26 who directly optimize the hyperparameters of a neural network training procedure with gradient descent. Conceptually, the closest papers to our work are by Amos and Kolter @cite_7 and Donti, Amos and Kolter @cite_11 . However, these papers are written independently of the inverse optimization literature. Amos and Kolter @cite_7 present the OptNet framework, which integrates a quadratic optimization layer into a deep neural network. The gradients for updating the coefficients of the optimization problem are derived through implicit differentiation. This approach involves taking matrix differentials of the KKT conditions for the optimization problem in question, while our strategy is based on allowing a deep learning framework to unroll an existing optimization procedure. Their method has efficiency advantages, while our unrolling approach is easily applicable, including to processes for which the KKT conditions may not hold or are difficult to implicitly differentiate. We include a more in-depth discussion in Section 5.
{ "abstract": [ "Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. These gradients allow us to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural network architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum.", "", "With the increasing popularity of machine learning techniques, it has become common to see prediction algorithms operating within some larger process. However, the criteria by which we train these algorithms often differ from the ultimate criteria on which we evaluate them. This paper proposes an end-to-end approach for learning probabilistic machine learning models in a manner that directly captures the ultimate task-based objective for which they will be used, within the context of stochastic programming. We present three experimental evaluations of the proposed approach: a classical inventory stock problem, a real-world electrical grid scheduling task, and a real-world energy storage arbitrage task. We show that the proposed approach can outperform both traditional modeling and purely black-box policy optimization approaches in these applications." ], "cite_N": [ "@cite_26", "@cite_7", "@cite_11" ], "mid": [ "1868018859", "", "2949444583" ] }
Deep Inverse Optimization
The potential for synergy between optimization and machine learning is well-recognized [6], with recent examples including [8,18,26]. Our work uses machine learning for inverse optimization. Consider a parametric linear optimization problem, PLP(u, w):

$$\min_{x} \; c(u,w)^{\top} x \quad \text{subject to} \quad A(u,w)\,x \le b(u,w), \tag{1}$$

where $x \in \mathbb{R}^d$, and $c(u,w) \in \mathbb{R}^d$, $A(u,w) \in \mathbb{R}^{m \times d}$ and $b(u,w) \in \mathbb{R}^m$ are all functions of features $u$ and weights $w$. Let $x^n_{tru}$ be an optimal solution to PLP($u^n$, $w_{tru}$). Given a set of observed optimal solutions, $\{x^1_{tru}, x^2_{tru}, \dots, x^N_{tru}\}$, for observed conditions $\{u^1, u^2, \dots, u^N\}$, the goal of inverse optimization (IO) is to determine values of optimization process parameters $w$ that generated the observed optimal solutions. Applications of IO range from medicine (e.g., imputing the importance of treatment sub-objectives from clinically-approved radiotherapy plans [11]) to energy (e.g., predicting the behaviour of price-responsive customers [31]). Fundamentally, IO problems are learning problems: each $u^n$ is a feature vector and $x^n_{tru}$ is its corresponding target; the goal is to learn model parameters $w$ that minimize some loss function. In this paper, we cast inverse optimization as a form of deep learning. Our method, called deep inverse optimization, is to unroll an iterative optimization process and then use backpropagation to learn model parameters that generate the observations/targets. [Figure 1 legend: $c(u, w_{tru})$; true parametric LP; initial parametric LP; learned parametric LP.] Figure 1 shows the actual result of applying our deep IO method to three inverse optimization learning tasks. The top panel illustrates the non-parametric, single-point variant of model (1) (the case when exactly one $x_{tru}$ is given), a classical problem in IO (see [1,12]). In Figure 1 (i), only $c$ needs to be learned: starting from an initial cost vector $c_{ini}$, our method finds $c_{lrn}$ which makes $x_{tru}$ an optimal solution of the LP by minimizing $\|x_{tru} - x_{lrn}\|_2$. In Figure 1 (ii), starting from $c_{ini}$, $A_{ini}$ and $b_{ini}$, our approach finds $c_{lrn}$, $A_{lrn}$ and $b_{lrn}$, which make $x_{tru}$ an optimal solution of the learned LP through minimizing $\|x_{tru} - x_{lrn}\|_2$. Figure 1 (iii) shows learning $w = [w_0, w_1]$ for the parametric problem instance

$$\min_{x} \; \cos(w_0 + w_1 u)\,x_1 + \sin(w_0 + w_1 u)\,x_2 \quad \text{s.t.} \quad -x_1 \le 0.2\,w_0 u, \;\; -x_2 \le -0.2\,w_1 u, \;\; w_0 x_1 + \left(1 + \tfrac{1}{3} w_1 u\right) x_2 \le w_0 + 0.1u. \tag{2}$$

Starting from $w_{ini} = [0.2, 0.4]$ with a loss (mean squared error) of 0.45, our method is able to find $w_{lrn} = [1.0, 1.0]$ with a loss of zero, thereby making the $x^n_{tru}$ optimal solutions of (2) for $u$ values $\{-1.5, -0.5, 0.5, 1.5\}$. Given newly observed $u$ values, in this example $w_{lrn}$ would predict correct decisions. In other words, the learned model generalizes well. The contributions of this paper are as follows. We propose a general framework for inverse optimization based on deep learning. This framework is applicable to learning coefficients of the objective function and constraints, individually or jointly; minimizing a general loss function; learning from a single or multiple observations; and solving both non-parametric and parametric problems. As a proof of concept, we demonstrate that our method obtains effectively zero loss on many randomly generated linear programs for all three types of learning tasks shown in Figure 1, and always improves the loss significantly.
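As a small illustration of what the forward problem PLP(u, w) computes, the sketch below evaluates instance (2) for a given (u, w) with an off-the-shelf LP solver. The paper's own pipeline uses a differentiable interior point method instead, so this snippet is purely expository; the function name is ours.

```python
# Illustrative only: evaluate the forward problem of Eq. (2) for given
# (u, w) with scipy's LP solver. The paper's pipeline instead uses a
# differentiable interior point method as the forward process.
import numpy as np
from scipy.optimize import linprog

def forward_lp(u, w):
    """Solve the parametric LP of Eq. (2) for features u and weights w."""
    w0, w1 = w
    c = np.array([np.cos(w0 + w1 * u), np.sin(w0 + w1 * u)])
    A = np.array([[-1.0, 0.0],
                  [0.0, -1.0],
                  [w0, 1.0 + w1 * u / 3.0]])
    b = np.array([0.2 * w0 * u, -0.2 * w1 * u, w0 + 0.1 * u])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
    return res.x  # optimal decision x(u, w)

# x_tru for u = 0.5 under the true weights w_tru = [1.0, 1.0]
print(forward_lp(0.5, [1.0, 1.0]))
```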
Such a numerical study on randomly generated non-parametric and parametric linear programs with multiple learnable parameters has not previously been published for any IO method in the literature. Finally, to the best of our knowledge, we are the first to use unrolling and backpropagation for constrained inverse optimization. We explain how our approach differs from methods in inverse optimization and machine learning in Section 2. We present our deep IO framework in Section 3 and our experimental results in Section 4. Section 5 discusses both the generality and the limitations of our work, and Section 6 concludes the paper. Deep Learning Framework for Inverse Optimization The problems studied in inverse optimization are learning problems: given features $u^n$ and corresponding targets $x^n_{tru}$, the goal is to learn parameters of a forward optimization model that generates $x^n_{tru}$ as its optimal solutions. A complementary view is that inverse optimization is a learning technique specialized to the case when the observed data come from an optimization process. Given this perspective on inverse optimization, and motivated by the success of deep learning for a variety of learning tasks in recent years (see [23]), this paper develops a deep learning framework for inverse optimization problems. Deep learning is a set of techniques for training the parameters of a sequence of transformations (layers) chained together. The more intermediate layers, the 'deeper' the architecture. We refer the reader to the textbook by Goodfellow, Bengio and Courville [19] for additional details about deep learning. The parameters of the intermediate layers can be trained/learned through backpropagation, an automatic differentiation technique that computes the gradient of an output with respect to its input through the layers of a neural network, starting from the final layer all the way to the initial one. This method efficiently computes an update to the weights of the model [30]. Importantly, current machine learning libraries such as PyTorch provide built-in backpropagation capabilities [28] that allow for wider use of deep learning. Thus, our deep inverse optimization framework iterates between solving the forward optimization problem using an iterative optimization algorithm and backpropagating through the steps (layers) of that algorithm to improve the estimates of the learnable parameters (weights) of the forward process.

Algorithm 1: Deep inverse optimization framework.
Input: $w_{ini}$; $(u^n, x^n_{tru})$ for $n = 1, \dots, N$. Output: $w_{lrn}$
1: $w \leftarrow w_{ini}$
2: for $s$ in $1 \dots max\_steps$ do
3:   $\Delta w \leftarrow 0$
4:   for $n$ in $1 \dots N$ do
5:     $x \leftarrow \mathrm{FO}(u^n, w)$  // Solve forward problem
6:     $\ell \leftarrow L(x, x^n_{tru})$  // Compute loss
7:     $\Delta w \leftarrow \Delta w + \partial \ell / \partial w$  // Accumulate gradient by backprop
8:   end for
9:   $\beta \leftarrow \mathrm{line\_search}(w, \alpha \cdot \Delta w / N)$  // Find safe step size
10:  $w \leftarrow w - \beta \alpha \cdot \Delta w / N$  // Update weights
11: end for
12: Return $w$

Our approach, shown in Algorithm 1, takes the pairs $(u^n, x^n_{tru})$, $n = 1, \dots, N$, as input, and starts by initializing $w = w_{ini}$. For each $n$, the forward optimization problem (FO) is solved with the current weights (line 5), and the loss between the resulting optimal solution $x$ and $x_{tru}$ is computed (line 6). The gradient of the loss function with respect to $w$ is computed by backpropagation through the layers of the forward process.
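The following is a minimal PyTorch sketch of Algorithm 1, assuming `forward_opt(u, w)` is a differentiable solver (such as the unrolled barrier IPM sketched further below) and `loss_fn` is ADG or SE. The line search of lines 9-10, described in detail next, is simplified here to plain step halving on the loss, without the infeasibility and unboundedness checks; all names are ours, not the authors' code.

```python
# A minimal sketch of Algorithm 1 in PyTorch. forward_opt must be
# built from differentiable ops so autograd can backprop through it.
import torch

def deep_io(w_ini, data, forward_opt, loss_fn, alpha, max_steps=200):
    w = w_ini.clone().requires_grad_(True)
    for _ in range(max_steps):
        total = 0.0
        for u_n, x_tru in data:
            x = forward_opt(u_n, w)             # line 5: solve forward problem
            total = total + loss_fn(x, x_tru)   # line 6: compute loss
        total = total / len(data)
        grad, = torch.autograd.grad(total, w)   # line 7: backprop through solver
        # lines 9-10: halve the step until the loss does not get worse
        # (the paper's line search also guards against infeasibility)
        beta = 1.0
        with torch.no_grad():
            while beta > 1e-8:
                w_try = w - beta * alpha * grad
                try_loss = sum(loss_fn(forward_opt(u, w_try), x)
                               for u, x in data) / len(data)
                if try_loss <= total:
                    break
                beta *= 0.5
            w = w_try
        w.requires_grad_(True)
    return w.detach()
```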
In line 9, line search is used to determine the step size, $\beta$, for updating the weights: $\beta$ is reduced by half if infeasibility or unboundedness is encountered, until a value is found that will lead to loss reduction, or until $\beta < 10^{-8}$, in which case early algorithm termination is triggered. Finally, in line 10, the weights are updated using the average gradient, step size $\beta$, and $\alpha$, a vector representing the component-wise learning rates for $w$. [Figure 2: (i) the IPM forward process, showing the central path from the starting point $x^{(1)}$ through $x^{(2)}$ to $x(u,w)$; (ii) deep inverse optimization through IPM, backpropagating loss$(u,w)$ through the IPM steps to $c(u,w)$, $A(u,w)$, $b(u,w)$ and $w$.] Importantly, our framework is applicable in the context of any differentiable, iterative forward optimization procedure. In principle, parameter gradients are automatically computable even with non-linear constraints or non-linear objectives, so long as they can be expressed through standard differentiable primitives. Our particular implementation uses the barrier interior point method (IPM), as described by Boyd and Vandenberghe [9], as our forward optimization solver. The IPM forward process is illustrated in Figure 2 (i): the central path taken by IPM is shown for the current $u$ and $w$, which define both the current feasible region and the current $c(u, w)$. As shown in Figure 2 (ii), backpropagation starts from the computation of the loss function between a (near) optimal forward optimization solution $x(u, w)$ and the target $x(u, w_{tru})$, and proceeds backward through all the steps of IPM, i.e., from $x(u, w)$ back to $x^{(1)}$, the starting point of IPM, then to the forward instance parameters $c(u,w)$, $A(u,w)$, $b(u,w)$, and finally to $w$ to compute $\Delta w$. In practice, backpropagating all the way to $x^{(1)}$ may not be necessary for computing accurate gradients; see Section 5. The framework requires setting three main hyperparameters: $w_{ini}$, the initial weight vector; $max\_steps$, the total number of steps allotted to the training; and $\alpha$, the learning rates for the different components of $w$. The number of additional hyperparameters depends on the forward optimization process. Experimental Results In this section, we demonstrate the application of our framework on randomly-generated LPs for the three types of problems shown in Figure 1: learning $c$ in the non-parametric case; learning $c$, $A$ and $b$ together in the non-parametric case; and learning $w$ in the parametric case. Implementation Our framework is implemented in Python, using PyTorch version 0.4.1 and its built-in backpropagation capabilities [28]. All numerical operations are carried out with PyTorch tensors and standard PyTorch primitives, including the matrix inversion at the heart of the Newton step. Hyperparameters We limit learning to $max\_steps = 200$ in all experiments. Four additional hyperparameters are set in each experiment:
- $\epsilon$: controls the precision and termination of IPM;
- $t^{(0)}$: the initial value of the barrier IPM sharpness parameter $t$;
- $\mu$: the factor by which $t$ is increased along the IPM central path;
- $\alpha$: the vector of per-parameter learning rates, which in some experiments is broken down into $\alpha_c$ and $\alpha_{Ab}$.
In all experiments, the hyperparameter $\epsilon$ is either a constant $10^{-5}$ or decays exponentially from 0.1 to $10^{-5}$ during learning. The decay is a form of graduated optimization [7], and tends to help performance when using the MSE loss.
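Below is a minimal differentiable barrier-method LP solver in the spirit of the forward process described above, following the standard Boyd-Vandenberghe scheme. For simplicity it runs a fixed number of Newton steps per centering stage instead of a Newton-decrement test, and it assumes a strictly feasible starting point `x0`; it is a sketch under these assumptions, not the authors' exact implementation.

```python
# Differentiable barrier IPM for min c^T x s.t. Ax <= b, in PyTorch.
# Every op (matmul, diag, solve) is differentiable, so autograd can
# backprop the loss at the returned x to c, A, b and hence to w.
import torch

def ipm_lp(c, A, b, x0, t0=1.0, mu=10.0, eps=1e-5, newton_steps=20):
    x, t = x0, t0
    m = b.shape[0]
    while m / t > eps:                       # outer barrier loop
        for _ in range(newton_steps):        # inner centering (Newton) loop
            r = b - A @ x                    # slacks, must stay positive
            d = 1.0 / r
            g = t * c + A.t() @ d            # grad of t*c^T x - sum(log r)
            H = A.t() @ torch.diag(d * d) @ A    # Hessian of the barrier
            dx = torch.linalg.solve(H, -g)
            step = 1.0                       # crude backtracking to keep
            while torch.any(b - A @ (x + step * dx) <= 0):  # strict feasibility
                step *= 0.5
            x = x + step * dx
        t = t * mu                           # sharpen the barrier
    return x
```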
Baseline LPs To generate problem instances, we first create a set of baseline LPs with $d$ variables and $m$ constraints by sampling at least $d$ random points from $N(0, 1)$, and then constructing the convex hull via the scipy.spatial.ConvexHull package [29]. We generate 50 LP instances for each of the following six problem sizes: $d = 2$, $m \in \{4, 8, 16\}$ and $d = 10$, $m \in \{20, 36, 80\}$. Our experiments focus on inequality constraints. We observed that our method can work for equality-constrained instances, but we did not systematically evaluate equality constraints and we leave that for future work. Non-Parametric We first demonstrate the performance of our method for learning $c$ only, and learning $c$, $A$ and $b$ jointly, on the single-point variant of model (1), i.e., when a single optimal target $x_{tru}$ is given, a classical problem in IO [1]. We use two loss functions, absolute duality gap (ADG) and squared error (SE), defined as follows:

$$\mathrm{ADG} = \left| c_{lrn}^{\top} (x_{tru} - x_{lrn}) \right|, \tag{3}$$

$$\mathrm{SE} = \| x_{tru} - x_{lrn} \|_2^2, \tag{4}$$

the first of which is a classical performance metric in IO [11] and the second is a standard metric in machine learning. Learning c only To complete instance generation for this experiment, we randomly select one vertex of the convex hull to be $x_{tru}$ for each of the 50 baseline LP instances and for each of the six $(m, d)$ combinations. Initialization is done by sampling each parameter of $c_{ini}$ from $N(0, 1)$. We implement a randomized grid search by sampling 20 random combinations of the following three hyperparameter sets: $t^{(0)} \in \{0.5, 1, 5, 10\}$, $\mu \in \{1.5, 2, 5, 10, 20\}$, and $\alpha_c \in \{1, 10, 100, 1000\}$. As in other applications of deep learning, it is not clear which hyperparameters will work best for a particular problem instance. For each instance we run our algorithm with the same 20 hyperparameter combinations, reporting the best final error values. Figure 3 (i) shows the results of this experiment for ADG and SE loss. In both cases, our method is able to reliably learn $c$: in fact, for all instances, the final error is under $10^{-4}$, while the majority of initial errors are above $10^{-1}$. There is no clear pattern in the performance of the method as $m$ and $d$ change for ADG; for SE, the final loss is slightly bigger for higher $d$. Learning c, A, b jointly Our approach to instance generation here is to start with each baseline LP and generate a strictly feasible or infeasible target within some reasonable proximity of an existing vertex. The algorithm is then forced to learn new $c$, $A$, $b$ that generate the target, which is not an optimum of the initial LP. To make this task more challenging, we also perturb $c$ so that it is not initialized too close to the optimal direction. For each of the 50 baseline LP feasible regions, we generate a $c \sim N(0, 1)$ and compute its optimal solution $x^*$. To generate an infeasible target we set $x_{tru} = x^* + \eta$ where $\eta \sim U[-0.2, 0.2]$. We similarly generate a challenging $c_{ini}$ by corrupting $c$ with noise from $U[-0.2, 0.2]$. To generate a strictly feasible target near $x^*$, we set $x_{tru} = 0.9x^* + 0.1\tilde{x}$, where $\tilde{x}$ is a uniformly random point within the feasible region generated by a Dirichlet-weighted combination of all vertices; this method was used because adding noise in 10 dimensions almost always results in an infeasible target. In summary, we generate new LP instances with the same feasible regions as the baseline LPs but a corrupted $c_{ini}$ and one feasible and one infeasible target.
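Written as PyTorch operations, the two losses of Eqs. (3)-(4) might look as follows, so that their gradients flow back through the unrolled solver; the function and variable names are ours.

```python
# The loss functions of Eqs. (3)-(4) as differentiable PyTorch ops.
import torch

def adg_loss(x_lrn, x_tru, c_lrn):
    """Absolute duality gap: |c_lrn^T (x_tru - x_lrn)|, Eq. (3)."""
    return torch.abs(c_lrn @ (x_tru - x_lrn))

def se_loss(x_lrn, x_tru):
    """Squared error: ||x_tru - x_lrn||_2^2, Eq. (4)."""
    return torch.sum((x_tru - x_lrn) ** 2)
```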
The goal is to demonstrate the ability of our algorithm to detect the change and to move the constraints and objective so that the feasible/infeasible target becomes a vertex optimum. For each of the six problem sizes, we randomly split the 50 instances into two subsets, one with feasible and the other with infeasible targets. For ADG loss we set $\epsilon = 10^{-5}$, and for SE we use the decay strategy. In practice, this decay strategy is similar to putting emphasis on learning $c$ in the initial iterations and ending with emphasis on constraint learning. The values of the hyperparameters $\alpha_c$ and $\alpha_{Ab}$ are independently selected from $\{0.1, 1, 10\}$ and concatenated into one learning rate vector $\alpha$. We generate 20 different hyperparameter combinations. We run our algorithm on each instance with all hyperparameter combinations and record the value of the best trial. Figure 3 (ii) shows the results of this experiment for ADG and SE loss. In both cases, our method is able to learn model parameters that result in a median loss of under $10^{-4}$. For ADG, our method performs equally well for all problem sizes, and there is not much difference in the final loss for feasible and infeasible targets. For SE, however, the final loss is larger for higher $d$ but decreases as $m$ increases. Furthermore, there is a visible difference in the performance of the method on feasible and infeasible points for 10-dimensional instances: learning from infeasible targets becomes a more difficult task. Parametric Several aspects of the experiment for parametric LPs differ from the non-parametric case. First, we train by minimizing MSE($w$), defined as

$$\mathrm{MSE}(w) = \frac{1}{N} \sum_{n=1}^{N} \left\| x(u^n, w_{tru}) - x(u^n, w) \right\|_2^2. \tag{5}$$

We chose the mean of the SE loss instead of the mean of the ADG loss for the parametric experiments because it is only zero if the targets are all feasible, which is not necessarily required for ADG to be zero. This makes the SE loss more difficult from a learning point of view, but also leads to a more intuitive notion of success. See Section 5 for discussion. In the parametric case, we also assess how well the learned PLP generalizes, by evaluating its MSE($w_{lrn}$) on a held-out test set. To generate parametric problem instances, we again started from the baseline LP feasible regions. To generate a true PLP, we used six weights to define linear functions of $u$ for all elements of $c$, all elements of $b$, and one random element in each row of $A$. For example, for 2-dimensional problems with four constraints, our instances have the following form:

$$\min_{x} \;\; (c_1 + w_1 + w_2 u)\,x_1 + (c_2 + w_1 + w_2 u)\,x_2 \quad \text{subject to} \quad \begin{bmatrix} a_{11} & a_{12} + w_3 + w_4 u \\ a_{21} & a_{22} + w_3 + w_4 u \\ a_{31} + w_3 + w_4 u & a_{32} \\ a_{41} & a_{42} + w_3 + w_4 u \end{bmatrix} x \le \begin{bmatrix} b_1 + w_5 + w_6 u \\ b_2 + w_5 + w_6 u \\ b_3 + w_5 + w_6 u \\ b_4 + w_5 + w_6 u \end{bmatrix} \tag{6}$$

Specifically, the "true PLP" instances are generated by setting $w_1, w_3, w_5 = 0$ and choosing nonzero values for $w_2, w_4, w_6$. We also sample 20 test points $u$ uniformly from $[u_{min}, u_{max}]$. We then initialize learning from a corrupted PLP by setting $w_{ini} = w_{tru} + \eta$, where each element of $\eta \sim U[-0.2, 0.2]$. Hyperparameters are sampled as $t^{(0)} \in \{0.5, 1, 5, 10\}$, $\mu \in \{1.5, 2, 5, 10, 20\}$ and $\alpha_{Ab} \in \{1, 10\}$, and $\alpha_c$ is then chosen to be a factor of $\{0.01, 1, 100\}$ times $\alpha_{Ab}$, i.e., a relative learning rate. Here, $\alpha_c$ and $\alpha_{Ab}$ control the learning rates of the parameters within $w$ that determine $c$ and $(A, b)$, respectively. In total, we generate 20 different hyperparameter combinations.
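A hedged sketch of how the parametric instance (6) can be assembled as differentiable tensors from $w$ and $u$ is shown below; the baseline (c0, A0, b0) and the bookkeeping of which column in each row of A is perturbed follow the text above, but the exact construction details are our assumptions. The returned (c, A, b) can then be fed to the differentiable solver so that autograd carries d(loss)/dw through all three.

```python
# Assemble the PLP of Eq. (6) as differentiable tensors of w and u.
import torch

def build_plp(w, u, c0, A0, b0, col_idx):
    """w = (w1..w6); col_idx[i] is the perturbed column in row i of A."""
    c = c0 + w[0] + w[1] * u                    # all elements of c shift
    mask = torch.zeros_like(A0)                 # one entry per row of A
    mask[torch.arange(A0.shape[0]), col_idx] = 1.0
    A = A0 + mask * (w[2] + w[3] * u)
    b = b0 + w[4] + w[5] * u                    # all elements of b shift
    return c, A, b
```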
We run our algorithm on each instance with all hyperparameter combinations and record the best final error value. A constant value of $\epsilon = 10^{-5}$ is used. We demonstrate the performance of our method on learning parametric LPs of the form shown in (6) with $d = 2$, $m = 8$, and $d = 10$, $m = 36$. In Figure 4, we report two metrics evaluated on the training set, namely MSE($w_{ini}$) and MSE($w_{lrn}$), and one metric for the test set, MSE($w_{lrn}$). Figure 4 (iii) shows an example of an instance with $d = 2$, $m = 8$ from the training set. We see that, overall, our deep learning method works well on 2-dimensional problems, with the training and testing errors both being much smaller than the initial error. In the vast majority of cases the test error is also comparable to the training error, though there are a few cases where it is worse, which indicates a failure to generalize well. For 10-dimensional instances, the algorithm significantly improves MSE($w_{lrn}$) over the initialization MSE($w_{ini}$), but in most cases fails to drive the loss to zero, either due to local minima or slow convergence. Again, performance on the test set is similar to that on the training set. Discussion The conceptual message that we wish to reinforce is that inverse optimization should be viewed as a form of deep learning, and that unrolling gives easy access to the gradients of any parameter used directly or indirectly in the forward optimization process. There are many aspects of this view that merit further exploration. What kinds of forward optimization processes can be inversely optimized this way? Which ideas and algorithms from the deep learning community will help? Are there aspects of IO that make gradient-based learning more challenging than in deep learning at large? Conclusive answers are beyond the scope of this paper, but we discuss these and other questions below. Generality and applicability. As a proof of concept, this paper uses linear programming for the forward problems and IPM with the barrier method as the forward optimization process. In principle, the framework is applicable to any forward process to which automatic differentiation can be applied. This observation does not mean that ours is the best approach for a specialized IO problem, such as learning $c$ from a single point [12] or from multiple points within the same feasible region [14], but it provides a new strategy. The practical message of our paper is that, when faced with novel classes or novel parameterizations of IO problems, the unrolling strategy provides convenient access to a suite of general-purpose gradient-based algorithms for solving the IO problem at hand. This strategy is made especially easy by deep learning libraries that support dynamic 'computation graphs', such as PyTorch. Researchers working within this framework can rapidly apply IO to many differentiable forward optimization processes, without having to derive the algorithm for each case. Automatic differentiation and backpropagation have enabled a new level of productivity for deep learning research, and they may do the same for inverse optimization research. Applying deep inverse optimization does not require expertise in deep learning itself. We chose IPM as the forward process because the inner Newton step is differentiable and because we expected the temperature parameter $t$ to have a stabilizing effect on the gradient. For non-differentiable optimization processes, it may still be possible to develop differentiable versions.
In deep learning, many advances have been made by developing differentiable versions of traditionally discrete operations, such as memory addressing [20] or sampling from a discrete distribution [25]. We believe the scope of differentiable forward optimization processes may similarly be expanded over time. Limitations and possible improvements. Deep IO inherits the limitations of most gradient-based methods. If learning is initialized in the right "basin of attraction", it can proceed to a global optimum. Even then, the choice of learning algorithm may be crucial. When implemented within a steepest descent framework, as we have done here, the learning procedure can get trapped in local minima or exhibit very slow convergence. Such effects are why most instances in Figure 4 (ii) failed to achieve zero loss. In deep learning with neural networks, poor local minima become exponentially rare as the dimension of the learning problem increases [15,33]. A typical strategy for training neural networks is therefore to over-parameterize (use a high search dimension) and then use regularization to avoid over-fitting to the data. In deep IO, natural parameterizations of the forward process may not permit an increase in dimension, or there may not be enough observations for regularization to compensate, so local minima remain a potential obstacle. We believe training and regularization methods specialized to low-dimensional learning problems, such as that of Sahoo et al. [32], may be applicable here. We expect that other techniques from deep learning, and from gradient-based optimization in general, will translate to deep IO. For example, optimization techniques with second-order aspects, such as momentum [35] and L-BFGS [10], are readily available in deep learning frameworks. Other deep learning 'tricks' may be applicable to stabilizing deep IO. For example, we observe that, when $c$ is normal to a constraint, the gradient with respect to $c$ can suddenly grow very large. We stabilized this behaviour with line search, but a similar 'exploding gradient' phenomenon exists when training deep recurrent neural networks, and gradient clipping [27] is a popular way to stabilize training. A detailed investigation of applicable deep learning techniques is outside the scope of this paper. Deep IO may be more successful when the loss with respect to the forward process can be annealed or 'smoothed' in a manner akin to graduated non-convexity [7]. Our $\epsilon$-decay strategy is an example of this, as discussed below. Finally, it may be possible to develop hybrid approaches, combining gradient-based learning with closed-form solutions or combinatorial algorithms. Loss function and metric of success. One advantage of the deep inverse optimization approach is that it can accommodate various loss functions, or combinations of loss functions, without special development or analysis. For example, one could substitute other p-norms, or losses that are robust to outliers, and the gradient will be automatically available. This flexibility may be valuable. Special loss functions have been important in machine learning, especially for structured output problems [21]. The decision variables of optimization processes are likewise a form of structured output. In this study we chose two classical loss functions: absolute duality gap and squared error. The behaviour of our algorithm varied depending on the loss function used. Looking at Figure 3 (ii), it appears that deep IO performs better with ADG loss than with SE loss when learning $c$, $A$, $b$ jointly.
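As a concrete illustration of the gradient clipping trick mentioned above, the stock PyTorch utility can be applied to the IO weight vector before the update. Using clipping in place of (or alongside) the paper's line search is our illustrative variation, not the authors' procedure.

```python
# Clip the accumulated gradient before the weight update.
# clip_grad_norm_ is the standard PyTorch utility for this.
import torch

w = torch.randn(6, requires_grad=True)
loss = (w ** 2).sum()          # stand-in for the IO loss
loss.backward()
torch.nn.utils.clip_grad_norm_([w], max_norm=1.0)  # rescale if ||grad|| > 1
with torch.no_grad():
    w -= 0.1 * w.grad
    w.grad.zero_()
```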
However, this performance is due to the theoretical property that ADG can be zero even when the observed target point is arbitrarily infeasible [12]. With ADG, all the IO solver needs to do is adjust $c$, $A$, $b$ so that $x_{lrn} - x_{tru}$ is orthogonal to $c$, which in no way requires the learned model to be capable of generating $x_{tru}$ as an optimum. In other words, ADG is meaningful mainly when the true feasible region is known, as in Figure 3 (i). When the true region is unknown, SE prioritizes solutions that directly generate the observations $x^n_{tru}$, and may therefore be a more meaningful loss function. That is why we used it for our parametric experiments depicted in Figure 4. Minimizing the SE loss also appears to be more challenging for steepest descent. To get a sense for the characteristics of ADG versus SE from the point of view of varying $c$, consider Figure 5, which depicts the loss for the IO problem in Figure 1 (i) using both high precision ($\epsilon = 10^{-5}$) and low precision ($\epsilon = 0.1, 0.01$) for IPM. Because the ADG loss is directly dependent on $c$, the loss varies smoothly even as the corresponding optimum $x^*$ stays fixed. The SE loss, in contrast, is piece-wise constant; an infinitesimal perturbation of $c$ will almost never change the SE loss in the limit of $\epsilon \to 0$. Note that the gradients derived by implicit differentiation [2] indicate $\partial \ell / \partial c = 0$ everywhere in the linear case, which would mean $c$ cannot be learned by gradient descent. IPM can learn $c$ nonetheless because the barrier sharpness parameter $t$ smooths the loss, especially at low values. The precision parameter $\epsilon$ limits the maximal sharpness during forward optimization, and so the gradient $\partial \ell / \partial c$ is not zero in practice, especially when $\epsilon$ is weak (large). Notice that the SE loss surface becomes qualitatively smoother, whereas ADG is not fundamentally changed. Also notice that when $c$ is normal to a constraint (when the optimal point is about to transition from one vertex to another), the gradient $\partial \ell / \partial c$ explodes even when the problem is smoothed. Computational efficiency. Our paper is conceptual and focuses on flexibility and the likelihood of success, rather than computational efficiency. Many applications of IO are not real-time, and so we expect methods with running times on the order of seconds or minutes to be of practical use. Still, we believe the framework can be both flexible and fast. Deep learning frameworks are GPU-accelerated and scale well with the size of an individual forward problem, so large instances are not a concern. A bigger issue for GPUs is solving many small or moderate instances efficiently. Amos and Kolter [2] developed a batch-mode GPU forward solver to address this. What is more concerning for the unrolling strategy is that forward optimization processes can be very deep, with hundreds or thousands of iterations. Backpropagation requires keeping all the intermediate values of the forward pass resident in memory, for later use in the backward pass. The computational cost of backpropagation is comparable to that of the forward process, so there is no asymptotic advantage to skipping the backward pass. Although memory usage was small in our instances, if the memory usage is linear with depth, then at some depth the unrolling strategy will cease to be practical compared to Amos and Kolter's [2] implicit differentiation approach. However, we observed that for IPM most of the gradient contribution comes from the final ten Newton steps before termination.
In other words, there is a vanishing gradient with depth, which means the gradient can be well-approximated in practice with truncated backpropagation through time (see [34] for a review), which uses a small constant pool of memory regardless of depth. In practice, we suggest that the unrolling approach is convenient during the development and exploration phase of IO research. Once an IO model is proven to work, it can potentially be made more efficient by deriving the implicit gradients [2] and comparing them to the unrolled implementation as a reference. Still, more important than improving any of these constants is adopting the asymptotically faster learning algorithms actively being developed in the deep learning community. Conclusion We developed a deep learning framework for inverse optimization based on backpropagation through an iterative forward optimization process. We illustrate the potential of this framework via an implementation where the forward process is the interior point barrier method. Our results on linear non-parametric and parametric problems show promising performance. To the best of our knowledge, this paper is the first to explicitly connect deep learning and inverse optimization.
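The truncation idea can be realized by detaching all but the last K iterates from the autograd graph, as in the hedged sketch below; the value of K and the stand-in `newton_step` function are illustrative, not the paper's implementation.

```python
# Keep only the last K solver iterations in the autograd graph by
# detaching earlier iterates, so backprop memory is constant in depth.
import torch

def unrolled_solver(params, x0, newton_step, n_steps=500, keep_last=10):
    x = x0
    for i in range(n_steps):
        if i == n_steps - keep_last:
            x = x.detach()       # cut the graph: earlier steps are freed
        x = newton_step(x, params)
    return x                     # grads flow only through the last K steps
```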
4,982
1812.00647
2902303115
We propose the Deep Hierarchical Machine (DHM), a model inspired by the divide-and-conquer strategy while emphasizing representation learning ability and flexibility. A stochastic routing framework as used by recent deep neural decision/regression forests is incorporated, but we remove the need to evaluate unnecessary computation paths by utilizing a different topology and introducing a probabilistic pruning technique. We also show a specialized version of DHM (DSHM) for efficiency, which inherits the sparse feature extraction process used in traditional decision trees with pixel-difference features. To achieve sparse feature extraction, we propose to utilize sparse convolution operations in DSHM and show one possibility of introducing sparse convolution kernels by using a local binary convolution layer. DHM can be applied to both classification and regression problems, and we validate it on standard image classification and face alignment tasks to show its advantages over past architectures.
@cite_25 @cite_27 proposed to extract deep features to divide the problem space and to use simple probabilistic distributions at the leaf nodes. These models endowed traditional decision/regression trees with deep representation learning ability. Leaf node update rules were proposed based on convex optimization techniques, and these models out-performed deep models without a divide-and-conquer strategy. However, since the last layer of a deep model is used to divide the problem space, every path in the tree needs to be computed. Even when a branch of computation contributes little to the final prediction, it still needs to be evaluated, because each splitting node requires a full forward pass of the deep neural network. A model structure where each splitting node is separately evaluated was used in @cite_7 for depth estimation, but a general framework was missing and the effect of computation-path pruning was not investigated.
{ "abstract": [ "", "We present Deep Neural Decision Forests - a novel approach that unifies classification trees with the representation learning functionality known from deep convolutional networks, by training them in an end-to-end manner. To combine these two worlds, we introduce a stochastic and differentiable decision tree model, which steers the representation learning usually conducted in the initial layers of a (deep) convolutional network. Our model differs from conventional deep networks because a decision forest provides the final predictions and it differs from conventional decision forests since we propose a principled, joint and global optimization of split and leaf node parameters. We show experimental results on benchmark machine learning datasets like MNIST and ImageNet and find on-par or superior results when compared to state-of-the-art deep models. Most remarkably, we obtain Top5-Errors of only 7.84 6.38 on ImageNet validation data when integrating our forests in a single-crop, single seven model GoogLeNet architecture, respectively. Thus, even without any form of training data set augmentation we are improving on the 6.67 error obtained by the best GoogLeNet architecture (7 models, 144 crops).", "This paper presents a novel deep architecture, called neural regression forest (NRF), for depth estimation from a single image. NRF combines random forests and convolutional neural networks (CNNs). Scanning windows extracted from the image represent samples which are passed down the trees of NRF for predicting their depth. At every tree node, the sample is filtered with a CNN associated with that node. Results of the convolutional filtering are passed to left and right children nodes, i.e., corresponding CNNs, with a Bernoulli probability, until the leaves, where depth estimations are made. CNNs at every node are designed to have fewer parameters than seen in recent work, but their stacked processing along a path in the tree effectively amounts to a deeper CNN. NRF allows for parallelizable training of all \"shallow\" CNNs, and efficient enforcing of smoothness in depth estimation results. Our evaluation on the benchmark Make3D and NYUv2 datasets demonstrates that NRF outperforms the state of the art, and gracefully handles gradually decreasing training datasets." ], "cite_N": [ "@cite_27", "@cite_25", "@cite_7" ], "mid": [ "", "2220384803", "2436453945" ] }
Deep Hierarchical Machine: a Flexible Divide-and-Conquer Architecture
Divide-and-conquer is a widely-adopted problem-solving philosophy which has been demonstrated to be successful in many computer vision tasks, e.g. object detection and tracking [9] [21]. Instead of solving a complete and huge problem, divide-and-conquer suggests decomposing the problem into several sub-problems and solving them in different constrained contexts. Figure 1 illustrates this idea with a binary classification problem. Finding a decision boundary in the original problem space is difficult and leads to a sophisticated non-linear model, but linear decision models can more easily be obtained when solving the sub-problems. The traditional decision tree, which splits the input feature space at each splitting node and gives a prediction at a leaf node, inherently uses the divide-and-conquer strategy as an inductive bias. The designs of the input features and splitting functions are key to the success of this model. Conventional methods usually employ hand-crafted features such as the pixel-difference feature [10,7,14,23] and Haar-like features [24]. However, the input spaces for vision tasks are usually high-dimensional and often lead to a huge pool of candidate features and splitting functions that is impractical to evaluate exhaustively. In practice, the huge candidate pool is randomly sampled to form a small candidate set of splitting functions, and a local greedy heuristic such as entropy minimization is adopted to choose the "best" splitting function, i.e., the one that maximizes data "purity"; this limits the representation learning ability of the traditional decision tree. Deep neural decision forests [8] were proposed to endow a decision tree with deep representation learning ability. In [8], the outputs of the last fully-connected layer of a CNN are utilized as stochastic splitting functions. A global loss function is differentiable with respect to the network parameters in this framework, enabling greater representation learning ability than the local greedy heuristics of conventional decision trees. Deep regression forests [19] were later proposed for regression problems based on the general framework of [8]. However, the success in introducing representation learning ability comes at the price of transforming decision trees into stochastic trees which make a soft decision at each splitting node. As a result, all splitting functions have to be evaluated, since every leaf node contributes to the final prediction, yielding a significant time cost. Pruning branches that contribute little to the final prediction should effectively reduce the computational cost with little accuracy degradation. Unfortunately, the network topology used in previous works [8,19] requires a complete forward pass of the entire CNN to compute the routing probability for each splitting node, making pruning impractical. A major advantage of the divide-and-conquer strategy (e.g. random forests) is its high efficiency in many time-constrained vision tasks such as face detection and face alignment. Simple and ultrafast-to-compute features such as the pixel-difference feature extract only sparse information (e.g. two pixels) from the image space. However, existing deep neural decision/regression forests [8,19] completely ignore the computational complexity of splitting nodes, which in turn greatly limits their efficiency.
In this work, we propose a general tree-like model architecture, named Deep Hierarchical Machine (DHM), which utilizes a flexible model topology to decouple the evaluation of splitting nodes and a probabilistic pruning strategy to avoid the evaluation of unnecessary paths. For the splitting nodes, we also explore the feasibility of inheriting the sparse feature extraction process (i.e. the pixel-difference feature) of traditional random forests, and design a deep sparse hierarchical machine (DSHM) for high efficiency. We evaluate our method on standard image classification and facial landmark coordinate regression tasks and show its effectiveness. Our implementation can be easily incorporated into any deep learning framework, and the source code and pre-trained models will be available on the website (the website address is currently unavailable). In summary, our contributions are:
1. We propose the Deep Hierarchical Machine (DHM) with a flexible model topology and a probabilistic pruning strategy to avoid evaluating unnecessary paths. The DHM enjoys a unified framework for both classification and regression tasks.
2. We introduce a sparse feature extraction process into DHM, which to our best knowledge is the first attempt to mimic traditional decision trees with pixel-difference features in deep models.
3. For the first time, we study using a deep regression tree for a multi-task problem, i.e., regressing multiple facial landmarks.
Traditional divide-and-conquer models Traditional decision trees or random forests [18,1] can be naturally viewed as divide-and-conquer models, where each non-leaf node in the tree splits the input feature space and routes the input deterministically to one of its children nodes. These models employ a greedy heuristic training procedure which randomly samples a huge pool of candidate splitting functions to minimize a local loss function. The parameter sampling procedure is sub-optimal compared to using optimization techniques, which, in combination with the hand-crafted nature of the used features, limits these models' representation learning ability. Hierarchical mixture of experts [5] also partitions the problem space in a tree-like structure using gating models and distributes inputs to each expert model with a probability. A global maximum likelihood estimation task was formulated under a generative model framework, and the EM algorithm was proposed to optimize linear gating and expert models. This work inspires our methodology, but deep representation learning and probabilistic pruning were not studied at that time. Deep decision/regression tree [8,19] proposed to extract deep features to divide the problem space and use simple probabilistic distributions at the leaf nodes. These models endowed traditional decision/regression trees with deep representation learning ability. Leaf node update rules were proposed based on convex optimization techniques, and they out-performed deep models without a divide-and-conquer strategy. However, since the last layer of a deep model was used to divide the problem space, every path in the tree needs to be computed. Even when a branch of computation contributes little to the final prediction, it still needs evaluation, because each splitting node requires a full forward pass of the deep neural network. A model structure where each splitting node is separately evaluated was used in [17] for depth estimation, but a general framework was missing and the effect of computation path pruning was not investigated.
Sparse feature extraction The pixel-difference feature is a special type of hand-crafted feature where only several pixels of an input are considered during its evaluation. Such features are thus efficient to compute, and they have succeeded in computer vision tasks such as face detection [10], face alignment [14,7,3,23,4], pose estimation [20,22] and body part classification [15]. These features were also naturally incorporated into decision/regression trees to divide the input feature space. A counterpart of the sparse feature extraction process in CNNs is sparse convolution, where the few non-zero entries in the convolution kernel determine the feature extraction process. To obtain a sparse convolution kernel, sparse decomposition [11] and pruning [13] techniques were proposed to sparsify a pre-trained dense CNN. [6] proposed an alternative where a random sparse kernel is initialized before the training process. While these works focus on speeding up CNNs, there has been no study on using such sparse convolutional layers in the problem-space dividing process, the way traditional pixel-difference features were used in decision trees. Methodology We first formulate the general DHM based on a hierarchical mixture of experts (HME) framework, then we specify the model for the classification and regression experiments. General framework of DHM The general divide-and-conquer strategy consists of multiple levels of dividing operations and one final conquering step. The computation process is depicted as a tree where all leaf nodes are called conquering nodes while the others are named dividing nodes. We index a node by a tuple subscript $(i, s)$, where $s$ denotes the vertical stage depth (see Figure 1) and $i$ denotes the horizontal index of the node. Every node has a non-negative integer number of children nodes, which form a sequence $K_{i,s} = \{K^1_{i,s}, K^2_{i,s}, \dots, K^{|K_{i,s}|}_{i,s}\}$. Each node has exactly one input $I_{i,s}$ and one output $O_{i,s}$. A dividing node $D_{i,s}$ is composed of a tuple of functions $(R_{i,s}, M_{i,s})$. The first function is called the recommendation function; it judges the node input and gives the recommendation score vector $s_{i,s} = R_{i,s}(I_{i,s})$, whose length equals the children sequence length $|K_{i,s}|$ and whose $j$th entry $s_{i,s}(j)$ is a real number associated with the $j$th child node. We require

$$0 \le s_{i,s}(j) \le 1, \qquad \sum_{j=1}^{|K_{i,s}|} s_{i,s}(j) = 1, \tag{1}$$

so that $s_{i,s}(j)$ can be considered as the significance or probability of recommending the input $I_{i,s}$ to the $j$th child node. The second function $M_{i,s}$ is called the mapping function and maps the input to form the output of the node, $O_{i,s} = M_{i,s}(I_{i,s})$, which is allowed to be copied and sent to all its children nodes $K_{i,s}$. We name the unique path from the root node to one conquering (leaf) node a computation path $P_{i,s}$. Each conquering node only stores one function $M_{i,s}$ that maps its input into a prediction vector $p_{i,s} = M_{i,s}(I_{i,s})$, which is considered the termination of its computation path. To get the final prediction $P$, each conquering node contributes its output weighted by the probability of taking its computation path, as

$$P = \sum_{(i,s) \in N_c} w_{i,s}\, p_{i,s}, \tag{2}$$

where $N_c$ is the set of all conquering nodes. The weight can be obtained by multiplying all the recommendation scores along the path given by each dividing node. Assume the path $P_{i,s}$ consists of a sequence of $s$ dividing nodes and one conquering node, $\{D^{j_1}_{i_1,s_1}, D^{j_2}_{i_2,s_2}, \dots, C_{i,s}\}$, where the superscript of a dividing node denotes which child node to choose.
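A minimal PyTorch rendering of the two node types just defined might look as follows, assuming a binary tree so that a single sigmoid produces the recommendation scores (s, 1 - s), and a constant class distribution at conquering nodes as in the simplest setting used later; module names and layer shapes are our illustration.

```python
# Sketch of a dividing node (R, M) and a conquering node for a binary DHM.
import torch
import torch.nn as nn

class DividingNode(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mapping = nn.Linear(in_dim, out_dim)   # M_{i,s}
        self.recommend = nn.Linear(in_dim, 1)       # R_{i,s}

    def forward(self, x):
        s_left = torch.sigmoid(self.recommend(x))   # score for left child
        out = torch.relu(self.mapping(x))           # output sent to children
        return s_left, out

class ConqueringNode(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        # constant class distribution p_{i,s}, stored as learnable logits
        self.logits = nn.Parameter(torch.zeros(n_classes))

    def forward(self, x=None):                      # input is unused here
        return torch.softmax(self.logits, dim=0)
```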
Then the weight can be expressed as

$$w_{i,s} = \prod_{m=1}^{s} s_{i_m,s_m}(j_m). \tag{3}$$

Note that the weights of all conquering nodes sum to 1 due to (1), and the final prediction is hence a convex combination of the outputs of all conquering nodes. In addition, we assume every function mentioned above is a differentiable function parametrized by $\theta^R_{i,s}$ or $\theta^M_{i,s}$ for the recommendation or mapping function at node $(i, s)$. Thus the final prediction is a differentiable function with respect to all the parameters, which we omit above to ensure clarity. A loss function defined upon the final prediction can hence be optimized with the back-propagation algorithm and benefit from frameworks that provide automatic differentiation. A flexible feature of this framework is that the recommendation functions $R_{i,s}$ are in general not coupled with each other. [8,19] pass the last fully-connected layer to sigmoid gates, whose results are used as recommendation scores in the dividing nodes (Figure 2, left). In this way all recommendation functions are evaluated simultaneously to give the probabilities of taking all computation paths, even when most of the paths contribute little to the final results. On the other hand, our framework allows separation of the recommendation functions (Figure 2, right) so that we can avoid evaluating unnecessary computation paths. We define a Probabilistic Pruning (PP) strategy based on the separability of the recommendation functions. Starting from the root dividing node, a child node will not be visited if its corresponding recommendation score is lower than a pruning threshold $P_{th}$. This process applies recursively to the descendant dividing nodes, so that in the end only the surviving computation paths are evaluated. Classification For a classification problem, the output $p_{i,s}$ of each conquering node $C_{i,s}$ is a discrete probability distribution vector whose length equals the number of classes. The $y$th entry $p_{i,s}(y)$ gives the probability $P(y|I_{0,0})$ that the root node input $I_{0,0}$ belongs to class $y$. To train the model, we adopt the probabilistic generative model formulation [5], which leads to a maximum likelihood optimization problem. For one training instance composed of an input vector and a class label, $\{x_i, y_i\}$, the likelihood of generating it is

$$P(y_i|x_i) = \sum_{(i,s) \in N_c} \prod_{m=1}^{s} s_{i_m,s_m}(j_m)\, p_{i,s}(y_i). \tag{4}$$

The optimization target is to minimize the negative log-likelihood loss over the whole training set containing $N$ instances, $D = \{x_i, y_i\}_{i=1}^{N}$:

$$L(D) = -\sum_{i=1}^{N} \log P(y_i|x_i). \tag{5}$$

In this study, we constrain each dividing node to have exactly two children, since we do not assume any prior knowledge on how many parts the input feature space should be split into. We also assume a full binary-tree structure for simplicity. If some application-specific information such as clustering results is available, the tree structure can be adjusted accordingly. In the case of a full binary tree, we can index each node with a single non-negative integer $i$ for convenience. The recommendation function in each dividing node only needs to give a 2-vector, and we use the shorthand $s_i$ to denote the probability that the current dividing node input $I_i$ is recommended to the left sub-tree. For a dividing node $D_i$, we denote the nodes in its left and right sub-trees as node sets $D^l_i$ and $D^r_i$, respectively.
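The Probabilistic Pruning strategy defined above can be sketched as a stack-based traversal that skips any child whose recommendation score falls below the threshold, building on the node modules sketched earlier. The array-of-nodes heap layout is our assumption, and the pruned probability mass is not renormalized, for brevity; with a threshold of 0.5, at most one root-to-leaf path is evaluated per input.

```python
# Inference with Probabilistic Pruning over a full binary DHM stored as
# two lists: dividing[0..n_div-1] (heap order) and conquering leaves.
import torch

def predict_with_pp(x, dividing, conquering, p_th=0.5):
    """dividing[i](inp) -> (s_left, out); conquering[k](inp) -> distribution."""
    pred, stack = 0.0, [(0, x, 1.0)]          # (node index, input, path prob)
    n_div = len(dividing)
    while stack:
        i, inp, prob = stack.pop()
        if i >= n_div:                        # conquering (leaf) node reached
            pred = pred + prob * conquering[i - n_div](inp)
            continue
        s_left, out = dividing[i](inp)
        s_left = float(s_left)
        if s_left >= p_th:                    # visit only promising children
            stack.append((2 * i + 1, out, prob * s_left))
        if 1.0 - s_left >= p_th:
            stack.append((2 * i + 2, out, prob * (1.0 - s_left)))
    return pred
```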
Then the probability of recommending the input x to a conquering node C i can be expressed as, P(C i |x) = Dj ∈N d s 1(Ci∈D l j ) j (1 − s j ) 1(Ci∈D r j )(6) where N d is the set of all dividing nodes and 1 is an indicator variable for the expression inside the parenthesis to hold. For the classification experiments we use the simplest conquering strategy for each conquering node as in [8], where each conquering node gives a constant probability distribution p i . The loss function is differentiable with respect to each s i , and the gradient for this full binary tree structure ∂L(D) ∂si is [8,19,17], in [8] and [19] all D i come from the last layer of a deep model and are hence coupled. When the dividing nodes are fixed, the distribution at each conquering node can be updated iteratively [8], N t=1 ( Cj ∈D l i p j (y t )P(C j |x t ) s i P(y t |x t ) − Cj ∈D r i p j (y t )P(C j |x t ) (1 − s i )P(y t |x t ) )(7)p t+1 j (y) = 1 Q t j N i=0 1(y i = y)p t j (y i )P(C j |x i ) P(y i |x i )(8) where Q t j is a normalization factor to ensure |pj | y=1 p t+1 j (y) = 1. The backward propagation and the conquering nodes update are carried out alternately to train the model. Regression For regression problems, the output of a conquering node C i,s is also a real-valued vector p i,s but the entries do not necessarily sum to 1. The final prediction vector P i for input x i is, P i = (i,s)∈Nc s m=1 s im,sm (j m )p i,s(9) For a multi-task regression dataset with N instances D = {x i , y i } N i=1 , we directly use the squared loss function, L(D) = 1 2 N i=1 ||P i − y i || 2(10) which was also used in the mixture of experts framework [12]. Here we use the same full binary tree structure and assume simple conquering nodes which have constant mapping functions just as the classification case. Similarly, ∂L(D) ∂si is computed as, ∂L(D) ∂s i = N t=1 (P t − y t ) T ( A l s i − A r (1 − s i ) ) (11) where A l = Cj ∈D l i P(C j |x i )p j and A r = Cj ∈D r i P(C j |x i )p j . Similar to 8, we update the conquering node prediction as p t+1 j = N i=0 y i P(C j |x i ) N i=0 P(C j |x i )(12) This update rule is inspired from traditional regression trees which compute an average of target vectors that are routed to a leaf node. Here the target vectors are weighted by how likely it is recommended into this conquering node. Experiments Classification for MNIST We start with an illustration using MNIST. We compare the model architecture of [8,19] with two variants of our proposed DHM as shown in Figure 3. The original architecture [8,19] is denoted as NDF. NDF passes some randomly chosen outputs from the last fully-connected layer to sigmoid gates, whose outputs are used as the recommendation scores s i of each dividing node. The other two structures are detailed in the following subsections. The MNIST data set contains 60000 training images and 10000 testing images of size 28 by 28 2 . During the experiment, binary tree depth and tree number are set to 7 and 1, respectively. Adam optimizer is used with learning rate specified as 0.001. Batch size is set to 500 and the training time is fixed to 50 epochs. Every experiment is repeated 10 times and averaged results with standard deviation are reported. Separated Recommendation Functions This type of DHM separates each dividing node's input and output, as shown in the middle column of Figure 3. Each dividing node processes the raw input image and produces a single number after the fully-connected layer, which is passed through a sigmoid function to give s i . 
One can think of this structure as the mapping functions for all dividing nodes are identity mappings M i,s (I i,s ) = I i,s . We denote this type as DHM (separated). The final test accuracy of this and other types of models are summarized in Table 2. The number of multiplication (NOM) before and after probabilistic pruning. multiplication (NOM) operation needed in the convolution and linear layers, which is shown in Table 2. Deeper Feature Along the Path In this type of architecture, the root dividing node does more initial processing and reduces the size of the input images (See the right column of Figure 3). Other dividing nodes pass the processed feature maps to its children dividing nodes as inputs. Every dividing node also sends their flattened outputs to a linear and sigmoid layer to produce s i . The mapping function in this case can be seen as the local network without the last fully-connected layer. The intuition to use this topology is that the node input at larger depth will pass more dividing nodes and be processed more times. This type of model is denoted as DHM (connected). Probabilistic Pruning The distribution of s i during the training process is shown in Figure 4. Every bar plot contains 500 bins to quantize all dividing nodes' s i values from 60000 training images. After initialization the distribution is centered around 0.5 while after longer training time, the dividing nodes are more decisive to recommend their inputs. When s i is very close to 1 or 0, the contribution from one of the two sub-trees is too low to be worthwhile for extra evaluation. This motivates the Probabilistic Pruning (PP) strategy which gives up evaluation of a sub-tree dynamically if the recommendation score of entering it is too low. NDF does not support PP even if the distribution strongly encourages it (see Figure 4 left), since all dividing nodes are coupled to the last fullyconnected layer of the network. On the other hand, DHM can support PP naturally. In the experiment, we set the pruning threshold as 0.5 so that only one computation path is taken for every input image. The resulting test accuracy and NOM are shown in Table 1 and Applying PP only sacrifices the testing accuracy negligibly but the computational cost is reduced from exponential to linear since now the most significant computation path determines the result. These results prove that DHM can take advantage of the distribution of recommendation scores. The recommendation scores distribution for testing images before and after pruning is shown in Figure 5. Surprisingly, when a large amount of "hesitating" dividing nodes are deterministically given which child-node to use, the accuracy was not affected significantly. Adding Sparsity Here we use local binary convolution [6] to add sparse feature extraction process into DHM, making it DSHM. Every original convolution layer is replaced by two convolution layers and a ReLU gate. The first convolution layer is fixed and does not introduce any learnable parameters. The output feature maps of the first layer is passed to the ReLU gate, whose outputs are linear combined by the second 1 by 1 convolution layer. During initialization, some entries in the convolution kernel of the first layer are randomly assigned to be zero. The remaining entries are randomly assigned to 1 or -1 with probability 0.5 for each option. The percentage of non-zero entries in the fixed convolution kernel is defined as the sparsity level. 
In the experiment, we use 16 intermediate channels (output feature map number of the first layer) for all local binary convolution layers. DHM (separated) is used and other network parameters are consistent with the former experiments without sparse convolution layer. The resulting test accuracy and NOM is shown in Table 3. Since convolution with binary kernel can be implemented by addition and subtraction, the required NOM is further reduced. This experiment shows sparse feature extraction process can be seamlessly incorporated into DHM, which can be used in devices with limited computational resources. Cascaded regression with DHM Here we compare DHM with NDF architecture for a regression task, i.e., cascaded regression based face alignment. For an input image x i , the goal of face alignment is to predict the facial landmark position vector y i . Cascaded re- gression method starts with an initialized shapeŷ 0 and use a cascade of regressors to update the estimated facial shape stage by stage. The final predictionŷ =ŷ 0 + K t=1 ∆y t where K is the total stage number and ∆y t is the shape update at stage t. In [7,23], every regressor was an ensemble of regression trees whose leaf nodes give the shape update vector. Every splitting node in the regression tree locates two pixels around one current estimated landmark, whose difference was used to route the input in a hard-splitting manner. (see Figure 6) We replace the the traditional regression trees with our DHMs that use a full binary tree structure so as to extend [7] with deep representation learning ability. During initialization, every dividing node is randomly assigned a landmark index. The input to a dividing node is then a cropped region centered around its indexed landmark. In the experiment we use a crop size of 60 by 60 and a simple CNN to compute the recommendation score, whose structure is shown in Figure 6. The comparison group uses traditional NDF architecture and we feed entire image as input (see Figure 7). Similarly, every conquering node store a shape update vector as in [7,23] and (12) is used to update them. We use a large scale synthetic 3D face alignment dataset 300W-LP [25] for training and the AFLW3D dataset (reannotated by [2]) for testing. We use 57559 training images in 300W-LP and the whole 1998 images in the AFLW3D for testing. The images are cropped and resized to 224 by 224 patch using the same initial processing procedure in [2]. To rule out the influences of face detectors as mentioned in [16], a bounding box for a face is assumed to be centered at the centroid of the facial landmarks and encloses all the facial landmarks inside. We use the same error metric as Method NOM (After Pruning) Error (After Pruning) NDF 254M (Not able) 0.0643 (Not able) DHM 228M (35.6M) 0.0628 (0.06382) Table 4. The comparison of traditional NDF architecture and our DHM for regression task. Numbers in the parentheses give the results with PP and only one computational path was taken. [2] where the landmark prediction error is normalized by the bounding box size. In the experiment we use a cascade length of 10 and tree depth of 5 and in each stage we use an ensemble of 5 DHMs. We use the ADAM optimizer with a learning rate at 0.01 (0.001 for NDF as it works better for it in the experiment) and train 10 epochs for each stage. The average test errors of the two different architectures are shown in Table 4. Again, DHM supports PP to greatly reduce the computational cost and the performance only drops gracefully. 
This experiment validates again the strength of DHM over traditional NDF architecture in regression problems. Figure 8 shows some success and failure cases of this model. Compared with NDF, our DHM can significantly reduce the computational complexity after pruning with even slightly better alignment accuracy. Conclusion We proposed Deep Hierarchical Machine (DHM), a flexible framework for combining divide-and-conquer strategy and deep representation learning. Unlike recently proposed deep neural decision/regression forest, DHM can take advantage of the distribution of recommendation scores and a probabilistic pruning strategy is proposed to avoid un-necessary path evaluation. We also showed the feasibility of introducing sparse feature extraction process into DHM by using local binary convolution, which mimics traditional decision tree with pixel-difference feature and has potential for devices with limited computing resources.
1812.00647
2902303115
We propose Deep Hierarchical Machine (DHM), a model inspired by the divide-and-conquer strategy while emphasizing representation learning ability and flexibility. A stochastic routing framework as used by recent deep neural decision/regression forests is incorporated, but we remove the need to evaluate unnecessary computation paths by utilizing a different topology and introducing a probabilistic pruning technique. We also show a specialized version of DHM (DSHM) for efficiency, which inherits the sparse feature extraction process of traditional decision trees with pixel-difference features. To achieve sparse feature extraction, we propose to utilize sparse convolution operations in DSHM and show one possibility of introducing sparse convolution kernels by using a local binary convolution layer. DHM can be applied to both classification and regression problems, and we validate it on standard image classification and face alignment tasks to show its advantages over past architectures.
Pixel-difference features are a special type of hand-crafted feature where only several pixels from an input are considered during evaluation. They are thus efficient to compute and have succeeded in computer vision tasks such as face detection @cite_17 , face alignment @cite_9 @cite_5 @cite_6 @cite_10 @cite_13 , pose estimation @cite_1 @cite_21 and body part classification @cite_24 . These features were also naturally incorporated into decision/regression trees to divide the input feature space. A counterpart of the sparse feature extraction process in CNNs is sparse convolution, where the few non-zero entries in the convolution kernel determine the feature extraction process. To obtain a sparse convolution kernel, sparse decomposition @cite_3 and pruning @cite_11 techniques were proposed to sparsify a pre-trained dense CNN. @cite_8 proposed an alternative where a random sparse kernel is initialized before the training process. While these works focus on speeding up CNNs, there has been no study on using such sparse convolutional layers in the problem-space dividing process, in the way traditional pixel-difference features were used in decision trees.
{ "abstract": [ "", "Phenomenally successful in practical inference problems, convolutional neural networks (CNN) are widely deployed in mobile devices, data centers, and even supercomputers. The number of parameters needed in CNNs, however, are often large and undesirable. Consequently, various methods have been developed to prune a CNN once it is trained. Nevertheless, the resulting CNNs offer limited benefits. While pruning the fully connected layers reduces a CNN's size considerably, it does not improve inference speed noticeably as the compute heavy parts lie in convolutions. Pruning CNNs in a way that increase inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels. We present a method to realize simultaneously size economy and speed improvement while pruning CNNs. Paramount to our success is an efficient general sparse-with-dense matrix multiplication implementation that is applicable to convolution of feature maps with kernels of arbitrary sparsity patterns. Complementing this, we developed a performance model that predicts sweet spots of sparsity levels for different layers and on different computer architectures. Together, these two allow us to demonstrate 3.1--7.3 @math convolution speedups over dense convolution in AlexNet, on Intel Atom, Xeon, and Xeon Phi processors, spanning the spectrum from mobile devices to supercomputers. We also open source our project at this https URL.", "We propose local binary convolution (LBC), an efficient alternative to convolutional layers in standard convolutional neural networks (CNN). The design principles of LBC are motivated by local binary patterns (LBP). The LBC layer comprises of a set of fixed sparse pre-defined binary convolutional filters that are not updated during the training process, a non-linear activation function and a set of learnable linear weights. The linear weights combine the activated filter responses to approximate the corresponding activated filter responses of a standard convolutional layer. The LBC layer affords significant parameter savings, 9x to 169x in the number of learnable parameters compared to a standard convolutional layer. Furthermore, the sparse and binary nature of the weights also results in up to 9x to 169x savings in model size compared to a standard convolutional layer. We demonstrate both theoretically and experimentally that our local binary convolution layer is a good approximation of a standard convolutional layer. Empirically, CNNs with LBC layers, called local binary convolutional neural networks (LBCNN), achieves performance parity with regular CNNs on a range of visual datasets (MNIST, SVHN, CIFAR-10, and ImageNet) while enjoying significant computational savings.", "This paper presents a highly efficient, very accurate regression approach for face alignment. Our approach has two novel components: a set of local binary features, and a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. Our approach achieves the state-of-the-art results when tested on the current most challenging benchmarks. Furthermore, because extracting and regressing local binary features is computationally very cheap, our system is much faster than previous methods. 
It achieves over 3,000 fps on a desktop or 300 fps on a mobile phone for locating a few dozens of landmarks.", "", "We propose a new method to quickly and accurately predict human pose---the 3D positions of body joints---from a single depth image, without depending on information from preceding frames. Our approach is strongly rooted in current object recognition strategies. By designing an intermediate representation in terms of body parts, the difficult pose estimation problem is transformed into a simpler per-pixel classification problem, for which efficient machine learning techniques exist. By using computer graphics to synthesize a very large dataset of training image pairs, one can train a classifier that estimates body part labels from test images invariant to pose, body shape, clothing, and other irrelevances. Finally, we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs in under 5ms on the Xbox 360. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state-of-the-art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching.", "", "Deep neural networks have achieved remarkable performance in both image classification and object detection problems, at the cost of a large number of parameters and computational complexity. In this work, we show how to reduce the redundancy in these parameters using a sparse decomposition. Maximum sparsity is obtained by exploiting both inter-channel and intra-channel redundancy, with a fine-tuning step that minimizes the recognition loss caused by maximizing sparsity. This procedure zeros out more than 90% of parameters, with a drop of accuracy that is less than 1% on the ILSVRC2012 dataset. We also propose an efficient sparse matrix multiplication algorithm on CPU for Sparse Convolutional Neural Networks (SCNN) models. Our CPU implementation demonstrates much higher efficiency than the off-the-shelf sparse matrix libraries, with a significant speedup realized over the original dense network. In addition, we apply the SCNN model to the object detection problem, in conjunction with a cascade model and sparse fully connected layers, to achieve significant speedups.", "", "", "", "We propose a method to address challenges in unconstrained face detection, such as arbitrary pose variations and occlusions. First, a new image feature called Normalized Pixel Difference (NPD) is proposed. NPD feature is computed as the difference to sum ratio between two pixel values, inspired by the Weber Fraction in experimental psychology. The new feature is scale invariant, bounded, and is able to reconstruct the original image. Second, we propose a deep quadratic tree to learn the optimal subset of NPD features and their combinations, so that complex face manifolds can be partitioned by the learned rules. This way, only a single soft-cascade classifier is needed to handle unconstrained face detection. Furthermore, we show that the NPD features can be efficiently obtained from a look up table, and the detection template can be easily scaled, making the proposed face detector very fast.
Experimental results on three public face datasets (FDDB, GENKI, and CMU-MIT) show that the proposed method achieves state-of-the-art performance in detecting unconstrained faces with arbitrary pose variations and occlusions in cluttered scenes." ], "cite_N": [ "@cite_13", "@cite_11", "@cite_8", "@cite_9", "@cite_21", "@cite_1", "@cite_6", "@cite_3", "@cite_24", "@cite_5", "@cite_10", "@cite_17" ], "mid": [ "", "2617247391", "2513491952", "1998294030", "", "2060280062", "", "1935978687", "", "", "", "2247274765" ] }
Deep Hierarchical Machine: a Flexible Divide-and-Conquer Architecture
Divide-and-conquer is a widely adopted problem-solving philosophy which has been demonstrated to be successful in many computer vision tasks, e.g. object detection and tracking [9,21]. Instead of solving a complete and huge problem, divide-and-conquer suggests decomposing the problem into several sub-problems and solving them in different constrained contexts. Figure 1 illustrates this idea with a binary classification problem. Finding a decision boundary in the original problem space is difficult and leads to a sophisticated nonlinear model, but linear decision models can be obtained more easily when solving the sub-problems. The traditional decision tree, which splits the input feature space at each splitting node and gives the prediction at a leaf node, inherently uses the divide-and-conquer strategy as an inductive bias. The design of the input features and splitting functions is key to the success of this model. Conventional methods usually employ hand-crafted features such as the pixel-difference feature [10,7,14,23] and Haar-like features [24]. However, the input space for vision tasks is usually high-dimensional and often leads to a huge pool of candidate features and splitting functions that is impractical to evaluate exhaustively. In practice, the huge candidate pool is randomly sampled to form a small candidate set of splitting functions, and a local greedy heuristic such as entropy minimization is adopted to choose the "best" splitting function which maximizes data "purity", limiting the representation learning ability of the traditional decision tree. Deep neural decision forests [8] were proposed to endow decision trees with deep representation learning ability. In [8], the outputs of the last fully-connected layer of a CNN are utilized as stochastic splitting functions. A global loss function is differentiable with respect to the network parameters in this framework, enabling greater representation learning ability than the local greedy heuristics in conventional decision trees. Deep regression forests [19] were later proposed for regression problems based on the general framework of [8]. However, the success in introducing representation learning ability comes at the price of transforming decision trees into stochastic trees which make soft decisions at each splitting node. As a result, all splitting functions have to be evaluated because every leaf node contributes to the final prediction, yielding a significant time cost. Pruning branches that contribute little to the final prediction should effectively reduce the computational cost with little accuracy degradation. Unfortunately, the network topology used in previous works [8,19] requires a complete forward pass of the entire CNN to compute the routing probability for each splitting node, making pruning impractical. A major advantage of the divide-and-conquer strategy (e.g. random forests) is its high efficiency in many time-constrained vision tasks such as face detection and face alignment. Simple and ultrafast-to-compute features such as pixel differences extract only sparse information (e.g. two pixels) from the image space. However, existing deep neural decision/regression forests [8,19] completely ignore the computational complexity of splitting nodes, which in turn greatly limits their efficiency.
In this work, we propose a general tree-like model architecture, named Deep Hierarchical Machine (DHM), which utilizes a flexible model topology to decouple the evaluation of splitting nodes and a probabilistic pruning strategy to avoid the evaluation of unnecessary paths. For the splitting nodes, we also explore the feasibility of inheriting the sparse feature extraction process (i.e. the pixel-difference feature) of traditional random forests and design a Deep Sparse Hierarchical Machine (DSHM) for high efficiency. We evaluate our method on standard image classification and facial landmark coordinate regression tasks and show its effectiveness. Our implementation can be easily incorporated into any deep learning framework, and the source code and pre-trained models will be made available on the project website (the website address is currently unavailable). In summary, our contributions are:
1. We propose the Deep Hierarchical Machine (DHM) with a flexible model topology and a probabilistic pruning strategy to avoid evaluating unnecessary paths. The DHM enjoys a unified framework for both classification and regression tasks.
2. We introduce a sparse feature extraction process into DHM, which to our best knowledge is the first attempt to mimic traditional decision trees with pixel-difference features in deep models.
3. For the first time, we study using a deep regression tree for a multi-task problem, i.e., regressing multiple facial landmarks.

Traditional divide-and-conquer models

Traditional decision trees or random forests [18,1] can be naturally viewed as divide-and-conquer models, where each non-leaf node in the tree splits the input feature space and routes the input deterministically to one of its children. These models employ a greedy heuristic training procedure which randomly samples a huge pool of candidate splitting functions to minimize a local loss function. The parameter sampling procedure is sub-optimal compared to using optimization techniques, which, in combination with the hand-crafted nature of the used features, limits these models' representation learning ability. Hierarchical mixture of experts [5] also partitions the problem space in a tree-like structure using gating models and distributes inputs to each expert model with a probability. A global maximum likelihood estimation task was formulated under a generative model framework, and the EM algorithm was proposed to optimize the linear gating and expert models. This work inspires our methodology, but deep representation learning and probabilistic pruning were not studied at that time.

Deep decision/regression tree

[8,19] proposed to extract deep features to divide the problem space and to use simple probabilistic distributions at the leaf nodes. These models endowed traditional decision/regression trees with deep representation learning ability. Leaf node update rules were proposed based on convex optimization techniques, and the models outperformed deep models without the divide-and-conquer strategy. However, since the last layer of a deep model is used to divide the problem space, every path in the tree needs to be computed. Even when a branch of computation contributes little to the final prediction, it still needs evaluation, because each splitting node requires a full forward pass of the deep neural network. A model structure where each splitting node is separately evaluated was used in [17] for depth estimation, but a general framework was missing and the effect of computation path pruning was not investigated.
Sparse feature extraction

Pixel-difference features are a special type of hand-crafted feature where only several pixels from an input are considered during evaluation. They are thus efficient to compute and have succeeded in computer vision tasks such as face detection [10], face alignment [14,7,3,23,4], pose estimation [20,22] and body part classification [15]. These features were also naturally incorporated into decision/regression trees to divide the input feature space. A counterpart of the sparse feature extraction process in CNNs is sparse convolution, where the few non-zero entries in the convolution kernel determine the feature extraction process. To obtain a sparse convolution kernel, sparse decomposition [11] and pruning [13] techniques were proposed to sparsify a pre-trained dense CNN. [6] proposed an alternative where a random sparse kernel is initialized before the training process. While these works focus on speeding up CNNs, there has been no study on using such sparse convolutional layers in the problem-space dividing process, in the way traditional pixel-difference features were used in decision trees.

Methodology

We first formulate the general DHM based on the hierarchical mixture of experts (HME) framework, and then specify the model for the classification and regression experiments.

General framework of DHM

The general divide-and-conquer strategy consists of multiple levels of dividing operations and one final conquering step. The computation process is depicted as a tree where all leaf nodes are called conquering nodes, while the others are named dividing nodes. We index a node by a tuple subscript (i, s), where s denotes the vertical stage depth (see Figure 1) and i denotes the horizontal index of the node. Every node has a non-negative integer number of children, which form a sequence $K_{i,s} = \{K_{i,s}^{1}, K_{i,s}^{2}, \ldots, K_{i,s}^{|K_{i,s}|}\}$. Each node has exactly one input $I_{i,s}$ and one output $O_{i,s}$. A dividing node $D_{i,s}$ is composed of a tuple of functions $(R_{i,s}, M_{i,s})$. The first function is called the recommendation function; it judges the node input and gives the recommendation score vector $s_{i,s} = R_{i,s}(I_{i,s})$, whose length equals the children sequence length $|K_{i,s}|$ and whose jth entry $s_{i,s}(j)$ is a real number associated with the jth child node. We require

$$0 \le s_{i,s}(j) \le 1, \qquad \sum_{j=1}^{|K_{i,s}|} s_{i,s}(j) = 1 \qquad (1)$$

so that $s_{i,s}(j)$ can be considered as the significance, or probability, of recommending the input $I_{i,s}$ to the jth child node. The second function $M_{i,s}$ is called the mapping function; it maps the input to form the node output $O_{i,s} = M_{i,s}(I_{i,s})$, which is allowed to be copied and sent to all children $K_{i,s}$. We name the unique path from the root node to one conquering (leaf) node a computation path $P_{i,s}$. Each conquering node only stores one function $M_{i,s}$ that maps its input into a prediction vector $p_{i,s} = M_{i,s}(I_{i,s})$, which is considered the termination of its computation path. To get the final prediction P, each conquering node contributes its output weighted by the probability of taking its computation path,

$$P = \sum_{(i,s) \in N_c} w_{i,s}\, p_{i,s} \qquad (2)$$

where $N_c$ is the set of all conquering nodes. The weight is obtained by multiplying all the recommendation scores given by the dividing nodes along the path. Assume the path $P_{i,s}$ consists of a sequence of s dividing nodes and one conquering node, $\{D_{i_1,s_1}^{j_1}, D_{i_2,s_2}^{j_2}, \ldots, C_{i,s}\}$, where the superscript of a dividing node denotes which child node is chosen.
Then the weight can be expressed as

$$w_{i,s} = \prod_{m=1}^{s} s_{i_m,s_m}(j_m) \qquad (3)$$

Note that the weights of all conquering nodes sum to 1 due to (1), so the final prediction is a convex combination of the outputs of all conquering nodes. In addition, we assume every function mentioned above is differentiable and parametrized by $\theta_{i,s}^{R}$ or $\theta_{i,s}^{M}$ for the recommendation or mapping function at node (i, s). Thus the final prediction is a differentiable function with respect to all the parameters, which we omit above for clarity. A loss function defined upon the final prediction can hence be optimized with the back-propagation algorithm and can benefit from frameworks that provide automatic differentiation. A flexible feature of this framework is that the recommendation functions $R_{i,s}$ are in general not coupled with each other. [8,19] pass the last fully-connected layer to sigmoid gates, whose results are used as recommendation scores in the dividing nodes (Figure 2, left). In this way, all recommendation functions are evaluated simultaneously to give the probabilities of taking all computation paths, even when most of the paths contribute little to the final result. Our framework, on the other hand, allows separation of the recommendation functions (Figure 2, right) so that we can avoid evaluating unnecessary computation paths. We define a Probabilistic Pruning (PP) strategy based on this separability: starting from the root dividing node, a child node is not visited if its corresponding recommendation score is lower than a pruning threshold $P_{th}$. This process applies recursively to the descendant dividing nodes, so that in the end only the computation paths whose recommendation scores exceed the threshold are evaluated.

Classification

For a classification problem, the output $p_{i,s}$ of each conquering node $C_{i,s}$ is a discrete probability distribution vector whose length equals the number of classes. The yth entry $p_{i,s}(y)$ gives the probability $P(y|I_{0,0})$ that the root node input $I_{0,0}$ belongs to class y. To train the model, we adopt the probabilistic generative model formulation [5], which leads to a maximum likelihood optimization problem. For one training instance composed of an input vector and a class label $\{x_i, y_i\}$, the likelihood of generating it is

$$P(y_i|x_i) = \sum_{(i,s) \in N_c} \prod_{m=1}^{s} s_{i_m,s_m}(j_m)\, p_{i,s}(y_i) \qquad (4)$$

The optimization target is to minimize the negative log-likelihood loss over the whole training set containing N instances, $D = \{x_i, y_i\}_{i=1}^{N}$,

$$L(D) = -\sum_{i=1}^{N} \log(P(y_i|x_i)) \qquad (5)$$

In this study, we constrain each dividing node to have exactly two children, since we do not assume any prior knowledge on how many parts the input feature space should be split into. We also assume a full binary-tree structure for simplicity. If application-specific information such as clustering results is available, the tree structure can be adjusted accordingly. In the case of a full binary tree, we can index each node with a single non-negative integer i for convenience. The recommendation function in each dividing node only needs to give a 2-vector, and we use the shorthand $s_i$ to denote the probability that the current dividing node input $I_i$ is recommended to the left sub-tree. For a dividing node $D_i$, we denote the nodes in its left and right sub-trees as node sets $D_i^l$ and $D_i^r$, respectively.
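To make the stochastic routing concrete, the following minimal NumPy sketch (ours, not the authors' released code; the heap-style node indexing and the toy scores are assumptions) computes the path weights of equation (3) for a full binary tree and combines the conquering-node outputs into the convex combination of equation (2):

```python
import numpy as np

def leaf_weights(s):
    """Path weights for a full binary tree, given per-dividing-node
    probabilities s of recommending to the left child (heap order: the
    children of node i are 2i+1 and 2i+2). Implements eq. (3)."""
    n_div = len(s)                       # 2**depth - 1 dividing nodes
    depth = int(np.log2(n_div + 1))
    w = np.zeros(n_div + 1)              # one weight per conquering node
    for leaf in range(n_div + 1):
        node, prob = 0, 1.0
        for d in reversed(range(depth)):          # walk from root to leaf
            go_left = ((leaf >> d) & 1) == 0      # d-th bit picks the child
            prob *= s[node] if go_left else 1.0 - s[node]
            node = 2 * node + (1 if go_left else 2)
        w[leaf] = prob
    return w                             # sums to 1, consistent with eq. (1)

# Toy example: depth-2 tree with 3 dividing and 4 conquering nodes.
s = np.array([0.9, 0.7, 0.2])            # left-recommendation scores
p = np.array([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])  # conquering outputs
w = leaf_weights(s)
P = w @ p                                # eq. (2): convex combination
```

In this view, probabilistic pruning corresponds to skipping the recursion into any child whose recommendation score falls below $P_{th}$, so that only a few of the $2^{depth}$ paths are ever evaluated.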
Then the probability of recommending the input x to a conquering node $C_i$ can be expressed as

$$P(C_i|x) = \prod_{D_j \in N_d} s_j^{\,\mathbf{1}(C_i \in D_j^l)} (1 - s_j)^{\,\mathbf{1}(C_i \in D_j^r)} \qquad (6)$$

where $N_d$ is the set of all dividing nodes and $\mathbf{1}$ is an indicator variable for the expression inside the parentheses to hold. For the classification experiments, we use the simplest conquering strategy for each conquering node, as in [8], where each conquering node gives a constant probability distribution $p_i$. The loss function is differentiable with respect to each $s_i$, and the gradient for this full binary-tree structure is [8,19,17]

$$\frac{\partial L(D)}{\partial s_i} = \sum_{t=1}^{N} \left( \frac{\sum_{C_j \in D_i^l} p_j(y_t)\, P(C_j|x_t)}{s_i\, P(y_t|x_t)} - \frac{\sum_{C_j \in D_i^r} p_j(y_t)\, P(C_j|x_t)}{(1 - s_i)\, P(y_t|x_t)} \right) \qquad (7)$$

Note that in [8] and [19] all $D_i$ come from the last layer of a deep model and are hence coupled. When the dividing nodes are fixed, the distribution at each conquering node can be updated iteratively [8],

$$p_j^{t+1}(y) = \frac{1}{Q_j^t} \sum_{i=1}^{N} \frac{\mathbf{1}(y_i = y)\, p_j^t(y_i)\, P(C_j|x_i)}{P(y_i|x_i)} \qquad (8)$$

where $Q_j^t$ is a normalization factor ensuring $\sum_{y=1}^{|p_j|} p_j^{t+1}(y) = 1$. The backward propagation and the conquering-node updates are carried out alternately to train the model.

Regression

For regression problems, the output of a conquering node $C_{i,s}$ is also a real-valued vector $p_{i,s}$, but its entries do not necessarily sum to 1. The final prediction vector $P_i$ for input $x_i$ is

$$P_i = \sum_{(i,s) \in N_c} \prod_{m=1}^{s} s_{i_m,s_m}(j_m)\, p_{i,s} \qquad (9)$$

For a multi-task regression dataset with N instances $D = \{x_i, y_i\}_{i=1}^{N}$, we directly use the squared loss function

$$L(D) = \frac{1}{2} \sum_{i=1}^{N} \|P_i - y_i\|^2 \qquad (10)$$

which was also used in the mixture of experts framework [12]. Here we use the same full binary-tree structure and assume simple conquering nodes with constant mapping functions, just as in the classification case. Similarly, the gradient is computed as

$$\frac{\partial L(D)}{\partial s_i} = \sum_{t=1}^{N} (P_t - y_t)^T \left( \frac{A_l}{s_i} - \frac{A_r}{1 - s_i} \right) \qquad (11)$$

where $A_l = \sum_{C_j \in D_i^l} P(C_j|x_t)\, p_j$ and $A_r = \sum_{C_j \in D_i^r} P(C_j|x_t)\, p_j$. Similar to (8), we update the conquering-node prediction as

$$p_j^{t+1} = \frac{\sum_{i=1}^{N} y_i\, P(C_j|x_i)}{\sum_{i=1}^{N} P(C_j|x_i)} \qquad (12)$$

This update rule is inspired by traditional regression trees, which compute an average of the target vectors routed to a leaf node. Here the target vectors are weighted by how likely they are to be recommended to the conquering node.

Experiments

Classification for MNIST

We start with an illustration using MNIST. We compare the model architecture of [8,19] with two variants of our proposed DHM, as shown in Figure 3. The original architecture [8,19] is denoted as NDF. NDF passes some randomly chosen outputs of the last fully-connected layer to sigmoid gates, whose outputs are used as the recommendation scores $s_i$ of the dividing nodes. The other two structures are detailed in the following subsections. The MNIST dataset contains 60000 training images and 10000 testing images of size 28 by 28. In the experiments, the binary tree depth and tree number are set to 7 and 1, respectively. The Adam optimizer is used with a learning rate of 0.001. The batch size is set to 500 and the training time is fixed to 50 epochs. Every experiment is repeated 10 times and averaged results with standard deviations are reported.

Separated Recommendation Functions

This type of DHM separates each dividing node's input and output, as shown in the middle column of Figure 3. Each dividing node processes the raw input image and produces a single number after the fully-connected layer, which is passed through a sigmoid function to give $s_i$.
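As an illustration of such a separated dividing node, the following PyTorch sketch (ours; the paper does not list the exact layer sizes of its Figure 3 here, so the channel counts and kernel sizes are assumptions) maps a raw 28-by-28 image to a single recommendation score via a sigmoid:

```python
import torch
import torch.nn as nn

class DividingNode(nn.Module):
    """Separated recommendation function R_i: raw image -> s_i in (0, 1).
    The mapping function M_i is the identity, as described in the text."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 1x28x28 -> 8x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 8x14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 16x7x7
        )
        self.fc = nn.Linear(16 * 7 * 7, 1)               # a single logit

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.fc(h)).squeeze(1)      # s_i per image

# One independent node per dividing position of the depth-7 tree:
nodes = nn.ModuleList(DividingNode() for _ in range(2 ** 7 - 1))
s_root = nodes[0](torch.randn(4, 1, 28, 28))             # shape (4,)
```

Because each node is evaluated independently, a child's network is simply never called when its recommendation score falls below the pruning threshold.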
One can think of this structure as having identity mapping functions $M_{i,s}(I_{i,s}) = I_{i,s}$ for all dividing nodes. We denote this type as DHM (separated). The final test accuracy of this and the other types of models is summarized in Table 2, along with the number of multiplication (NOM) operations needed in the convolution and linear layers. (Table 1 reports the NOM before and after probabilistic pruning.)

Deeper Feature Along the Path

In this type of architecture, the root dividing node does more initial processing and reduces the size of the input images (see the right column of Figure 3). The other dividing nodes pass the processed feature maps to their children dividing nodes as inputs. Every dividing node also sends its flattened output to a linear and sigmoid layer to produce $s_i$. The mapping function in this case can be seen as the local network without the last fully-connected layer. The intuition behind this topology is that node inputs at larger depth pass through more dividing nodes and are processed more times. This type of model is denoted as DHM (connected).

Probabilistic Pruning

The distribution of $s_i$ during the training process is shown in Figure 4. Every bar plot contains 500 bins to quantize all dividing nodes' $s_i$ values over the 60000 training images. After initialization the distribution is centered around 0.5, while after longer training the dividing nodes become more decisive in recommending their inputs. When $s_i$ is very close to 1 or 0, the contribution from one of the two sub-trees is too low to be worth the extra evaluation. This motivates the Probabilistic Pruning (PP) strategy, which dynamically gives up the evaluation of a sub-tree if the recommendation score of entering it is too low. NDF does not support PP even when the distribution strongly encourages it (see Figure 4, left), since all dividing nodes are coupled to the last fully-connected layer of the network. DHM, on the other hand, supports PP naturally. In the experiment, we set the pruning threshold to 0.5 so that only one computation path is taken for every input image. The resulting test accuracy and NOM are shown in Tables 1 and 2. Applying PP sacrifices the testing accuracy only negligibly, but the computational cost is reduced from exponential to linear in the tree depth, since now the most significant computation path determines the result. These results show that DHM can take advantage of the distribution of recommendation scores. The distribution of recommendation scores for the testing images before and after pruning is shown in Figure 5. Surprisingly, even when a large number of "hesitating" dividing nodes are deterministically assigned a child node, the accuracy is not affected significantly.

Adding Sparsity

Here we use local binary convolution [6] to add a sparse feature extraction process into DHM, making it DSHM. Every original convolution layer is replaced by two convolution layers and a ReLU gate. The first convolution layer is fixed and does not introduce any learnable parameters. The output feature maps of the first layer are passed to the ReLU gate, whose outputs are linearly combined by the second, 1-by-1 convolution layer. During initialization, some entries in the convolution kernel of the first layer are randomly assigned to be zero. The remaining entries are randomly assigned 1 or -1 with probability 0.5 for each option. The percentage of non-zero entries in the fixed convolution kernel is defined as the sparsity level.
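The following sketch (ours, after the local binary convolution design of [6]; the channel counts and the 50% sparsity level are illustrative assumptions) builds one such block: a fixed random sparse ±1 convolution, a ReLU gate, and a learnable 1-by-1 convolution:

```python
import torch
import torch.nn as nn

def local_binary_conv(in_ch, mid_ch, out_ch, k=3, sparsity=0.5):
    """Fixed sparse {-1, 0, +1} convolution -> ReLU -> learnable 1x1
    convolution. `sparsity` is the fraction of non-zero kernel entries."""
    fixed = nn.Conv2d(in_ch, mid_ch, k, padding=k // 2, bias=False)
    with torch.no_grad():
        keep = (torch.rand_like(fixed.weight) < sparsity).float()
        signs = torch.randint(0, 2, fixed.weight.shape).float() * 2 - 1
        fixed.weight.copy_(keep * signs)        # random sparse +/-1 kernel
    fixed.weight.requires_grad_(False)          # never updated in training
    return nn.Sequential(fixed, nn.ReLU(), nn.Conv2d(mid_ch, out_ch, 1))

block = local_binary_conv(1, 16, 8)             # 16 intermediate channels
out = block(torch.randn(2, 1, 28, 28))          # -> shape (2, 8, 28, 28)
```

Only the 1-by-1 combination layer carries learnable weights, which is what makes the block cheap both in parameters and in multiplications.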
In the experiments, we use 16 intermediate channels (the number of output feature maps of the first layer) for all local binary convolution layers. DHM (separated) is used, and the other network parameters are consistent with the former experiments without sparse convolution layers. The resulting test accuracy and NOM are shown in Table 3. Since convolution with a binary kernel can be implemented by additions and subtractions, the required NOM is further reduced. This experiment shows that the sparse feature extraction process can be seamlessly incorporated into DHM, which can be useful for devices with limited computational resources.

Cascaded regression with DHM

Here we compare DHM with the NDF architecture on a regression task, i.e., cascaded-regression-based face alignment. For an input image $x_i$, the goal of face alignment is to predict the facial landmark position vector $y_i$. A cascaded regression method starts with an initialized shape $\hat{y}_0$ and uses a cascade of regressors to update the estimated facial shape stage by stage. The final prediction is $\hat{y} = \hat{y}_0 + \sum_{t=1}^{K} \Delta y_t$, where K is the total number of stages and $\Delta y_t$ is the shape update at stage t. In [7,23], every regressor was an ensemble of regression trees whose leaf nodes give the shape update vector. Every splitting node in the regression tree locates two pixels around one currently estimated landmark, whose difference is used to route the input in a hard-splitting manner (see Figure 6). We replace the traditional regression trees with our DHMs, which use a full binary-tree structure, so as to extend [7] with deep representation learning ability. During initialization, every dividing node is randomly assigned a landmark index. The input to a dividing node is then a cropped region centered around its indexed landmark. In the experiment we use a crop size of 60 by 60 and a simple CNN to compute the recommendation score, whose structure is shown in Figure 6. The comparison group uses the traditional NDF architecture, and we feed the entire image as input (see Figure 7). Similarly, every conquering node stores a shape update vector as in [7,23], and (12) is used to update them. We use the large-scale synthetic 3D face alignment dataset 300W-LP [25] for training and the AFLW3D dataset (re-annotated by [2]) for testing. We use 57559 training images from 300W-LP and all 1998 images of AFLW3D for testing. The images are cropped and resized to 224-by-224 patches using the same initial processing procedure as in [2]. To rule out the influence of face detectors, as mentioned in [16], the bounding box of a face is assumed to be centered at the centroid of the facial landmarks and to enclose all facial landmarks. We use the same error metric as [2], where the landmark prediction error is normalized by the bounding box size. In the experiment we use a cascade length of 10 and a tree depth of 5, and in each stage we use an ensemble of 5 DHMs. We use the Adam optimizer with a learning rate of 0.01 (0.001 for NDF, which worked better for it in the experiment) and train 10 epochs per stage. The average test errors of the two architectures are shown in Table 4. Again, DHM supports PP to greatly reduce the computational cost, and the performance only drops gracefully.

Table 4. Comparison of the traditional NDF architecture and our DHM on the regression task. Numbers in parentheses give the results with PP, where only one computation path is taken.
Method | NOM (after pruning) | Error (after pruning)
NDF | 254M (not applicable) | 0.0643 (not applicable)
DHM | 228M (35.6M) | 0.0628 (0.06382)
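As a compact summary of this pipeline, the sketch below (ours; `stages` stands in for the per-stage DHM ensembles, which are left abstract, and the landmark count is an assumption) implements the shape-update loop $\hat{y} = \hat{y}_0 + \sum_{t=1}^{K} \Delta y_t$:

```python
import numpy as np

def cascaded_alignment(image, y0, stages):
    """Cascaded regression: each stage predicts a shape update from the
    image and the current shape estimate; the stage prediction is the
    average over an ensemble of DHMs, mirroring the 5-DHM ensembles."""
    y = y0.copy()
    for ensemble in stages:                       # K = len(stages) stages
        dy = np.mean([dhm(image, y) for dhm in ensemble], axis=0)
        y = y + dy                                # accumulate shape update
    return y

# Toy usage with dummy regressors that pull the shape toward zero:
dummy = lambda img, y: -0.1 * y
stages = [[dummy] * 5 for _ in range(10)]         # 10 stages x 5 DHMs
y_hat = cascaded_alignment(None, np.ones(136), stages)  # e.g. 68 landmarks x 2
```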
This experiment again validates the strength of DHM over the traditional NDF architecture on regression problems. Figure 8 shows some success and failure cases of this model. Compared with NDF, our DHM can significantly reduce the computational complexity after pruning, with even slightly better alignment accuracy.

Conclusion

We proposed the Deep Hierarchical Machine (DHM), a flexible framework combining the divide-and-conquer strategy and deep representation learning. Unlike the recently proposed deep neural decision/regression forests, DHM can take advantage of the distribution of recommendation scores, and a probabilistic pruning strategy is proposed to avoid unnecessary path evaluation. We also showed the feasibility of introducing a sparse feature extraction process into DHM by using local binary convolution, which mimics traditional decision trees with pixel-difference features and has potential for devices with limited computing resources.
4,221
1812.00448
2902657131
For precision medicine and personalized treatment, we need to identify predictive markers of disease. We focus on Alzheimer's disease (AD), where magnetic resonance imaging scans provide information about the disease status. By combining imaging with genome sequencing, we aim at identifying rare genetic markers associated with quantitative traits predicted from convolutional neural networks (CNNs), which traditionally have been derived manually by experts. Kernel-based tests are a powerful tool for associating sets of genetic variants, but how to optimally model rare genetic variants is still an open research question. We propose a generalized set of kernels that incorporate prior information from various annotations and multi-omics data. In the analysis of data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), we evaluate whether (i) CNNs yield precise and reliable brain traits, and (ii) the novel kernel-based tests can help to identify loci associated with AD. The results indicate that CNNs provide a fast, scalable and precise tool to derive quantitative AD traits and that new kernels integrating domain knowledge can yield higher power in association tests of very rare variants.
Popular kernel-based tests include FaST-LMM-Set @cite_0 @cite_19 , the sequence kernel association test (SKAT, @cite_7 ) and optimal SKAT (SKAT-O, @cite_4 ), which are based on weighted linear kernels @math @cite_7 @cite_8 @cite_19 , or a linear combination of weighted linear and collapsing kernels @cite_4 . Newer approaches @cite_17 derive further data-adaptive combinations of linear, quadratic, IBS, and collapsing kernels. However, all these kernels provide suboptimal performance for the analysis of very rare genetic variants. Here, linear and quadratic kernels yield uninformative similarity measures (i.e., diagonal kernel matrices for singletons, which are variants with only one observed copy of the minor allele) and collapsing kernels often yield unspecific signals and aggregate noise.
{ "abstract": [ "SUMMARY With development of massively parallel sequencing technologies, there is a substantial need for developing powerful rare variant association tests. Common approaches include burden and non-burden tests. Burden tests assume all rare variants in the target region have effects on the phenotype in the same direction and of similar magnitude. The recently proposed sequence kernel association test (SKAT) (Wu, M. C., and others, 2011. Rare-variant association testing for sequencing data with the SKAT.The American Journal of HumanGenetics89,82–93],anextensionoftheC-alphatest(Neale,B.M.,andothers,2011.Testingforan unusual distribution of rare variants. PLoS Genetics 7, 161–165], provides a robust test that is particularly powerful in the presence of protective and deleterious variants and null variants, but is less powerful than burden tests when a large number of variants in a region are causal and in the same direction. As the underlying biological mechanisms are unknown in practice and vary from one gene to another across the genome,itisofsubstantialpracticalinteresttodevelopatestthatisoptimalforbothscenarios.Inthispaper, we propose a class of tests that include burden tests and SKAT as special cases, and derive an optimal test within this class that maximizes power. We show that this optimal test outperforms burden tests and SKAT in a wide range of scenarios. The results are illustrated using simulation studies and triglyceride data from the Dallas Heart Study. In addition, we have derived sample size power calculation formula for SKAT with a new family of kernels to facilitate designing new sequence association studies.", "Motivation: Set-based variance component tests have been identified as a way to increase power in association studies by aggregating weak individual effects. However, the choice of test statistic has been largely ignored even though it may play an important role in obtaining optimal power. We compared a standard statistical test— a score test—with a recently developed likelihood ratio (LR) test. Further, when correction for hidden structure is needed, or gene– gene interactions are sought, state-of-the art algorithms for both the score and LR tests can be computationally impractical. Thus we develop new computationally efficient methods. Results: After reviewing theoretical differences in performance between the score and LR tests, we find empirically on real data that the LR test generally has more power. In particular, on 15 of 17 real datasets, the LR test yielded at least as many associations as the score test—up to 23 more associations—whereas the score test yielded at most one more association than the LR test in the two remaining datasets. On synthetic data, we find that the LR test yielded up to 12 more associations, consistent with our results on real data, but also observe a regime of extremely small signal where the score test yielded up to 25 more associations than the LR test, consistent with theory. Finally, our computational speedups now enable (i) efficient LR testing when the background kernel is full rank, and (ii) efficient score testing when the background kernel changes with each test, as for gene–gene interaction tests. The latter yielded a factor of 2000 speedup on a cohort of size 13 500. 
Availability: Software available at http://research.microsoft.com/en-us/um/redmond/projects/MSCompBio/Fastlmm/.", "", "Motivation: Approaches for testing sets of variants, such as a set of rare or common variants within a gene or pathway, for association with complex traits are important. In particular, set tests allow for aggregation of weak signal within a set, can capture interplay among variants and reduce the burden of multiple hypothesis testing. Until now, these approaches did not address confounding by family relatedness and population structure, a problem that is becoming more important as larger datasets are used to increase power. Results: We introduce a new approach for set tests that handles confounders. Our model is based on the linear mixed model and uses two random effects—one to capture the set association signal and one to capture confounders. We also introduce a computational speedup for two random-effects models that makes this approach feasible even for extremely large cohorts. Using this model with both the likelihood ratio test and score test, we find that the former yields more power while controlling type I error. Application of our approach to richly structured Genetic Analysis Workshop 14 data demonstrates that our method successfully corrects for population structure and family relatedness, whereas application of our method to a 15,000 individual Crohn's disease case–control cohort demonstrates that it additionally recovers genes not recoverable by univariate analysis. Availability: A Python-based library implementing our approach is available at http://mscompbio.codeplex.com. Contact: [email protected] or [email protected] or [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.", "", "Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT) which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses power when compared to the kernel for a particular scenario but has much greater power than poor choices." ], "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_0", "@cite_19", "@cite_17" ], "mid": [ "2100909778", "2169438557", "", "2153037482", "", "2460516726" ] }
Integrating omics and MRI data with kernel-based tests and CNNs to identify rare genetic markers for Alzheimer's disease
In this study, we focus on Alzheimer's disease (AD) as the outcome of interest. AD is a progressive neurodegenerative disease, appears late-onset and sporadic in most cases, and is the main cause of dementia in the elderly. As the cognitive symptoms emerge years after the appearance of brain atrophy and correlate closely with the structural changes, brain magnetic resonance imaging (MRI) scans provide a direct way to obtain informative quantitative traits, and fast automated approaches are necessary for large-scale studies. AD has a high estimated heritability of 74% [1] and a prevalence of 4.4% in Europe [2]. However, the biological pathways underlying AD are not well understood and there is as yet no known cure. Hence, the identification of AD markers for early detection and as targets for treatment is important. For the detection of causal genetic loci, recent sequencing efforts allow in-depth analyses of rare variants in large cohorts, and kernel-based gene-level tests have been proposed for the analysis [3,4,5,6,7]. They derive similarity scores between samples in the form of a kernel matrix which is computed on a particular genomic locus or functional unit in the genome. Then, kernel-based variance-component test statistics are derived that yield robust and powerful tests. Kernel functions provide a highly flexible way to model genetic variation. However, their full capabilities have not been leveraged, and existing approaches still provide suboptimal performance for the analysis of sequencing data [8], where the overwhelming majority of genetic variation is extremely rare. Hence, extensions to the existing methods are warranted that leverage the full power of kernels to aggregate the signal of very rare single nucleotide variants (SNVs). Our contributions in this paper are in two areas. First, we use a convolutional neural network (CNN) to derive quantitative traits from MRI scans, and evaluate whether precise traits are obtained on data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), where traits obtained by the popular yet computationally expensive FreeSurfer software [9] are also available. Second, we propose novel kernels for association tests of rare genetic variants that incorporate prior biological knowledge from annotations and multi-omics measures. We perform association analyses between these novel kernels computed on sequencing data and CNN-derived traits to identify genetic loci associated with AD.

New kernel-based tests for very rare genetic variants

To leverage the full power of kernels, which compute similarities in a high-dimensional Hilbert space to which genetic variants are mapped through a potentially infinite-dimensional basis function $\phi$, we consider the more general linear mixed model

$$Y_i = X_i \alpha + Z_i \beta + \phi(G_i)\gamma + \varepsilon_i, \qquad i = 1, \ldots, n \qquad (3)$$

Here, $\beta$ and $\gamma$ are normally distributed random effects. After integrating out $\beta$, $\gamma$ and $\varepsilon$, it follows that Y is normally distributed with covariance $\sigma_z^2 Z Z^T + \sigma_g^2 K + \sigma_\varepsilon^2 I$, where we have defined the kernel matrix $K := \phi(G)\phi(G)^T$. In this model, established tests [4,5,6] can be used to test the association between sets of SNVs and the phenotype; see Figure 1 for an illustration.

Examples of new kernels

Let G be the $n \times m$ matrix of the m SNVs of interest. We define a class of $n \times n$ kernel matrices K as

$$K = G V W V^T G^T \qquad (4)$$

where different instances are obtained by setting the $m \times m$ weight and similarity matrices W, V to the identity, to the matrices outlined below, or to any combination of these.
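To make equation (4) concrete, here is a minimal NumPy sketch (ours; the toy genotype matrix, the annotation matrix A, and the identity default for V are illustrative assumptions). It also shows why such kernels help for very rare variants: with W = I the kernel of singleton variants is diagonal and uninformative, while an annotation-informed W introduces off-diagonal similarity between carriers:

```python
import numpy as np

def make_kernel(G, A=None, V=None):
    """K = G V W V^T G^T with W = A A^T (eq. 4); the identity is used for
    any unspecified matrix. G: n x m genotypes (minor-allele counts),
    A: m x p SNV annotations, V: m x m SNV similarity matrix."""
    m = G.shape[1]
    W = np.eye(m) if A is None else A @ A.T
    V_ = np.eye(m) if V is None else V
    return G @ V_ @ W @ V_.T @ G.T            # n x n, symmetric PSD

# Three singleton SNVs, each observed in a different sample:
G = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
print(make_kernel(G))            # W = I: diagonal, no cross-sample signal
A = np.array([[0.9, 1.0],        # e.g. a MAF weight plus a shared annotation
              [0.5, 1.0],
              [0.7, 0.0]])
print(make_kernel(G, A=A))       # off-diagonal similarity appears
```

The same construction covers the annotation- and omics-informed choices of W and V listed next.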
See Appendix A for details.

Incorporate annotations. Set W = AA^T, where A is the numeric m × p matrix encoding p characteristics of the m SNVs, such as the minor allele frequency (MAF), genomic position, or functional annotations from PolyPhen2 [10], RegulomeDB [11], or others. Set the elements V_ij of V to (i) describe the similarity of SNVs i and j in terms of genomic closeness, or (ii) indicate whether SNVs i and j have a (or the same) functional annotation.

Incorporate information from available omics data. Set W = AA^T, where A is the m × p matrix (i) containing -log10 p-values of association tests of the m SNVs with omics data, e.g. gene expression levels of p genes, or (ii) indicating for each of these p-values whether it is < α, where α is a pre-specified constant. Set the elements V_ij of V to be indicators of whether SNVs i and j both have p-value < α.

Application: analysis of ADNI study

In the application, we analyzed whole-genome-sequencing data, gene expression measures, MRI data, as well as AD biomarkers in n = 556 participants from ADNI, which is a longitudinal study to detect biomarkers and risk factors for AD [12,13]. In a first step, we designed a 3-dimensional CNN comprising seven convolutional layers followed by a max pooling layer and a final fully-connected layer to predict the volume of the 3rd ventricle from the MRI scans (see Figure 2 for an illustration and Appendix B, Figure S1 for details). To evaluate the approach, we chose the 3rd ventricle, as we found that the ventricular regions were displayed with a higher contrast and were presumably easier to identify. The CNN-predicted volume was then used as a quantitative trait in the following genetic association analyses, and evaluated against the predictions of the FreeSurfer software. Both models were trained on a dual Intel Xeon 6148 workstation equipped with an NVidia Titan-V graphics card.

In the main genetic association analysis, we analyzed 17,013 quality-controlled, biallelic SNVs (missingness < 5%, of any MAF) in 125 genes in the 1 Mbp region around the APOE gene on chromosome 19, similar to the study in [14], to investigate rare variants in a genomic region where several common variants have been associated with AD. We performed cross-sectional association tests of these 125 genes with 9 different AD traits (the CSF peptides Aβ, t-tau, and p-tau, and the provided brain volumes of entorhinal cortex, hippocampus, medial temporal lobe, ventricles, 3rd ventricle predicted by FreeSurfer, and 3rd ventricle predicted by the CNN), adjusting for the covariates age, gender, education, ethnicity, and APOE4 allele. The association tests were performed based on different combinations of the newly proposed kernels (in Appendix A) and using standard SKAT and SKAT-O.

Table 1: Smallest p-values of the gene-based tests of the 125 genes with each trait.

Trait               | SKAT       | SKAT-O     | New kernel 1 | New kernel 2 | New kernel 3
CSF t-tau           | 9.1 ×10^-5 | 5.8 ×10^-5 | 5.2 ×10^-5   | 6.4 ×10^-4   | 4.0 ×10^-4
CSF p-tau           | 1.5 ×10^-3 | 9.3 ×10^-4 | 8.5 ×10^-4   | 1.2 ×10^-3   | 1.9 ×10^-3
CSF Aβ              | 4.9 ×10^-3 | 9.9 ×10^-3 | 4.9 ×10^-3   | 2.1 ×10^-3   | 1.7 ×10^-3
Entorhinal cortex   | 7.0 ×10^-4 | 3.3 ×10^-4 | 4.2 ×10^-4   | 1.6 ×10^-2   | 3.4 ×10^-3
Hippocampus         | 6.6 ×10^-2 | 3.7 ×10^-2 | 3.8 ×10^-2   | 5.6 ×10^-4   | 2.5 ×10^-2
Med-temporal lobe   | 1.1 ×10^-3 | 3.7 ×10^-3 | 1.4 ×10^-3   | 2.3 ×10^-2   | 4.6 ×10^-3
Ventricles          | 1.5 ×10^-2 | 9.3 ×10^-3 | 1.0 ×10^-2   | 9.6 ×10^-4   | 5.5 ×10^-3
FreeS 3rd Ventricle | 6.8 ×10^-2 | 6.5 ×10^-2 | 8.0 ×10^-2   | 3.4 ×10^-2   | 5.1 ×10^-3
CNN 3rd Ventricle   | 1.9 ×10^-2 | 5.9 ×10^-2 | 5.9 ×10^-2   | 1.7 ×10^-2   | 2.0 ×10^-2

Results

The 125 genes contained on average 220 SNVs (min = 1, max = 1759). Of the 17,013 SNVs, 7575 were singletons, 1740 doubletons, and 12,337 SNVs had MAF < 0.01.
24 participants had dementia, 338 had mild cognitive impairment, and 194 were cognitively normal (see Table S1 for descriptive statistics). In an evaluation of the predicted volume of the 3rd ventricle, the CNN and FreeSurfer predictions showed a high correlation (Pearson r = 0.92, see Figure S2). For small/large volumes, compared to FreeSurfer, the CNN slightly over-/underestimated the volume, which we expect to disappear with larger training data. On the other hand, the CNN was much faster (1 second versus 16 hours per scan).

In the main genetic association analyses, a first comparison showed that analyses using the CNN-predicted trait as outcome generally yielded similar and often smaller p-values compared to the FreeSurfer-predicted trait (Figures S3-S4). Preliminary comparisons of all new kernels indicated that the three kernels reported in Table 1 often yielded the smallest p-values in gene-based tests; hence they are reported here. Tests based on the new kernel 1 yielded consistently smaller or similar p-values for the top genes compared to SKAT and SKAT-O for 8 out of 9 traits (Table 1). More detailed comparisons (Figure S5) indicated that while the same genes were often identified with the smallest p-value by tests based on the new kernel 1 and by SKAT or SKAT-O, the new kernel 1 also yielded different candidate genes that would not have been identified by SKAT or SKAT-O (and vice versa). The new kernels 2 and 3 yielded sometimes larger but also sometimes much smaller p-values. Using a Bonferroni correction (for the 125 tests) of the p-values of the new kernel-based tests, we identified 3 candidate genes for AD with adjusted p-values 0.007, 0.05, 0.07: PVR for CSF t-tau, SIX5 for entorhinal cortex, and PVRL2 for hippocampus.

Discussion

The empirical analyses indicated that (i) CNNs provide a precise, fast, and scalable tool to derive quantitative traits from MRI scans and that (ii) new kernels integrating domain knowledge and omics data constitute a promising approach for the analysis of very rare variants. There is previous evidence for the association of the identified genes with AD [15,16,17,18] to support our findings, and of note, the p-values are much smaller using the new kernels here compared to regular kernels [14]. Limitations of the current analyses are that only few functional annotations are available for rare SNVs, and that only a basic control for population stratification was used. In the interpretation of the results regarding their biological relevance, it can be noted that the analyses were adjusted for the risk factor APOE4, so that the identified genes and SNVs represent markers with effects on AD that are independent of APOE4. Future research can investigate kernels measuring the similarity between the bivariate allelic sequences directly, and data-adaptive optimal combinations of different kernels.

Acknowledgments

Data used in the preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database adni.loni.usc.edu. As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in the analysis or writing of this report.

Supplementary material

A Details on new kernels

Let n be the number of observations and m the number of SNVs of interest. Define the kernel K as K = G V W V^T G^T in equation (4) by setting the m × m matrices W and V to I or to the following. Consider the weight matrices W = AA^T where A is
1. an m × 1 vector with entries Beta(MAF_i; 1, 25), i = 1, ..., m, where Beta is the probability density function of the beta distribution with parameters 1 and 25, and MAF_i is the minor allele frequency of SNV i.
2. an m × 1 vector where each entry A_i is an indicator of whether SNV i has a functional annotation in, for example, the PolyPhen2 database.
3. an m × 1 vector where each entry A_i is a numeric encoding of the functional annotation of SNV i, for example, in the PolyPhen2 database: A_i = 0 if SNV i is not annotated, 1 if SNV i has annotation "benign", 2 if SNV i has annotation "possibly damaging", and 3 if SNV i has annotation "probably damaging".
4. an m × 1 vector where each entry A_i is the -log10 p-value from a hypothesis test of the association between SNV i and a variable Z providing relevant information about its biological function, e.g. where Z is the gene expression of the gene in which the SNV lies.
5. an m × 1 vector where each entry A_i is the sum of 1 and an indicator variable of whether SNV i is associated with a variable Z providing relevant information about its biological function as in the bullet point above, e.g. evaluating whether the p-value from a hypothesis test of the association between SNV i and a variable Z is smaller than 0.05.
6. the Hadamard product of the vectors in bullet points (1 and 4) or (1 and 5).
7. the sum of the vectors in bullet points (2 or 3) and (4 or 5).
8. the sum of the vectors in bullet points (2 or 3) and 6.

Consider the m × m matrices V describing the similarity of SNVs where

1. V_ij = similarity of SNVs i and j in terms of genomic closeness: V_ij = 1 if i = j, and 1/d_ij otherwise, where d_ij is the genomic distance between SNVs i and j in base pairs.
2. V_ij = indicator of whether SNVs i and j both have a functional annotation: V_ij = 1 if i = j, 1 if SNVs i and j both have a functional annotation, and 0 otherwise.
3. V_ij = indicator of whether SNVs i and j have the same functional annotation: V_ij = 1 if i = j, 1 if SNVs i and j have the same functional annotation, and 0 otherwise.
4. V_ij = 1 if i = j, 1 if SNVs i and j both have p-value < α, and 0 otherwise, where the p-value of SNV i is from an association test with a variable Z that provides relevant information about its biological function, e.g. where Z is the gene expression of the gene in which the SNV lies.
5. V is the product of the matrices in bullet points (1 and (2 or 3)), (1 and 4), or (4 and (2 or 3)).
6. V is the product of the matrices in bullet points 1 and (2 or 3) and 4.
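As an illustration, the following is a hedged NumPy/SciPy sketch of two of the choices listed above: the Beta(MAF; 1, 25) weight vector (item 1 of the W list) and the inverse-distance similarity matrix (item 1 of the V list). The variable names and toy values are ours, not from the paper.

```python
import numpy as np
from scipy.stats import beta

def weight_matrix_maf(maf):
    """W = A A^T with A_i = Beta density with parameters (1, 25) at MAF_i."""
    A = beta.pdf(maf, 1, 25).reshape(-1, 1)  # m x 1, upweights rarer variants
    return A @ A.T                           # m x m, rank one

def similarity_matrix_distance(pos):
    """V_ij = 1 if i = j, else 1/d_ij with d_ij the distance in base pairs."""
    d = np.abs(pos[:, None] - pos[None, :]).astype(float)
    with np.errstate(divide="ignore"):
        V = 1.0 / d                          # diagonal becomes inf here ...
    np.fill_diagonal(V, 1.0)                 # ... and is reset to 1
    return V

maf = np.array([0.001, 0.004, 0.01])                  # toy minor allele frequencies
pos = np.array([45_400_100, 45_400_250, 45_401_000])  # toy genomic positions (bp)
W, V = weight_matrix_maf(maf), similarity_matrix_distance(pos)
```

These W and V can then be plugged into the make_kernel sketch given after equation (4).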
B Details on convolutional neural networks

Model architecture. The model architecture is illustrated in Figure 2, and in more detail in Figure S1. We designed a CNN made of a sequence of seven convolutional layers followed by a max pooling layer and a fully-connected layer. We used two types of convolutional layers: regular and down-convolution. Regular convolutional layers comprised a 3 x 3 x 3 convolution with 1 x 1 x 1 strides; down-convolutional layers comprised a 2 x 2 x 2 convolution with 2 x 2 x 2 strides. Each convolutional layer was followed by a Rectified Linear Unit non-linearity [19]. After the last convolutional layer, we used a max pooling layer with a filter size of 2 x 2 x 2. Subsequently, this layer was converted into a fully-connected layer, followed by the output layer containing a single node with a linear activation function.

Model implementation. The MRI scans were standardized to a spatial resolution of 1 x 1 x 1 millimeters and a size of 256 x 256 x 256 voxels. Additionally, for computational efficiency, they were cropped and down-sampled to 96 x 109 x 96 voxels. The model was trained on 2100 MRI scans (from 411 subjects) for 200 epochs with the loss function set to the mean absolute error, using the Adaptive Moment Estimation (Adam) optimizer [20], a learning rate of 10^-4, and a 3D spatial dropout regularization of 0.9. Hyperparameter tuning was carried out on a validation dataset comprising 550 scans of 129 subjects that all had MRI data but no genetic data available, so that they could not be included in the main analysis. The final evaluation was done on the test set including the 556 subjects of the main analysis that had all MRI, genetic, and gene expression data available. The model performance on the test set is visualized in Figure S2.

Computational comparison with FreeSurfer. Both models were trained on a dual 20-core Intel Xeon 6148 workstation with 768 GB RAM equipped with an NVidia Titan-V graphics card. The CNN computations made use of GPU optimization, taking 1 second for the prediction of the volume of the third ventricle per MRI scan. FreeSurfer, which did not utilize the GPU, took 16 hours per MRI scan.

Figure S1: Graphical visualization of the 3D convolutional neural network model in Keras. Shown are input and output of the different layers, and the respective voxels and channels. For example, the input volume had 96 × 109 × 96 voxels and 1 channel. As all computations were done in one batch, the batch size was not specified (noted as "None" in the graph).

Figure S5: Scatterplot of the -log10 p-values from association tests of the 125 genes using SKAT (x axis) and the new kernel-based test 1 (y axis), for each of the 9 traits in separate panels. In addition, the diagonal is printed for a comparison of both tests.
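As a concrete illustration of the architecture described in this appendix, here is a minimal Keras sketch: seven convolutional layers (regular 3x3x3/stride-1 and 2x2x2/stride-2 down-convolutions, each followed by a ReLU), a 2x2x2 max pooling layer, and a single linear output node, trained with Adam (learning rate 1e-4) and the mean absolute error. The interleaving of the two layer types, the channel widths, and the dropout placement are our assumptions; Figure S1 specifies the exact configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape=(96, 109, 96, 1)):
    model = keras.Sequential([keras.Input(shape=input_shape)])
    for i, filters in enumerate([16, 16, 32, 32, 64, 64, 128]):  # assumed widths
        if i % 2 == 0:
            # regular convolution: 3x3x3 kernel, 1x1x1 strides
            model.add(layers.Conv3D(filters, 3, strides=1, padding="same",
                                    activation="relu"))
        else:
            # down-convolution: 2x2x2 kernel, 2x2x2 strides (halves resolution)
            model.add(layers.Conv3D(filters, 2, strides=2, padding="same",
                                    activation="relu"))
    model.add(layers.SpatialDropout3D(0.9))          # 3D spatial dropout, placement assumed
    model.add(layers.MaxPooling3D(pool_size=2))
    model.add(layers.Flatten())
    model.add(layers.Dense(1, activation="linear"))  # predicted 3rd-ventricle volume
    model.compile(optimizer=keras.optimizers.Adam(1e-4), loss="mae")
    return model
```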
3,016
1812.00344
2903401627
Understanding web instructional videos is an essential branch of video understanding in two aspects. First, most existing video methods focus on short-term actions for a-few-second-long video clips; these methods are not directly applicable to long videos. Second, unlike unconstrained long videos, e.g., movies, instructional videos are more structured in that they have a step-by-step procedure constraining the understanding task. In this paper, we study reasoning on instructional videos via question-answering (QA). Surprisingly, it has not been an emphasis in the video community despite its rich applications. We thereby introduce YouQuek, an annotated QA dataset for instructional videos based on the recent YouCook2. The questions in YouQuek are not limited to cues on one frame but relate to logical reasoning in the temporal dimension. Observing the lack of effective representations for modeling long videos, we propose a set of carefully designed models, including a novel Recurrent Graph Convolutional Network (RGCN) that captures both temporal order and relation information. Furthermore, we study multiple modalities, including descriptions and transcripts, for the purpose of boosting video understanding. Extensive experiments on YouQuek suggest that RGCN performs the best in terms of QA accuracy and that better performance is gained by introducing human-annotated descriptions.
Instructional video understanding has received much attention recently. @cite_26 and @cite_1 both leverage the natural language annotations of the videos to learn the instructional procedure in videos. @cite_19 , however, propose to learn the temporal boundaries of different steps in a supervised manner without the aid of textual information. Dense captioning has also been posed on instructional videos in @cite_24 , which aims at localizing temporal events in a video and describing them with natural language sentences. Visual-linguistic ambiguities are a common problem in instructional videos with narratives. @cite_7 focus on such ambiguities caused by changes in visual appearance and referring expressions, and aim to resolve references without supervision. @cite_14 perform the visual grounding task in instructional videos, also coping with visual-linguistic ambiguities. Yet, none of these works has tackled the QA problem on instructional videos, despite instructional videos being uniquely suited to reasoning.
{ "abstract": [ "We address the problem of automatically learning the main steps to complete a certain task, such as changing a car tire, from a set of narrated instruction videos. The contributions of this paper are three-fold. First, we develop a new unsupervised learning approach that takes advantage of the complementary nature of the input video and the associated narration. The method solves two clustering problems, one in text and one in video, applied one after each other and linked by joint constraints to obtain a single coherent sequence of steps in both modalities. Second, we collect and annotate a new challenging dataset of real-world instruction videos from the Internet. The dataset contains about 800,000 frames for five different tasks that include complex interactions between people and objects, and are captured in a variety of indoor and outdoor settings. Third, we experimentally demonstrate that the proposed method can automatically discover, in an unsupervised manner, the main steps to achieve the task and locate the steps in the input videos.", "Grounding textual phrases in visual content with standalone image-sentence pairs is a challenging task. When we consider grounding in instructional videos, this problem becomes profoundly more complex: the latent temporal structure of instructional videos breaks independence assumptions and necessitates contextual understanding for resolving ambiguous visual-linguistic cues. Furthermore, dense annotations and video data scale mean supervised approaches are prohibitively costly. In this work, we propose to tackle this new task with a weakly-supervised framework for reference-aware visual grounding in instructional videos, where only the temporal alignment between the transcription and the video segment are available for supervision. We introduce the visually grounded action graph, a structured representation capturing the latent dependency between grounding and references in video. For optimization, we propose a new reference-aware multiple instance learning (RA-MIL) objective for weak supervision of grounding in videos. We evaluate our approach over unconstrained videos from YouCookII and RoboWatch, augmented with new reference-grounding test set annotations. We demonstrate that our jointly optimized, reference-aware approach simultaneously improves visual grounding, reference-resolution, and generalization to unseen instructional video categories.", "We propose an unsupervised method for reference resolution in instructional videos, where the goal is to temporally link an entity (e.g., \"dressing\") to the action (e.g., \"mix yogurt\") that produced it. The key challenge is the inevitable visual-linguistic ambiguities arising from the changes in both visual appearance and referring expression of an entity in the video. This challenge is amplified by the fact that we aim to resolve references with no supervision. We address these challenges by learning a joint visual-linguistic model, where linguistic cues can help resolve visual ambiguities and vice versa. We verify our approach by learning our model unsupervisedly using more than two thousand unstructured cooking videos from YouTube, and show that our visual-linguistic model can substantially improve upon state-of-the-art linguistic only model on reference resolution in instructional videos.", "This paper describes a framework for modeling human activities as temporally structured processes. 
Our approach is motivated by the inherently hierarchical nature of human activities and the close correspondence between human actions and speech: We model action units using Hidden Markov Models, much like words in speech. These action units then form the building blocks to model complex human activities as sentences using an action grammar. To evaluate our approach, we collected a large dataset of daily cooking activities: The dataset includes a total of 52 participants, each performing a total of 10 cooking activities in multiple real-life kitchens, resulting in over 77 hours of video footage. We evaluate the HTK toolkit, a state-of-the-art speech recognition engine, in combination with multiple video feature descriptors, for both the recognition of cooking activities (e.g., making pancakes) as well as the semantic parsing of videos into action units (e.g., cracking eggs). Our results demonstrate the benefits of structured temporal generative approaches over existing discriminative approaches in coping with the complexity of human daily life activities.", "Dense video captioning aims to generate text descriptions for all events in an untrimmed video. This involves both detecting and describing events. Therefore, all previous methods on dense video captioning tackle this problem by building two models, i.e. an event proposal and a captioning model, for these two sub-problems. The models are either trained separately or in alternation. This prevents direct influence of the language description to the event proposal, which is important for generating accurate descriptions. To address this problem, we propose an end-to-end transformer model for dense video captioning. The encoder encodes the video into appropriate representations. The proposal decoder decodes from the encoding with different anchors to form video event proposals. The captioning decoder employs a masking network to restrict its attention to the proposal event over the encoding feature. This masking network converts the event proposal to a differentiable mask, which ensures the consistency between the proposal and captioning during training. In addition, our model employs a self-attention mechanism, which enables the use of efficient non-recurrent structure during encoding and leads to performance improvements. We demonstrate the effectiveness of this end-to-end model on ActivityNet Captions and YouCookII datasets, where we achieved 10.12 and 6.58 METEOR score, respectively.", "The potential for agents, whether embodied or software, to learn by observing other agents performing procedures involving objects and actions is rich. Current research on automatic procedure learning heavily relies on action labels or video subtitles, even during the evaluation phase, which makes them infeasible in real-world scenarios. This leads to our question: can the human-consensus structure of a procedure be learned from a large set of long, unconstrained videos (e.g., instructional videos from YouTube) with only visual evidence? To answer this question, we introduce the problem of procedure segmentation--to segment a video procedure into category-independent procedure segments. Given that no large-scale dataset is available for this problem, we collect a large-scale procedure segmentation dataset with procedure segments temporally localized and described; we use cooking videos and name the dataset YouCook2. We propose a segment-level recurrent network for generating procedure segments by modeling the dependencies across segments. 
The generated segments can be used as pre-processing for other tasks, such as dense video captioning and event parsing. We show in our experiments that the proposed model outperforms competitive baselines in procedure segmentation." ], "cite_N": [ "@cite_26", "@cite_14", "@cite_7", "@cite_1", "@cite_24", "@cite_19" ], "mid": [ "2185243164", "2798708692", "2949642982", "2099614498", "2795840542", "2952132648" ] }
How to Make a BLT Sandwich? Learning to Reason towards Understanding Web Instructional Videos
Humans can acquire knowledge by watching instructional videos online. A typical situation is that people confused by a specific problem look for solutions in related instructional videos. For example, while learning to cook new dishes, they may wonder why a specific ingredient is added, and what happens between two procedures. Watching instructional videos can often clarify these questions and hence assist humans in accomplishing tasks. We hereby pose the question: can machines also understand instructional videos as humans do? This requires not only accurate recognition of objects, actions, and events, but also higher-order inference of any relations therein, e.g., spatial, temporal, correlative, and causal. Here we use higher-order inference to refer to inference that cannot be completed immediately by direct observations and thus requires stronger semantics for video modeling (see Fig. 1).

Current instructional video understanding studies focus on various tasks, e.g., reference resolution [6], procedure localization [23,1], dense captioning [24,16], activity detection [13,11], and visual grounding [7,18]. Despite the rich literature and applications, the question-answering (QA) task on instructional videos explored in our work is less developed; it acts as a proxy to benchmark higher-order inference in machine intelligence. Previous works, e.g., ImageQA [2,12,10] and VideoQA [15,21], also leverage the QA task as an automatic evaluation method, but QA on instructional videos has never been tackled before.

Observing the lack of a suitable dataset on instructional videos, we propose the YouCook Question Answering (YouQuek) dataset, based on YouCook2 [23], which is the largest instructional video dataset. Our YouQuek dataset is the first reasoning-oriented dataset aimed at instructional videos. We employ question-answering as an intuitive interpretation of various styles of reasoning. Figure 1 presents two exemplar QA pairs in our dataset along with the corresponding example human reasoning procedures involved in answering the questions. The YouQuek dataset contains 15,355 manually-collected QA pairs that are divided into different categories regarding different reasoning styles, e.g., counting, ordering, comparison, and changing of properties.

Upon the newly built dataset, we explore two directions. The first concerns effective representations for modeling instructional videos. The videos in our consideration have an average length of 5.27 min and, as instructional videos, they are structured and have a step-by-step procedure constraining the understanding task. By modeling the temporal relations among different procedures, we expect valuable information to be extracted from the instructional videos, for which we study various model structures and propose a novel Recurrent Graph Convolutional Network (RGCN). The RGCN deals with complex reasoning by message passing in the graph, but also maintains the sequential ordering information via a supporting RNN. In this design, the graph and the RNN can boost each other since information can be swapped between the two pathways. Second, we explore the use of different modalities in video modeling. Apart from visual information, temporal boundaries, descriptions for each procedure, and transcripts are explored. In this direction, we want to test the effect of combining various types of available annotations with our developed video models on understanding instructional videos.
Given that modeling instructional videos from vision alone is hard, combining such information better approximates the human learning experience and, in turn, gives us hints for devising better models for machine intelligence.

We conduct extensive experiments on the YouQuek dataset. In the ablation study, we find that the attention mechanism helps boost performance. Our proposed RGCN model outperforms all other models with respect to overall accuracy, even without attention. From the multi-modality perspective, modeling instructional videos using temporal boundaries together with descriptions can help dig more valuable information out of the videos. We also conduct a human quiz on the QAs in our dataset. Results show that machines still have a large gap to human performance: even without visual information, humans can still answer some questions correctly using life experience, or common sense, which suggests that incorporating external knowledge into video models will be helpful for future work.

Our main contributions are summarized as follows. • We propose the YouQuek dataset, the first reasoning-oriented dataset for understanding instructional videos. • We propose models with various structures, especially a novel RGCN model, for video modeling. Our RGCN outperforms all other models even without attention. • We incorporate multi-modal information to perform extensive experiments on YouQuek, showing that descriptions can boost video understanding capability, while transcripts cannot.

The rest of the paper is organized as follows. We first discuss related work in Sec. 2, and introduce the proposed YouQuek dataset in Sec. 3. Then in Sec. 4, we set up a series of baseline models for the dataset, and propose RGCN as a new model for instructional video reasoning. In Sec. 5, we demonstrate and discuss the experiment results. Conclusions are drawn in Sec. 6. The YouQuek dataset and our code for all methods will be released upon acceptance.

YouQuek Dataset

To validate the proposed task of instructional video reasoning, we introduce the YouQuek dataset, a reasoning-oriented video question answering dataset based on the YouCook2 dataset. The dataset contains 15,355 question-answer (QA) pairs in total. Tailored for our dataset, we annotate the QA pairs with six different tags, where each QA pair can be labeled with more than one tag. In the supplementary material, we show example QA pairs for each tag described below.

Counting: This tag annotates a QA pair that involves counting. One may count the number of times certain actions occur or the number of certain ingredients. E.g., "How many white ingredients are used in the recipe?" Apart from counting, we also need to find the target ingredients according to their colors.

Time: Time is a distinguishing feature of videos compared to images. This category of questions is mainly about timing and duration. A typical example is, "Which one is faster: adding water or adding salt?". To answer this question, we not only need to know how long both actions take, but also need to compare the durations.

Order: Long-term temporal order is a unique feature of instructional videos, because instructional videos come with step-by-step procedures, and the order information matters. E.g., in YouCook2, the ordering of procedures is critical to the success of a recipe. Therefore, we emphasize questions related to action order, e.g., "What happens before/after/between ...?", and "Does it matter to change the order of ... and ...?"
Taste: YouCook2 is an instructional cooking video dataset, so we bring up taste questions. This type of QA pair is about the flavor and the texture of the dish. Taste can also be related to reasoning in that one can infer the taste from the ingredients used, and the texture from the cooking methods applied. Note that we avoid questions that are subjective, such as "Is this burger tasty?", which cannot be answered by reasoning, but only by subjective inspection.

Complex: This tag presents a broader concept than all other tags above. By "complex", we emphasize a multi-step reasoning process instead of one-step reasoning. This type of question overlaps with all other types.

Property: Cooking usually involves changes of ingredients. The properties of ingredients, e.g. their shape, color, size, location, etc., may vary at different time points as the cooking procedure goes on. This type of question is different from "order" questions since we are asking about certain ingredients rather than actions.

In Tab. 1, we contrast our dataset with some other VideoQA datasets. Our dataset is unique in that we not only build the dataset on instructional videos, but also focus on long-term ordering and higher-order inference.

QA collection

Many existing VideoQA datasets [21,19,22,20,25] adopt an automatic question-answer (QA) generation technique proposed by [4] to generate QA pairs from texts. However, QA pairs obtained via this method suffer from extremely low diversity. Also, automatic methods cannot generate questions involving complex reasoning, which goes against our goal of constructing the dataset. Therefore, we use Amazon Mechanical Turk (AMT) to collect question and answer pairs. For details about the collection of QA pairs and multiple choice alternatives, please refer to the supplementary material.

Statistics

In Fig. 2a, we show the statistics of the six different categories of questions. We have 7,200 complex reasoning QA pairs, comprising nearly half of our dataset. Other questions involve simpler reasoning procedures, but still cannot be answered by direct observation of the videos. On average, we have 1.478 tags per QA pair, 2.289 words per answer, and 7.678 QA pairs per video. To illustrate our dataset better, we split the QA pairs into four categories with respect to answer types, namely "Yes/No" for answers containing yes or no; "Numeric" for answers containing numbers, mostly related to counting and time; "Single word" for answers with only one word, excluding QA pairs in "Yes/No" and "Numeric"; and "Text" for answers with multiple words, excluding QA pairs in "Yes/No" and "Numeric". Fig. 2b shows the distribution of the four answer types in our dataset.

Table 1: Comparison among different video question answering datasets. The first four columns are: "Inst." for whether it is based on instructional videos; "Natural" for whether videos are of natural world settings; "Reason" for whether questions are related to reasoning; "Human" for whether QA pairs are collected through human labor.

Dataset          | Inst. | Natural | Reason | Human | # of QA | Per video length | Answering form
VTW [21]         |       |         |        |       | 174,955 | 1.5 min          | Open-ended
Xu et al. [19]   |       |         |        |       | 294,185 | 14.07 sec        | K-Space
Zhu et al. [25]  |       |         |        |       | 390,744 | >33 sec          | Fill in blank
Zhao et al. [22] |       |         |        |       | 54,146  | 3.10 sec         | Open-ended
SVQA [14]        |       |         |        |       | 118,680 | -                | K-Space
MovieQA [15]     |       |         |        |       | 6,462   |                  |

Instructional Video Reasoning

With the newly collected YouQuek dataset, we perform reasoning tasks by answering questions on instructional videos. We first formally define our problem in Sec. 4.1. Then in Sec.
4.2, based on the attention mechanism, we design a sequential model (SEQ-SA) and a graph convolutional model (GCN-SA). We also propose the Recurrent Graph Convolutional Network (RGCN), which captures both temporal order and complex relations to overcome the limitations of SEQ-SA and GCN-SA. In Sec. 4.3, additional modalities such as descriptions and transcripts are added to the reasoning model to help gain better performance.

Problem Formalization

Multiple Choice: Since the questions in the YouQuek dataset have alternative choices, we can use a three-way score function f(v, q, a) to evaluate each alternative and choose the one with the highest score as the correct answer:

j* = argmax_{j=1,...,M} f(v, q, a_j),   (1)

where M = 5 in our case, and v, q, a represent the features of the video, question, and answer respectively. In this work, q and a are the final hidden states obtained by encoding the question and answer via RNNs. Here, f(·, ·, ·) denotes an MLP whose input is the concatenation of v, q, and a and whose output is a single neuron classifying how likely the given answer a is the correct one.

K-Space: Similar to other visual QA problems, the reasoning task can also be formulated as a classification problem over the answer space. The alternative (negative) answers are then all other answers in the training set. Here, K types of distinct answers are assigned to K categories {A_i}_{i=1}^K. An MLP with K output neurons is tasked to predict the correct answer A* by taking in v and q:

A* = argmax_{j=1,...,K} g_j(v, q),   (2)

where g_j denotes the output score of the j-th neuron.

Models

In this section, we mainly focus on the design of video models that can capture procedure relations in instructional events. The generated video feature v is used for question answering. First, we describe how we pre-process the videos. Then, we introduce the architectures of the proposed models that are suitable for VideoQA. In particular, we propose a novel RGCN architecture that performs message passing between two paths, an RNN and a GCN, in order to capture both time-series and global properties for modeling instructional videos.

Pre-processing: The videos in our consideration have an average length of 5.27 minutes, which requires us to process the videos into more tractable representations before any sophisticated modeling. Following [23], we define a procedure as the sequence of necessary steps comprising a complex instructional event, and segment a video into N procedure segments (see Fig. 3a). To directly benchmark reasoning ability, we use the ground truth provided by [23] to avoid any errors caused by intermediate processing. Note that one can apply the method developed in [23] for automatic segmentation. The frames within each segment are sampled, and their features are extracted by ResNet [3] and encoded by an RNN model. Therefore, we obtain the features of the procedure segments {X_i}_{i=1}^N, X_i ∈ R^d, and use them for relation modeling.

SEQ-SA: We first propose an attention-based RNN model (see Fig. 3b for an example with N = 4) to model the video representation v, where the encoded question feature is used to attend to the video features at all time steps. The similarity a_i between the question feature q and the segment feature X_i is computed by taking the dot product of q and X_i, followed by a softmax normalization: a_i = exp(q^T X_i) / Σ_j exp(q^T X_j). Then we multiply each X_i by a_i to obtain the question-attended video feature X̃_i = a_i X_i. Finally, we feed the X̃_i into an RNN model, and the final hidden state h_N of the RNN is taken as the video feature representation v.
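A minimal PyTorch sketch of SEQ-SA as we read it (the released code may differ): dot-product attention of the question over the segment features, followed by an LSTM whose final hidden state is v.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqSA(nn.Module):
    def __init__(self, d=512):
        super().__init__()
        self.rnn = nn.LSTM(d, d, batch_first=True)

    def forward(self, X, q):
        """X: (B, N, d) segment features; q: (B, d) question feature."""
        a = F.softmax(torch.einsum("bnd,bd->bn", X, q), dim=1)  # a_i ∝ exp(q^T X_i)
        X_att = a.unsqueeze(-1) * X                             # question-attended segments
        _, (h, _) = self.rnn(X_att)
        return h[-1]                                            # video feature v, shape (B, d)

v = SeqSA()(torch.randn(2, 4, 512), torch.randn(2, 512))        # e.g. N = 4 segments
```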
GCN-SA: We consider a fully-connected graph (see Fig. 3c) to model complex relations among the procedure segments. Although the time dependencies defined by the original video are omitted, different edges in the graph can mine different relations for various reasoning tasks. We use a multi-layer GCN model for this purpose. We define the graph nodes {S_i^j}, i = 1, ..., N, j = 1, ..., M, with S_i^j ∈ R^d, where N is the number of nodes within one layer and M is the number of layers. We first initialize the nodes {S_i^1}_{i=1}^N in the first layer with the segment features {X_i}_{i=1}^N correspondingly. We adopt the same GCN structure as described in [17]:

Z = ReLU(G S W),   (3)

where G ∈ R^{N×N} represents the adjacency graph, S ∈ R^{N×d} denotes the concatenation of all node features {S_i}_{i=1}^N in one arbitrary layer, and W ∈ R^{d×d} is the weight matrix, which is different for each layer. Each element G_ij of G is the dot-product similarity S_i^T S_j. Three GCN layers are used in this work, where the output of the previous layer serves as the input of the next layer. To apply the attention mechanism, we add an additional node in the last layer of the GCN to represent the question feature q, and this question node is connected with all other graph nodes {S_i^M}_{i=1}^N through N edges. The question node attends to each graph node through different weights on the edges. Similar to SEQ-SA, the weights between q and {S_i^M}_{i=1}^N are the dot products of the corresponding node pairs, followed by a softmax normalization. Finally, we use an average pooling operation to compress the output of the last layer Z ∈ R^{N×d} into v ∈ R^d.

Figure 3: In (a), we demonstrate the pre-processing procedure with an example video on how to make hash brown potatoes (YouTube ID: kj5y 71bsJM), which demonstrates the basic concepts of instructional videos in the YouCook2 dataset. Temporal boundaries are the human-annotated start/end time stamps of a procedure, which is well defined in [23]. Videos are segmented into several segments (procedures) by the temporal boundaries. Descriptions are also annotated by humans, corresponding to each procedure. Transcripts are auto-generated by speech recognition on YouTube. An example QA pair for the video in (a) is, Q: "How many actions involving physical changes to potatoes are done before adding salt?" A: "2.". In (b) and (c), the question feature attends to each segment. In (d), we illustrate the structure of our proposed RGCN model, where the GCN interacts with the RNN via a "swap" operation which takes in the RNN's hidden state and writes it into the corresponding graph node of the GCN. We zoom in on the first swap operation to provide an intuitive visualization.
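A sketch of one GCN layer from Eq. (3) under these definitions; batching is included, while the question node of GCN-SA is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    def __init__(self, d=512):
        super().__init__()
        self.W = nn.Linear(d, d, bias=False)        # the d x d weight matrix W

    def forward(self, S):
        """S: (B, N, d) node features of one layer."""
        G = torch.bmm(S, S.transpose(1, 2))         # G_ij = S_i^T S_j (dot-product similarity)
        return F.relu(torch.bmm(G, self.W(S)))      # Z = ReLU(G S W)

Z = GCNLayer()(torch.randn(2, 4, 512))              # three such layers are stacked in GCN-SA
```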
RGCN: Since the aforementioned GCN-SA is unable to capture the temporal order of video features [17], and SEQ-SA cannot model relations between segments with long time spans, we propose a novel Recurrent Graph Convolutional Network (RGCN) architecture (see Fig. 3d) to overcome these limitations. The RGCN is a recurrent model that consists of two pathways: an RNN and a GCN. The RNN interacts with the GCN mainly through a swap operation (see Fig. 3d). The details are as follows. The RNN pathway with N time steps takes in the segment feature X_t at each time step. The GCN pathway has N layers, each of which contains N graph nodes. Note that the GCN has the same number of layers as there are time steps in the RNN pathway. We adopt the same GCN architecture as described for the GCN-SA model, except that a recurrent computation paradigm is applied here, where the weights W are shared among all layers. The computation within the RNN memory cell at each time step and the computation of each GCN layer are performed alternately. For each time step t, we first concatenate the segment feature X_t and the feature of node S_t^{t-1} in the GCN, which is then used as the input to the RNN memory cell at the t-th time step. Following [5], we update the hidden state h_t of the RNN:

h_t = RNN([X_t, S_t^{t-1}], h_{t-1}),   (4)

Then we replace the GCN's graph node S_t^t with the updated hidden state h_t of the RNN. This swap operation acts as a bridge between the RNN and the GCN for message passing. Finally, the (t+1)-th GCN layer takes all {S_i^t}_{i=1}^N as input to compute the response {S_i^{t+1}}_{i=1}^N:

Z^{t+1} = ReLU(G Z^t W),   (5)

where Z^t is the concatenation of {S_i^t}_{i=1}^N. We take the final hidden state h_N of the RNN as the video representation v. Additionally, we extend the proposed RGCN with the attention mechanism. The two pathways correspond to the SEQ and GCN models, so we simply adopt how attention is cast on both pathways, and obtain RGCN-SA.
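Condensing the recurrence, the following is a PyTorch sketch of the RGCN forward pass under our reading of Eqs. (4)-(5); initialization details and RGCN-SA's attention are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RGCN(nn.Module):
    def __init__(self, d=512):
        super().__init__()
        self.cell = nn.LSTMCell(2 * d, d)       # consumes [X_t, S_t^{t-1}]
        self.W = nn.Linear(d, d, bias=False)    # GCN weights, shared across layers

    def forward(self, X):
        """X: (B, N, d) segment features; the GCN has N nodes and N layers."""
        B, N, d = X.shape
        S = X                                           # first-layer nodes are the segments
        h = X.new_zeros(B, d)
        c = X.new_zeros(B, d)
        for t in range(N):
            inp = torch.cat([X[:, t], S[:, t]], dim=1)  # [X_t, S_t^{t-1}]
            h, c = self.cell(inp, (h, c))               # Eq. (4)
            S = torch.cat([S[:, :t], h.unsqueeze(1), S[:, t + 1:]], dim=1)  # swap: S_t^t <- h_t
            G = torch.bmm(S, S.transpose(1, 2))         # adjacency from node similarities
            S = F.relu(torch.bmm(G, self.W(S)))         # Eq. (5): next GCN layer
        return h                                        # video representation v = h_N

v = RGCN()(torch.randn(2, 4, 512))
```

The swap at step t lets the graph see the RNN's ordered summary, while the subsequent GCN layer feeds relational context back into node t+1 before the next RNN step reads it.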
Multiple modalities

Besides videos and questions, we further investigate how much benefit we can obtain from other modalities such as narratives, which are very common in instructional videos. We are interested in two types of narratives, namely transcripts and descriptions.

Transcripts: The audio signal is an important modality for videos. In our dataset, the valuable audio information in the videos is the chefs speaking. Therefore, we substitute audio with the auto-generated transcripts on YouTube. Transcripts, which can be seen as describing the corresponding procedures, are highly unstructured, noisy, and misaligned narratives [6], in that chefs may talk about things unrelated to the cooking procedure, or the speech recognition on YouTube may generate some unexpected sentences. Nevertheless, they can provide extra information to resolve visual ambiguities, e.g., distinguishing water from white vinegar, which both look transparent.

Descriptions: In the YouCook2 dataset, each procedure in a video corresponds to a sentence of natural language description annotated by a human. Different from transcripts, descriptions are much less dense with respect to time, and can be seen as highly constrained narratives, because human labor is applied to extract the essence of the corresponding procedures. Each piece of description is associated with the procedure it describes, because they are highly related semantically.

For each individual modality (description or transcripts), we aim to model a feature representation m, then fuse it with v and q to predict the answer A*. To achieve this goal, we make use of a hierarchical RNN structure: a lower-level RNN models the natural language words within each segment, and a higher-level RNN models the global feature of the video.

Experiments

First, we introduce the implementation details of the training process. Then some baseline models are described, followed by an analysis of the results. We also explore the benefit introduced by other modalities such as descriptions and transcripts. All experiments in this work are evaluated on both the multiple choice and K-Space evaluation metrics. In Tab. 2, only the multiple choice accuracy is provided for discussion. All other results on K-Space are in the supplementary material.

Implementation details

Our code is based on the PyTorch deep learning framework. ResNet is used to extract the visual features of 500 frames in each video, producing a 512-d vector per frame. Using embedding layers, the question words are transformed into 300-d vectors which are optimized during the training process. For all models involving RNNs in this work, we apply single-direction LSTMs [5] (an improved version of the vanilla RNN) with 512 hidden units. The Adam optimizer is used with a learning rate of 0.0001. We split the training/testing set according to the original YouCook2 dataset. All videos in the YouCook2 training set are used as training videos in our dataset. Therefore, there are 10,179 QA pairs in our training set, and the rest are treated as the testing set.

Baselines

We set up some baseline models which take no instructional information. In other words, only the original video is presented to the models, without temporal boundaries or descriptions.

Bare QA: First, we build the QA model which predicts answers based on questions only (without videos). For multiple choice, the answer is predicted in a similar way as Eq. 1: j* = argmax_{j=1,...,M} f(q, a_j). For K-Space, we adopt a similar formula as Eq. 2: A* = argmax_{j=1,...,K} g_j(q).

Naive RNN: The RNN is the base of most state-of-the-art ImageQA [2,12,10] and VideoQA [19,21] models. Instead of procedure segment features, it takes the features of the sampled frames {X_i}_{i=1}^N, where X_i ∈ R^d (N is the number of sampled frames).

Human quiz: Apart from using deep learning models to complete the VideoQA tasks, we also invite ten human annotators to perform a human test. First, they are asked to answer the questions without any other information, by guessing or using common sense. Second, they are allowed to watch the videos without audio. Finally, the audio is also turned on to correspond with transcripts. Details of the setting are in the supplementary material.
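A sketch of the three-way scorer f(v, q, a) from Eq. (1) used across these models, with the stated 300-d embeddings and 512-unit single-direction LSTMs; sharing one encoder between questions and answers, the placeholder vocabulary size, and the MLP width are our simplifications.

```python
import torch
import torch.nn as nn

class AnswerScorer(nn.Module):
    def __init__(self, vocab_size=10_000, d=512):   # vocab size is a placeholder
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 300)  # 300-d word vectors, trained jointly
        self.enc = nn.LSTM(300, d, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def encode(self, tokens):
        """tokens: (B, T) word indices -> final LSTM hidden state, (B, d)."""
        _, (h, _) = self.enc(self.embed(tokens))
        return h[-1]

    def forward(self, v, q_tokens, a_tokens):
        q = self.encode(q_tokens)
        a = self.encode(a_tokens)
        return self.mlp(torch.cat([v, q, a], dim=1)).squeeze(1)  # one score per answer
```

At test time, each of the M = 5 candidate answers is scored and the argmax is taken, as in Eq. (1).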
Results Analysis

Tab. 2 shows the experiment results for all models and baselines. We start with the comparison among the baseline models that are without temporal boundary information (i.e., Bare QA, Naive RNN, and MAC). As we can see from rows 2 to 4 of Tab. 2, the three baselines have very close overall accuracy. Though Naive RNN takes in the video stream, it cannot achieve better results than Bare QA. Therefore, we claim that, as the base of most state-of-the-art visual QA models, the RNN fails to extract meaningful visual information for instructional video reasoning. The reason is that it is difficult for an RNN to model complex relations due to its sequential structure. Another reason is that an RNN cannot capture long time dependencies of videos due to memory limitations, even for RNN variants such as LSTM and GRU. As the best model on CLEVR, MAC achieves the same overall accuracy as Bare QA on YouQuek, which demonstrates the special difficulty of video understanding compared with ImageQA.

Then we analyze the performance of the models proposed in Sec. 4.2, which incorporate temporal boundary information of instructional videos to boost performance. Recall that the temporal boundaries are provided by the ground truth in [23]. First, to evaluate the improvement introduced by the attention mechanism, we remove the question attention operation to formulate the models SEQ, GCN, and RGCN, the results of which are shown in rows 5 to 7 of Tab. 2. We can see from rows 5 to 10 of Tab. 2 that the margins gained by introducing attention are from 1.1% to 2.1%, which demonstrates that the question can guide the models to extract more meaningful features, and all these models outperform the baselines by a big margin. In particular, RGCN-SA achieves the highest overall accuracy of 40.3%, 5.5% higher than MAC, and SEQ-SA ranks second among the attention-based models with an overall accuracy of 37.3%. This demonstrates that the procedure segmentation helps models make better use of the video streams.

Finally, we investigate the performance of the attention-based models on the various question categories. The comparison between SEQ-SA and GCN-SA shows that GCN-SA achieves higher accuracy scores on "count" and "taste" questions, while SEQ-SA performs better on all other categories. Intuitively, "order" and "property" questions require temporal order information to be answered, because the questions usually contain sequence-related keywords, e.g., "before/after/between". A graph structure can hardly capture such ordering information. Nevertheless, the capability of modeling relations gives the graph structure a reasonably good performance, especially on "count" and "taste" questions, which challenge ordering less. Since both the sequence and graph models show advantages on different categories of questions, we take advantage of both models in the RGCN.

Multimodalities

Based on the temporal boundary annotations, we further explore other modalities. As described in Sec. 4.3, we experiment on two types of narratives: unconstrained transcripts and concentrated descriptions. Descriptions are already associated with video segments in the YouCook2 dataset, so we only need to align the transcripts with segments by selecting transcripts that lie between the temporal boundaries. Results are shown in Tab. 3.

As for the different modalities, we first compare visual information, transcripts, and descriptions separately. Although descriptions are a human-annotated, highly refined reconstruction of the content of the instructional videos, descriptions alone seem not helpful when compared with visual information. Transcripts, worse still, always decrease the performance. However, when narratives and visual information are combined, we see a significant increase in accuracy scores. SEQ-SA, GCN-SA, RGCN, and RGCN-SA all achieve their highest multiple choice accuracy when trained with both visual features and descriptions. SEQ with visual and description information also gets the highest K-Space accuracy compared to SEQ models trained on other modalities. However, transcripts still fail to provide as much valuable information as descriptions do, since the performance of models with visual and transcript information is worse than with visual plus description. Transcripts even have a negative effect on SEQ and RGCN, in that the multiple choice accuracy drops when transcripts are added to visual information. Possible reasons are that the transcripts are too dense, and the quality of auto-generated transcripts is uncontrollable. As for the different structures, we can see that our RGCN-SA still achieves the highest performance, while all attention models provide reasonable results.

Human quiz

In the human quiz part, participants are asked to do three sets of tests, namely guessing with common sense, answering with visual information, and answering with both visual and audio information. The results of the guessing step are shown in the top row of Tab. 2. As we can see, even without any video information, humans can achieve an accuracy as high as 52.8%.
An interesting fact here is that the human participants did a good job on the "when" questions, which is unexpected because one cannot know the exact time point of what is going to happen without watching the video. The reason is that humans have an intuition about which ingredients are more likely to be added first, or which step is less likely to happen at the beginning, owing to their common sense or life experience. Another support for the power of common sense is the high accuracy score for "taste" questions. For machines, taste can only possibly be learned from the relations between ingredients and correct answers. However, for human beings, the taste of different ingredients is already known from daily life.

Given visual information, human performance becomes almost perfect (97.0%), so the accuracy scores are not provided in the form of tables. This is reasonable because humans have a powerful visual understanding and comprehension system. Given that the accuracy is already very high and that the dataset was collected without audio information, the improvement is minor (97.7%) after adding audio information. It is worth mentioning that RGCN-SA outperforms the human baseline on "count" questions, yet there is still a long way to go in visual reasoning tasks.

Conclusion

In this paper, we emphasize reasoning on instructional videos. We construct the YouCook Question Answering (YouQuek) dataset, and propose three models with sequence (SEQ), graph (GCN), and fused (Recurrent Graph Convolutional Network, RGCN) structures to exploit the instructional information. An attention mechanism is applied to the proposed models to boost performance, and RGCN-SA achieves the best accuracy on both the multiple choice and K-Space evaluation metrics. The experimental results show that the proposed RGCN successfully fuses the order and relation information for modeling instructional videos. Also, multiple modalities for instructional videos are analyzed, showing that human-annotated temporal boundaries and descriptions are critical for instructional video reasoning.
4,838
1812.00344
2903401627
Video question answering (VideoQA) has gained increasing interest in recent years. Most of the current VideoQA tasks focus on direct facts in short videos @cite_10 @cite_21 @cite_3 @cite_22 @cite_11 . They all automatically generate QA pairs using a state-of-the-art question generation algorithm proposed in @cite_29 . However, such an auto-generation mechanism often produces QA pairs with poor quality and low diversity, though grammatically correct. Worse still, auto-generated QA pairs cannot involve reasoning. From the reasoning point of view, MovieQA @cite_31 uses human-annotated QA pairs on movies to evaluate automatic story comprehension. SVQA @cite_23 , following the steps of @cite_2 , extends the CLEVR dataset to the video domain. Yet, it still focuses on short-term relations, and does not fit natural settings.
{ "abstract": [ "", "We address the challenge of automatically generating questions from reading materials for educational practice and assessment. Our approach is to overgenerate questions, then rank them. We use manually written rules to perform a sequence of general purpose syntactic transformations (e.g., subject-auxiliary inversion) to turn declarative sentences into questions. These questions are then ranked by a logistic regression model trained on a small, tailored dataset consisting of labeled output from our system. Experimental results show that ranking nearly doubles the percentage of questions rated as acceptable by annotators, from 27 of all questions to 52 of the top ranked 20 of questions.", "", "", "Video question answering (VideoQA) always involves visual reasoning. When answering questions composing of multiple logic correlations, models need to perform multi-step reasoning. In this paper, we formulate multi-step reasoning in VideoQA as a new task to answer compositional and logical structured questions based on video content. Existing VideoQA datasets are inadequate as benchmarks for the multi-step reasoning due to limitations such as lacking logical structure and having language biases. Thus we design a system to automatically generate a large-scale dataset, namely SVQA (Synthetic Video Question Answering). Compared with other VideoQA datasets, SVQA contains exclusively long and structured questions with various spatial and temporal relations between objects. More importantly, questions in SVQA can be decomposed into human readable logical tree or chain layouts, each node of which represents a sub-task requiring a reasoning operation such as comparison or arithmetic. Towards automatic question answering in SVQA, we develop a new VideoQA model. Particularly, we construct a new attention module, which contains spatial attention mechanism to address crucial and multiple logical sub-tasks embedded in questions, as well as a refined GRU called ta-GRU (temporal-attention GRU) to capture the long-term temporal dependency and gather complete visual cues. Experimental results show the capability of multi-step reasoning of SVQA and the effectiveness of our model when compared with other existing models.", "When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover short-comings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.", "We introduce the MovieQA dataset which aims to evaluate automatic story comprehension from both video and text. The dataset consists of 14,944 questions about 408 movies with high semantic diversity. The questions range from simpler \"Who\" did \"What\" to \"Whom\", to \"Why\" and \"How\" certain events occurred. Each question comes with a set of five possible answers; a correct one and four deceiving answers provided by human annotators. 
Our dataset is unique in that it contains multiple sources of information -- video clips, plots, subtitles, scripts, and DVS. We analyze our data through various statistics and methods. We further extend existing QA techniques to show that question-answering with such open-ended semantics is hard. We make this data set public along with an evaluation benchmark to encourage inspiring work in this challenging domain.", "Video question answering is an important task toward scene understanding and visual data retrieval. However, current visual question answering works mainly focus on a single static image, which is distinct from the dynamic and sequential visual data in the real world. Their approaches cannot utilize the temporal information in videos. In this paper, we introduce the task of free-form open-ended video question answering. The open-ended answers enable wider applications compared with the common multiple-choice tasks in Visual-QA. We first propose a data set for open-ended Video-QA with the automatic question generation approaches. Then, we propose our sequential video attention and temporal question attention models. These two models apply the attention mechanism on videos and questions, while preserving the sequential and temporal structures of the guides. The two models are integrated into the model of unified attention. After the video and the question are encoded, the answers are generated wordwisely from our models by a decoder. In the end, we evaluate our models on the proposed data set. The experimental results demonstrate the effectiveness of our proposed model.", "" ], "cite_N": [ "@cite_22", "@cite_29", "@cite_21", "@cite_3", "@cite_23", "@cite_2", "@cite_31", "@cite_10", "@cite_11" ], "mid": [ "", "1531374185", "", "", "2897857500", "2561715562", "2190067570", "2751525844", "" ] }
How to Make a BLT Sandwich? Learning to Reason towards Understanding Web Instructional Videos
Humans can acquire knowledge by watching instructional videos online. A typical situation is that people confused by specific problems look for solutions in related instructional videos. For example, while learning to cook new dishes, they may wonder why a specific ingredient is added, or what happens between two procedures. Watching instructional videos can often clarify these questions and hence assist humans in accomplishing tasks. We hereby pose the question: can machines also understand instructional videos as humans do? This requires not only accurate recognition of objects, actions, and events but also higher-order inference of any relations therein, e.g., spatial, temporal, correlative and causal. Here we use higher-order inference to refer to inference that cannot be completed immediately by direct observation and thus requires stronger semantics for video modeling (see Fig. 1).

Current instructional video understanding studies focus on various tasks, e.g., reference resolution [6], procedure localization [23,1], dense captioning [24,16], activity detection [13,11] and visual grounding [7,18]. Despite the rich literature and applications, the question-answering (QA) task on instructional videos explored in our work is less developed; it acts as a proxy to benchmark higher-order inference in machine intelligence. Previous works, e.g., ImageQA [2,12,10] and VideoQA [15,21], also leverage the QA task as an automatic evaluation method, but QA on instructional videos has never been tackled before.

Observing the lack of a suitable dataset for instructional videos, we propose the YouCook Question Answering (YouQuek) dataset based on YouCook2 [23], the largest instructional video dataset. Our YouQuek dataset is the first reasoning-oriented dataset aimed at instructional videos. We employ question-answering as an intuitive interpretation of various styles of reasoning. Figure 1 presents two exemplar QA pairs from our dataset along with the corresponding human reasoning procedures involved in answering the questions. The YouQuek dataset contains 15,355 manually collected QA pairs that are divided into categories reflecting different reasoning styles, e.g., counting, ordering, comparison, and changing of properties.

On the newly built dataset, we explore two directions. The first concerns effective representations for modeling instructional videos. The videos in our consideration have an average length of 5.27 min; as instructional videos, they are structured, with step-by-step procedures constraining the understanding task. By modeling the temporal relations among different procedures, we expect to extract valuable information from the instructional videos, for which we study various model structures and propose a novel Recurrent Graph Convolutional Network (RGCN). The RGCN handles complex reasoning by message passing in the graph while maintaining the sequential ordering information through a supporting RNN. In this design, the graph and the RNN can boost each other, since information can be swapped between the two pathways. Second, we explore the use of different modalities in video modeling: apart from visual information, temporal boundaries, descriptions for each procedure, and transcripts are explored. In this direction, we want to test the effect of combining the various available annotation types with our video models for understanding instructional videos.
Given that modeling instructional videos from vision alone is hard, combining such information better approximates the human learning experience and, in turn, hints at how to devise better models for machine intelligence.

We conduct extensive experiments on the YouQuek dataset. In the ablation study, we find that the attention mechanism helps boost performance. Our proposed RGCN model outperforms all other models in overall accuracy, even without attention. From the multimodality perspective, modeling instructional videos using temporal boundaries together with descriptions helps extract more valuable information from videos. We also conduct a human quiz on the QAs in our dataset. Results show that machines still have a large gap to human performance: even without visual information, humans can answer some questions correctly using life experience or common sense, which suggests that incorporating external knowledge into video models will be helpful in future work.

Our main contributions are summarized as follows.
• We propose the YouQuek dataset, the first reasoning-oriented dataset for understanding instructional videos.
• We propose models with various structures for video modeling, in particular a novel RGCN model. Our RGCN outperforms all other models even without attention.
• We incorporate multi-modal information to perform extensive experiments on YouQuek, showing that descriptions can boost video understanding capability, while transcripts cannot.

The rest of the paper is organized as follows. We first discuss related work in Sec. 2 and introduce the proposed YouQuek dataset in Sec. 3. In Sec. 4, we set up a series of baseline models for the dataset and propose RGCN as a new model for instructional video reasoning. In Sec. 5, we present and discuss the experimental results. Conclusions are drawn in Sec. 6. The YouQuek dataset and our code for all methods will be released upon acceptance.

YouQuek Dataset
To validate the proposed task of instructional video reasoning, we introduce the YouQuek dataset, a reasoning-oriented video question answering dataset based on the YouCook2 dataset. The dataset contains 15,355 question-answer (QA) pairs in total. Tailored to our dataset, we annotate the QA pairs with six different tags, where each QA pair can be labeled with more than one tag. In the supplementary material, we show example QA pairs for each tag described below.

Counting: This tag annotates a QA pair that involves counting, e.g., the number of occurrences of certain actions or the number of certain ingredients. For "How many white ingredients are used in the recipe?", apart from counting, we also need to identify the target ingredients by their colors.

Time: Time is a distinguishing feature of videos compared to images. This category of questions is mainly about timing and duration. A typical example is, "Which one is faster: adding water or adding salt?". To answer it, we not only need to know how long both actions take, but also need to compare the durations.

Order: Long-term temporal order is a unique feature of instructional videos, because they come with step-by-step procedures and the ordering information matters. In YouCook2, for instance, the order of the procedures is critical to the success of a recipe. Therefore, we emphasize questions related to action order, e.g., "What happens before/after/between ...?", and "Does it matter to change the order of ... and ...?"
Taste: YouCook2 is an instructional cooking video dataset, so we introduce taste questions. This type of QA pair concerns the flavor and texture of the dish. Taste can also be related to reasoning, in that one can infer the taste from the ingredients used and the texture from the cooking methods applied. Note that we avoid subjective questions such as "Is this burger tasty?", which cannot be answered by reasoning but only by subjective inspection.

Complex: This tag represents a broader concept than all the other tags. By "complex", we emphasize a multi-step reasoning process instead of one-step reasoning. This type of question overlaps with all other types.

Property: Cooking usually involves changes to ingredients. The properties of ingredients, e.g., their shape, color, size, and location, may vary at different time points as the cooking procedure goes on. This type of question differs from "order" questions since we ask about specific ingredients rather than actions.

In Tab. 1, we contrast our dataset with other VideoQA datasets. Our dataset is unique in that we not only build it on instructional videos, but also focus on long-term ordering and higher-order inference.

QA collection
Many existing VideoQA datasets [21,19,22,20,25] adopt an automatic question-answer (QA) generation technique proposed by [4] to generate QA pairs from texts. However, QA pairs obtained via this method suffer from extremely low diversity, and automatic methods cannot generate questions involving complex reasoning, which goes against our goal in constructing the dataset. Therefore, we use Amazon Mechanical Turk (AMT) to collect question and answer pairs. For details about the collection of QA pairs and multiple-choice alternatives, please refer to the supplementary material.

Statistics
In Fig. 2a, we show the statistics of the six categories of questions. We have 7,200 complex reasoning QA pairs, comprising nearly half of our dataset. The other questions involve simpler reasoning procedures but still cannot be answered by direct observation of the videos. On average, we have 1.478 tags per QA pair, 2.289 words per answer, and 7.678 QA pairs per video. To illustrate our dataset better, we split the QA pairs into four categories with respect to answer type: "Yes/No" for answers containing yes or no; "Numeric" for answers containing numbers, mostly related to counting and time; "Single word" for one-word answers, excluding QA pairs in "Yes/No" and "Numeric"; and "Text" for multi-word answers, excluding QA pairs in "Yes/No" and "Numeric" (a sketch of this bucketing follows Table 1). Fig. 2b shows the distribution of the four answer types in our dataset.

Table 1: Comparison among different video question answering datasets. The first four columns are: "Inst." for whether it is based on instructional videos; "Natural" for whether the videos are of natural-world settings; "Reason" for whether the questions are related to reasoning; "Human" for whether the QA pairs are collected through human labor. [The check marks of the first four columns did not survive extraction; the remaining cells are: VTW [21]: 174,955 QA, 1.5 min per video, open-ended; Xu et al. [19]: 294,185 QA, 14.07 sec, K-Space; Zhu et al. [25]: 390,744 QA, >33 sec, fill-in-blank; Zhao et al. [22]: 54,146 QA, 3.10 sec, open-ended; SVQA [14]: 118,680 QA, K-Space; MovieQA [15]: 6,462 QA, rest of the row truncated in the source.]
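The answer-type bucketing above can be made concrete with a short rule-based sketch; a minimal Python version follows (the authors do not give the exact rules, so the precedence below is our assumption):

```python
import re

def answer_type(answer: str) -> str:
    """Bucket an answer into the four statistics categories; the Yes/No and
    Numeric checks take precedence over the word-count buckets (assumed)."""
    tokens = answer.lower().split()
    if 'yes' in tokens or 'no' in tokens:
        return 'Yes/No'
    if any(re.fullmatch(r'\d+(\.\d+)?', t) for t in tokens):
        return 'Numeric'
    return 'Single word' if len(tokens) == 1 else 'Text'
```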
Instructional Video Reasoning
With the newly collected YouQuek dataset, we perform reasoning tasks by answering questions on instructional videos. We first formally define our problem in Sec. 4.1. Then, in Sec. 4.2, based on the attention mechanism, we design a sequential model (SEQ-SA) and a graph convolutional model (GCN-SA). We also propose the Recurrent Graph Convolutional Network (RGCN), which captures both temporal order and complex relations to overcome the limitations of SEQ-SA and GCN-SA. In Sec. 4.3, additional modalities such as descriptions and transcripts are added to the reasoning model to gain better performance.

Problem Formalization
Multiple Choice: Since the questions in the YouQuek dataset have alternative choices, we can use a three-way score function $f(v, q, a)$ to evaluate each alternative and choose the one with the highest score as the correct answer:

$j^* = \arg\max_{j=1,\dots,M} f(v, q, a_j)$,   (1)

where M = 5 in our case, and v, q, a represent the features of the video, question and answer, respectively. In this work, q and a are the final hidden states obtained by encoding the question and answer via RNNs. Here, $f(\cdot,\cdot,\cdot)$ denotes an MLP whose input is the concatenation of v, q, and a, and whose output is a single neuron scoring how likely the given answer a is the correct one.

K-Space: As in other visual QA problems, the reasoning task can also be formulated as a classification problem over the answer space, where the alternative (negative) answers are all the other answers in the training set. Here, K distinct answers are assigned to K categories $\{A_i\}_{i=1}^K$, and an MLP with K output neurons predicts the correct answer $A^*$ by taking in v and q:

$A^* = \arg\max_{j=1,\dots,K} g_j(v, q)$,   (2)

where $g_j$ denotes the output score of the j-th neuron.

Models
In this section, we focus on the design of video models that capture procedure relations in instructional events; the generated video feature v is then used for question answering. First, we describe how we pre-process the videos. Then, we introduce the architectures of the proposed models. In particular, we propose a novel RGCN architecture that performs message passing between two pathways, an RNN and a GCN, in order to capture both time-series and global properties when modeling instructional videos.

Pre-processing: The videos in our consideration have an average length of 5.27 minutes, which requires us to process them into more tractable representations before any sophisticated modeling. Following [23], we define a procedure as the sequence of necessary steps comprising a complex instructional event, and segment each video into N procedure segments (see Fig. 3a). To directly benchmark reasoning ability, we use the ground truth provided by [23] to avoid errors caused by intermediate processing; note that one can apply the method developed in [23] for automatic segmentation. The frames within each segment are sampled, their features extracted by ResNet [3] and encoded by an RNN model. We thereby obtain the features of the procedure segments $\{X_i\}_{i=1}^N$, $X_i \in \mathbb{R}^d$, and use them for relation modeling.

SEQ-SA: We first propose an attention-based RNN model (see Fig. 3b for an example with N = 4) to build the video representation v, where the encoded question feature attends to the video features at all time steps. The similarity between the question feature q and segment feature $X_i$ is their dot product, followed by a softmax normalization:

$a_i = \frac{\exp(q^\top X_i)}{\sum_{j=1}^{N} \exp(q^\top X_j)}$.

Then we multiply each $X_i$ by $a_i$ to obtain the question-attended video feature: $\tilde{X}_i = a_i X_i$. Finally, we feed $\tilde{X}_i$ into an RNN model, whose final hidden state $h_N$ is taken as the video feature representation v.
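A minimal PyTorch sketch of the SEQ-SA video pathway follows (module and variable names are ours; we assume the question feature and segment features share the dimension d so that the dot product is defined):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqSA(nn.Module):
    """Question-guided attention over segment features, followed by an RNN;
    the final hidden state serves as the video representation v."""
    def __init__(self, d=512):
        super().__init__()
        self.rnn = nn.LSTM(d, d, batch_first=True)

    def forward(self, X, q):
        # X: (B, N, d) procedure-segment features; q: (B, d) question feature.
        a = F.softmax(torch.einsum('bd,bnd->bn', q, X), dim=1)  # a_i from q^T X_i
        X_att = a.unsqueeze(-1) * X        # question-attended segments X~_i
        _, (h_N, _) = self.rnn(X_att)      # h_N: (1, B, d)
        return h_N.squeeze(0)              # video representation v: (B, d)
```

The returned v is then concatenated with q and a (multiple choice) or paired with q (K-Space) and fed into the answer MLPs of Eqs. (1) and (2).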
[Figure 3 caption: In (a), we demonstrate the pre-processing procedure with an example video on how to make hash brown potatoes (YouTube ID: kj5y 71bsJM), illustrating the basic concepts of instructional videos in the YouCook2 dataset. Temporal boundaries are the human-annotated start/end time stamps of a procedure, as defined in [23]; videos are segmented into procedures by these boundaries. Descriptions are also annotated by humans, one per procedure. Transcripts are auto-generated by speech recognition on YouTube. An example QA pair for the video in (a) is, Q: "How many actions involving physical changes to potatoes are done before adding salt?" A: "2.". In (b) and (c), the question feature attends to each segment. In (d), we illustrate the structure of our proposed RGCN model, where the GCN interacts with the RNN via a "swap" operation that takes in the RNN's hidden state $h_{t-1}$ and outputs the graph node $S^{t-1}_t$ of the GCN; we zoom in on the first swap operation to provide an intuitive visualization.]

GCN-SA: We consider a fully-connected graph (see Fig. 3c) to model complex relations among the procedure segments. Although the time dependencies defined by the original video are omitted, different edges in the graph can mine different relations for various reasoning tasks. We use a multi-layer GCN for this purpose. We define $\{S^j_i\}$, $i = 1,\dots,N$, $j = 1,\dots,M$, with $S^j_i \in \mathbb{R}^d$, as the graph nodes, where N is the number of nodes within one layer and M is the number of layers. We initialize the first-layer nodes $\{S^1_i\}_{i=1}^N$ with the corresponding segment features $\{X_i\}_{i=1}^N$. We adopt the same GCN structure as described in [17]:

$Z = \mathrm{ReLU}(G S W)$,   (3)

where $G \in \mathbb{R}^{N \times N}$ represents the adjacency graph, $S \in \mathbb{R}^{N \times d}$ denotes the concatenation of all node features $\{S_i\}_{i=1}^N$ in a given layer, and $W \in \mathbb{R}^{d \times d}$ is the weight matrix, which differs for each layer. Each element $G_{ij}$ of G is the dot-product similarity $S_i^\top S_j$. Three GCN layers are used in this work, where the output of the previous layer serves as the input of the next. To apply the attention mechanism, we add an additional node in the last layer of the GCN to represent the question feature q; this question node is connected to all other graph nodes $\{S^M_i\}_{i=1}^N$ through N edges and attends to each graph node through the weights on those edges. As in SEQ-SA, the weights between q and $\{S^M_i\}_{i=1}^N$ are the dot products of the corresponding node pairs, followed by a softmax normalization. Finally, we use average pooling to compress the output of the last layer, $Z \in \mathbb{R}^{N \times d}$, into $v \in \mathbb{R}^d$.
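A sketch of one such graph layer in PyTorch (Eq. 3) follows, with the adjacency built from dot-product similarities; the softmax row-normalization of G is our assumption, since the paper does not state how G is normalized:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One layer of Eq. (3): Z = ReLU(G S W), with G_ij = S_i^T S_j."""
    def __init__(self, d=512):
        super().__init__()
        self.W = nn.Linear(d, d, bias=False)  # per-layer weight matrix W

    def forward(self, S):
        # S: (B, N, d) node features of the current layer.
        G = torch.bmm(S, S.transpose(1, 2))      # dot-product adjacency (B, N, N)
        G = F.softmax(G, dim=-1)                 # row normalization (assumed)
        return F.relu(torch.bmm(G, self.W(S)))   # next-layer nodes (B, N, d)
```

Stacking three such layers, initializing the first layer's nodes with the segment features, and average-pooling the last layer's output reproduces the GCN-SA backbone described above.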
RGCN: Since the aforementioned GCN-SA is unable to capture the temporal order of the video features [17], and SEQ-SA cannot model relations between segments with long time spans, we propose a novel Recurrent Graph Convolutional Network (RGCN) architecture (see Fig. 3d) to overcome these limitations. The RGCN is a recurrent model consisting of two pathways, an RNN and a GCN, which interact mainly through a swap operation (see Fig. 3d). The details are as follows. The RNN pathway with N time steps takes in the segment feature $X_t$ at each time step. The GCN pathway has N layers, each containing N graph nodes; note that the GCN has the same number of layers as the RNN pathway has time steps. We adopt the same GCN architecture as described for the GCN-SA model, except that a recurrent computation paradigm is applied, in which the weights W are shared among all layers. The computation within the RNN memory cell at each time step and the computation of each GCN layer are performed alternately. At each time step t, we first concatenate the segment feature $X_t$ and the feature of node $S^{t-1}_t$ of the GCN, and use the result as the input to the RNN memory cell at the t-th time step. Following [5], we update the hidden state $h_t$ of the RNN:

$h_t = \mathrm{RNN}([X_t, S^{t-1}_t], h_{t-1})$.   (4)

Then we replace the GCN's graph node $S^t_t$ with the updated hidden state $h_t$. This swap operation acts as a bridge between the RNN and the GCN for message passing. Finally, the (t+1)-th GCN layer takes all $\{S^t_i\}_{i=1}^N$ as input to compute the responses $\{S^{t+1}_i\}_{i=1}^N$:

$Z^{t+1} = \mathrm{ReLU}(G Z^t W)$,   (5)

where $Z^t$ is the concatenation of $\{S^t_i\}_{i=1}^N$. We take the final hidden state $h_N$ of the RNN as the video representation v. Additionally, we extend the proposed RGCN with the attention mechanism: the two pathways correspond to the SEQ and GCN models, so we simply adopt the way attention is cast on each pathway and obtain RGCN-SA.
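A compact PyTorch sketch of the recurrent swap between the two pathways (Eqs. 4 and 5) follows; the names, the LSTM cell choice, and the adjacency normalization are our assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RGCN(nn.Module):
    """At step t the RNN cell consumes [X_t, S^{t-1}_t] (Eq. 4), its hidden
    state is swapped back into node t, and a shared-weight GCN layer then
    updates all nodes (Eq. 5). The final hidden state is the video feature."""
    def __init__(self, d=512):
        super().__init__()
        self.cell = nn.LSTMCell(2 * d, d)
        self.W = nn.Linear(d, d, bias=False)   # W shared across all GCN layers

    def forward(self, X):
        # X: (B, N, d) procedure-segment features.
        B, N, d = X.shape
        S = X                                   # first-layer graph nodes
        h = X.new_zeros(B, d)
        c = X.new_zeros(B, d)
        for t in range(N):
            # Eq. (4): h_t = RNN([X_t, S^{t-1}_t], h_{t-1})
            h, c = self.cell(torch.cat([X[:, t], S[:, t]], dim=1), (h, c))
            # swap: replace node t with the updated hidden state h_t
            S = torch.cat([S[:, :t], h.unsqueeze(1), S[:, t + 1:]], dim=1)
            # Eq. (5): next GCN layer with dot-product adjacency G
            G = F.softmax(torch.bmm(S, S.transpose(1, 2)), dim=-1)
            S = F.relu(torch.bmm(G, self.W(S)))
        return h                                # video representation v
```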
Multiple modalities
Besides videos and questions, we further investigate how much benefit can be obtained from other modalities such as narratives, which are very common in instructional videos. We are interested in two types of narratives: transcripts and descriptions.

Transcripts: The audio signal is an important modality for videos. In our dataset, the valuable audio information consists of the chefs speaking; we therefore substitute the audio with the auto-generated YouTube transcripts. Transcripts, which can be seen as describing the corresponding procedures, are highly unstructured, noisy, and misaligned narratives [6], in that chefs may talk about things unrelated to the cooking procedure, or the YouTube speech recognition may produce unexpected sentences. Nevertheless, they can provide extra information to resolve visual ambiguities, e.g., distinguishing water from white vinegar, which both look transparent.

Descriptions: In the YouCook2 dataset, each procedure in a video corresponds to a sentence of natural-language description annotated by a human. Different from transcripts, descriptions are much less dense with respect to time and can be seen as highly constrained narratives, because human labor is applied to extract the essence of the corresponding procedures. Each piece of description is associated with the procedure it describes, as they are highly related semantically.

For each individual modality (description or transcripts), we aim to model a feature representation m, then fuse it with v and q to predict the answer $A^*$. To achieve this, we use a hierarchical RNN structure: a lower-level RNN models the natural-language words within each segment, and a higher-level RNN models the global feature of the video.

Experiments
First, we introduce the implementation details of the training process. Then some baseline models are described, followed by an analysis of the results. We also explore the benefit introduced by other modalities, namely descriptions and transcripts. All experiments in this work are evaluated on both the multiple-choice and K-Space metrics; in Tab. 2, only multiple-choice accuracy is provided for discussion, and all other K-Space results are in the supplementary material.

Implementation details
Our code is based on the PyTorch deep learning framework. ResNet is used to extract visual features from 500 frames of each video, producing a 512-d vector. Using embedding layers, the question words are transformed into 300-d vectors which are optimized during training. For all models involving RNNs in this work, we apply single-direction LSTMs [5] (an improved version of the vanilla RNN) with 512 hidden units. The Adam optimizer is used with a learning rate of 0.0001. We split the training/testing sets according to the original YouCook2 dataset: all videos in the YouCook2 training set are used as training videos in our dataset. Therefore, there are 10,179 QA pairs in our training set, and the rest are treated as the testing set.

Baselines
We set up several baseline models that take no instructional information; in other words, only the original video is presented to the models, without temporal boundaries or descriptions.

Bare QA: First, we build a QA model that predicts answers based on questions only (without videos). For multiple choice, the answer is predicted in a way similar to Eq. 1: $j^* = \arg\max_{j=1,\dots,M} f(q, a_j)$. For K-Space, we adopt a formula similar to Eq. 2: $A^* = \arg\max_{j=1,\dots,K} g_j(q)$.

Naive RNN: The RNN is the basis of most state-of-the-art ImageQA [2,12,10] and VideoQA [19,21] models. Instead of procedure segments, it takes in the features of sampled frames $\{X_i\}_{i=1}^N$, where $X_i \in \mathbb{R}^d$ (N is the number of sampled frames).

Human quiz: Apart from using deep learning models for the VideoQA tasks, we also invite ten human annotators to perform a human test. First, they are asked to answer the questions without any other information, by guessing or using common sense. Second, they are allowed to watch the videos without audio. Finally, the audio is also turned on to correspond with the transcripts. Details of the setting are in the supplementary material.

Results Analysis
Tab. 2 shows the experimental results for all models and baselines. We start with a comparison among the baseline models that lack temporal boundary information (i.e., Bare QA, Naive RNN and MAC). As rows 2 to 4 of Tab. 2 show, the three baselines have very close overall accuracy. Though Naive RNN takes in the video stream, it cannot achieve better results than Bare QA. Therefore, we argue that the RNN, the basis of most state-of-the-art visual QA models, fails to extract meaningful visual information for instructional video reasoning. One reason is that it is difficult for an RNN to model complex relations due to its sequential structure; another is that an RNN cannot capture long time dependencies in videos due to memory limitations, even for RNN variants such as LSTM and GRU. As the best model on CLEVR, MAC achieves the same overall accuracy as Bare QA on YouQuek, which demonstrates the particular difficulty of video understanding compared with ImageQA.

Then we analyze the performance of the models proposed in Sec. 4.2, which incorporate the temporal boundary information of instructional videos to boost performance; recall that the temporal boundaries are provided by the ground truth of [23]. First, to evaluate the improvement introduced by the attention mechanism, we remove the question attention operation to obtain the models SEQ, GCN, and RGCN, whose results are shown in rows 5 to 7 of Tab. 2.
From rows 5 to 10 of Tab. 2, we can see that the margins gained by introducing attention range from 1.1% to 2.1%, which demonstrates that the question can guide the models to extract more meaningful features; all these models outperform the baselines by a big margin. In particular, RGCN-SA achieves the highest overall accuracy of 40.3%, 5.5% higher than MAC, and SEQ-SA ranks second among the attention-based models with an overall accuracy of 37.3%. This demonstrates that the procedure segmentation helps the models make better use of the video streams. Finally, we investigate the performance of the attention-based models across question categories. The comparison between SEQ-SA and GCN-SA shows that GCN-SA achieves higher accuracy on "count" and "taste" questions, while SEQ-SA performs better on all other categories. Intuitively, "order" and "property" questions require temporal order information to be answered, because they usually contain sequence-related keywords, e.g., "before/after/between", and a graph structure can hardly capture such ordering information. Nevertheless, its capability for modeling relations gives the graph structure reasonably good performance, especially on "count" and "taste" questions, which rely less on ordering. Since the sequence and graph models show advantages on different categories of questions, we combine the advantages of the two in the proposed RGCN.

Multimodalities
Based on the temporal boundary annotations, we further explore other modalities. As described in Sec. 4.3, we experiment on two types of narratives: unconstrained transcripts and concentrated descriptions. Descriptions are already associated with video segments in the YouCook2 dataset, so we only need to align the transcripts with the segments by selecting the transcripts that lie between the temporal boundaries. Results are shown in Tab. 3. Regarding the different modalities, we first compare visual information, transcripts and descriptions separately. Although descriptions are human-annotated, highly refined reconstructions of the content of instructional videos, descriptions alone appear no more helpful than visual information; transcripts, worse still, always decrease performance. However, when narratives and visual information are combined, we see a significant increase in accuracy. SEQ-SA, GCN-SA, RGCN and RGCN-SA all achieve their highest multiple-choice accuracy when trained with both visual features and descriptions, and SEQ with visual and description information also attains the highest K-Space accuracy among SEQ models trained on other modalities. Transcripts, however, still fail to provide as much valuable information as descriptions, since models with visual plus transcript information perform worse than those with visual plus description information. Transcripts even have a negative effect on SEQ and RGCN, in that the multiple-choice accuracy drops when transcripts are added to visual information. Possible reasons are that the transcripts are too dense and that the quality of auto-generated transcripts is uncontrollable. Regarding the different structures, our RGCN-SA still achieves the highest performance, while all attention models provide reasonable results.

Human quiz
In the human quiz, participants are asked to complete three sets of tests: guessing with common sense, answering with visual information, and answering with both visual and audio information. The results of the guessing step are shown in the top row of Tab. 2. As we can see, even without any video information, humans can achieve an accuracy as high as 52.8%.
An interesting fact is that the human participants did well on the "when" questions, which is unexpected because one cannot know the exact time point of an event without watching the video. The reason is that humans have an intuition about which ingredient is more likely to be added first, or which step is less likely to happen at the beginning, owing to common sense or life experience. Another indication of the power of common sense is the high accuracy on "taste" questions. For machines, taste can only be learned from the relations between ingredients and correct answers; for human beings, the tastes of different ingredients are already known from daily life. Given visual information, human performance becomes almost perfect (97.0%), so these accuracy scores are not tabulated. This is reasonable because humans have a powerful visual understanding and comprehension system. Given that the accuracy is already very high and that the dataset was collected without audio information, the improvement after adding audio is minor (97.7%). It is worth mentioning that RGCN-SA outperforms the human baseline on "count" questions, yet there is still a long way to go in visual reasoning tasks.

Conclusion
In this paper, we emphasize reasoning on instructional videos. We construct the YouCook Question Answering (YouQuek) dataset, and propose three models with sequence (SEQ), graph (GCN), and fused (Recurrent Graph Convolutional Network, RGCN) structures to exploit the instructional information. The attention mechanism is applied to the proposed models to boost performance, and RGCN-SA achieves the best accuracy on both the multiple-choice and K-Space evaluation metrics. The experimental results show that the proposed RGCN successfully fuses ordering and relational information for modeling instructional videos. We also analyze multiple modalities for instructional videos, showing that human-annotated temporal boundaries and descriptions are critical for instructional video reasoning.
4,838
1811.12772
2902377675
Current Visual Question Answering (VQA) systems can answer intelligent questions about 'Known' visual content. However, their performance drops significantly when questions about visually and linguistically 'Unknown' concepts are presented during inference ('Open-world' scenario). A practical VQA system should be able to deal with novel concepts in real-world settings. To address this problem, we propose an exemplar-based approach that transfers learning (i.e., knowledge) from previously 'Known' concepts to answer questions about the 'Unknown'. We learn a highly discriminative joint embedding space, where visual and semantic features are fused to give a unified representation. Once novel concepts are presented to the model, it looks for the closest match from an exemplar set in the joint embedding space. This auxiliary information is used alongside the given Image-Question pair to refine visual attention in a hierarchical fashion. Since handling the high-dimensional exemplars on large datasets can be a significant challenge, we introduce an efficient matching scheme that uses a compact feature description for search and retrieval. To evaluate our model, we propose a new split for VQA, separating Unknown visual and semantic concepts from the training set. Our approach shows significant improvements over state-of-the-art VQA models on the proposed Open-World VQA dataset and on standard VQA datasets.
VQA is an AI-complete task that requires high-level multimodal reasoning in both the visual and language domains. The recent literature in VQA mostly focuses on optimal mechanisms to fuse multimodal cues. A simple fusion approach was used by Lu et al. @cite_23 , which progressively combines multimodal features using concatenation and sum-pooling operations. Xu et al. @cite_28 proposed a recurrent neural network to generate intelligent image captions by considering the previously predicted words and the targeted visual content. Bilinear models provide an effective way to model complex interactions, but impose restrictions due to computational intractability for high-dimensional inputs. Efficient versions of bilinear pooling were used in @cite_2 @cite_24 to learn second-order interactions between visual and language features. To further speed up the computations, Ben-younes et al. @cite_25 introduced a Tucker fusion scheme that first projects the individual modalities to low dimensions and subsequently learns full bilinear relationships. Recently, Farazi et al. @cite_8 fused complementary object-level features alongside image-level descriptors to achieve superior performance.
{ "abstract": [ "Existing attention mechanisms either attend to local image grid or object level features for Visual Question Answering (VQA). Motivated by the observation that questions can relate to both object instances and their parts, we propose a novel attention mechanism that jointly considers reciprocal relationships between the two levels of visual details. The bottom-up attention thus generated is further coalesced with the top-down information to only focus on the scene elements that are most relevant to a given question. Our design hierarchically fuses multi-modal information i.e., language, object- and gird-level features, through an efficient tensor decomposition scheme. The proposed model improves the state-of-the-art single model performances from 67.9 to 68.2 on VQAv1 and from 65.3 to 67.4 on VQAv2, demonstrating a significant boost.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "", "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling \"where to look\" or visual attention, it is equally important to model \"what words to listen to\" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3 to 60.5 , and from 61.6 to 63.3 on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1 for VQA and 65.4 for COCO-QA.", "", "Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high level associations between question meaning and visual concepts in the image, but they suffer from huge dimensionality issues. We introduce MUTAN, a multimodal tensor-based Tucker decomposition to efficiently parametrize bilinear interactions between visual and textual representations. Additionally to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. With MUTAN, we control the complexity of the merging scheme while keeping nice interpretable fusion relations. We show how our MUTAN model generalizes some of the latest VQA architectures, providing state-of-the-art results." ], "cite_N": [ "@cite_8", "@cite_28", "@cite_24", "@cite_23", "@cite_2", "@cite_25" ], "mid": [ "2799345082", "2950178297", "", "2963668159", "", "2616125804" ] }
From Known to the Unknown: Transferring Knowledge to Answer Questions about Novel Visual and Semantic Concepts
Machine vision algorithms have significantly advanced various industries such as internet commerce, personal digital assistants and web search. A major component of machine intelligence is how well it can comprehend visual content. A Visual Turing Test, assessing a machine's ability to understand visual content, is performed with the Visual Question Answering (VQA) task, in which machine vision algorithms are expected to answer intelligent questions about visual scenes. The current VQA paradigm is not without its grave weaknesses.

[Footnote 1: Dataset and models will be available at: TBA]

[Figure 1. Open World VQA for Novel Concepts: Our model learns to represent multi-modal information (Image (I)-Question (Q) pairs) as a joint embedding (Φ). Once presented with novel concepts, the proposed model learns to effectively use past knowledge accumulated over the training set to answer intelligent questions.]

One key limitation is that the questions asked at inference time only relate to concepts that have already been seen during the training stage (the closed-world assumption). In the real world, humans can easily reason about visually and linguistically Unknown concepts based on previous knowledge about the Known. For instance, having seen visual examples of 'lion' and 'tiger', a person can recognize an unknown 'liger' by associating visual similarities with a new compositional semantic concept, and can answer intelligent questions about its count, visual attributes, state and actions. In order to design machines that mimic human visual comprehension abilities, we must impart lifelong learning mechanisms that allow them to accumulate and use past knowledge to relate Unknown concepts.

In this paper, we introduce a novel VQA problem setting that evaluates models in a dynamic 'Open-World' scenario where previously unknown concepts show up during the test phase (Fig. 1). An open-world VQA setting requires a vision system to acquire knowledge over time and later use it intelligently to answer complex questions about Unknown concepts for which no linguistic or visual examples were available during training. Existing VQA systems lack this capability, as they use a 'fixed model' to acquire learning and envisage answers without explicitly considering closely related examples from a knowledge base. This can lead to 'catastrophic forgetting' [18] as the object/question set is altered with updated categories. Here, we develop a flexible knowledge base (comprising only the training examples) that stores the joint embeddings of visual and textual features; our proposed approach learns to utilize this past knowledge to answer questions about unknown concepts.

Related to our work, we note a few recent efforts in the literature that aim to extend VQA beyond already known concepts [27,1,23,20,2]. A major limitation of these approaches is that they introduce novel concepts only on the language side (i.e., new questions/answers), either to re-balance the split or to prevent the model from cheating by removing biases [1,23,2]. Further, they rely on external data sources (both visual and semantic) and consider training splits that contain visual instances of 'novel objects', thereby violating the unknown assumption [20,23]. To bridge this gap, we propose a new Open World VQA (OW-VQA) protocol for novel concepts based on the MS-COCO and VQAv1-v2 datasets. Our major contributions are:

• We reformulate VQA in a transfer learning setup that uses closely related Known instances from the exemplar set to reason about Unknown concepts.
• We present a novel network architecture and training schedule that maintain a knowledge base of exemplars in a rich joint embedding space aggregating visual and semantic information.
• We propose a hierarchical search and retrieval scheme to enable efficient exemplar matching in a high-dimensional joint embedding space.
• We propose a new OW-VQA split to enable impartial evaluation of VQA algorithms in a real-world scenario, and report impressive improvements over recent approaches with our proposed model.

Methods
Given a question Q about an image I, an AI agent designed for the VQA task predicts the answer $a^*$ based on the learning acquired from the training examples. This task can be formulated as:

$a^* = \arg\max_{\hat{a} \in D} P(\hat{a} \mid Q, I; \theta)$,   (1)

where θ denotes the model parameters and $a^*$ is predicted from the dictionary of possible answers D. An ideal VQA system should effectively model the complex interactions between the language and visual domains to acquire useful knowledge and use it to answer newly presented questions at test time. Towards this end, we propose a framework to answer questions about novel concepts (Fig. 2). The overall pipeline is based on four main steps. (1) Joint Feature Embedding: the given IQ pair is processed to extract visual features v and language features q, which are jointly embedded into a common space through multi-modal fusion. (2) Exemplar Matching: the joint embedding of the given IQ pair is matched against a stored exemplar set to retrieve the closest known embedding. (3) Hierarchical Attention: the network selectively attends to visual scene details using the joint feature embeddings of the given inputs and of the exemplars (the outputs of steps 1 and 2, respectively), which ensures that the model learns to identify the salient features needed to answer a specific question. (4) Answer Prediction: in the final stage, the refined joint embedding is used by the model to predict the correct answer $a^*$ from the answer set A, minimizing a cross-entropy loss.

Joint Feature Embedding
From the given image I, the $n_v$-dimensional visual feature embedding $v \in \mathbb{R}^{n_v}$ is extracted from the last convolutional layers, just before the global pooling and classification layer, of the feature extraction model (i.e., ResNet [12]). The language feature embedding $q \in \mathbb{R}^{n_q}$ is generated from Q by first encoding the question as a one-hot vector representation and then embedding it into a vector space using Gated Recurrent Units (GRUs) [6,9]. In order to predict a correct answer, a VQA model needs to generate a joint embedding $e = \Phi(v, q; \tau) \in \mathbb{R}^{n_e}$. A naive approach that models the visual-semantic interactions with a full tensor $\tau \in \mathbb{R}^{n_q \times n_v \times n_e}$ would require an unrealistic number of trainable parameters (e.g., ∼9.83 billion for our baseline model). To reduce the dimensionality of the tensor, we use the Tucker decomposition [25], which can be seen as a high-order principal component analysis; this technique has proven effective for embedding visual and textual features in VQA [8,5]. It approximates τ as follows:

$\tau = \sum_{i=1}^{t_q} \sum_{j=1}^{t_v} \sum_{k=1}^{t_e} \omega_{ijk}\, \tau_q^i \circ \tau_v^j \circ \tau_e^k = \omega \times_1 \tau_q \times_2 \tau_v \times_3 \tau_e = [[\omega; \tau_q, \tau_v, \tau_e]]$,   (2)

where $\times_i$ denotes the n-mode product of a tensor with a matrix and ∘ denotes the outer vector product. Eq. 2 states that the tensor τ is decomposed into a core tensor $\omega \in \mathbb{R}^{t_q \times t_v \times t_e}$ and orthonormal factor matrices $\tau_q \in \mathbb{R}^{n_q \times t_q}$, $\tau_v \in \mathbb{R}^{n_v \times t_v}$, $\tau_e \in \mathbb{R}^{n_e \times t_e}$. Intuitively, by setting $t_q < n_q$, $t_v < n_v$ and $t_e < n_e$ for the factor matrices, one can approximate τ with only a fraction of the originally required number of trainable parameters.
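A minimal PyTorch sketch of this Tucker-style fusion follows; the projection dimensions are taken from the experimental setup later in the paper (t_q = t_v = 310, t_e = 510), while n_e and all names are our assumptions:

```python
import torch
import torch.nn as nn

class TuckerFusion(nn.Module):
    """Project q and v onto t_q-/t_v-dim factors, mix them through the core
    tensor ω, and map the result to the n_e-dim joint embedding (Eq. 2)."""
    def __init__(self, n_q=2400, n_v=2048, n_e=510, t_q=310, t_v=310, t_e=510):
        super().__init__()
        self.Tq = nn.Linear(n_q, t_q, bias=False)   # factor matrix τ_q
        self.Tv = nn.Linear(n_v, t_v, bias=False)   # factor matrix τ_v
        self.Te = nn.Linear(t_e, n_e, bias=False)   # factor matrix τ_e
        self.core = nn.Parameter(0.01 * torch.randn(t_q, t_v, t_e))  # core ω

    def forward(self, q, v):
        # q: (B, n_q) question feature; v: (B, n_v) visual feature.
        qt, vt = self.Tq(q), self.Tv(v)
        # e_k = Σ_ij ω_ijk (τ_q q)_i (τ_v v)_j, i.e. the mode products of Eq. (2)
        e = torch.einsum('bi,bj,ijk->bk', qt, vt, self.core)
        return self.Te(e)                            # joint embedding: (B, n_e)
```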
The output embedding e from the Tucker decomposition effectively captures the interactions between the semantic and visual features of a given Image-Question-Answer (IQA) triplet. Such a joint embedding is specific to the given IQ pair, because the same visual feature associated with different semantics (and vice versa) results in a different joint embedding. For example, given an image capturing children playing in the backyard, the questions 'How many children are in the picture?' and 'Are the children happy?' require two very different joint embeddings e even though they use the same visual feature v. Building on this rich joint embedding, we develop a transfer learning module based on exemplars.

Exemplar-based Learning Transfer
Given a question about an Unknown concept, our model identifies a similar joint embedding of Known concepts from the training set. Since visual/semantic examples of unknown concepts are not available during training, we first learn a generic attention function A that can transfer knowledge from Known concepts to Unknown ones. The attention function is learned on the training set, where it identifies useful features from closely related exemplars to answer questions. The function A is agnostic to specific IQ pairs and provides a generalizable mechanism for identifying relevant information in related examples. Therefore, at inference time, we use the same exemplar-based attention function to obtain refined attention maps from the closely related joint embeddings of known concepts. We design the training schedule in two stages to facilitate knowledge transfer. During the first stage, only the visual-semantic embedding part of the network is trained end-to-end, and the joint feature embedding tensor e is stored in memory $\xi \in \mathbb{R}^{d \times n}$, where n is the number of training IQA triplets and d denotes the embedding dimension. In the second stage, both the visual-semantic embedding and the exemplar-embedding segments of the model are trained end-to-end, where the model performs a nearest-neighbour (NN) search on ξ to find the most similar joint embedding $e_\xi$. Further, the network learns to use the exemplar embedding to refine the attention on visual scene details. This can be represented as:

$v_E = A(v, e_E)$, where $e_E = N(e, \xi, \kappa)$,   (3)

where $e_E$ is the exemplar embedding found via a nearest-neighbour search N over a set of compact embeddings κ. There are two main motivations for not performing the NN search directly on ξ and instead using κ. Firstly, searching in the full joint embedding space would allow the model to overfit when looking for the closest match; the reduced dimensionality of the compact representation avoids this. Secondly, storing and performing the NN search directly on the joint embedding exemplar space is extremely memory- and time-intensive. For example, if visual features are extracted from the second-to-last convolutional layer of ResNet152 [12] for evaluation on the VQAv2 [11] dataset, ξ is $\mathbb{R}^{n \times d}$-dimensional with n ≈ 400K training examples and $d = t_e \times G$, where the grid locations are $G = 14 \times 14$ and $t_e \approx 500$ in a standard setting; a similarity match over such a large space has practical memory and computational limitations. For these reasons, we generate a coarse representation of ξ by passing each of its elements through a max-pooling layer, which we empirically found to perform well. The set of max-pooled embeddings is denoted κ, whose entries act as soft keys for the exemplar embeddings. When a query embedding e is presented, we compute its compact feature $e_\kappa$ by applying the same max-pooling operation, and the NN search is performed between $e_\kappa$ and the elements of κ to find the closest key. As the elements of ξ and κ are in one-to-one correspondence, matching the max-pooled version of e against κ lets the model retrieve the exemplar embedding $e_E$. Notably, with this setup we do not need to load the large set of exemplars ξ into memory; instead, a much more compact representation is used for efficient search and retrieval.
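A sketch of this compact-key retrieval on NumPy arrays follows; the function names and the use of scipy's cKDTree (the K-D tree structure mentioned in the experimental setup) are ours, and ρ must divide the flattened embedding length (e.g. 196 × 510 = 99960 = 140 × 714):

```python
from scipy.spatial import cKDTree

def build_soft_keys(xi, rho=140):
    """xi: (n, G * t_e) NumPy array of flattened exemplar joint embeddings.
    Max-pool each row down to a rho-dim soft key and index the keys."""
    n, dim = xi.shape
    kappa = xi.reshape(n, rho, dim // rho).max(axis=2)  # soft keys κ
    return cKDTree(kappa)

def retrieve_exemplar(e, tree, xi, rho=140):
    """e: (G * t_e,) query joint embedding; returns the full exemplar e_E."""
    key = e.reshape(rho, -1).max(axis=1)   # compact query feature e_κ
    _, idx = tree.query(key, k=1)          # 1-NN on the soft keys only
    return xi[idx]                         # one-to-one map back into ξ
```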
Visual Attention
Attention is applied at two levels in our network. At the first level, the joint embedding e is used to apply attention on the visual features, focusing the model on the more important ones. The joint embedding is passed through a FC (fully connected) layer followed by a softmax layer to generate an attention vector $\alpha^v_{IQ} \in \mathbb{R}^G$, which scores each spatial grid location of the visual feature v according to the input IQ pair. At the second level, we select $e_E$, the exemplar embedding most similar to e, and follow the same protocol to generate another attention score $\alpha^v_E$ over the spatial grid locations. The attention vectors $\alpha^v_{IQ}$ and $\alpha^v_E$ represent complementary proposals generated from the given IQ pair and from the most similar visual-semantic embedding in the exemplar set, respectively. Such a complementary attention mechanism allows the model to reason about unknown concepts using the attention computed from the input IQ pair and to further refine it by looking at the closest example from the exemplar set. Both attention vectors are used to take a weighted sum over each location g of the input visual representation v (i.e., $v_g$) to create attended visual representations:

$\tilde{v}_{IQ} = \sum_{g=1}^{G} v_g\, \alpha^{v,g}_{IQ}$ and $\tilde{v}_E = \sum_{g=1}^{G} v_g\, \alpha^{v,g}_E$,   (4)

where $\tilde{v}_{IQ}$ and $\tilde{v}_E$ denote the attended visual features generated from the IQ pair and from the exemplar embedding, respectively. We concatenate the two to create an overall attended visual feature $\tilde{v}$ and again apply Φ, in the manner described in Sec. 3.1, to generate the final visual-semantic embedding. We then project this embedding into the prediction space and pass it through a classifier to predict the final answer $a^* \in D$.
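A minimal PyTorch sketch of this two-level attention follows (the names and the FC scoring-head shapes are ours; Eq. 4's weighted sums run over the G grid locations):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLevelAttention(nn.Module):
    """α_IQ comes from the input IQ joint embedding e, α_E from the retrieved
    exemplar embedding e_E; each re-weights the grid features v (Eq. 4)."""
    def __init__(self, d_e=510):
        super().__init__()
        self.fc_iq = nn.Linear(d_e, 1)   # FC + softmax -> α_IQ over G locations
        self.fc_ex = nn.Linear(d_e, 1)   # FC + softmax -> α_E over G locations

    def forward(self, v, e, e_E):
        # v: (B, G, d_v) grid features; e, e_E: (B, G, d_e) joint embeddings.
        a_iq = F.softmax(self.fc_iq(e).squeeze(-1), dim=1)
        a_ex = F.softmax(self.fc_ex(e_E).squeeze(-1), dim=1)
        v_iq = (a_iq.unsqueeze(-1) * v).sum(dim=1)   # Σ_g v_g α^{v,g}_IQ
        v_ex = (a_ex.unsqueeze(-1) * v).sum(dim=1)   # Σ_g v_g α^{v,g}_E
        return torch.cat([v_iq, v_ex], dim=1)        # attended feature ṽ
```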
OW-VQA Dataset Generation Protocol
Motivation: When a VQA system is subjected to an open-world setting, it can encounter numerous visual and semantic concepts that it has not seen during training. To help VQA systems develop the capability to handle unknown visual and semantic concepts, we propose a new split that contains known/unknown concepts for training/testing, respectively. Our dataset generation protocol builds on the fact that the images in VQA datasets [4,11] are repurposed from MSCOCO images [16] and paired with crowd-sourced Q&A. Even though MSCOCO images have rich object-level annotations for 12 super-categories and 80 object categories, the VQA dataset annotations include only information related to Q&A, excluding any link to the object-level annotations. This constitutes a significant knowledge gap which, if addressed, would allow for a more subtle understanding of the scene even if it contains previously unknown visual and semantic concepts.

To bridge the gap, we propose to use object categories as the core entity for developing a true Known-Unknown split covering both the visual and semantic domains. First, we propose a known-unknown split of the MSCOCO object categories, which leads to a well-founded split separating known/unknown concepts in the IQA triplets of the VQA datasets.

Known/Unknown Object Split: In the first stage, from each MSCOCO super-category (except for person, which has no sub-category), we select the rarest category as Unknown and the rest as Known. This choice is motivated by the fact that rare classes are the most likely to be unknown. For each category c, we calculate $N_i$ and $N_t$, the total number of images in which c appears and the total number of instances of c, respectively. We define $N = N_i \times N_t$ as the occurrence measure for each category and select the category with the smallest N as the Unknown category (a sketch of this selection rule is given at the end of this section). Fig. 3 shows the normalized N for the categories in each super-category and the respective Unknown categories (more details in the supplementary material). This ensures that the unknown category appears in the fewest images the fewest number of times. Such a measure is particularly necessary for datasets used in high-level vision tasks with a language component. For example, in the super-category vehicle, train is less frequent than airplane in terms of instances (4,761 vs. 5,278). If the split were based solely on the number of instances $N_t$, then train would have been selected as an unseen class even though it appears in 662 fewer images than airplane. When human annotators are tasked with generating language components (i.e., Q&A or captions), the rarest language cues are often associated with the categories that appear in the fewest images the fewest times. Thus, selecting the category with the smallest occurrence measure N ensures that the categories with the least language representation are selected as Unknown.

Image-Question-Answer Split: Building on the Known-Unknown categories, we repurpose IQA triplets from VQAv1 [4] and VQAv2 [11] and propose training (known) and test (unknown) splits called OW-VQAv1 and OW-VQAv2. For this purpose, we combine the training and validation sets of the respective datasets (the test splits cannot be used, as they are not publicly available). We take two steps to ensure that both the visual and semantic concepts associated with the Unknown categories are completely absent from the training set. Firstly, we place an IQA triplet in the training set only if there is no instance of any unseen category in the image of the corresponding triplet. This ensures that the novel visual concepts are unknown to the model during training. Secondly, we focus on the semantic part and filter out from the training set the IQA triplets that contain any unknown category names or synonyms in their questions.

[Table 2. Evaluation on our proposed OW-VQA split when trained on Trainvalset (Trainset + Valset-Known) and evaluated on Testset. Table 3. Performance drop when trained on known concepts and validated on unseen concepts.]

Such visual and semantic confinement of concepts in the train/test split is the major advantage our proposed dataset has over other approaches [20,23,1], where the unseen 'objects/concepts' are only defined at the semantic level. For example, airplane is an 'unseen category' in our proposed dataset and a 'novel object' in the dataset proposed by Ramakrishnan et al. [20]. A semantically motivated dataset generation protocol would place an IQA triplet whose question does not contain the keyword airplane in the training set. However, there are several IQA triplets in the VQA dataset that show an airplane being serviced by a car, truck or person at an airport without asking about the airplane. Merely ensuring that the semantic concepts are not present during training therefore addresses only a naive version of the challenge an open-world VQA system would face.
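The selection rule referenced above can be sketched in a few lines of Python; the annotation record format is hypothetical, and only the rule N = N_i × N_t with a per-super-category argmin comes from the text:

```python
from collections import defaultdict

def pick_unknown_categories(annotations):
    """annotations: hypothetical COCO-style records with 'category',
    'supercategory' and 'image_id' fields. Returns one Unknown category
    per super-category, chosen by the smallest N = N_i * N_t."""
    images = defaultdict(set)   # category -> set of image ids (N_i)
    counts = defaultdict(int)   # category -> instance count (N_t)
    parent = {}                 # category -> super-category
    for ann in annotations:
        c = ann['category']
        images[c].add(ann['image_id'])
        counts[c] += 1
        parent[c] = ann['supercategory']
    N = {c: len(images[c]) * counts[c] for c in counts}
    unknown = {}
    for c, s in parent.items():
        if s == 'person':       # person has no sub-categories and is skipped
            continue
        if s not in unknown or N[c] < N[unknown[s]]:
            unknown[s] = c
    return unknown              # super-category -> its Unknown category
```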
Experiments
In this section, we first describe the experimental setup and implementation details of our proposed model. We then present the baseline model architectures, which use different combinations of visual and semantic features to generate the joint embedding. Finally, we present the results of our experiments, which include benchmarking VQA models on the OW-VQA dataset, an ablation study, and a performance analysis of our proposed model on semantically motivated VQA splits and in the standard VQA setting.

Experimental Setup
Feature Extraction and Fusion: We use Facebook's implementation of ResNet152 [12] to extract multilevel visual features from the input image by taking the outputs of the two last convolutional layers, $v_1 \in \mathbb{R}^{2048 \times 14 \times 14}$ and $v_2 \in \mathbb{R}^{2048}$, where $v_1$ represents the spatial visual features at each image grid location $G = 14 \times 14$ and $v_2$ represents the pooled, image-level visual features. We use different combinations of $v_1$ and $v_2$ in the baseline models that undergo joint embedding with the question features. The semantic feature $q \in \mathbb{R}^{2400}$ is generated in a manner similar to [8,5,9], where the question is encoded with skip-thought vectors [14] and passed through GRUs. When generating the visual-semantic embedding, we set $t_v = t_q = 310$, $t_e = 510$ and use two-glimpse attention following the literature [8,5], making the joint embedding $G \times 510$ dimensional.

Exemplar Implementation: We store in ξ the joint embeddings e of a randomly selected 10% of the IQ pairs from the training set; our experiments show that such sub-sampling does not degrade performance while significantly improving computational efficiency. To generate the compact embedding or soft-key set κ, max pooling is applied to each entry $\xi_i \in \mathbb{R}^{196 \times 510}$ to generate its compressed embedding $\kappa_i \in \mathbb{R}^{\rho}$. For our experiments we set ρ = 140, which we found optimal empirically. We represent κ with a K-D tree data structure. During testing and the second stage of training, we query κ to find the index of the closest match to the max-pooled e by performing a k-nearest-neighbour search (k = 1), and take the joint embedding from ξ at that index as $e_\xi$.

Answer Classifier: We create the answer set D from the 2000 most frequent answers in the training set and formulate the VQA task as a multi-class classification problem over $D \in \mathbb{R}^{2000}$, following the VQA benchmark [4]. The final attended visual-semantic feature representation $\tilde{v}$ is passed through a fully connected layer that projects it into the answer embedding space, where a softmax cross-entropy loss is applied to predict the most probable answer from D.
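A minimal sketch of this classification head follows (the input width, two concatenated 510-dim attended embeddings, is our assumption about ṽ's final dimensionality):

```python
import torch
import torch.nn as nn

# Project the attended feature ṽ to the 2000-answer space D and train with
# softmax cross-entropy, per the answer-classifier description above.
classifier = nn.Linear(2 * 510, 2000)
criterion = nn.CrossEntropyLoss()

def vqa_step(v_tilde, answer_ids):
    # v_tilde: (B, 1020) attended visual-semantic feature;
    # answer_ids: (B,) indices of the ground-truth answers in D.
    logits = classifier(v_tilde)
    loss = criterion(logits, answer_ids)
    return loss, logits.argmax(dim=1)   # predicted answer indices
```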
(b) For Dual Attention Model, the e is generated as e = Φ(q, v 2 ). This joint embedding generated from pooled image feature is used to apply attention on the grid level image feature v 1 , and is thus called the dual attention model. (Table 3) which shows its effectiveness in jointly embedding visualsemantic features. Thus our proposed model uses v 1 to generate the joint embedding of the exemplars E. Results Benchmarking VQA models on OW-VQA: We benchmark existing VQA models on OW-VQA dataset and report their performance on both versions of our proposed VQA dataset split. From Table 2, we can see VQA models that incorporate multimodal (visual-semantic) embedding (i.e. pooling [9] or fusion [5]) compared to the models which only use semantic embedding to generate visual attention achieves higher performance in both versions of OW-VQA. Our exemplar based approach further refines the visual attention by transferring knowledge from exemplar set and we report 1.4% and 0.9% overall accuracy gain over the closet state-of-the-art method on both v1 and v2 respectively. Such an improvement without using any external knowledge base (i.e. complementary training on Visual Genome [15], external image and text corpora) and/or model ensemble justifies our approach of transferring knowledge from exemplars. Furthermore, the accuracy scores of VQA models reported in Table 2, drop significantly when evaluated OW-VQAv2 compared to v1 as the IQA triplets in v2 have less semantic bias. It can also be seen that the joint embedding attention models are more robust against semantic bias than semantic attention models (overall accuracy drop of ∼3.5% compared to ∼5.1%). This further strengthens our motivation to make use of such joint embedding space which capture highly discriminative multi-modal features. Performance drop when evaluated on Unknown: We perform an ablation study to quantify the role of different components of our proposed model on OW-VQA-v2 dataset and compare the performance of our baseline models and full model, along with exemplar-attention-only variant. In this experiment, we train the models on Trainset and evaluate on Valset, Valset-known and Valset-Unknown which enables us to perform a comparative analysis on the models' ability to reason about Known and Unknown concepts (see Table 3). We also report the number of trainable parameters required for each model. From the bar plot, it can be observed that all model variants achieved higher accuracy on Valset compared to Valset-Unknown and lower accuracy when compared with Valset-Known when trained with only Known concepts. Among the baseline methods, the grid attention variant achieves the highest accuracy with the least number of trainable parameters. Interestingly, when only the joint feature encoding from exemplar (exemplarattention-only variant) is used, it achieves a relatively reasonable overall accuracy of ∼51%. This shows that the exemplar feature indeed encapsulates valuable information for the VQA task. Our full model incorporates both grid attention and exemplar attention with a small increase in the Table. 3, note that accuracy difference between Known and Unknown concepts is 6.1 for JEX which is 12.68% lower than that of the gird attention model. This quantifies the value added by using exemplars to bridge that gap in comprehending Unknown concepts. 
Evaluation on semantically separated VQA splits: We evaluate our exemplar based approach on semantically motivated VQA-CP [1] and Novel VQA [20] datasets where the former separated the challenging semantic concepts in the testset and the latter placed least frequent nouns and associated IQA triplets in the testset. Although, our motivation is orthogonal and our definition of Novel Concepts is heterogeneous to these semantically motivated approaches, we showcase the effectiveness of our exemplar based approach on their settings. In Table 5, we compare the performance of JEX model on Novel-VQA split with performances of baseline and proposed methods (Arch-1 and Arch-2) reported in [20]. Our exemplar based approach outperforms the best variant of Arch-1 and Arch-2 by 13.3% and 15.2%. This is to be noted that even if the proposed approaches by Ramakrishnan et al. [20] incorporate external knowledge, both semantic (i.e. books) and visual (i.e. examples from ImageNet [7]), our model achieves superior performance by only leveraging information from training examples. We also evaluate our model on both versions of VQA-CP dataset and report performance against other benchmarks and their proposed GVQA [1] dataset in Table 4. It shows that in VQA-CPv1, GVQA achieves a slightly higher (0.9%) Overall accuracy than JEX, but performs significantly low (18.8%) compared to JEX for Other type questions. GVQA employs separate question classifiers for Y/N and non-Y/N (i.e. Num, Other) questions that account for its high accuracy in Y/N questions which results in higher 2 Compared with k=1, where only one nearest neighbour was used. Overall accuracy. However, when evaluated on VQA-CPv2, JEX outperforms GVQA in both Overall and Other question accuracy by 5.5% and 19.3% respectively because VQA-CPv2 has a more balanced distribution of question categories and a considerably lower language bias [11]. Evaluation on standard VQA setting: We evaluate our model on VQAv2 validation set [11] and compare its performance with other attention based models. It is worth noting that we only compare with their single model without data augmentation which is similar to our setting for fair comparison. From Table 6 it can be seen that our model outperforms the Tucker decomposition based model by Benyounes et al. [5] which has a similar architecture to our baseline models. Further, it also outperforms the Support-Set model proposed by Teney et al. [24] in a similar setting where the support set contains example representation of question, answers and image. Interestingly, the overall accuracy of GVQA [1] without an ensemble and/or oracles is 18.9% lower than JEX in a standard VQA setting. Conclusion Existing VQA systems lack the ability to generalize their knowledge from training to answer questions about novel concepts encountered during inference. In this paper, we propose an exemplar-based transfer learning approach that utilizes the closest Known examples to answer questions about Unknown concepts. A joint embedding space is central to our approach, that effectively encodes the complex relationships between semantic, visual and output domains. Given the IQ pair and exemplar embedding in this space, the proposed approach hierarchically attends to visual details and focuses attention on the regions that are most useful to predict the correct answer. We propose a new Open-World VQA train/test split to fairly compare the performance of VQA systems on Known and Unknown concepts. 
Our exemplar based approach achieves significant improvements over the state-of-the-art techniques on the proposed OW-VQA setting as well as standard VQA setting, which reinforces the notion of transferring knowledge from rich joint embedding space to reason about Unknown concepts. Supplementary Material A. Dataset generation protocol for OW-VQA For each MSCOCO [16] category c, N i represents the number of images in which c appears and N t represents the number of times c appears in the dataset (i.e. total instances). These statistics are calculated after merging the MSCOCO Train2014 and Val2014 splits. Fig. 6 shows N i and N t for categories within each super-category of MSCOCO [16]. The category names are color-coded to represent the super-category labels and respective Unknown categories. From this figure, we can see that the categories which appear in the least number of images, the least number of times are selected as Unknown. Table 7 presents statistics of VQA dataset following the proposed Known/Unknown concept separation protocol described in 'Image-Question-Answer Split' of Sec. 4 of the main paper. We can see from the statistics that Unknown categories are present in ∼16% of training and validation images. Furthermore, it can be observed, when IQA triplets from the training and validation splits of the VQA datasets are separated on the basis of Known and Unknown concepts, the Unknown IQA triplets also amount to ∼16% of the total. This is an indication that our dataset preparation protocol is able to uniformly separate Known and Unknown concepts even from crowd-sourced, complex, multi-modal dataset like VQA. Such uniform split allows for effective evaluation of a VQA models' ability to reason with Unknown concepts. The Trainset and Testset of the OW-VQA dataset consists of Known and Unknown IQA triplets from corresponding Train splits of VQA datasets respectively. We also propose two validation splits called Valset-Known and Valset-Unknown from the Val splits of VQA datasets. The Valset-Known contains Known IQA triplets and the Valset-Unknown contains Unknown IQA triplets from the Valset of respective version. The subdivision of Valset into Known and Unknown splits allows evaluation on both concept types. B. Evaluation protocol for OW-VQA There are two main ways to evaluate a models performance on the proposed OW-VQA dataset. (a) For the purpose of debugging and running validation experiments, one can train a VQA model on OW-VQA Trainset and evaluate on Valset-Known or Valset-Unknown or the whole Valset. The OW-VQAv1 Trainset contains ∼187k IQA triplets, and Valset-Known and Valset-Unknown contains ∼101k and ∼19k IQA pairs respectively. The OW-VQAv2 has more IQA triplets, where the Trainset contains ∼336k IQA triplets, and Valset-Known and Valset-Unknown contains ∼178k and ∼ 34k. (b) To do a more comprehensive evaluation, it is recommended to train the model on OW-VQA Trainset and evaluate on Testset or Testset+Valset-Unknown, as they have more Unknown IQA pairs than Valset-Unknown. For OW-VQAv1 and v2, the Testset contains ∼36k and ∼66k IQA respectively. When combined with respective Valset-Unknown it presents an even larger setting to evaluate on Unknown concepts. Fig. 5 reports the overall accuracy of Grid Attention baseline model trained on Trainset and evaluated on validation splits of OW-VQAv1 and OW-VQAv2. It can be seen that the Known-Unknown accuracy gap is lower in v1 and higher in v2. 
This is due to the language bias present in VQAv1 dataset and the model used this bias to correctly answer questions about Unknown concepts. Table 8 reports the comparison of proposed JEX model and other contemporary VQA models on both versions of VQA-CPv1 and v2 [1], including accuracy scores of all question categories. It can be seen that GVQA [1] achieved higher accuracy on the Y/N questions than the proposed JEX model. As mentioned in the 'Evaluation on semantically separated VQA splits' part of Section 5.3 of the main paper, GVQA employs a separate training module for Y/N questions which helps achieve higher accuracy for Y/N questions. However, for all other question categories the proposed JEX model achieved higher accuracy than GVQA.
5,178
Current Visual Question Answering (VQA) systems can answer intelligent questions about 'Known' visual content. However, their performance drops significantly when questions about visually and linguistically 'Unknown' concepts are presented during inference (the 'Open-world' scenario). A practical VQA system should be able to deal with novel concepts in real-world settings. To address this problem, we propose an exemplar-based approach that transfers learning (i.e., knowledge) from previously 'Known' concepts to answer questions about the 'Unknown'. We learn a highly discriminative joint embedding space, where visual and semantic features are fused to give a unified representation. Once novel concepts are presented to the model, it looks for the closest match from an exemplar set in the joint embedding space. This auxiliary information is used alongside the given Image-Question pair to refine visual attention in a hierarchical fashion. Since handling the high-dimensional exemplars on large datasets can be a significant challenge, we introduce an efficient matching scheme that uses a compact feature description for search and retrieval. To evaluate our model, we propose a new split for VQA, separating Unknown visual and semantic concepts from the training set. Our approach shows significant improvements over state-of-the-art VQA models on the proposed Open-World VQA dataset and standard VQA datasets.
Although most VQA approaches only work with the given training set, some efforts explore the use of supplementary information to help the VQA system. Generally, such methods employ external knowledge sources (both textual and visual) to augment the training set. For example, Teney et al. @cite_30 @cite_21 used web searches to find related images which were then used for answer prediction. Language-based external knowledge bases were used by Wang et al. @cite_7 and Wu et al. @cite_20 to provide logical reasons for each answer choice and to answer a more diverse set of questions. More recently, Teney et al. @cite_16 proposed a meta-learning approach that learns to use an externally supplied support set comprising example question-answer pairs. In contrast to these approaches, we do not use any external data; rather, we learn an attention function that uses similar examples from the training set to provide better inference-time predictions. Patro et al. @cite_0 proposed a differential attention mechanism that uses an exemplar from the training set to generate human-like attention maps; however, it does not consider a transferable attention function that can reason about new visual-semantic concepts.
{ "abstract": [ "Part of the appeal of Visual Question Answering (VQA) is its promise to answer new questions about previously unseen images. Most current methods demand training questions that illustrate every possible concept, and will therefore never achieve this capability, since the volume of required training data would be prohibitive. Answering general questions about images requires methods capable of Zero-Shot VQA, that is, methods able to answer questions beyond the scope of the training questions. We propose a new evaluation protocol for VQA methods which measures their ability to perform Zero-Shot VQA, and in doing so highlights significant practical deficiencies of current approaches, some of which are masked by the biases in current datasets. We propose and evaluate several strategies for achieving Zero-Shot VQA, including methods based on pretrained word embeddings, object classifiers with semantic embeddings, and test-time retrieval of example images. Our extensive experiments are intended to serve as baselines for Zero-Shot VQA, and they also achieve state-of-the-art performance in the standard VQA evaluation setting.", "", "Deep Learning has had a transformative impact on Computer Vision, but for all of the success there is also a significant cost. This is that the models and procedures used are so complex and intertwined that it is often impossible to distinguish the impact of the individual design and engineering choices each model embodies. This ambiguity diverts progress in the field, and leads to a situation where developing a state-of-the-art model is as much an art as a science. As a step towards addressing this problem we present a massive exploration of the effects of the myriad architectural and hyperparameter choices that must be made in generating a state-of-the-art model. The model is of particular interest because it won the 2017 Visual Question Answering Challenge. We provide a detailed analysis of the impact of each choice on model performance, in the hope that it will inform others in developing models, but also that it might set a precedent that will accelerate scientific progress in the field.", "We study the problem of answering questions about images in the harder setting, where the test questions and corresponding images contain novel objects, which were not queried about in the training data. Such setting is inevitable in real world&#x2013;owing to the heavy tailed distribution of the visual categories, there would be some objects which would not be annotated in the train set. We show that the performance of two popular existing methods drop significantly (21&#x2013;28 ) when evaluated on novel objects cf. known objects. We propose methods which use large existing external corpora of (i) unlabeled text, i.e. books, and (ii) images tagged with classes, to achieve novel object based visual question answering. We systematically study both, an oracle case where the novel objects are known textually, as well as a fully automatic case without any explicit knowledge of the novel objects, but with the minimal assumption that the novel objects are semantically related to the existing objects in training. The proposed methods for novel object based visual question answering are modular and can potentially be used with many visual question answering architectures. 
We show consistent improvements with the two popular architectures and give qualitative analysis of the cases where the model does well and of those where it fails to bring improvements.", "The predominant approach to Visual Question Answering (VQA) demands that the model represents within its weights all of the information required to answer any question about any image. Learning this information from any real training set seems unlikely, and representing it in a reasonable number of weights doubly so. We propose instead to approach VQA as a meta learning task, thus separating the question answering method from the information required. At test time, the method is provided with a support set of example questions answers, over which it reasons to resolve the given question. The support set is not fixed and can be extended without retraining, thereby expanding the capabilities of the model. To exploit this dynamically provided information, we adapt a state-of-the-art VQA model with two techniques from the recent meta learning literature, namely prototypical networks and meta networks. Experiments demonstrate the capability of the system to learn to produce completely novel answers (i.e. never seen during training) from examples provided at test time. In comparison to the existing state of the art, the proposed method produces qualitatively distinct results with higher recall of rare answers, and a better sample efficiency that allows training with little initial data. More importantly, it represents an important step towards vision-and-language methods that can learn and reason on-the-fly.", "We propose a method for visual question answering which combines an internal representation of the content of an image with information extracted from a general knowledge base to answer a broad range of image-based questions. This allows more complex questions to be answered using the predominant neural network-based approach than has previously been possible. It particularly allows questions to be asked about the contents of an image, even when the image itself does not contain the whole answer. The method constructs a textual representation of the semantic content of an image, and merges it with textual information sourced from a knowledge base, to develop a deeper understanding of the scene viewed. Priming a recurrent neural network with this combined information, and the submitted question, leads to a very flexible visual question answering approach. We are specifically able to answer questions posed in natural language, that refer to information not contained in the image. We demonstrate the effectiveness of our model on two publicly available datasets, Toronto COCO-QA and MS COCO-VQA and show that it produces the best reported results in both cases." ], "cite_N": [ "@cite_30", "@cite_7", "@cite_21", "@cite_0", "@cite_16", "@cite_20" ], "mid": [ "2555661914", "", "2964345214", "2605963463", "2770653590", "2176212817" ] }
From Known to the Unknown: Transferring Knowledge to Answer Questions about Novel Visual and Semantic Concepts
Machine vision algorithms have significantly transformed various industries such as internet commerce, personal digital assistants and web search. A major component of machine intelligence is how well it can comprehend visual content. A Visual Turing Test to assess a machine's ability to understand visual content is performed with the Visual Question Answering (VQA) task, where machine vision algorithms are expected to answer intelligent questions about visual scenes.¹

Figure 1. Open World VQA for Novel Concepts: Our model learns to represent multi-modal information (Image (I)-Question (Q) pairs) as a joint embedding (Φ). Once presented with novel concepts, the proposed model learns to effectively use past knowledge accumulated over the training set to answer intelligent questions.

The current VQA paradigm is not without its grave weaknesses. One key limitation is that the questions asked at inference time only relate to concepts that have already been seen during the training stage (the closed-world assumption). In the real world, humans can easily reason about visually and linguistically Unknown concepts based on previous knowledge about the Known. For instance, having seen visual examples of 'lion' and 'tiger', a person can recognize an unknown 'liger' by associating visual similarities with a new compositional semantic concept and answer intelligent questions about their count, visual attributes, state and actions. In order to design machines that mimic human visual comprehension abilities, we must impart lifelong learning mechanisms that allow them to accumulate and use past knowledge to relate Unknown concepts. In this paper, we introduce a novel VQA problem setting that evaluates models in an 'Open-World' dynamic scenario where previously unknown concepts show up during the test phase (Fig. 1). An open-world VQA setting requires a vision system to acquire knowledge over time and later use it intelligently to answer complex questions about Unknown concepts for which no linguistic or visual examples were available during training. Existing VQA systems lack this capability as they use a 'fixed model' to acquire learning and produce answers without explicitly considering closely related examples from the knowledge base. This can lead to 'catastrophic forgetting' [18] as the object/question set is altered with updated categories. Here, we develop a flexible knowledge base (comprising only the training examples) that stores the joint embeddings of visual and textual features. Our proposed approach learns to utilize past knowledge to answer questions about unknown concepts. Related to our work, we note a few recent efforts in the literature that aim to extend VQA beyond the already known concepts [27,1,23,20,2]. A major limitation of these approaches is that they introduce novel concepts only on the language side (i.e., new questions/answers), either to re-balance the split or to prevent the model from cheating by removing biases [1,23,2]. Further, they rely on external data sources (both visual and semantic) and consider training splits that contain visual instances of 'novel objects', thereby violating the unknown assumption [20,23]. To bridge this gap, we propose a new Open World VQA (OW-VQA) protocol for novel concepts based on MS-COCO and VQAv1-v2 datasets. Our major contributions are:

• We reformulate VQA in a transfer learning setup that uses closely related Known instances from the exemplar set to reason about Unknown concepts.

¹ Dataset and models will be available at: TBA
• We present a novel network architecture and training schedule that maintains a knowledge base of exemplars in a rich joint embedding space that aggregates visual and semantic information.

• We propose a hierarchical search and retrieval scheme to enable efficient exemplar matching on a high-dimensional joint embedding space.

• We propose a new OW-VQA split to enable impartial evaluation of VQA algorithms in a real-world scenario and report impressive improvements over recent approaches with our proposed model.

Methods

Given a question Q about an image I, an AI agent designed for the VQA task will predict an answer a* based on the learning acquired from training examples. This task can be formulated as:

$$a^* = \operatorname*{arg\,max}_{\hat{a} \in \mathcal{D}} P(\hat{a} \mid Q, I; \theta) \qquad (1)$$

where θ denotes the model parameters and a* is predicted from the dictionary of possible answers $\mathcal{D}$. An ideal VQA system should effectively model the complex interactions between the language and visual domains to acquire useful knowledge and use it to answer newly presented questions at test time. Towards this end, we propose a framework (Fig. 2) to answer questions about novel concepts. The overall pipeline is based on four main steps: (1) Joint Feature Embedding: The given IQ pair is processed to extract visual v and language q features. These features are jointly embedded into a common space through multi-modal fusion. (2) Exemplar Retrieval: The closest matching joint embedding of Known concepts is retrieved from the exemplar set. (3) Visual Attention: The proposed network selectively attends to visual scene details using the joint feature embeddings of the given inputs and the exemplars (outputs of steps 1 and 2 respectively). This ensures that the model learns to identify salient features to answer a specific question. (4) Answer Prediction: During the final stage, the refined joint embedding is used by the model to predict the correct answer a* from the answer set A, minimizing a cross-entropy loss.

Joint Feature Embedding

From the given image I, the $n_v$-dimensional visual feature embedding $v \in \mathbb{R}^{n_v}$ is extracted from the last convolutional layers just before the global pooling and classification layer of the feature extraction model (i.e. ResNet [12]). The language feature embedding $q \in \mathbb{R}^{n_q}$ is generated from Q by first encoding the question in a one-hot-vector representation and then embedding it into vector space using Gated Recurrent Units (GRUs) [6,9]. In order to predict a correct answer, a VQA model needs to generate a joint embedding: $e = \Phi(v, q; \tau) \in \mathbb{R}^{n_e}$. A naive approach that models visual-semantic interactions using a tensor $\tau \in \mathbb{R}^{n_q \times n_v \times n_e}$ will result in an unrealistic number of trainable parameters (e.g., ∼9.83 billion for our baseline model). To reduce the dimensionality of the tensor, we use Tucker decomposition [25], which can be seen as a high-order principal component analysis operation. This technique has been proven effective in embedding visual and textual features for VQA [8,5]. It approximates τ as follows:

$$\tau = \sum_{i=1}^{t_q} \sum_{j=1}^{t_v} \sum_{k=1}^{t_e} \omega_{ijk}\, \tau_q^i \circ \tau_v^j \circ \tau_e^k = \omega \times_1 \tau_q \times_2 \tau_v \times_3 \tau_e = [[\omega;\, \tau_q, \tau_v, \tau_e]] \qquad (2)$$

where $\times_i$ denotes the n-mode product of a tensor with a matrix and $\circ$ denotes the outer vector product. Eq. 2 means that the tensor τ is decomposed into a core tensor $\omega \in \mathbb{R}^{t_q \times t_v \times t_e}$ and orthonormal factor matrices $\tau_q \in \mathbb{R}^{n_q \times t_q}$, $\tau_v \in \mathbb{R}^{n_v \times t_v}$, $\tau_e \in \mathbb{R}^{n_e \times t_e}$. Intuitively, by setting $t_q < n_q$, $t_v < n_v$ and $t_e < n_e$ for the factored matrices, one can approximate τ with only a fraction of the originally required number of trainable parameters.
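To make the factored fusion concrete, the following is a minimal sketch of a Tucker-style fusion module Φ(q, v), assuming PyTorch. The class and layer names are ours for illustration (they do not come from the authors' code), and the dimensions mirror the setting reported later ($n_q$ = 2400, $n_v$ = 2048, $t_q$ = $t_v$ = 310, $t_e$ = 510).

```python
import torch
import torch.nn as nn

class TuckerFusion(nn.Module):
    """Illustrative Tucker-decomposition fusion Phi(q, v), following Eq. 2."""
    def __init__(self, n_q=2400, n_v=2048, t_q=310, t_v=310, t_e=510, n_e=510):
        super().__init__()
        self.W_q = nn.Linear(n_q, t_q, bias=False)          # factor matrix tau_q
        self.W_v = nn.Linear(n_v, t_v, bias=False)          # factor matrix tau_v
        self.core = nn.Parameter(torch.randn(t_q, t_v, t_e) * 0.01)  # core tensor omega
        self.W_e = nn.Linear(t_e, n_e, bias=False)          # factor matrix tau_e

    def forward(self, q, v):
        q_, v_ = self.W_q(q), self.W_v(v)                   # project inputs to low rank
        # contract the core tensor with the projected question and image features;
        # this einsum is exactly the triple sum in Eq. 2
        e = torch.einsum('bi,bj,ijk->bk', q_, v_, self.core)
        return self.W_e(e)                                  # joint embedding e

fusion = TuckerFusion()
e = fusion(torch.randn(4, 2400), torch.randn(4, 2048))     # -> shape (4, 510)
```

The two input factor matrices are applied before the core tensor and the output factor matrix after it, so only the small core carries the trilinear interaction.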
The output embedding e from the Tucker decomposition effectively captures the interactions between semantic and visual features for a given Image-Question-Answer (IQA) triplet. Such a joint embedding for VQA is specific to the given IQ pair, because the same visual feature associated with different semantics (and vice versa) will result in a different joint embedding for that pair. For example, an image that captures children playing in the backyard, when asked 'How many children are in the picture?' and 'Are the children happy?', requires two very different joint embeddings e even though both questions use the same visual feature v. Building on this rich joint embedding, we develop a transfer learning module based on exemplars.

Exemplar based Learning Transfer

Given a question about an Unknown concept, our model identifies a similar joint embedding of Known concepts from the training set. Since visual/semantic examples of unknown concepts are not available during training, we first learn a generic attention function A that can transfer knowledge from Known concepts to Unknown ones. The attention function is learned on the training set, where it identifies the useful features from closely related exemplars to answer questions. The function A is agnostic to specific IQ pairs and provides a generalizable mechanism to identify relevant information from related examples. Therefore, at inference time, we use the same exemplar-based attention function to obtain refined attention maps by using the closely related joint embedding of known concepts. We design the training schedule in two stages to facilitate knowledge transfer. During the first stage, only the visual-semantic embedding part of the network is trained end-to-end and the joint feature embeddings e are stored in a memory $\xi \in \mathbb{R}^{n \times d}$, where n is the number of training IQA triplets and d denotes the embedding dimension. In the second stage, both the visual-semantic embedding and the exemplar-embedding segments of the model are trained end-to-end, where the model performs a nearest neighbour (NN) search on ξ to find the most similar joint embedding $e_\xi$. Further, the network learns to use the exemplar embedding to refine the attention on visual scene details. This can be represented as:

$$v_E = \mathcal{A}(v, e_E), \quad \text{where } e_E = \mathcal{N}(e, \xi, \kappa) \qquad (3)$$

where $e_E$ is the exemplar embedding found using nearest neighbour search $\mathcal{N}$ on a set of compact embeddings κ. There are two main motivations for not performing the NN search directly on ξ and instead using the compact embeddings κ. Firstly, searching in the full joint embedding space would allow the model to overfit when searching for the closest match; searching over a compact representation of the joint embedding instead avoids overfitting thanks to the reduced dimensionality. Secondly, storing and performing an NN search directly on the joint embedding exemplar space is extremely memory- and time-intensive. For example, if visual features are extracted from the second-last convolution layer of ResNet152 [12] for evaluation on the VQAv2 [11] dataset, ξ will be $\mathbb{R}^{n \times d}$ dimensional, where n ≈ 400K training examples and $d = t_e \times G$ with grid locations G = 14 × 14 and $t_e$ ≈ 500 in a standard setting. A similarity match on such a large space has practical memory and computational limitations. For these reasons, we generate a coarse representation of ξ by passing each of its elements through a max-pooling layer; we empirically found max-pooling to perform well in our case. The set of max-pooled embeddings is represented by κ, whose entries act as soft-keys for the exemplar embeddings. When a query embedding e is presented, we calculate its compact feature $e_\kappa$ by applying a max-pooling operation. The NN search is performed between $e_\kappa$ and each element of κ to find the closest soft-key. As the elements of ξ and κ have a one-to-one relationship, by matching the max-pooled version of e to κ, the model finds the exemplar embedding $e_E$. Notably, with this setup, we do not require the large set of exemplars ξ to be loaded into memory; instead, a much more compact representation is used for efficient search and retrieval, as sketched below.
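A minimal sketch of this soft-key retrieval, assuming NumPy and SciPy. The shapes follow the implementation details given later (G = 196 grid cells, $t_e$ = 510, ρ = 140); the exact pooling used to reach a ρ-dimensional key is not specified in the text, so pooling over equal-sized chunks here is our own assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical shapes: n exemplars, each a (G, t_e) joint embedding,
# compressed to rho-dimensional soft-keys.
n, G, t_e, rho = 1000, 196, 510, 140
xi = np.random.randn(n, G, t_e).astype(np.float32)    # exemplar set (xi)

def soft_key(e, rho):
    """Max-pool a (G, t_e) joint embedding into a rho-dim compact key."""
    chunks = np.array_split(e.reshape(-1), rho)        # equal-sized chunks (assumption)
    return np.array([c.max() for c in chunks], dtype=np.float32)

kappa = np.stack([soft_key(e, rho) for e in xi])       # soft-key set (kappa)
tree = cKDTree(kappa)                                  # K-D tree over the keys

def retrieve(e_query):
    """Return the exemplar embedding e_E closest to the query (k = 1)."""
    _, idx = tree.query(soft_key(e_query, rho), k=1)
    return xi[idx]                                     # one-to-one xi <-> kappa

e_E = retrieve(np.random.randn(G, t_e).astype(np.float32))
```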
Visual Attention

Attention is applied at two levels in our network. At the first level, the joint embedding e is used to apply attention on visual features to focus the model on the more important ones. The joint embedding is passed through a fully connected (FC) layer followed by a softmax layer to generate an attention vector $\alpha^v_{IQ} \in \mathbb{R}^G$ which scores each spatial grid location of the visual feature v according to the input IQ pair. At the second level, we select $e_E$, the most similar exemplar embedding of e, and follow the same protocol to generate another attention score $\alpha^v_E$ over the spatial grid locations. The attention vectors $\alpha^v_{IQ}$ and $\alpha^v_E$ signify complementary proposals generated using the given IQ pair and the most similar visual-semantic embedding from the exemplar set respectively. Such a complementary attention mechanism allows the model to reason about unknown concepts using the attention calculated from the combined effect of the input IQ pair, and to further refine it by looking at the closest example from the exemplar set. Both the IQ-pair and the exemplar-based attention vectors are used to take a weighted sum at each location g of the input visual representation v (i.e., $v^g$) to create an attended visual representation. This can be formulated as:

$$\tilde{v}_{IQ} = \sum_{g=1}^{G} v^g\, \alpha^{v,g}_{IQ} \quad \text{and} \quad \tilde{v}_E = \sum_{g=1}^{G} v^g\, \alpha^{v,g}_{E} \qquad (4)$$

where $\tilde{v}_{IQ}$ and $\tilde{v}_E$ denote the attended visual features generated from the IQ pair and the exemplar embedding respectively. We concatenate the two to create an overall attended visual feature $\tilde{v}$ and again apply Φ in a similar manner as described in Sec. 3.1 to generate the final visual-semantic embedding. We then project the embedding to the prediction space, where it is passed through a classifier to predict the final answer $a^* \in \mathcal{D}$.

OW-VQA dataset generation protocol

Motivation: When a VQA system is subjected to an open-world setting, it can encounter numerous visual and semantic concepts that it has not seen during training. To help VQA systems develop the capability to handle unknown visual and semantic concepts, we propose a new split that contains known/unknown concepts for training/testing respectively. Our dataset generation protocol builds on the fact that images in VQA datasets [4,11] are repurposed from MSCOCO images [16] and paired with crowd-sourced Q&A. Even though MSCOCO images have rich object-level annotation for 12 super-categories and 80 object categories, the VQA dataset annotations include only information related to Q&A, excluding any link to the object-level annotation. This constitutes a significant knowledge gap which, if addressed, would allow for a more subtle understanding of the scene even if it contains previously unknown visual and semantic concepts.
To bridge the gap, we propose to use object categories as the core entity to develop a true Known-Unknown split that covers both the visual and the semantic domain. First, we propose a known-unknown split for the MSCOCO object categories, which leads to a well-founded split that separates known/unknown concepts in IQA triplets on VQA datasets.

Known/Unknown Object Split: At the first stage, from each MSCOCO super-category (except for person, which has no sub-category), we select the rarest category as Unknown and the rest as Known. This choice is motivated by the fact that rare classes are the most likely to be unknown. For each category c, we calculate $N_i$ and $N_t$, which represent the total number of images that c appears in and the total number of instances of c respectively. We define $N = N_i \times N_t$ as the measure of occurrence for each category, and select the category with the smallest N as the Unknown category (a sketch of this selection rule is given below). Fig. 3 shows the normalized N for the categories in each super-category and the respective Unknown categories (more details in the supplementary material). This ensures that the unknown category appears in the least number of images, the least number of times. Such a measure is particularly necessary for datasets that are used to perform high-level vision tasks associated with a language component. For example, in the super-category vehicle, train is less frequent than airplane in terms of instances (4,761 vs 5,278). If the split were based solely on the number of instances $N_t$, then train would have been selected as an unseen class even though it appears in 662 fewer images than airplane. When human annotators are tasked with generating language components (i.e. Q&A or captions), the rarest language cues are often associated with categories that appear in the least number of images, the least number of times. Thus, selecting the category with the least occurrence measure N ensures that the categories with the least language representation are selected as Unknown.

Image-Question-Answer Split: Building on the Known-Unknown categories, we repurpose IQA triplets from VQAv1 [4] and VQAv2 [11], and propose training (known) and test (unknown) splits called OW-VQAv1 and OW-VQAv2. For this purpose, we combine the training and validation sets of the respective datasets (the test splits cannot be used as they are not publicly available). We use two steps to ensure that both the visual and the semantic concepts associated with the Unknown categories are completely absent from the training set. Firstly, we place an IQA triplet in the training set only if there is no instance of any unseen category in the image of the corresponding triplet. This ensures that the new visual concepts are unknown to the model during training. Secondly, we focus on the semantic part and filter out the IQA triplets from the training set that have any unknown category names or synonyms in the questions. Such visual and semantic confinement of concepts in the train/test split is the major advantage that our proposed dataset has over other approaches [20,23,1], where the unseen 'objects/concepts' are only defined at the semantic level. For example, airplane is an 'unseen category' in our proposed dataset and a 'novel object' in the dataset proposed by Ramakrishnan et al. [20].

Table 2. Evaluation on our proposed OW-VQA split when trained on Trainvalset (Trainset+Valset-Known) and evaluated on Testset.

Table 3. Performance drop when trained on known concepts and validated on unseen concepts.
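The selection rule above can be sketched in a few lines. This assumes pycocotools and a single annotation file holding the merged Train2014+Val2014 instance annotations; the file path is hypothetical.

```python
from collections import defaultdict
from pycocotools.coco import COCO

# Hypothetical merged Train2014+Val2014 instance annotations.
coco = COCO('annotations/instances_trainval2014.json')

occurrence = {}  # category id -> N = N_i * N_t
for cat_id in coco.getCatIds():
    n_images = len(coco.getImgIds(catIds=[cat_id]))      # N_i
    n_instances = len(coco.getAnnIds(catIds=[cat_id]))   # N_t
    occurrence[cat_id] = n_images * n_instances

# Group categories by super-category and pick the rarest one as Unknown.
by_super = defaultdict(list)
for cat in coco.loadCats(coco.getCatIds()):
    by_super[cat['supercategory']].append(cat)

unknown = {
    super_cat: min(cats, key=lambda c: occurrence[c['id']])['name']
    for super_cat, cats in by_super.items()
    if len(cats) > 1  # skip 'person', which has no sub-categories
}
```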
A semantically motivated dataset generation protocol would place an IQA triplet that does not have the keyword airplane in the question into the training set. However, there are several IQA triplets in the VQA dataset that show an airplane being serviced by a car, truck or a person at an airport, and do not ask about the airplane. Merely ensuring that the semantic concepts are not present during training only addresses a naive version of the challenge an open-world VQA system would face.

Experiments

In this section, we first describe the experimental setup and implementation details of our proposed model. We then present the baseline model architectures, which use different combinations of visual and semantic features to generate the joint embedding. Finally, we present the results of our experiments, which include benchmarking of VQA models on the OW-VQA dataset, an ablation and performance analysis of our proposed model on semantically motivated VQA splits, and results in the standard VQA setting.

Experimental Setup

Feature Extraction and Fusion: We use Facebook's implementation of ResNet152 [12] to extract multilevel visual features from the input image by taking the output of the two last convolutional layers, $v_1 \in \mathbb{R}^{2048 \times 14 \times 14}$ and $v_2 \in \mathbb{R}^{2048}$, where $v_1$ represents spatial visual features at each image grid location (G = 14 × 14) and $v_2$ represents the pooled visual features at the image level. We use different combinations of $v_1$ and $v_2$ in the baseline models that undergo joint embedding with the question features. The semantic feature $q \in \mathbb{R}^{2400}$ is generated in a manner similar to [8,5,9], where the question is encoded with skip-thought vectors [14] and passed through GRUs. When generating the visual-semantic embedding, we set $t_v = t_q = 310$, $t_e = 510$ and use two-glimpse attention following the literature [8,5], making the joint embedding G × 510 dimensional.

Exemplar Implementation: We store the joint embeddings e of a randomly selected 10% of the IQ pairs from the training set in ξ. Our experiments show that such sub-sampling does not degrade performance while significantly improving computational efficiency. To generate the compact embedding or soft-key set κ, max pooling is applied on each entry $\xi_i \in \mathbb{R}^{196 \times 510}$ to generate the compressed embedding $\kappa_i \in \mathbb{R}^{\rho}$ for $\xi_i$. We set ρ = 140, which we found to be optimal in our experiments. We represent κ using a K-D tree data structure. During testing and the second stage of training, we query κ to find the index of the closest match to the max-pooled e by performing a k-nearest-neighbour search (k = 1), retrieve the joint embedding from ξ at that index and set it as $e_\xi$.

Answer Classifier: We create the answer set $\mathcal{D}$ with the 2000 most frequent answers from the training set and formulate the VQA task as a multi-class classification problem over $\mathcal{D}$, following the VQA benchmark [4]. The final attended visual-semantic feature representation $\tilde{v}$ is passed through a fully connected layer to project it into the answer embedding space, where a softmax cross-entropy loss is applied to predict the most probable answer from $\mathcal{D}$.

Baseline Models

We first propose three strong baselines that build on the state-of-the-art Tucker fusion technique [8,5] to generate visual-semantic embeddings for VQA. These baselines are: (a) In the Concatenation Model, we concatenate q and $v_2$ and generate the joint embedding e by applying the multimodal fusion Φ to q and ($v_2$ ⊕ q). The joint embedding is used to refine the grid-level feature $v_1$ by applying attention α.
(b) For the Dual Attention Model, e is generated as $e = \Phi(q, v_2)$. This joint embedding generated from the pooled image feature is used to apply attention on the grid-level image feature $v_1$, hence the name dual attention model. (c) The Grid Attention Model generates the joint embedding directly from the grid-level image feature $v_1$; it performs best among our baselines (Table 3), which shows its effectiveness in jointly embedding visual-semantic features. Thus our proposed model uses $v_1$ to generate the joint embedding of the exemplars E.

Results

Benchmarking VQA models on OW-VQA: We benchmark existing VQA models on the OW-VQA dataset and report their performance on both versions of our proposed split. From Table 2, we can see that VQA models which incorporate a multimodal (visual-semantic) embedding (i.e. pooling [9] or fusion [5]) achieve higher performance in both versions of OW-VQA than models which only use the semantic embedding to generate visual attention. Our exemplar-based approach further refines the visual attention by transferring knowledge from the exemplar set, and we report 1.4% and 0.9% overall accuracy gains over the closest state-of-the-art method on v1 and v2 respectively. Such an improvement without using any external knowledge base (i.e. complementary training on Visual Genome [15], or external image and text corpora) and/or model ensembles justifies our approach of transferring knowledge from exemplars. Furthermore, the accuracy scores of the VQA models reported in Table 2 drop significantly when evaluated on OW-VQAv2 compared to v1, as the IQA triplets in v2 have less semantic bias. It can also be seen that the joint embedding attention models are more robust against semantic bias than the semantic attention models (an overall accuracy drop of ∼3.5% compared to ∼5.1%). This further strengthens our motivation to make use of such a joint embedding space, which captures highly discriminative multi-modal features.

Performance drop when evaluated on Unknown: We perform an ablation study to quantify the role of the different components of our proposed model on the OW-VQAv2 dataset, and compare the performance of our baseline models and the full model, along with an exemplar-attention-only variant. In this experiment, we train the models on Trainset and evaluate on Valset, Valset-Known and Valset-Unknown, which enables a comparative analysis of the models' ability to reason about Known and Unknown concepts (see Table 3). We also report the number of trainable parameters required for each model. From the bar plot, it can be observed that, when trained with only Known concepts, all model variants achieve higher accuracy on Valset than on Valset-Unknown, and lower accuracy than on Valset-Known. Among the baseline methods, the grid attention variant achieves the highest accuracy with the least number of trainable parameters. Interestingly, when only the joint feature encoding from the exemplar is used (the exemplar-attention-only variant), it achieves a relatively reasonable overall accuracy of ∼51%. This shows that the exemplar feature indeed encapsulates valuable information for the VQA task. Our full model incorporates both grid attention and exemplar attention with only a small increase in the number of trainable parameters. From Table 3, note that the accuracy difference between Known and Unknown concepts is 6.1 for JEX, which is 12.68% lower than that of the grid attention model. This quantifies the value added by using exemplars to bridge the gap in comprehending Unknown concepts.
Evaluation on semantically separated VQA splits: We evaluate our exemplar-based approach on the semantically motivated VQA-CP [1] and Novel VQA [20] datasets, where the former separates the challenging semantic concepts into the test set and the latter places the least frequent nouns and their associated IQA triplets in the test set. Although our motivation is orthogonal and our definition of Novel Concepts is heterogeneous to these semantically motivated approaches, we showcase the effectiveness of our exemplar-based approach in their settings. In Table 5, we compare the performance of the JEX model on the Novel-VQA split with the performances of the baseline and proposed methods (Arch-1 and Arch-2) reported in [20]. Our exemplar-based approach outperforms the best variants of Arch-1 and Arch-2 by 13.3% and 15.2%. It is to be noted that even though the approaches proposed by Ramakrishnan et al. [20] incorporate external knowledge, both semantic (i.e. books) and visual (i.e. examples from ImageNet [7]), our model achieves superior performance by leveraging information from the training examples alone. We also evaluate our model on both versions of the VQA-CP dataset and report performance against other benchmarks and their proposed GVQA [1] model in Table 4. It shows that on VQA-CPv1, GVQA achieves a slightly higher (0.9%) Overall accuracy than JEX, but performs significantly worse (by 18.8%) than JEX on Other-type questions. GVQA employs separate question classifiers for Y/N and non-Y/N (i.e. Num, Other) questions, which accounts for its high accuracy on Y/N questions and results in a higher Overall accuracy. However, when evaluated on VQA-CPv2, JEX outperforms GVQA in both Overall and Other question accuracy, by 5.5% and 19.3% respectively, because VQA-CPv2 has a more balanced distribution of question categories and a considerably lower language bias [11].

Evaluation on the standard VQA setting: We evaluate our model on the VQAv2 validation set [11] and compare its performance with other attention-based models. For a fair comparison, we only compare with their single models without data augmentation, which matches our setting. From Table 6 it can be seen that our model outperforms the Tucker-decomposition-based model by Ben-younes et al. [5], which has a similar architecture to our baseline models. Further, it also outperforms the Support-Set model proposed by Teney et al. [24] in a similar setting, where the support set contains example representations of questions, answers and images. Interestingly, the overall accuracy of GVQA [1] without an ensemble and/or oracles is 18.9% lower than that of JEX in the standard VQA setting.

² Compared with k=1, where only one nearest neighbour was used.

Conclusion

Existing VQA systems lack the ability to generalize their knowledge from training to answer questions about novel concepts encountered during inference. In this paper, we propose an exemplar-based transfer learning approach that utilizes the closest Known examples to answer questions about Unknown concepts. A joint embedding space is central to our approach; it effectively encodes the complex relationships between the semantic, visual and output domains. Given the IQ pair and the exemplar embedding in this space, the proposed approach hierarchically attends to visual details and focuses attention on the regions that are most useful to predict the correct answer. We propose a new Open-World VQA train/test split to fairly compare the performance of VQA systems on Known and Unknown concepts.
Our exemplar-based approach achieves significant improvements over state-of-the-art techniques in the proposed OW-VQA setting as well as the standard VQA setting, which reinforces the notion of transferring knowledge from a rich joint embedding space to reason about Unknown concepts.

Supplementary Material

A. Dataset generation protocol for OW-VQA

For each MSCOCO [16] category c, $N_i$ represents the number of images in which c appears and $N_t$ represents the number of times c appears in the dataset (i.e. total instances). These statistics are calculated after merging the MSCOCO Train2014 and Val2014 splits. Fig. 6 shows $N_i$ and $N_t$ for the categories within each super-category of MSCOCO [16]. The category names are color-coded to represent the super-category labels and the respective Unknown categories. From this figure, we can see that the categories which appear in the least number of images, the least number of times, are selected as Unknown. Table 7 presents statistics of the VQA dataset following the proposed Known/Unknown concept separation protocol described in 'Image-Question-Answer Split' of Sec. 4 of the main paper. We can see from the statistics that Unknown categories are present in ∼16% of the training and validation images. Furthermore, it can be observed that, when the IQA triplets from the training and validation splits of the VQA datasets are separated on the basis of Known and Unknown concepts, the Unknown IQA triplets also amount to ∼16% of the total. This is an indication that our dataset preparation protocol is able to uniformly separate Known and Unknown concepts even in a crowd-sourced, complex, multi-modal dataset like VQA. Such a uniform split allows for an effective evaluation of a VQA model's ability to reason about Unknown concepts. The Trainset and Testset of the OW-VQA dataset consist of the Known and Unknown IQA triplets from the corresponding Train splits of the VQA datasets respectively. We also propose two validation splits, called Valset-Known and Valset-Unknown, drawn from the Val splits of the VQA datasets. Valset-Known contains the Known IQA triplets and Valset-Unknown contains the Unknown IQA triplets from the Valset of the respective version. The subdivision of Valset into Known and Unknown splits allows evaluation on both concept types.

B. Evaluation protocol for OW-VQA

There are two main ways to evaluate a model's performance on the proposed OW-VQA dataset. (a) For the purpose of debugging and running validation experiments, one can train a VQA model on the OW-VQA Trainset and evaluate on Valset-Known, Valset-Unknown or the whole Valset. The OW-VQAv1 Trainset contains ∼187k IQA triplets, and Valset-Known and Valset-Unknown contain ∼101k and ∼19k IQA pairs respectively. OW-VQAv2 has more IQA triplets: the Trainset contains ∼336k IQA triplets, and Valset-Known and Valset-Unknown contain ∼178k and ∼34k. (b) For a more comprehensive evaluation, it is recommended to train the model on the OW-VQA Trainset and evaluate on Testset or Testset+Valset-Unknown, as they have more Unknown IQA pairs than Valset-Unknown. For OW-VQAv1 and v2, the Testset contains ∼36k and ∼66k IQA triplets respectively. When combined with the respective Valset-Unknown, this presents an even larger setting for evaluation on Unknown concepts. Fig. 5 reports the overall accuracy of the Grid Attention baseline model trained on Trainset and evaluated on the validation splits of OW-VQAv1 and OW-VQAv2. It can be seen that the Known-Unknown accuracy gap is lower in v1 and higher in v2.
This is due to the language bias present in the VQAv1 dataset: the model uses this bias to correctly answer questions about Unknown concepts. Table 8 compares the proposed JEX model with other contemporary VQA models on both versions of VQA-CP [1], including accuracy scores for all question categories. It can be seen that GVQA [1] achieves higher accuracy on the Y/N questions than the proposed JEX model. As mentioned in the 'Evaluation on semantically separated VQA splits' part of Section 5.3 of the main paper, GVQA employs a separate training module for Y/N questions, which helps it achieve higher accuracy for Y/N questions. However, for all other question categories the proposed JEX model achieves higher accuracy than GVQA.
This paper builds upon current methods to increase their capability and automation for 3D surface construction from noisy and potentially sparse point clouds. It presents an analysis of an artificial-neural-network surface regression and mapping method, describing caveats, improvements, and the justification for this different approach.
An image-processing-inspired approach for surface de-noising was suggested by @cite_8 . It removes noise by applying a Wiener filter, which approximates the components of the surface as a statistical distribution. There are two problems with this algorithm in our context. First, it needs to be decided whether this algorithm should be applied at all: unnecessary smoothing might remove features that describe the underlying geometry, although there is some attempt to apply a surface-based anisotropic diffusion to preserve edges. In addition, the formula used requires the user to both know and supply the noise, denoted by the variance @math @cite_8 . It may not be possible to determine the noise of the data, as it is an unstructured point cloud.
{ "abstract": [ "The convex hull of a set of points is the smallest convex set that contains the points. This article presents a practical convex hull algorithm that combines the two-dimensional Quickhull algorithm with the general-dimension Beneath-Beyond Algorithm. It is similar to the randomized, incremental algorithms for convex hull and delaunay triangulation. We provide empirical evidence that the algorithm runs faster when the input contains nonextreme points and that it used less memory. computational geometry algorithms have traditionally assumed that input sets are well behaved. When an algorithm is implemented with floating-point arithmetic, this assumption can lead to serous errors. We briefly describe a solution to this problem when computing the convex hull in two, three, or four dimensions. The output is a set of “thick” facets that contain all possible exact convex hulls of the input. A variation is effective in five or more dimensions." ], "cite_N": [ "@cite_8" ], "mid": [ "2153504150" ] }
Increasing the Capability of Neural Networks for Surface Reconstruction from Noisy Point Clouds
Accurate surface reconstruction from noisy point cloud data is still an unsolved challenge. Raw point cloud data is unstructured and may be noisy or sparse. The challenge can be tackled with a neural network (NN) approach [2] that learns a mapping from a 2D parameterisation of the 3D point cloud data, resulting in a surface that is less sensitive to noise. This approach differs from standard NN applications in that both the hyperparameters and the intrinsic parameters change during training in order to find the optimum model [2]. This paper considers the 2D parameterisation method, directly comparing the dimension reduction currently used with other familiar methods. A least squares spline fitting is applied to define and interpolate the boundaries of the surface mesh. The interpolants are carefully chosen by a method presented here, enabling a fit that is more faithful to the boundary of objects. A working implementation is demonstrated, producing good quantitative and qualitative results on a range of noisy datasets.

III. MATERIALS AND METHODS

We aim to create an accurate surface from noisy point cloud data by analysing and improving a NN approach by [2]. The steps required to achieve the goal of this paper are:

1) Use the Isomap algorithm to reduce the dimensions of the point cloud, $\mathbb{R}^3 \to \mathbb{R}^2$
2) Train a NN to map $\mathbb{R}^2 \to \mathbb{R}^3$. This mapping is learned by sampling the initial point cloud and using the points as training and test data.
3) Use a multi-depth path method to choose the outer-most points of the 2D manifold to be interpolated
4) Least-squares fit a cubic B-spline to the chosen interpolants with a justifiable choice of regularisation
5) Re-sample points inside of the boundary dictated by the B-spline
6) Find the triangular tessellation of the point cloud by Delaunay triangulation
7) Feed the NN the points in $\mathbb{R}^2$ to produce the target vertices in $\mathbb{R}^3$
8) Mesh the output according to the triangular topology produced in $\mathbb{R}^2$

A. Feature Selection and Dimensional Reduction

The purpose of dimension reduction in this case is to allow a NN to learn a mapping between the 2D coordinates and the 3D coordinates [2]. It also simplifies the problem of surface generation: the boundary of the point set can be easily defined in 2D and the topology of the mesh established. Later, the vertices of the mesh are fed to a trained NN. The first step of the proposed algorithm is to embed the points in $\mathbb{R}^3$ into $\mathbb{R}^2$. Our feature space will only ever be $\mathbb{R}^3$, as is intrinsic to generating 3D surfaces in Euclidean space. To facilitate the 2D embedding, the use of the Isomap algorithm is suggested, originally proposed by Tenenbaum et al. in 2000 [19]. The holistic reason for using Isomap over other dimension reduction algorithms is that Isomap intends to preserve global geometry [19]. Given that the goal is to extract the underlying geometry and not the noise of the point cloud so as to produce a smooth surface, the 2D embedding must be representative. The hallmark of Isomap is that points are reconstructed according to their pairwise geodesic distance. A graph for each neighborhood is used to represent the distance path, where each edge is weighted, usually by Euclidean distance. A neighborhood is defined either by K points or by a radius σ.

1) Determine the neighborhood for each pair of points i, j, given by $d_x(i, j)$, and store the relations as a weighted graph G.
2) Compute the shortest path distance $d_G(i, j)$ using an algorithm such as Floyd-Warshall. Once the graph distances are obtained as a matrix $D_G = \{d_G(i, j)\}$, the Multi-dimensional Scaling algorithm is applied.
3) Finally, the coordinate vectors in the resulting space Y are reconstructed to reduce the cost function:

$$E = \|\tau(D_G) - \tau(D_Y)\|_{L^2} \qquad (1)$$

from [19], where $D_Y$ represents the matrix of Euclidean distances and τ converts the distances to inner products.
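A minimal sketch of step 1 of the pipeline, assuming scikit-learn; the file name and parameter values are illustrative only, and n_neighbors plays the role of K discussed below.

```python
import numpy as np
from sklearn.manifold import Isomap

# Hypothetical input: an (n, 3) noisy point cloud loaded from disk.
points_3d = np.loadtxt('bunny.xyz')

# Isomap: K-nearest-neighbour graph -> geodesic distances -> MDS.
iso = Isomap(n_neighbors=12, n_components=2)
points_2d = iso.fit_transform(points_3d)   # the 2D parameterisation

# points_2d later serves as input for the NN that learns R^2 -> R^3.
```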
Once the graph distances are obtained as a matrix $D_G = \{d_G(i, j)\}$, the multi-dimensional scaling (MDS) algorithm is applied. 3) Finally, the coordinate vectors in the resulting space $Y$ are reconstructed to minimise the cost function $E = \|\tau(D_G) - \tau(D_Y)\|_{L^2}$ (1), from [19], where $D_Y$ represents the matrix of Euclidean distances and $\tau$ converts the distances to inner products. B. Comparison 1) Qualitative Comparison: To make our investigation more critical we compare Isomap and Locally Linear Embedding (LLE) [2]. de Silva and Tenenbaum, the original authors behind Isomap, published a comparison between Isomap and LLE two years earlier. In defence of LLE it was suggested that it would be useful on a broader range of data where the local topology is close to Euclidean [10]. LLE is attractive in this regard as many surfaces to be produced will have local geometry that is close to Euclidean. We also desire a representative 2D embedding that is conducive to an accurate output once input to the learned NN. When noise is present it is more important that the general structure of the point cloud is learned, and not the noise. Global methods, like Isomap, tend to give a more 'faithful' representation with respect to global geometry [10]. With two algorithms, and two desired properties, we attempt to evaluate (and decide on the best) applicability to the niche problem in this section. Before a surface was constructed we compared the output of the trained NN against the original data (the Stanford Bunny point cloud with 1600 points) to ascertain which two-dimensional input gives us results closest to the original. All methods gave expected results on a more complex and dense point cloud. The Hessian eigenmap method by Donoho and Grimes [21] performs poorly at capturing the relative scale of the ears. The modified-weight LLE method [22] seems to give the most intuitive results. 2) Quantitative Comparison: We conducted further systematic tests to distinguish the proper use of either a global geodesic method like Isomap or the Modified LLE. In order to ensure results were not reflective of a particular dataset or neural network topology, all combinations of activation functions from a pool of well known functions were chosen and tested. The size constraints of the network were kept relatively small to account for a lengthy training and test run time. Both methods are dependent on the activation functions, and both give similar results in terms of error. However, it became apparent that a second, unintended independent variable, the cardinality of the points in the data set, affects the final NN output error for the different methods of 2D embedding. The results show that the Modified LLE method has the edge for very sparse data, whereas Isomap gave better results on denser (relatively speaking) datasets. Modified LLE often failed to run at all on dense datasets, and we have discounted the traditional method of LLE, especially given its non-deterministic output. Therefore, from the experiments conducted, the use of Isomap for this problem is suggested, unless it is known in advance that very sparse data will be used. Isomap, however, is not perfect. One problem of Isomap, which the next stage of our method tries to mitigate, is that it often highlights outliers. Outliers from noise that occur outside the manifold may be included in the transformation of the points from 3D to 2D. Both methods also suffer when 'K' (the number of points in the neighborhood) is chosen incorrectly, which leads to poor results.
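As a rough illustration of the embedding comparison above, the following is a minimal sketch using scikit-learn, which provides both manifold methods. The exact settings of the original implementation are not given in the text, so the helper name `embed_2d` and the default neighbourhood size `k` are our own illustrative choices, and the neighbourhood size is deliberately left to the caller, mirroring the K-selection problem discussed next.

```python
import numpy as np
from sklearn.manifold import Isomap, LocallyLinearEmbedding

def embed_2d(points: np.ndarray, k: int = 10, method: str = "isomap") -> np.ndarray:
    """Embed an (n, 3) point cloud into 2D so a NN can learn the inverse map.

    `method` selects between the global geodesic approach (Isomap) and
    Modified LLE, mirroring the comparison discussed above. Note Modified
    LLE may fail on dense clouds, as observed in the text.
    """
    if method == "isomap":
        reducer = Isomap(n_neighbors=k, n_components=2)
    else:
        reducer = LocallyLinearEmbedding(n_neighbors=k, n_components=2,
                                         method="modified")
    return reducer.fit_transform(points)

# Usage (hypothetical input file):
# points = np.loadtxt("bunny.xyz")          # (n, 3) noisy point cloud
# xy = embed_2d(points, k=10, method="isomap")
```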
This method employs no heuristic to choose an optimum K. Extreme error values may indicate that the value of K should change; however, a more concrete system of heuristics must be introduced to reduce user interaction. IV. TRAINING A NEURAL NETWORK A neural network is used to learn the mapping between our embedded 2D points and the ground truth 3D point cloud. We use a NN in this context as it is hoped its interpolation property captures the general structure of the point cloud and not the noise. Noisy data is approximated by a regression function, resulting in a smooth approximation of the underlying distribution, thus avoiding the scattered non-uniform raw data. Given any point cloud, the NN can fit a function to the noisy data that can represent any general mapping. This property is most desirable as it implies the NN method can be applied to a huge range of different data. The final form of the whole network, as used in [2], is $\vec{D}_k = f\left(\sum_{j=1}^{n} w_{kj}\, f\left(\sum_{i=1}^{2} w_{ji} \vec{P}_i + w_{j0}\right) + w_{k0}\right)$ (2), where $f$ is to be decided. The method of training builds on that of Yumer and Kara, and this is the only part of the work that has not been changed in an important way (a sketch of this search loop is given in code below): 1) segment the input data into random samples of 85% training, 10% test, and 5% validation; 2) initialize a network with a single hidden layer and a single neuron; 3) train the network until the validation set performance converges, with back-propagation and early stopping to prevent overfitting; 4) record the weighted training-test set performance for the current network configuration; 5) increase the number of hidden neurons by 1 and iterate steps 3-5 until the weighted performance converges or the number of neurons reaches a maximum; 6) record the number of neurons and the test performance for the current layer; 7) iterate steps 3-7 until the number of layers reaches a maximum; 8) return the network configuration with the best weighted performance [2]. V. SURFACE GENERATION A. Defining the manifold Once the point set is embedded in two dimensions, the next task is to sample the edge points of the now 2D point cloud and fit a curve to define the manifold. Once the 2D vertices generated by the procedure are fed to the trained NN, the resulting points in 3D represent a less noisy version of the intended surface. The output points of the NN become the vertices of a triangular mesh. Prior research makes the assumption that the boundary point set of the 2D embedding is a reasonable outline of the expected shape, but with very noisy data, outliers outside the expected boundary would still be considered valid points, causing the manifold to appear perturbed. We use the idea of re-sampling the inner points with a regular grid, based on Yumer and Kara's work [2]. The method chosen here is outlined below. It makes no assumptions about the quality of the boundary points and can handle outlier points not representative of the manifold. 1) Sample a proportion of the outermost points in the cloud using the multi-line sampling method described in 'Choosing the Interpolants'. 2) Use the sampled points to fit a cubic B-spline curve using a least squares fit with regularisation [23]. 3) Superimpose a regular grid on the point set and uniformly re-sample points inside the B-spline loop. In order to represent the outline of the 2D point cloud we use a cubic B-spline curve to better fit the local boundary of the dataset [15].
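Returning to the training procedure listed above, the following is a minimal sketch of the greedy topology search, with scikit-learn's MLPRegressor standing in for the original Pybrain network. The function name, the convergence tolerance, and the approximation of the 85/10/5 split (a 15% held-out test set plus MLPRegressor's internal validation fraction) are our own assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def search_topology(xy, xyz, max_neurons=20, max_layers=3, tol=1e-4):
    """Greedy topology search in the spirit of steps 1-8 above.

    Grows the current hidden layer neuron-by-neuron, then adds layers,
    keeping the configuration with the best held-out error.
    """
    X_tr, X_te, y_tr, y_te = train_test_split(xy, xyz, test_size=0.15,
                                              random_state=0)
    best_err, best_layers = np.inf, None
    layers = []
    for _ in range(max_layers):
        layers.append(1)                     # start the new layer at 1 neuron
        prev_err = np.inf
        for n in range(1, max_neurons + 1):
            layers[-1] = n
            net = MLPRegressor(hidden_layer_sizes=tuple(layers),
                               early_stopping=True,      # step 3
                               validation_fraction=0.05,
                               max_iter=2000, random_state=0)
            net.fit(X_tr, y_tr)
            err = mean_squared_error(y_te, net.predict(X_te))
            if err < best_err:               # step 4: record best so far
                best_err, best_layers = err, tuple(layers)
            if prev_err - err < tol:         # step 5: performance converged
                break
            prev_err = err
    return best_layers, best_err             # step 8
```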
The cubic B-spline used for the boundary is defined as $S_i(t) = \sum_{r=0}^{3} P_{i+r} B_r^k(t)$ for $0 \le t \le 1$ (3), where $r$ indexes the blending ratios and $k$ is the degree of the Bernstein basis, with $B_i^1 = \frac{t - t_i}{t_{i+1} - t_i} + \frac{t_{i+2} - t}{t_{i+2} - t_{i+1}}$ (4). When $t_i \le t \le t_{i+1}$ the Bernstein basis and its associated control point blend in; when $t_{i+1} \le t \le t_{i+2}$ they blend out. The task of deciding which control points $p_i$, which sequence of values $t_i$ (the knot vector), and which additional weights interpolate the points best will be discussed in surface fitting. B. Choosing the interpolants For the purposes of fitting the B-spline we are reducing the problem to a least squares interpolation, and the interpolation is only as good as the interpolants are representative. In similar work it was suggested that the outermost path be connected through 4 corners by a polyline [2]. This works just fine for relatively noise-free data, but it became apparent that the method can be improved for noisy data sets and made more precise for non-noisy data. We cannot use a 1-point-deep outer loop, as the probability of this being true to the noise-free representation of the surface outline is very low. With the addition of noise, the outer loop will very likely be perturbed. However, unless the points do not even remotely resemble the expected geometry, we do know that somewhere between the outermost points and a few points in the centroid direction lies the outline of the 'perfect' noise-free shape. • For each depth: • pick 8 (or more if desired) corners according to the furthest distance in a circular sector from the centroid; • for each corner: - segment the point space into rectangles containing all points between one corner and its adjacent corner; - consider the points between them as a weighted graph (using a KD-tree in this case); - set the weights to $w = c_1 d_s + c_2 d_c$, where $d_s$ and $d_c$ are the straight-line distances to the adjacent point and to the centroid, respectively, and $c_1$, $c_2$ define their 'importance'; - find the shortest path to the adjacent point using $w$ as the criterion for next-point selection; • strip the path containing the selected points away from the cloud so that it is not considered at the next depth; • combine the paths returned for all corners. This method differs from previous attempts as more 'anchor' points are selected according to the spoke sampling criterion mentioned earlier. The incorporation of more anchor points gave better local precision for finding a path, as the shorter distance, and thus the fewer points considered between each anchor, leaves the algorithm less room for error, for example by a short-circuiting path. Further, our method allows an adjustable depth of points to be sampled, so more interpolants can be considered when fitting a boundary. C. Surface fitting In order to fit the spline to the selected interpolants we use Dierckx's algorithm for least squares fitting a B-spline with variable knot vectors, where the knot vector holds the values of $t_i$ that define the amount of 'blending' for each control point on the curve (equation 4). The general form of the least squares fitting of a B-spline is arranged by Dierckx as $\delta = \sum_{r=1}^{m} \left( w_r y_r - \sum_{i=-k}^{g} p_i w_r B_i^{k+1}(x_r) \right)^2$ (5), from [23], with data points $(x_r, y_r)$, a set of weights $w_r$, the control points $p_i$, the number and position of the knots $t$, and the blending ratio and Bernstein basis $B_i^k$ substituted from (4). We have only shown a system where the knot vectors are fixed.
Here we will keep the description of picking the appropriate knots brief and suggest readers see [23] for a more rigorous explanation. Dierckx avoids the problem of coinciding knots, and the existence of knots very near the basis boundary $[a, b]$, by penalising the least squares spline objective function (5) with the following heuristic: $F(t) = \sigma(t) + p\,P(t)$ (6), from [23], where $p$ (not to be confused with a control point) is set according to some heuristic and $P(t) = \sum_{i=0}^{g} (t_{i+1} - t_i)^{-1}$ (7), from [23]. It is plain to see that the penalty is inversely proportional to the separation of two adjacent knots, so closely spaced knots are penalised. Due to local stopping points on the boundary [23], these constraints help avoid poor gradient-based minimisation. With the residual error $\sigma$ from equation (5), Dierckx suggests the objective function can be subject to the constraint $\sigma \le s$, where $s$ is a user-selected constant [23]. This allows the error function some flexibility, so that the spline is not forced to traverse every point exactly. For the agenda of this paper a smooth fit is highly desirable. To this end, we attempt to build upon the smoothing property introduced by Dierckx. Picking the value of $s$ is the most challenging task. While there exist some heuristics for setting $s$ before fitting the spline, an improvement over setting an arbitrary regularisation term was obtained by using the variance of the $y$ values in the target point set: $\delta \le \lambda \frac{\sum_{i}^{n} (y_i - \hat{y})^2}{|y|}$ (8). Given that we want a curve that smoothly traverses the interpolants, the rationale behind using the variance is that the larger the spread of values, the greater the allowance for fitting error, and thus the smoother the fit for jagged interpolants. $\lambda$ is set at a default value but can be tweaked should the user desire. The pitfall of this method is that different values of $\lambda$ can still be tailored to each dataset; thus the selection of the regularisation is not a fully automated process for finding the optimum qualitative result. One problem with triangulation algorithms is that they are not well suited to concave point clouds and create edges outside the outer loop of the points. Currently the working implementation for surface generation uses a basic method for removing triangles outside the boundary defined by the fitted B-spline. The algorithm simply: • computes the centroid of each triangle in the triangulation; • considers the spline as a polygon loop that should encompass the centroids of all correct triangles; • removes triangles whose centroids lie outside the B-spline polygon. Note that this method assumes an optimal regularisation term has been picked; otherwise jagged, overfit boundaries lead to the removal of triangles that would contribute to a good definition of the surface. VI. RESULTS AND OBSERVATIONS The implementation is written in Python and has 3 main dependencies: Pybrain, Sklearn [29] and Matplotlib. We show the effect of using a retrained network versus a copy of the best network found in training. If 'Retrained' is 'Yes', this indicates that a new NN was trained on all the data available after the best topology was discovered during training. If 'Retrained' is 'No', then an exact copy of the trained network with the best topology found during training was used on the whole data set. 'Final Error' reflects the mean squared error of the network output given the whole data set as input, and not just random samples.
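A minimal sketch of the smoothed boundary fit and the centroid-based triangle culling described above follows. scipy.interpolate wraps Dierckx's FITPACK routines, so its smoothing factor `s` plays the role of the constraint in equation (8); the function names, the closed-curve assumption (`per=True`), and the default `lam` are illustrative choices, not the paper's code.

```python
import numpy as np
from scipy.interpolate import splprep, splev
from scipy.spatial import Delaunay
from matplotlib.path import Path

def fit_boundary(loop_pts: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Least squares cubic B-spline through the sampled boundary loop.

    The smoothing factor `s` is set from the variance of the loop's y
    values, following the heuristic of equation (8); `lam` is the free
    parameter the text leaves to the user.
    """
    y = loop_pts[:, 1]
    s = lam * np.sum((y - y.mean()) ** 2) / len(y)
    tck, _ = splprep([loop_pts[:, 0], loop_pts[:, 1]], s=s, per=True, k=3)
    bx, by = splev(np.linspace(0.0, 1.0, 400), tck)   # dense boundary samples
    return np.column_stack([bx, by])

def cull_triangles(points2d: np.ndarray, boundary: np.ndarray) -> np.ndarray:
    """Delaunay-triangulate, then drop triangles whose centroid leaves the loop."""
    tri = Delaunay(points2d)
    centroids = points2d[tri.simplices].mean(axis=1)  # (ntri, 2) centroids
    keep = Path(boundary).contains_points(centroids)  # point-in-polygon test
    return tri.simplices[keep]
```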
It is worth noting that the error in this case is compared against the noisy data, so a very low error can mean that the points have overfit to the noise. It should also be noted that this was not the case for the earlier results in the 'dimensionality reduction comparison' section, where the error was defined against the 'perfect', error-free parametric representation of the intended object. Overfitting on the test data needs to be avoided, as it is the overall structure of the geometry that must be captured, and the test set, while sampled randomly, will skew the Final Error if we allow the NN to overfit. The initial preconception was that a retrained network, with optimum hyperparameters, would perform better: having been trained on the whole data set, one would expect a closer representation of the ground truth. Experimentation shows, however, that a larger number of epochs causes the error of the retrained NN to diverge from the improvement shown by the non-retrained NNs. The quality of the boundary of the point cloud directly affects how perturbed the resulting tessellation will appear in the final output. We will compare other methods of choosing interpolants and show the importance of having more points afforded to interpolation by allowing a variable depth of samples to form the outer loop. In order to keep the independent variables limited to one, and to show the problem in a simple manner, the following images show the least squares fit of a Bezier curve for simple point sets. This is the same process as [2]; what changes is the sampling method. VII. CONCLUSION Discussed in this paper are methods for surface generation from noisy point cloud data. Through experimentation we have been able to expose the internals of the algorithms suggested and give a closer comparative review of this highly specific application of neural networks. The presented method gives good quantitative and qualitative results for a variety of different data sets. With a little more refinement, particularly in the training of the NN, it is hoped that this method can be extended to more complex 3D point clouds. A software improvement which would speed up the training time of the neural network is the use of a more modern library than Pybrain. On top of this, a better training method should be used to reduce training time. Currently, the training method is simply gradient descent back-propagation. Not much attention has been paid to parameters like the momentum update, the learning rate and other hyperparameters. It would be prudent to refine the method of training, as this will be the slowest part of the algorithm. While most of the datasets and neural networks in this and other papers [2] are constrained to small, manageable sizes, the potential of larger, that is deeper, networks with many layers, as defined in [28], should not be overlooked for complex mappings. However, this is only feasible in a reasonable time frame if the selection of the hyperparameters is more efficient than an exhaustive grid search; otherwise segmentation of the problem will need to be employed. Finding the best regularisation value should also be on the agenda for improvement. The regularisation parameter, while manifold specific, is fixed throughout the fitting of the B-spline. Ideally there needs to be some variability in the amount of regularisation at the moment of the least squares update.
It would be good to follow a ridge regression trend in further implementations of this algorithm, where the regularisation can be more closely dependent on the spline being fit, in a similar way to how Caitlin et al use the spherical harmonic order to construct the Tikhonov matrix [13]. As mentioned, regularisation strongly affects the qualitative results, and currently a free parameter $\lambda$ exists that is not automatically decided. There also needs to be some decision about when hole filling is appropriate. This method will blindly re-sample the inside of the point cloud whether or not holes were intended. To be able to realise more complex surfaces with intentional holes, segmentation could be used, but this immediately increases the complexity of the algorithm. To find a global method which fills in only unintended holes is a very challenging task, but it would improve this algorithm.
3,726
1811.12464
2902055211
This paper builds upon the current methods to increase their capability and automation for 3D surface construction from noisy and potentially sparse point clouds. It presents an analysis of an artificial neural network surface regression and mapping method, describing caveats, improvements and justification for the different approach.
Yumer and Kara suggest a NN regression method of surface fitting and hole stitching. The flexibility achieved by an adaptive neural network topology differs from previous attempts, as the ideal topology of the network obtained (the hyperparameters) is not fixed @cite_11, meaning the network can be tailored to each point cloud automatically. This method is good for removing noise, as the underlying geometry of the point data, and not the random noise, is represented in the final surface.
{ "abstract": [ "Abstract This paper rigorously establishes that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available. In this sense, multilayer feedforward networks are a class of universal approximators." ], "cite_N": [ "@cite_11" ], "mid": [ "2137983211" ] }
In a slightly different problem, where a NN is used to reconstruct the shape of a 3D object from its shading in a 2D image, @cite_3 show from experiment that quantitative improvement does not necessarily lead to qualitative improvement. This is something to consider when using a 'black box' function like a neural network, especially where there could be some information loss. In this regard we must ensure that the final model is representative of the ground truth and not rely only on an error measure. It is suggested that more research must be done on 3D surface quality metrics @cite_3. Visual quality will be assessed in the method presented here alongside quantitative results, in the absence of such quality metrics.
{ "abstract": [ "Shape-from-shading (SFS) methods tend to rely on models with few parameters because these parameters need to be hand-tuned. This limits the number of different cues that the SFS problem can exploit. In this paper, we show how machine learning can be applied to an SFS model with a large number of parameters. Our system learns a set of weighting parameters that use the intensity of each pixel in the image to gauge the importance of that pixel in the shape reconstruction process. We show empirically that this leads to a significant increase in the accuracy of the recovered surfaces. Our learning approach is novel in that the parameters are optimized with respect to actual surface output by the system. In the first, offline phase, a hemisphere is rendered using a known illumination direction. The isophotes in the resulting reflectance map are then modelled using Gaussian mixtures to obtain a parametric representation of the isophotes. This Gaussian parameterization is then used in the second phase to learn intensity-based weights using a database of 3D shapes. The weights can also be optimized for a particular input image." ], "cite_N": [ "@cite_3" ], "mid": [ "2084560625" ] }
Increasing the Capability of Neural Networks for Surface Reconstruction from Noisy Point Clouds
Accurate surface reconstruction from noisy point cloud data is still an unsolved challenge. Raw point cloud data is unstructured and may be noisy or sparse. The challenge can be tackled with a neural network (NN) approach [2], to learn a mapping from a 2D parameterisation of 3D point cloud data, resulting in a surface that is less sensitive to noise. This approach differs to standard NN applications as both the hyperparameters and intrinsic parameters change during training in order to find the optimum model [2]. This paper considers the 2D parameterisation method, directly comparing the current dimension reduction used and other familiar methods. A least squares Spline fitting is applied to define and interpolate the boundaries of the surface mesh. The interpolants are carefully chosen by a method presented to enable a fit that is more faithful to the boundary of objects. A working implementation is demonstrated, producing good quantitative and qualitative results on a range of noisy datasets. III. MATERIALS AND METHODS We aim to create an accurate surface from noisy point cloud data by analysing and improving a NN approach by [2]. The steps required to achieve goal of this paper are: 1) Use the Isomap algorithm to reduce the dimensions of the point cloud R 3 → R 2 2) Train NN to map R 2 → R 3 . This mapping learned by sampling the initial point cloud and using the points as training and test data. 3) Use a multi-depth path method to choose the outer-most points of the 2D manifold to be interpolated 4) Least Squares fit a cubic B-spline to the interpolants chosen with a justifiable choice of regularisation 5) Re-sample points inside of the boundary dictated by Bspline 6) Find the triangular tessellation of the point cloud by Delaunay triangulation 7) Feed NN the points in R 2 to produce the target vertices in R 3 8) Mesh output according to triangular topology produced in R 2 A. Feature Selection and Dimensional Reduction The purpose of dimension reduction in this case is to allow a NN to learn a mapping between the 2D coordinates and the 3D coordinates [2]. Also, it simplifies the problem of surface generation: The boundary of the point set can be easily defined in 2D and the topology of the mesh established. Later the vertices of the mesh are fed to a trained NN. The first step of the proposed algorithm is to embed the points in R 3 to R 2 . Our feature space will only ever be R 3 , as is intrinsic to generating 3D surfaces in Euclidean space. To facilitate the 2D embedding the use of Isomap algorithm is suggested, originally proposed by Tenenbaum et al in 2000 [19]. The holistic reason for using Isomap over other dimension reduction algorithms is because Isomap intends to preserve global geometry [19]. Given the goal is to extract the underlying geometry and not the noise of the point cloud to produce a smooth surface, the 2D embedding must be representative. The hallmark of Isomap is that points are reconstructed according to their pairwise geodesic distance. A graph for each neighborhood is used to represent the distance path where each edge is weighted, usually by euclidean distance. A neighborhood is defined by either by K points or a radius denoted by σ. 1) Determination of neighborhood for each pair of points j,i is given by d x (i, j) and store relations as a weighted graph G. 2) Compute the shortest path distance d G (i, j) using an algorithm such as Floyd-Warshall. 
Once the graph distances are obtained as matrix D G = d G (i, i) the Multi-dimensional Scaling algorithm is applied. 3) Finally the coordinate vectors in the resulting space Y are reconstructed to reduce the cost function: E = ||τ (D G ) − τ (D Y )|| L 2(1) from [19] D Y represents the matrix of Euclidean distances. τ converts the distances to inner products B. Comparison 1) Qualitative Comparison: To make our investigation more critical we compare Isomap and Locally Linear Embedding (LLE) [2]. Silvia and Tenenbaum, the original authors behind Isomap, published a comparison between Isomap and LLE two years earlier. In the defence of LLE it was suggested that it would be useful on a broader range of data when the local topology is close to Euclidean [10]. LLE is attractive in this regard as many surfaces to be produced will have local geometry that is close to Euclidean. We also desire a representative 2D embedding that is conducive to an accurate output once input to the learned NN. When noise is present it is more important that the general structure of the point cloud is learned and not the noise. Global methods, like Isomap, tend to give a more 'faithful' representation with respect to its global geometry [10]. With two algorithms, and two desired properties, we attempt to evaluate (and decide on the best) applicability to the niche problem in this section. Before a surface was constructed we compared the output of the trained NN against the original data (the Standford Bunny point cloud with 1600 points) to ascertain which two dimensional input gives us results closest to the original. All methods gave expected results on a more complex and dense point cloud. The Hessian eigenmap method by Donoho and Grime [21] performs poorly on capturing the relative scale of the ears. The modified weight [22] LLE method seems to give the most intuitive results. 2) Quantitative Comparison: We conducted further systematic tests to distinguish the proper use of either a global geodesic method like Isomap or the Modified LLE. In order to ensure results weren't reflective of a particular dataset or neural network topology, all combinations of activation functions from a pool of well known functions were chosen and tested. The size constraints of the network were kept relatively small to account for a lengthy training+test run time. Both methods are dependent on the activation functions and both methods give similar qualitative results as error. However, it became apparent that a second unintended independent variable, being the cardinality of the points in the data set, affects the final NN output error for different methods of 2D embedding. The results show that the Modified LLE method has the edge for very sparse data whereas Isomap gave better results on denser (relatively speaking) datasets. Modified LLE often failed to run at all on dense datasets and we have discounted the traditional method of LLE especially given its non deterministic output. Therefore, from the experiments conducted, the use of Isomap for this problem is suggested, unless its known in advance that very sparse data will be used. IO Isomap, however, is not perfect. One problem of Isomap, that the next stage of our method tries to mitigate, is that it often highlights outliers. Outliers from noise that occur outside the manifold may be included in the transformation of the points from 3D to 2D. Both methods suffer the problem of incorrectly choosing 'K' (points in the neighborhood) and lead to poor results. 
This method employs no heuristic to choose an optimum K. Extreme error values may indicate that the value of K should change however a more concrete system of heuristics must be included to reduce user interaction. IV. TRAINING A NEURAL NETWORK A neural network is used to learn the mapping between our embedded 2D points and the ground truth 3D point cloud. We use a NN in this context as it is hoped this interpolation property captures the general structure of the point cloud and not the noise. Noisy data is approximated by a linear regressor function, resulting in a smooth approximation of the underlying distribution, thus avoiding the scattered nonuniform raw data. Given any point cloud, the NN can fit a function to the noisy data that can represent any general function. This property is most desirable as it implies the NN method can be applied to a huge range of different data. The final form of the whole network as shown for use in [2]. − → D k = f ( n j=1 w kj f ( 2 i=1 w ji − → P i + w j 0) + w k 0)(2) Where f is to be decided. The method of training builds on that of Yumer and Kara's, and this is the only part of the work that has not not been changed in an important way. 1) Segment the input data into random samples of 85% training, 10% test, 5% validation, 2) Initialize a network with a single hidden layer and a single neuron 3) Train the network until the validation set performance converges, with back-propagation and early stopping to prevent overfitting 4) Record the weighted training-test set performance for the current network configuration 5) Increase the number of hidden neurons by 1. Iterate steps 3-5 until the weighted performance converges or the number of neurons reaches a maximum 6) Record the number of neurons and the test performance for the current layer 7) Iterate steps 3-7 until the number of layers reaches a maximum 8) Return the network configuration with the best weighted performance [2] V. SURFACE GENERATION A. Defining the manifold Once the point set is embedded in two dimensions the next task is to sample the edge points of the now 2D point cloud and fit an curve to define the manifold. Once the 2D vertices generated by the procedure are fed to the trained NN the resulting points in 3D represent less noisy version of the intended surface. The output points of the NN become vertices of a triangular mesh. Prior research makes the assumption that the boundary point set of the 2D embedding is a reasonable outline of the expected shape, but with very noisy data outliners outside the expected boundary would still be considered as valid points, causing the manifold to appear perturbed. We use the idea of re-sampling the inner points with a regular grid, based on Yumer and Kara's work [2]. The method chosen here is outlined below. It makes no assumptions about the quality of the boundary points and can handle outlier points not representative of the manifold. 1) Sample a proportion of the outer most points in the cloud using the multi-line sampling method described in 'Choosing the Interpolants' 2) Use sampled points to fit a cubic B-spline curve using Least Squares fit with regularisation [23] 3) Superimpose regular grid on point set and uniformly resample points inside of B-spline loop In order to represent the outline of the 2D point cloud we use a cubic B-spline curve to better fit the local boundary of the dataset [15]. 
The B-spline is defined as follows: S i (t) = 3 r=0 P i+r B k r (t) for 0 ≤ t ≤ 1(3) where r denotes blending ratio and k the degree of the Bernstein basis B 1 i = t − t i t i+1 − t i + t i+2 − t t i+2 − t i+1(4) when t i ≤ t ≤ t i+1 the Berstein basis and associated control point blend in. when t i ≤ t ≤ t i+2 the control point and Bernstein basis blend out. The task of deciding which p i control points, the sequence of values for t i (knot vector) and additional weights that interpolate the points best will be discussed in surface fitting. B. Choosing the interpolants For the purposes of fitting the B-spline we are reducing the problem to a Least Squares interpolation, and the interpolation is only as good as the interpolants are representative. In similar work it was suggested that the outermost path connected by 4 corners by polyline [2]. This works just fine for relatively noise free data but it became apparent that the method can be improved for noisy data sets and made more precise for nonnoisy data. We can not use a 1-point deep outer loop as the probability of this being true to the noise free representation of the surface outline is very low. With the addition of the noise, the outer-loop will very likely be perturbed. However, unless the points do not even remotely resemble the expected geometry, we do know that somewhere between the outer most points and a few points in the centroid direction lies the outline of the 'perfect' noise free shape. • For each depth • Pick 8 (or more if desired) corners according the furthest distance in circular sector from the centroid • For each corner -Segment point space into rectangles containing all points between one corner and its adjacent corner -Consider points between as weighted graph (using KDTree in this case) -Set weights to be w = c 1 .d s + c 2 .d c where d s , d c are the straight line distance to adjacent point and centroid, respectively. c defining the 'importance' -Find the shortest path to adjacent point using w as criteria for next point selection • Strip path containing selected points away from cloud so as not to be calculated for next depth • Combine path returned for all corners This method differs from previous attempts as more 'anchor' points are selected according to the spoke sampling criteria mentioned earlier. The incorporation of more anchor points gave better local precision for finding a path as the distance, and thus points considered between each anchor, ensures the algorithm has less room for error: by a short circuiting path, for example. Further, our method allows adjustable depth of points to be sampled so more interpolants can be considered when fitting a boundary. C. Surface fitting In order to fit the spline to the selected interpolants we use Dierckx's algorithm for least squares fitting a B-spline with variable knot vectors. Where the knot vector is a vector of initial values for the 'blending ratio' and define the amount of 'blending' for each control point on the curve (equation 8). The general form of the least squares for the fitting a B-spline is arranged by Dierckx as: δ = m r=1 w r y r − g i=−k p i w r B k+1 i (x r ) 2 (5) from [23] • data points: (x r , y r ) • set of weights w r • the control points p i • the number an position of the knots t • substitute the blending ratio and Bernstein basis 'B k i ' from (4). We have only shown a system where the knot vectors are fixed. 
Here, we will keep the description of picking the appropriate knots brief and suggest readers seek [23] for a more rigorous explanation. Dierckx avoids the problem of coinciding knots, and the existence of knots very near to the basis boundary $[a, b]$, by separating the least squares spline objective function and penalising the overall error using the following heuristic:

$F(t) = \sigma(t) + p P(t) \quad (6)$

from [23], where $p$ is not to be confused with a control point and is set according to some heuristic, and

$P(t) = \sum_{i=0}^{g} (t_{i+1} - t_i)^{-1} \quad (7)$

from [23]. It is plain to see that the penalty grows as two adjacent knots move closer together, i.e., it is inversely proportional to the knot spacing. These constraints help the gradient-based minimisation avoid poor local stopping points on the boundary [23]. With the residual error $\sigma$ from equation (5), Dierckx suggests the objective function can be subject to the constraint that $\sigma \le s$, where $s$ is a user-selected constant [23]. This allows the error function some flexibility, so that the spline is not forced to traverse every point exactly.

For the purposes of this paper a smooth fit is highly desirable. To this end, we attempt to build upon the smoothing property introduced by Dierckx. Picking the value of $s$ is the most challenging task. While there exist some heuristics for setting $s$ before fitting the spline, we improved on setting an arbitrary regularisation term by using the variance of the $y$ values in the target point set:

$\delta \le \lambda \frac{\sum_{i=1}^{n} (y_i - \hat{y})^2}{|y|} \quad (8)$

Given that we want a curve that smoothly traverses the interpolants, the rationale behind using the variance is that the larger the discontinuity of the values, the greater the allowance for fitting error, and thus the smoother the fit for jagged interpolants. $\lambda$ is set at a default value but can be tweaked should the user desire. The pitfall with this method is that different values of lambda can still be tailored to each dataset; thus the selection of the regularisation is not a fully automated process for finding the optimum qualitative results.

One problem with triangulation algorithms is that they are not well suited to concave point clouds and cause edges outside the outer loop of the points to be created. Currently the working implementation for surface generation uses a basic method for removing triangles outside of the boundary defined by the fitted B-spline. The algorithm simply:

• Computes the centroid of each triangle in the triangulation
• Considers the spline as a polygon loop which encompasses, by centroid, all correct triangles
• Removes triangles whose centroids lie outside of the B-spline polygon

Note that this method assumes that an optimal regularisation term has been picked; otherwise jagged and overfit boundaries lead to the removal of triangles that would contribute to a good definition of the surface.
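A minimal sketch of this culling step, assuming a Delaunay triangulation of the resampled points and the closed loop returned by the spline sketch above; matplotlib's Path performs the point-in-polygon test.

```python
import numpy as np
from scipy.spatial import Delaunay
from matplotlib.path import Path

def cull_outside_triangles(points2d, spline_loop):
    """points2d: (n, 2) resampled points; spline_loop: (m, 2) closed loop.
    Returns the triangles whose centroids lie inside the spline polygon."""
    tri = Delaunay(points2d)
    centroids = points2d[tri.simplices].mean(axis=1)   # (n_tri, 2)
    keep = Path(spline_loop).contains_points(centroids)
    return tri.simplices[keep]
```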
VI. RESULTS AND OBSERVATIONS

The implementation is written in Python and has 3 main dependencies: Pybrain, Sklearn [29] and Matplotlib. We show the effect of using a retrained network versus a copy of the best network found in training. If 'Retrained' is 'Yes', this indicates that a new NN was trained on all the data available after the best topology was discovered during training. If 'Retrained' was 'No' then an exact copy of the trained network with the best topology during training was used on the whole data set. 'Final Error' reflects the mean squared error of the network output given the whole data set as an input and not just random samples. It is worth noting that the error in this case is compared against the noisy data, so a very low error can mean that the points have overfit to the noise. It should also be noted that this was not the case for the earlier results in the 'dimensionality reduction comparison' section, where the error was defined against the 'perfect', error-free parametric representation of the intended object. Overfitting on the test data needs to be avoided, as it is the overall structure of the geometry that must be captured, and the test set, while sampled randomly, will skew the Final Error if we allow the NN to overfit.

The initial preconception was that a retrained network, with optimum hyperparameters, would perform better: having been trained on the whole data set, it was expected to give a closer representation of the ground truth. However, experimentation shows that a larger number of epochs causes the error of the retrained NN to diverge from the improvement shown by the other, non-retrained NNs.

The quality of the boundary of the point cloud directly affects how perturbed the resulting tessellation will appear on the final output. We will compare other methods of choosing interpolants and show the importance of having more points afforded to interpolation by allowing a variable depth of samples that form the outer loop. In order to keep the independent variables limited to 1, and to show the problem in a simple manner, the following images show the least squares fit of a Bezier curve for simple point sets. This is the same process as in [2]; what changes is the sampling method.

VII. CONCLUSION

Discussed in this paper are methods for surface generation from noisy point cloud data. Through experimentation we have been able to expose the internals of the algorithms suggested, and give a closer comparative review of this highly specific application of neural networks. The presented method gives good quantitative and qualitative results for a variety of different data sets. With a little more refinement, particularly in the training of the NN, it is hoped that this method can be extended to more complex 3D point clouds.

A software improvement which will speed up the training time of the neural network is the use of a more modern library than Pybrain. On top of this, a better training method should be used to reduce training time. Currently, the training method is simply gradient descent backpropagation. Not much attention has been paid to parameters like the momentum update, the learning rate and other hyperparameters. It would be prudent to refine the method of training, as this will be the slowest part of the algorithm. While most of the datasets and neural networks in this and other papers [2] are constrained to small, manageable sizes, the potential of larger, that is deep, networks with many layers as defined in [28] should not be overlooked for complex mappings. However, this is only feasible in a reasonable time frame if the selection of the hyperparameters is more efficient than an exhaustive grid search; otherwise segmentation of the problem will need to be employed.

Finding the best regularisation value should also be on the agenda for improvement. The regularisation parameter, while manifold specific, is fixed throughout the fitting of the B-spline. Ideally there needs to be some variability in the amount of regularisation at the moment of the least squares update.
It would be good to follow a ridge regression approach in further implementations of this algorithm, where the regularisation can be more closely dependent on the spline being fit, in a similar way as Caitlin et al. use the spherical harmonic order to construct the Tikhonov matrix [13]. As mentioned, the regularisation strongly affects the qualitative results, and currently a free parameter lambda exists that is not automatically decided.

There also needs to be some decision about when hole filling is appropriate. This method will blindly re-sample the inside of the point cloud whether or not holes were intended. To be able to realise more complex surfaces with intentional holes, segmentation could be used, but this immediately increases the complexity of the algorithm. To find a global method which fills in only unintended holes is a very challenging task but would improve this algorithm.
3,726
1906.12182
2954035548
The honeynet is a promising active cyber defense mechanism. It reveals the fundamental Indicators of Compromise (IoC) by luring attackers to conduct adversarial behaviors in a controlled and monitored environment. The active interaction at the honeynet brings a high reward but also introduces high implementation costs and risks of adversarial honeynet exploitation. In this work, we apply the infinite-horizon Semi-Markov Decision Process (SMDP) to characterize the stochastic transition and sojourn time of attackers in the honeynet and quantify the reward-risk trade-off. In particular, we produce adaptive long-term engagement policies shown to be risk-averse, cost-effective, and time-efficient. Numerical results have demonstrated that our adaptive interaction policies can quickly attract attackers to the target honeypot and engage them for a sufficiently long period to obtain worthy threat information. Meanwhile, the penetration probability is kept at a low level. The results show that the expected utility is robust against attackers of a large range of persistence and intelligence. Finally, we apply reinforcement learning to SMDP to solve the curse of modeling. Under a prudent choice of the learning rate and exploration policy, we achieve a quick and robust convergence of the optimal policy and value.
SMDP generalizes MDP by considering the random sojourn time at each state, and is widely applied to machine maintenance @cite_13 , resource allocation @cite_8 , and cybersecurity @cite_9 . However, as far as we know, this is the first time that SMDP has been applied to determine the optimal attacker engagement policy and to quantify the trade-off between the investigation reward and the risks.
{ "abstract": [ "In this paper, we study how to optimally manage the freshness of information updates sent from a source node to a destination via a channel. A proper metric for data freshness at the destination is the age-of-information , or simply age , which is defined as how old the freshest received update is, since the moment that this update was generated at the source node (e.g., a sensor). A reasonable update policy is the zero-wait policy, i.e., the source node submits a fresh update once the previous update is delivered, which achieves the maximum throughput and the minimum delay. Surprisingly, this zero-wait policy does not always minimize the age. This counter-intuitive phenomenon motivates us to study how to optimally control information updates to keep the data fresh and to understand when the zero-wait policy is optimal. We introduce a general age penalty function to characterize the level of dissatisfaction on data staleness and formulate the average age penalty minimization problem as a constrained semi-Markov decision problem with an uncountable state space. We develop efficient algorithms to find the optimal update policy among all causal policies and establish sufficient and necessary conditions for the optimality of the zero-wait policy. Our investigation shows that the zero-wait policy is far from the optimum if: 1) the age penalty function grows quickly with respect to the age; 2) the packet transmission times over the channel are positively correlated over time; or 3) the packet transmission times are highly random (e.g., following a heavy-tail distribution).", "", "Mobile cloud computing is a promising technique that shifts the data and computing service modules from individual devices to a geographically distributed cloud service architecture. A general mobile cloud computing system is comprised of multiple cloud domains, and each domain manages a portion of the cloud system resources, such as the Central Processing Unit, memory and storage, etc. How to efficiently manage the cloud resources across multiple cloud domains is critical for providing continuous mobile cloud services. In this paper, we propose a service decision making system for interdomain service transfer to balance the computation loads among multiple cloud domains. Our system focuses on maximizing the rewards for both the cloud system and the users by minimizing the number of service rejections that degrade the user satisfaction level significantly. To this end, we formulate the service request decision making process as a semi-Markov decision process. The optimal service transfer decisions are obtained by jointly considering the system incomes and expenses. Extensive simulation results show that the proposed decision making system can significantly improve the system rewards and decrease service disruptions compared with the greedy approach." ], "cite_N": [ "@cite_9", "@cite_13", "@cite_8" ], "mid": [ "2744248483", "", "2139583278" ] }
Adaptive Honeypot Engagement through Reinforcement Learning of Semi-Markov Decision Processes
Recent instances of the WannaCry ransomware attack and the Stuxnet malware have demonstrated an inadequacy of traditional cybersecurity techniques such as the firewall and intrusion detection systems. These passive defense mechanisms can detect low-level Indicators of Compromise (IoCs) such as hash values, IP addresses, and domain names. However, they can hardly disclose high-level indicators such as attack tools and Tactics, Techniques and Procedures (TTPs) of the attacker, so the attacker takes fewer pains to adapt to the defense mechanism, evade the indicators, and launch revised attacks, as shown in the pyramid of pain [2]. Since high-level indicators are more effective in deterring emerging advanced attacks yet harder to acquire through the traditional passive mechanism, defenders need to adopt active defense paradigms to learn these fundamental characteristics of the attacker, attribute cyber attacks [35], and design defensive countermeasures correspondingly.

Honeypots are one of the most frequently employed active defense techniques to gather information on threats. A honeynet is a network of honeypots, which emulates the real production system but has no production activities nor authorized services. Thus, an interaction with a honeynet, e.g., unauthorized inbound connections to any honeypot, directly reveals malicious activities. On the contrary, traditional passive techniques such as firewall logs or IDS sensors have to separate attacks from a ton of legitimate activities, and thus provide many more false alarms and may still miss some unknown attacks.

Besides a more effective identification and denial of adversarial exploitation through low-level indicators such as the inbound traffic, a honeynet can also help defenders to achieve the goal of identifying attackers' TTPs under proper engagement actions. The defender can interact with attackers and allow them to probe and perform in the honeynet until she has learned the attacker's fundamental characteristics. The more services a honeynet emulates and the more activities an attacker is allowed to perform, the higher the degree of interaction, and together these result in a larger revelation probability of the attacker's TTPs. However, the additional services and reduced restrictions also bring extra risks. Attackers may use some honeypots as pivot nodes to launch attacks against other production systems [37]. The current honeynet applies the honeywall as a gateway device to supervise outbound data and separate the honeynet from other production systems, as shown in Fig. 1. However, to avoid attackers' identification of the data control and the honeynet, a defender cannot block all outbound traffic from the honeynet, which leads to a trade-off between the reward of learning high-level IoCs and the following three types of risks.

T1: Attackers identify the honeynet and thus either terminate on their own or generate misleading interactions with honeypots.
T2: Attackers circumvent the honeywall to penetrate other production systems [34].
T3: The defender's engagement costs outweigh the investigation reward.

We quantify risk T1 in Section 2.3, T2 in Section 2.5, and T3 in Section 2.4. In particular, risk T3 brings the problem of timeliness and optimal decisions on timing. Since persistent traffic generation to engage attackers is costly and the defender aims to obtain timely threat information, the defender needs cost-effective policies to lure the attacker quickly to the target honeypot and reduce the attacker's sojourn time in honeypots of low investigation value.
To achieve the goal of long-term, cost-effective policies, we construct the Semi-Markov Decision Process (SMDP) in Section 2 on the network shown in Fig. 2. Nodes 1 to 11 represent different types of honeypots, and nodes 12 and 13 represent the domain of the production system and the virtual absorbing state, respectively. The attacker transits between these nodes according to the network topology in Fig. 1 and can remain at different nodes for an arbitrary period of time. The defender can dynamically change the honeypots' engagement levels, such as the amount of outbound traffic, to affect the attacker's sojourn time, engagement rewards, and the probabilistic transition in that honeypot.

In Section 3, we define security metrics related to our attacker engagement problem and analyze the risk both theoretically and numerically. These metrics answer important security questions in the honeypot engagement problem as follows. How likely will the attacker visit the normal zone at a given time? How long can a defender engage the attacker in a given honeypot before his first visit to the normal zone? How attractive is the honeynet if the attacker is initially in the normal zone? To protect against Advanced Persistent Threats (APTs), we further investigate the engagement performance against attacks of different levels of persistence and intelligence.

Finally, for systems with a large number of governing random variables, it is often hard to characterize the exact attack model, which is referred to as the curse of modeling. Hence, we apply reinforcement learning methods in Section 4 to learn the attacker's behaviors represented by the parameters of the SMDP. We visualize the convergence of the optimal engagement policy and the optimal value in a video demo¹. In Section 4.1, we discuss challenges and future work of reinforcement learning in the honeypot engagement scenario, where the learning environment is non-cooperative, risky, and sample scarce.

Notations. Throughout the paper, we use a calligraphic letter $\mathcal{X}$ to define a set. The upper-case letter $X$ denotes a random variable and the lower-case $x$ represents its realization. The boldface $\mathbf{X}$ denotes a vector or matrix and $\mathbf{I}$ denotes an identity matrix of a proper dimension. The notation $\Pr$ represents the probability measure and $*$ represents the convolution. The indicator function $\mathbf{1}_{\{x=y\}}$ equals one if $x = y$, and zero if $x \neq y$. The superscript $k$ represents decision epoch $k$ and the subscript $i$ is the index of a node or a state. The pronoun 'she' refers to the defender, and 'he' refers to the attacker.

Problem Formulation. To obtain optimal engagement decisions at each honeypot under the probabilistic transition and the continuous sojourn time, we introduce the continuous-time infinite-horizon discounted SMDP, which can be summarized by the tuple $\{t \in [0, \infty), \mathcal{S}, \mathcal{A}(s_j), tr(s_l|s_j, a_j), z(\cdot|s_j, a_j, s_l), r^{\gamma}(s_j, a_j, s_l), \gamma \in [0, \infty)\}$. We describe each element of the tuple in this section.

Network Topology. We abstract the structure of the honeynet as a finite graph $\mathcal{G} = (\mathcal{N}, \mathcal{E})$. The node set $\mathcal{N} := \{n_1, n_2, \cdots, n_N\} \cup \{n_{N+1}\}$ contains $N$ nodes of hybrid honeypots. Take Fig. 2 as an example: a node can be either a virtual honeypot of an integrated database system or a physical honeypot of an individual computer. These nodes provide different types of functions and services, and are connected following the topology of the emulated production system.
Since we focus on optimizing the value of investigation in the honeynet, we only distinguish between different types of honeypots, drawn in different shapes, yet use one extra node $n_{N+1}$ to represent the entire domain of the production system. The network topology $\mathcal{E} := \{e_{jl}\}$, $j, l \in \mathcal{N}$, is the set of directed links connecting node $n_j$ with $n_l$, and represents all possible transition trajectories in the honeynet. The links can be either physical (if the connecting nodes are real facilities such as computers) or logical (if the nodes represent integrated systems). Attackers cannot break the topology restriction. Since an attacker may use some honeypots as pivots to reach a production system, and it is also possible for a defender to attract attackers from the normal zone to the honeynet through these bridge nodes, there exist links of both directions between honeypots and the normal zone.

States and State-Dependent Actions. At time $t \in [0, \infty)$, an attacker's state belongs to a finite set $\mathcal{S} := \{s_1, s_2, \cdots, s_N, s_{N+1}, s_{N+2}\}$, where $s_i$, $i \in \{1, \cdots, N+1\}$, represents the attacker's location at time $t$. Once attackers are ejected or terminate on their own, we use the extra absorbing state $s_{N+2}$ to represent the virtual location. The attacker's state reveals the adversary's visits and exploitation of the emulated functions and services. Since the honeynet provides a controlled environment, we assume that the defender can monitor the state and transitions persistently without uncertainties. The attacker can visit a node multiple times for different purposes. A stealthy attacker may visit the honeypot node of the database more than once and revise data progressively (in a small amount each time) to evade detection. An attack on the honeypot node of sensors may need to frequently check the node for the up-to-date data. Some advanced honeypots may also emulate anti-virus systems or other protection mechanisms such as setting up an authorization expiration time; then the attacker has to compromise the nodes repeatedly.

At each state $s_i \in \mathcal{S}$, the defender can choose an action $a_i$ from a state-dependent finite set $\mathcal{A}(s_i)$. For example, at each honeypot node, the defender can conduct action $a_E$ to eject the attacker, action $a_P$ to purely record the attacker's activities, low-interactive action $a_L$, or high-interactive action $a_H$ to engage the attacker, i.e., $\mathcal{A}(s_i) := \{a_E, a_P, a_L, a_H\}$, $i \in \{1, \cdots, N\}$. The high-interactive action is costly to implement, yet it both increases the probability of a longer sojourn time at honeypot $n_i$ and reduces the probability of attackers penetrating the normal system from $n_i$ if connected. If the attacker resides in the normal zone, either from the beginning or later through the pivot honeypots, the defender can choose either action $a_E$ to eject the attacker immediately, or action $a_A$ to attract the attacker to the honeynet by exposing some vulnerabilities intentionally, i.e., $\mathcal{A}(s_{N+1}) := \{a_E, a_A\}$. Note that the instantiation of the action set and the corresponding consequences are not limited to the above scenario. For example, the action can also refer to a different degree of outbound data control. A strict control reduces the probability of attackers penetrating the normal system from the honeypot, yet also brings less investigation value.
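A minimal sketch of this state and action bookkeeping, for concreteness; the node count matches the running example, but the mapping of indices to honeypot types is an illustrative assumption rather than the exact network of Fig. 2.

```python
from enum import Enum

class Action(Enum):
    EJECT = "a_E"       # eject the attacker
    PURE = "a_P"        # purely record activities
    LOW = "a_L"         # low-interactive engagement
    HIGH = "a_H"        # high-interactive engagement
    ATTRACT = "a_A"     # attract from the normal zone

N = 11                                  # honeypot nodes n_1..n_N
S = list(range(1, N + 3))               # s_1..s_N, s_{N+1} normal zone, s_{N+2} absorbing

def action_set(s):
    """State-dependent action sets A(s_i)."""
    if s <= N:                          # honeypot states
        return [Action.EJECT, Action.PURE, Action.LOW, Action.HIGH]
    if s == N + 1:                      # normal zone
        return [Action.EJECT, Action.ATTRACT]
    return []                           # absorbing state: no decisions
```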
Continuous-Time Process and Discrete Decision Model. Based on the current state $s_j \in \mathcal{S}$ and the defender's action $a_j \in \mathcal{A}(s_j)$, the attacker transits to state $s_l \in \mathcal{S}$ with a probability $tr(s_l|s_j, a_j)$, and the sojourn time at state $s_j$ is a continuous random variable with a probability density $z(\cdot|s_j, a_j, s_l)$. Note that the risk T1 of the attacker identifying the honeynet at state $s_j$ under action $a_j \neq a_E$ can be characterized by the transition probability $tr(s_{N+2}|s_j, a_j)$ as well as the duration time $z(\cdot|s_j, a_j, s_{N+2})$. Once the attacker arrives at a new honeypot $n_i$, the defender dynamically applies an interaction action at honeypot $n_i$ from $\mathcal{A}(s_i)$ and keeps interacting with the attacker until he transits to the next honeypot. The defender does not change the action before the transition, to reduce the probability of the attacker detecting the change and becoming aware of the honeypot engagement. Since the decision is made at the time of transition, we can transform the above continuous-time model on horizon $t \in [0, \infty)$ into a discrete decision model at decision epochs $k \in \{0, 1, \cdots, \infty\}$. The time of the attacker's $k$th transition is denoted by a random variable $T^k$, the landing state is denoted as $s^k \in \mathcal{S}$, and the adopted action after arriving at $s^k$ is denoted as $a^k \in \mathcal{A}(s^k)$.

Investigation Value. The defender gains a reward of investigation by engaging and analyzing the attacker in the honeypot. To simplify the notation, we divide the reward during time $t \in [0, \infty)$ into ones at discrete decision epochs $T^k$, $k \in \{0, 1, \cdots, \infty\}$. When $\tau \in [0, T^{k+1} - T^k]$ amount of time elapses at stage $k$, the defender's reward of investigation at time $\tau$ of stage $k$,

$r(s^k, a^k, s^{k+1}, T^k, T^{k+1}, \tau) = r_1(s^k, a^k, s^{k+1})\, \mathbf{1}_{\{\tau=0\}} + r_2(s^k, a^k, T^k, T^{k+1}, \tau),$

is the sum of two parts. The first part is the immediate cost of applying engagement action $a^k \in \mathcal{A}(s^k)$ at state $s^k \in \mathcal{S}$, and the second part is the reward rate of threat information acquisition minus the cost rate of persistently generating deceptive traffic. Due to the randomness of the attacker's behavior, the information acquisition can also be random; thus the actual reward rate $r_2$ is perturbed by an additive zero-mean noise $w_r$.

Different types of attackers target different components of the production system. For example, an attacker who aims to steal data will take intensive adversarial actions at the database. Thus, if the attacker is actually in the honeynet and adopts the same behavior as he would in the production system, the defender can identify the target of the attack based on the traffic intensity. We specify $r_1$ and $r_2$ at each state properly to measure the risk T3. To maximize the value of the investigation, the defender should choose proper actions to lure the attacker to the honeypot emulating the target of the attacker in a short time and with a large probability. Moreover, the defender's action should be able to engage the attacker in the target honeypot actively for a longer time to obtain more valuable threat information. We compute the optimal long-term policy that achieves the above objectives in Section 2.5. As the defender spends a longer time interacting with attackers, investigating their behaviors and acquiring better understandings of their targets and TTPs, less new information can be extracted. In addition, the same intelligence becomes less valuable as time elapses due to timeliness.
Thus, we use a discounted factor of $\gamma \in [0, \infty)$ to penalize the decreasing value of the investigation as time elapses.

Optimal Long-Term Policy. The defender aims at a policy $\pi \in \Pi$ which maps state $s^k \in \mathcal{S}$ to action $a^k \in \mathcal{A}(s^k)$ to maximize the long-term expected utility starting from state $s^0$, i.e.,

$u(s^0, \pi) = E\left[\sum_{k=0}^{\infty} \int_{T^k}^{T^{k+1}} e^{-\gamma(\tau + T^k)} \big(r(S^k, A^k, S^{k+1}, T^k, T^{k+1}, \tau) + w_r\big)\, d\tau\right].$

At each decision epoch, the value function $v(s^0) = \sup_{\pi \in \Pi} u(s^0, \pi)$ can be represented by dynamic programming, i.e.,

$v(s^0) = \sup_{a^0 \in \mathcal{A}(s^0)} E\left[\int_{T^0}^{T^1} e^{-\gamma(\tau + T^0)} r(s^0, a^0, S^1, T^0, T^1, \tau)\, d\tau + e^{-\gamma T^1} v(S^1)\right]. \quad (1)$

We assume a constant reward rate $r_2(s^k, a^k, T^k, T^{k+1}, \tau) = \bar{r}_2(s^k, a^k)$ for simplicity. Then, (1) can be transformed into an equivalent MDP form, i.e., $\forall s^0 \in \mathcal{S}$,

$v(s^0) = \sup_{a^0 \in \mathcal{A}(s^0)} \sum_{s^1 \in \mathcal{S}} tr(s^1|s^0, a^0)\big(r^{\gamma}(s^0, a^0, s^1) + z^{\gamma}(s^0, a^0, s^1)\, v(s^1)\big), \quad (2)$

where $z^{\gamma}(s^0, a^0, s^1) := \int_0^{\infty} e^{-\gamma \tau} z(\tau|s^0, a^0, s^1)\, d\tau \in [0, 1]$ is the Laplace transform of the sojourn probability density $z(\tau|s^0, a^0, s^1)$, and the equivalent reward $r^{\gamma}(s^0, a^0, s^1) := r_1(s^0, a^0, s^1) + \frac{\bar{r}_2(s^0, a^0)}{\gamma}(1 - z^{\gamma}(s^0, a^0, s^1)) \in [-m_c, m_c]$ is assumed to be bounded by a constant $m_c$.

A classical regulation condition of SMDPs to avoid the probability of an infinite number of transitions within a finite time is stated as follows: there exist constants $\theta \in (0, 1)$ and $\delta > 0$ such that

$\sum_{s^1 \in \mathcal{S}} tr(s^1|s^0, a^0)\, z(\delta|s^0, a^0, s^1) \le 1 - \theta, \ \forall s^0 \in \mathcal{S}, a^0 \in \mathcal{A}(s^0). \quad (3)$

It is shown in [12] that condition (3) is equivalent to $\sum_{s^1 \in \mathcal{S}} tr(s^1|s^0, a^0)\, z^{\gamma}(s^0, a^0, s^1) \in [0, 1)$, which serves as the equivalent stage-varying discounted factor for the associated MDP. Then, the right-hand side of (1) is a contraction mapping, and there exists a unique optimal policy $\pi^* = \arg\max_{\pi \in \Pi} u(s^0, \pi)$, which can be found by value iteration, policy iteration or linear programming.

Cost-Effective Policy. The computation result of our 13-state example system is illustrated in Fig. 2. The optimal policies at honeypot nodes $n_1$ to $n_{11}$ are represented by different colors. Specifically, actions $a_E, a_P, a_L, a_H$ are denoted in red, blue, purple, and green, respectively. The size of node $n_i$ represents the state value $v(s_i)$. In the example scenario, the honeypot of database $n_{10}$ and sensors $n_{11}$ are the main and secondary targets of the attacker, respectively. Thus, defenders can obtain a higher investigation value when they manage to engage the attacker in these two honeypot nodes with a larger probability and for a longer time. However, instead of naively adopting high-interactive actions, a savvy defender also balances the high implementation cost of $a_H$. Our quantitative results indicate that the high-interactive action should only be applied at $n_{10}$ to be cost-effective. On the other hand, although the bridge nodes $n_1, n_2, n_8$ which connect to the normal zone $n_{12}$ do not contain higher investigation values than other nodes, the defender still takes action $a_L$ at these nodes. The goal is to either increase the probability of attracting attackers away from the normal zone or reduce the probability of attackers penetrating the normal zone from these bridge nodes.
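Because condition (3) makes the stage-varying discount $\sum_{s^1} tr \cdot z^{\gamma}$ strictly less than one, (2) can be solved by plain value iteration. The following is a minimal sketch under assumed inputs: tr, z_gamma and r_gamma are arrays indexed by action, which is a representation choice of ours rather than the paper's.

```python
import numpy as np

def value_iteration(tr, z_gamma, r_gamma, n_states, actions, tol=1e-8):
    """tr, z_gamma, r_gamma: dicts keyed by action index, each an
    (n_states, n_states) array; actions[s] lists feasible actions at s."""
    v = np.zeros(n_states)
    while True:
        v_new = np.empty(n_states)
        for s in range(n_states):
            q_vals = [
                np.sum(tr[a][s] * (r_gamma[a][s] + z_gamma[a][s] * v))
                for a in actions[s]
            ] or [0.0]                       # absorbing state keeps value 0
            v_new[s] = max(q_vals)
        if np.max(np.abs(v_new - v)) < tol:  # sup-norm contraction => converges
            return v_new
        v = v_new
```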
Engagement Safety versus Investigation Values. Restrictive engagement actions endow attackers with less freedom, so that they are less likely to penetrate the normal zone. However, restrictive actions also decrease the probability of obtaining high-level IoCs, and thus reduce the investigation value. To quantify the system value under the trade-off between engagement safety and the reward from the investigation, we visualize the trade-off surface in Fig. 3. On the x-axis, a larger penetration probability $p(s_{N+1}|s_j, a_j)$, $j \in \{s_1, s_2, s_8\}$, $a_j \neq a_E$, decreases the value $v(s_{10})$. On the y-axis, a larger reward $r^{\gamma}(s_j, a_j, s_l)$, $j \in \mathcal{S} \setminus \{s_{12}, s_{13}\}$, $l \in \mathcal{S}$, increases the value. The figure also shows that the value $v(s_{10})$ changes at a higher rate, i.e., is more sensitive, when the penetration probability is small and the reward from the investigation is large. In our scenario, the penetration probability has less influence on the value than the investigation reward, which motivates a less restrictive engagement.

Fig. 3: The trade-off surface of $v(s_{10})$ on the z-axis under different values of the penetration probability $p(s_{N+1}|s_j, a_j)$, $j \in \{s_1, s_2, s_8\}$, $a_j \neq a_E$, on the x-axis, and the reward $r^{\gamma}(s_j, a_j, s_l)$, $j \in \mathcal{S} \setminus \{s_{12}, s_{13}\}$, $l \in \mathcal{S}$, on the y-axis.

Risk Assessment. Given any feasible engagement policy $\pi \in \Pi$, the SMDP becomes a semi-Markov process [24]. We analyze the evolution of the occupancy distribution and the first passage time in Sections 3.1 and 3.2, respectively, which leads to three security metrics during the honeypot engagement. To shed light on the defense of APTs, we investigate the system performance against attackers with different levels of persistence and intelligence in Section 3.3.

Transition Probability of the Semi-Markov Process. Define the cumulative probability $q_{ij}(t)$ of the one-step transition from $\{S^k = i, T^k = t^k\}$ to $\{S^{k+1} = j, T^{k+1} = t^k + t\}$ as

$\Pr(S^{k+1} = j, T^{k+1} - t^k \le t \,|\, S^k = i, T^k = t^k) = tr(j|i, \pi(i)) \int_0^t z(\tau|i, \pi(i), j)\, d\tau, \ \forall i, j \in \mathcal{S}, t \ge 0.$

Based on a variation of the forward Kolmogorov equation, where the one-step transition lands on an intermediate state $l \in \mathcal{S}$ at time $T^{k+1} = t^k + u$, $\forall u \in [0, t]$, the transition probability of the system being in state $j$ at time $t$, given the initial state $i$ at time $0$, can be represented as

$p_{ii}(t) = 1 - \sum_{h \in \mathcal{S}} q_{ih}(t) + \sum_{l \in \mathcal{S}} \int_0^t p_{li}(t - u)\, dq_{il}(u),$

$p_{ij}(t) = \sum_{l \in \mathcal{S}} \int_0^t p_{lj}(t - u)\, dq_{il}(u) = \sum_{l \in \mathcal{S}} p_{lj}(t) * \frac{dq_{il}(t)}{dt}, \ \forall i, j \in \mathcal{S}, j \neq i, \forall t \ge 0,$

where $1 - \sum_{h \in \mathcal{S}} q_{ih}(t)$ is the probability that no transitions happen before time $t$. We can easily verify that $\sum_{l \in \mathcal{S}} p_{il}(t) = 1$, $\forall i \in \mathcal{S}$, $\forall t \in [0, \infty)$. To compute $p_{ij}(t)$ and $p_{ii}(t)$, we can take the Laplace transform and then solve two sets of linear equations. For simplicity, we specify $z(\tau|i, \pi(i), j)$ to be exponential distributions with parameters $\lambda_{ij}(\pi(i))$, and the semi-Markov process degenerates to a continuous-time Markov chain. Then, we obtain the infinitesimal generator via the Leibniz integral rule, i.e.,

$\tilde{q}_{ij} := \frac{dp_{ij}(t)}{dt}\Big|_{t=0} = \lambda_{ij}(\pi(i)) \cdot tr(j|i, \pi(i)) > 0, \ \forall i, j \in \mathcal{S}, j \neq i,$

$\tilde{q}_{ii} := \frac{dp_{ii}(t)}{dt}\Big|_{t=0} = -\sum_{j \in \mathcal{S} \setminus \{i\}} \tilde{q}_{ij} < 0, \ \forall i \in \mathcal{S}.$

Define the matrix $\tilde{Q} := [\tilde{q}_{ij}]_{i,j \in \mathcal{S}}$ and the vector $P_i(t) = [p_{ij}(t)]_{j \in \mathcal{S}}$; then, based on the forward Kolmogorov equation,

$\frac{dP_i(t)}{dt} = \lim_{u \to 0^+} \frac{P_i(t + u) - P_i(t)}{u} = \lim_{u \to 0^+} \frac{P_i(u) - I}{u} P_i(t) = \tilde{Q} P_i(t).$

Thus, we can compute the first security metric, the occupancy distribution of any state $s \in \mathcal{S}$ at time $t$ starting from the initial state $i \in \mathcal{S}$ at time $0$, i.e.,

$P_i(t) = e^{\tilde{Q} t} P_i(0), \ \forall i \in \mathcal{S}. \quad (4)$

We plot the evolution of $p_{ij}(t)$, $i = s_{N+1}$, $j \in \{s_1, s_2, s_{10}, s_{12}\}$, versus $t \in [0, \infty)$ in Fig. 4, and the limiting occupancy distribution $p_{ij}(\infty)$, $i = s_{N+1}$, in Fig. 5.
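For the exponential-sojourn case, (4) is a matrix exponential, so the occupancy distribution can be sketched in a few lines. The rates and transition kernel below are assumed inputs, not the paper's fitted values, and the code uses the row-vector convention $p(t) = p(0)e^{\tilde{Q}t}$, which matches (4) up to transposition.

```python
import numpy as np
from scipy.linalg import expm

def occupancy(lam, tr, i0, t):
    """lam, tr: (n, n) arrays of sojourn rates and transition probabilities;
    i0: initial state index; t: elapsed time. Returns [p_{i0,j}(t)]_j."""
    Q = lam * tr                      # off-diagonal generator entries q_ij
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))  # rows sum to zero
    p0 = np.zeros(Q.shape[0])
    p0[i0] = 1.0
    return p0 @ expm(Q * t)
```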
In Fig. 4, although the attacker starts at the normal zone $i = s_{N+1}$, our engagement policy can quickly attract the attacker into the honeynet. Fig. 5 demonstrates that the engagement policy can keep the attacker in the honeynet with a dominant probability of 91% and, specifically, in the target honeypot $n_{10}$ with a high probability of 41%. The honeypots connecting the normal zone also have a higher occupancy probability than nodes $n_3, n_4, n_5, n_6, n_7, n_9$, which are less likely to be explored by the attacker due to the network topology.

First Passage Time. The probability density of the first passage time, $f_{ij}(t) = \frac{df^c_{ij}(t)}{dt}$, satisfies

$p_{ij}(t) = \int_0^t p_{jj}(t - u)\, df^c_{ij}(u) = p_{jj}(t) * f_{ij}(t), \ \forall i, j \in \mathcal{S}, j \neq i.$

Taking the Laplace transform $\hat{p}_{ij}(s) := \int_0^{\infty} e^{-st} p_{ij}(t)\, dt$, and then taking the inverse Laplace transform of $\hat{f}_{ij}(s) = \frac{\hat{p}_{ij}(s)}{\hat{p}_{jj}(s)}$, we obtain

$f_{ij}(t) = \int_0^{\infty} e^{st} \frac{\hat{p}_{ij}(s)}{\hat{p}_{jj}(s)}\, ds, \ \forall i, j \in \mathcal{S}, j \neq i. \quad (5)$

We define the second security metric, the attraction efficiency, as the probability of the first passage time $T_{s_{12}, s_{10}}$ being less than a threshold $t_{th}$. Based on (4) and (5), the probability density function of $T_{s_{12}, s_{10}}$ is shown in Fig. 6. We take the mean, denoted by the orange line, as the threshold $t_{th}$; the attraction efficiency is then 0.63, which means that the defender can attract the attacker from the normal zone to the database honeypot in less than $t_{th} = 20.7$ with a probability of 0.63.

In general, the mean first passage time (MFPT) is asymmetric, i.e., $t^m_{ij} \neq t^m_{ji}$, $\forall i, j \in \mathcal{S}$. Based on (6), we compute the MFPT from and to the normal zone in Fig. 7 and Fig. 8, respectively. The color of each node indicates the value of the MFPT. In Fig. 7, the honeypot nodes that directly connect to the normal zone have the shortest MFPT, and it takes attackers a much longer time to visit the honeypots of clients due to the network topology. Fig. 8 shows that the defender can engage attackers in the target honeypot nodes of database and sensors for a longer time. The engagements at the client nodes are yet much less attractive. Note that the two figures have different time scales, denoted by the color bar values, and the comparison shows that it generally takes the defender more time and effort to attract the attacker from the normal zone.

The MFPT from the normal zone, $t^m_{s_{12}, j}$, measures the average time it takes to attract the attacker to honeypot state $j \in \mathcal{S} \setminus \{s_{12}, s_{13}\}$ for the first time. On the contrary, the MFPT to the normal zone, $t^m_{i, s_{12}}$, measures the average time of the attacker penetrating the normal zone from honeypot state $i \in \mathcal{S} \setminus \{s_{12}, s_{13}\}$ for the first time. If the defender pursues absolute security and ejects the attacker once he goes to the normal zone, then Fig. 8 also shows the attacker's average sojourn time in the honeynet starting from different honeypot nodes.
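The MFPT equation (6) does not survive in this extraction, but for the continuous-time Markov chain case the MFPTs to a target state solve a standard linear system obtained by conditioning on the first transition. The following minimal sketch assumes the generator built as in the occupancy sketch above and, for brevity, ignores absorption at $s_{N+2}$ and assumes the target is reachable so the restricted generator is nonsingular; these simplifications are ours, not the paper's exact metric.

```python
import numpy as np

def mean_first_passage(Q, j):
    """Q: (n, n) generator; j: target state index.
    Returns t^m_{i,j} for all i (zero at the target itself)."""
    n = Q.shape[0]
    idx = [i for i in range(n) if i != j]
    A = Q[np.ix_(idx, idx)]                     # generator without the target
    t = np.linalg.solve(A, -np.ones(len(idx)))  # A t = -1 (first-step analysis)
    out = np.zeros(n)
    out[idx] = t
    return out
```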
Advanced Persistent Threats. In this section, we quantify three engagement criteria on attackers of different levels of persistence and intelligence in Fig. 9 and Fig. 10, respectively. The criteria are the stationary probability of the normal zone $p_{i, s_{12}}(\infty)$, $\forall i \in \mathcal{S} \setminus \{s_{13}\}$, the utility of the normal zone $v(s_{12})$, and the expected utility over the stationary probability, i.e., $\sum_{j \in \mathcal{S}} p_{ij}(\infty) v(j)$, $\forall i \in \mathcal{S} \setminus \{s_{13}\}$.

As shown in Fig. 9, when the attacker is at the normal zone $i = s_{12}$ and the defender chooses action $a = a_A$, a larger $\lambda := \lambda_{ij}(a_A)$, $\forall j \in \{s_1, s_2, s_8\}$, of the exponential sojourn distribution indicates that the attacker is more inclined to respond to the honeypot attraction, and thus less time is required to attract the attacker away from the normal zone. As the persistence level $\lambda$ increases from 0.1 to 2.5, the stationary probability of the normal zone decreases and the expected utility over the stationary probability increases; both converge to their stable values. The rate of change is higher during $\lambda \in (0, 0.5]$ and much lower afterward. On the other hand, the utility loss at the normal zone decreases approximately linearly during the entire period $\lambda \in (0, 2.5]$.

As shown in Fig. 10, when the attacker becomes more advanced with a larger failure probability of attraction, i.e., $p := p(j|s_{12}, a_A)$, $\forall j \in \{s_{12}, s_{13}\}$, he can stay in the normal zone with a larger probability. A significant increase happens after $p \ge 0.5$. On the other hand, as $p$ increases from 0 to 1, the utility of the normal zone reduces linearly, and the expected utility over the stationary probability remains approximately unchanged until $p \ge 0.9$. Fig. 9 and Fig. 10 demonstrate that the expected utility over the stationary probability receives a large decrease only in the extreme cases of a high transition frequency and a large penetration probability. Similarly, the stationary probability of the normal zone remains small for most cases except for the above extreme cases. Thus, our policy provides a robust expected utility as well as a low-risk engagement over a large range of changes in the attacker's persistence and intelligence.

Reinforcement Learning of SMDP. Due to the absence of knowledge of an exact SMDP model, i.e., the investigation reward, the attacker's transition probability (and even the network topology), and the sojourn distribution, the defender has to learn the optimal engagement policy based on the actual experience of the honeynet interactions. As one of the classical model-free reinforcement learning methods, the Q-learning algorithm for SMDPs has been stated in [3], i.e.,

$Q^{k+1}(s^k, a^k) := (1 - \alpha^k(s^k, a^k)) Q^k(s^k, a^k) + \alpha^k(s^k, a^k) \Big[\bar{r}_1(s^k, a^k, \bar{s}^{k+1}) + \bar{r}_2(s^k, a^k) \frac{1 - e^{-\gamma \bar{\tau}^k}}{\gamma} + e^{-\gamma \bar{\tau}^k} \max_{a' \in \mathcal{A}(\bar{s}^{k+1})} Q^k(\bar{s}^{k+1}, a')\Big], \quad (7)$

where $s^k$ is the current state sample, $a^k$ is the current selected action, $\alpha^k(s^k, a^k) \in (0, 1)$ is the learning rate, $\bar{s}^{k+1}$ is the observed state at the next stage, $\bar{r}_1, \bar{r}_2$ are the observed investigation rewards, and $\bar{\tau}^k$ is the observed sojourn time at state $s^k$. When the learning rate satisfies $\sum_{k=0}^{\infty} \alpha^k(s^k, a^k) = \infty$, $\sum_{k=0}^{\infty} (\alpha^k(s^k, a^k))^2 < \infty$, $\forall s^k \in \mathcal{S}$, $\forall a^k \in \mathcal{A}(s^k)$, and all state-action pairs are explored infinitely often, $\max_{a' \in \mathcal{A}(s^k)} Q^k(s^k, a')$, $k \to \infty$, in (7) converges to the value $v(s^k)$ with probability 1.

At each decision epoch $k \in \{0, 1, \cdots\}$, the action $a^k$ is chosen according to the $\epsilon$-greedy policy, i.e., the defender chooses the optimal action $\arg\max_{a' \in \mathcal{A}(s^k)} Q^k(s^k, a')$ with a probability $1 - \epsilon$, and a random action with a probability $\epsilon$. Note that the exploration rate $\epsilon \in (0, 1]$ should not be too small, to guarantee sufficient samples of all state-action pairs. The Q-learning algorithm under a pure exploration policy $\epsilon = 1$ still converges, yet at a slower rate. In our scenario, the defender knows the reward of the ejection action $a_E$ and $v(s_{13}) = 0$, and thus does not need to explore action $a_E$ to learn it. We plot one learning trajectory of the state transition and sojourn time under the $\epsilon$-greedy exploration policy in Fig. 11, where the chosen actions $a_E, a_P, a_L, a_H$ are denoted in red, blue, purple, and green, respectively.
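A minimal sketch of the update (7) with $\epsilon$-greedy exploration; the visit-count learning rate anticipates the schedule chosen below. The environment interface and the constants are illustrative assumptions, and the tabular dictionaries stand in for whatever storage the paper's implementation uses.

```python
import math
import random
from collections import defaultdict

GAMMA, EPS, K_C = 0.1, 0.2, 1.0     # illustrative constants
Q = defaultdict(float)              # Q[(s, a)], zero-initialized
visits = defaultdict(int)           # k_{s,a}: visit counts

def choose_action(s, actions):
    """epsilon-greedy: explore with probability EPS, else exploit."""
    if random.random() < EPS:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

def q_update(s, a, r1, r2, tau, s_next, next_actions):
    """One observed transition: rewards r1, r2, sojourn tau, landing s_next."""
    visits[(s, a)] += 1
    alpha = K_C / (visits[(s, a)] - 1 + K_C)   # learning-rate schedule
    disc = math.exp(-GAMMA * tau)              # e^{-gamma * tau}
    target = (r1 + r2 * (1 - disc) / GAMMA
              + disc * max((Q[(s_next, a2)] for a2 in next_actions), default=0.0))
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * target
```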
If the ejection reward is unknown, the defender should be restrictive in exploring $a_E$, which terminates the learning process. Otherwise, the defender may need to engage with a group of attackers who share similar behaviors to obtain sufficient samples to learn the optimal engagement policy. In particular, we choose $\alpha^k(s^k, a^k) = \frac{k_c}{k_{\{s^k, a^k\}} - 1 + k_c}$, $\forall s^k \in \mathcal{S}$, $\forall a^k \in \mathcal{A}(s^k)$, to guarantee the asymptotic convergence, where $k_c \in (0, \infty)$ is a constant parameter and $k_{\{s^k, a^k\}} \in \{0, 1, \cdots\}$ is the number of visits to state-action pair $\{s^k, a^k\}$ up to stage $k$. We need to choose a proper value of $k_c$ to guarantee a good numerical performance of convergence in finite steps, as shown in Fig. 12. We shift the green and blue lines vertically to avoid overlap with the red line and represent the corresponding theoretical values with dotted black lines. If $k_c$ is too small, as shown by the red line, the learning rate decreases so fast that newly observed samples hardly update the Q-value, and the defender may need a long time to learn the right value. However, if $k_c$ is too large, as shown by the green line, the learning rate decreases so slowly that new samples contribute significantly to the current Q-value. This causes a large variation and a slower convergence rate of $\max_{a' \in \mathcal{A}(s_{12})} Q^k(s_{12}, a')$.

We show the convergence of the policy and value under $k_c = 1$, $\epsilon = 0.2$, in the video demo (see URL: https://bit.ly/2QUz3Ok). In the video, the color of each node $n_k$ distinguishes the defender's action $a^k$ at state $s^k$, and the size of the node is proportional to $\max_{a' \in \mathcal{A}(s^k)} Q^k(s^k, a')$ at stage $k$. To show the convergence, we decrease the value of $\epsilon$ gradually to 0 after 5000 steps. Since the convergence trajectory is stochastic, we run the simulation 100 times and plot the mean and the variance of $Q^k(s_{12}, a_P)$ of state $s_{12}$ under the optimal policy $\pi(s_{12}) = a_P$ in Fig. 13. The mean in red converges to the theoretical value in about 400 steps, and the variance in blue reduces dramatically as step $k$ increases.

Discussion. In this section, we discuss the challenges and related future directions of reinforcement learning in the honeypot engagement.

Non-cooperative and Adversarial Learning Environment. The major challenge of learning under the security scenario is that the defender lacks full control of the learning environment, which limits the scope of feasible reinforcement learning algorithms. In the classical reinforcement learning task, the learner can choose to start at any state at any time, and repeatedly simulate the path from the target state. In the adaptive honeypot engagement problem, however, the defender can remove attackers but cannot arbitrarily draw them to the target honeypot and force them to show their attacking behaviors, because the true threat information is revealed only when attackers are unaware of the honeypot engagements. Future work could generalize the current framework to an adversarial learning environment where a savvy attacker can detect the honeypot and adopt deceptive behaviors to interrupt the learning process.

Risk Reduction during the Learning Period. Since the learning process is based on samples from real interactions, the defender needs to be concerned with system safety and security during the learning period. For example, if the visit and sojourn in the normal zone bring a significant amount of losses, we can use the SARSA algorithm to conduct a more conservative learning process than Q-learning, as sketched below. Other safe reinforcement learning methods are stated in the survey [8], and are left as future work.
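A minimal sketch of the on-policy (SARSA) variant mentioned above, reusing the names from the Q-learning sketch; the only change is that the target bootstraps from the action the exploration policy actually picks at the next state, rather than the greedy maximum, which keeps the learned values consistent with the risk taken during exploration.

```python
def sarsa_update(s, a, r1, r2, tau, s_next, a_next):
    """a_next: the action actually chosen by the policy at s_next."""
    visits[(s, a)] += 1
    alpha = K_C / (visits[(s, a)] - 1 + K_C)
    disc = math.exp(-GAMMA * tau)
    target = r1 + r2 * (1 - disc) / GAMMA + disc * Q[(s_next, a_next)]
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * target
```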
Asymptotic versus Finite-Step Convergence. Since an attacker can terminate the interaction on his own, the engagement time with the attacker may be limited. Thus, compared to the asymptotic convergence of policy learning, the defender aims more at speedy learning of the attacker's behaviors in finite steps and, meanwhile, at achieving a good engagement performance in these finite steps. Previous works have studied the convergence rate [6] and the non-asymptotic convergence [19,18] in the MDP setting. For example, [6] has shown a relationship between the convergence rate and the learning rate of Q-learning, [19] has provided a performance bound on the finite-sample convergence rate, and [18] has proposed the $E^3$ algorithm, which achieves near-optimal performance with a large probability in polynomial time. However, in the honeypot engagement problem, the defender does not know the remaining number of steps that she can interact with the attacker, because the attacker can terminate on his own. Thus, we cannot directly apply the $E^3$ algorithm, which depends on the horizon time. Moreover, since attackers may change their behaviors during the long learning period, the learning algorithm needs to adapt to changes of the SMDP model quickly.

In this preliminary work, we use the $\epsilon$-greedy policy for the trade-off between exploitation and exploration during the finite learning time. The $\epsilon$ can be set at a relatively large value without the gradual decrease, so that the learning algorithm persistently adapts to the changes in the environment. On the other hand, the defender can keep a larger discounted factor $\gamma$ to focus on the immediate investigation reward. If the defender expects a short interaction time, i.e., the attacker is likely to terminate in the near future, she can increase the discounted factor in the learning process to adapt to her expectations.

Transfer Learning. In general, the learning algorithm on an SMDP converges more slowly than the one on an MDP because the sojourn distribution introduces extra randomness. Thus, instead of learning from scratch, the defender can attempt to reuse past experience with attackers of similar behaviors to expedite the learning process, which motivates the investigation of transfer learning in reinforcement learning [39]. Some side-channel information may also contribute to the transfer learning.

Conclusion. A honeynet is a promising active defense scheme. Compared to traditional passive defense techniques such as the firewall and intrusion detection systems, the engagement with attackers can reveal a large range of Indicators of Compromise (IoCs) at a lower rate of false alarms and missed detections. However, the active interaction also introduces the risks of attackers identifying the honeypot setting, of attackers penetrating the production system, and of a high implementation cost of persistent synthetic traffic generation. Since the reward depends on the honeypot's type, the defender aims to lure the attacker into the target honeypot in the shortest time. To satisfy the above requirements of security, cost, and timeliness, we leverage the Semi-Markov Decision Process (SMDP) to model the transition probability, sojourn distribution, and investigation reward. After transforming the continuous-time process into the equivalent discrete decision model, we have obtained long-term optimal policies that are risk-averse, cost-effective, and time-efficient.
We have theoretically analyzed the security metrics of the occupancy distribution, attraction efficiency, and average engagement efficiency based on the transition probability and the probability density function of the first passage time. The numerical results have shown that the honeypot engagement can engage the attacker in the target honeypot with a large probability and at a desired speed. In the meantime, the penetration probability is kept at a bearable level most of the time. The results also demonstrate that it is a worthy compromise of immediate security to allow a small penetration probability so that a high investigation reward can be obtained in the long run. Finally, we have applied reinforcement learning methods to the SMDP in case the defender cannot obtain the exact model of the attacker's behaviors. Based on a prudent choice of the learning rate and the exploration-exploitation policy, we have achieved a quick convergence rate of the optimal policy and the value. Moreover, the variance of the learning process decreases dramatically with the number of observed samples.
6,691
1906.12028
2971620038
Learning from web data has attracted lots of research interest in recent years. However, crawled web images usually have two types of noises, label noise and background noise, which induce extra difficulties in utilizing them effectively. Most existing methods either rely on human supervision or ignore the background noise. In this paper, we propose a novel method, which is capable of handling these two types of noises together, without the supervision of clean images in the training stage. Particularly, we formulate our method under the framework of multi-instance learning by grouping ROIs (i.e., images and their region proposals) from the same category into bags. ROIs in each bag are assigned with different weights based on the representative discriminative scores of their nearest clusters, in which the clusters and their scores are obtained via our designed memory module. Our memory module could be naturally integrated with the classification module, leading to an end-to-end trainable system. Extensive experiments on four benchmark datasets demonstrate the effectiveness of our method.
In learning classifiers with web data, previous works focus on handling the label noise in three directions: removing label noise @cite_14 @cite_48 @cite_32 @cite_57 @cite_12 @cite_5 @cite_43 , building noise-robust models @cite_29 @cite_18 @cite_39 @cite_0 @cite_38 @cite_47 , and curriculum learning @cite_6 @cite_26 .
{ "abstract": [ "", "", "", "", "We study the problem of automatically removing outliers from noisy data, with application for removing outlier images from an image collection. We address this problem by utilizing the reconstruction errors of an autoencoder. We observe that when data are reconstructed from low-dimensional representations, the inliers and the outliers can be well separated according to their reconstruction errors. Based on this basic observation, we gradually inject discriminative information in the learning process of an autoencoder to make the inliers and the outliers more separable. Experiments on a variety of image datasets validate our approach.", "We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They simply amount to at most a matrix inversion and multiplication, provided that we know the probability of each class being corrupted into another. We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large scale dataset of clothing images employing a diversity of architectures &#x2014; stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers &#x2014; demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise.", "In this paper, we study the problem of learning image classification models with label noise. Existing approaches depending on human supervision are generally not scalable as manually identifying correct or incorrect labels is time-consuming, whereas approaches not relying on human supervision are scalable but less effective. To reduce the amount of human supervision for label noise cleaning, we introduce CleanNet, a joint neural embedding network, which only requires a fraction of the classes being manually verified to provide the knowledge of label noise that can be transferred to other classes. We further integrate CleanNet and conventional convolutional neural network classifier into one framework for image classification learning. We demonstrate the effectiveness of the proposed algorithm on both of the label noise detection task and the image classification on noisy data task on several large-scale datasets. Experimental results show that CleanNet can reduce label noise detection error rate on held-out classes where no human supervision available by 41.5 compared to current weakly supervised methods. It also achieves 47 of the performance gain of verifying all images with only 3.2 images verified on an image classification task. Source code and dataset will be available at kuanghuei.github.io CleanNetProject.", "We present a simple yet efficient approach capable of training deep neural networks on large-scale weakly-supervised web images, which are crawled raw from the Internet by using text queries, without any human annotation. We develop a principled learning strategy by leveraging curriculum learning, with the goal of handling a massive amount of noisy labels and data imbalance effectively. 
We design a new learning curriculum by measuring the complexity of data using its distribution density in a feature space, and rank the complexity in an unsupervised manner. This allows for an efficient implementation of curriculum learning on large-scale web images, resulting in a high-performance CNN the model, where the negative impact of noisy labels is reduced substantially. Importantly, we show by experiments that those images with highly noisy labels can surprisingly improve the generalization capability of model, by serving as a manner of regularization. Our approaches obtain state-of-the-art performance on four benchmarks: WebVision, ImageNet, Clothing-1M and Food-101. With an ensemble of multiple models, we achieved a top-5 error rate of 5.2 on the WebVision challenge [18] for 1000-category classification. This result was the top performance by a wide margin, outperforming second place by a nearly 50 relative error rate. Code and models are available at: https: github.com MalongTech CurriculumNet.", "", "Learning from web data is increasingly popular due to abundant free web resources. However, the performance gap between webly supervised learning and traditional supervised learning is still very large, due to the label noise of web data as well as the domain shift between web data and test data. To fill this gap, most existing methods propose to purify or augment web data using instance-level supervision, which generally requires heavy annotation. Instead, we propose to address the label noise and domain shift by using more accessible category-level supervision. In particular, we build our deep probabilistic framework upon variational autoencoder (VAE), in which classification network and VAE can jointly leverage category-level hybrid information. Then, we extend our method for domain adaptation followed by our low-rank refinement strategy. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our proposed method.", "Label noise is an important issue in classification, with many potential negative consequences. For example, the accuracy of predictions may decrease, whereas the complexity of inferred models and the number of necessary training samples may increase. Many works in the literature have been devoted to the study of label noise and the development of techniques to deal with label noise. However, the field lacks a comprehensive survey on the different types of label noise, their consequences and the algorithms that consider label noise. This paper proposes to fill this gap. First, the definitions and sources of label noise are considered and a taxonomy of the types of label noise is proposed. Second, the potential consequences of label noise are discussed. Third, label noise-robust, label noise cleansing, and label noise-tolerant algorithms are reviewed. For each category of approaches, a short discussion is proposed to help the practitioner to choose the most suitable technique in its own particular field of application. Eventually, the design of experiments is also discussed, what may interest the researchers who would like to test their own algorithms. In this paper, label noise consists of mislabeled instances: no additional information is assumed to be available like e.g., confidences on labels.", "", "In this paper, we study a classification problem in which sample labels are randomly corrupted. In this scenario, there is an unobservable sample with noise-free labels. 
However, before being observed, the true labels are independently flipped with a probability @math , and the random label noise can be class-conditional. Here, we address two fundamental problems raised by this scenario. The first is how to best use the abundant surrogate loss functions designed for the traditional classification problem when there is label noise. We prove that any surrogate loss function can be used for classification with noisy labels by using importance reweighting, with consistency assurance that the label noise does not ultimately hinder the search for the optimal classifier of the noise-free sample. The other is the open problem of how to obtain the noise rate @math . We show that the rate is upper bounded by the conditional probability @math of the noisy sample. Consequently, the rate can be estimated, because the upper bound can be easily reached in classification problems. Experimental results on synthetic and real datasets confirm the efficiency of our methods.", "", "Current approaches for fine-grained recognition do the following: First, recruit experts to annotate a dataset of images, optionally also collecting more structured data in the form of part annotations and bounding boxes. Second, train a model utilizing this data. Toward the goal of solving fine-grained recognition, we introduce an alternative approach, leveraging free, noisy data from the web and simple, generic methods of recognition. This approach has benefits in both performance and scalability. We demonstrate its efficacy on four fine-grained datasets, greatly exceeding existing state of the art without the manual collection of even a single label, and furthermore show first results at scaling to more than 10,000 fine-grained categories. Quantitatively, we achieve top-1 accuracies of (92.3 , ) on CUB-200-2011, (85.4 , ) on Birdsnap, (93.4 , ) on FGVC-Aircraft, and (80.8 , ) on Stanford Dogs without using their annotated training sets. We compare our approach to an active learning approach for expanding fine-grained datasets." ], "cite_N": [ "@cite_38", "@cite_18", "@cite_14", "@cite_26", "@cite_48", "@cite_29", "@cite_32", "@cite_6", "@cite_39", "@cite_57", "@cite_43", "@cite_0", "@cite_5", "@cite_47", "@cite_12" ], "mid": [ "2605021547", "", "", "", "2204904589", "2964292098", "2962762068", "2887842788", "", "2796418006", "2167460663", "", "1514928307", "", "2287418003" ] }
0
1906.12028
2971620038
Learning from web data has attracted lots of research interest in recent years. However, crawled web images usually contain two types of noise, label noise and background noise, which introduce extra difficulties in utilizing them effectively. Most existing methods either rely on human supervision or ignore the background noise. In this paper, we propose a novel method, which is capable of handling these two types of noise together, without the supervision of clean images in the training stage. Particularly, we formulate our method under the framework of multi-instance learning by grouping ROIs (i.e., images and their region proposals) from the same category into bags. ROIs in each bag are assigned different weights based on the representative discriminative scores of their nearest clusters, in which the clusters and their scores are obtained via our designed memory module. Our memory module can be naturally integrated with the classification module, leading to an end-to-end trainable system. Extensive experiments on four benchmark datasets demonstrate the effectiveness of our method.
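A rough sketch of the bag-level weighting idea in this abstract may help. The cluster centers and scores here stand in for the paper's memory module, and all names, shapes, and the simple weighted-mean aggregation are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def weighted_bag_pool(roi_features, cluster_centers, cluster_scores):
    """Weight each ROI in a bag by the discriminative score of its
    nearest cluster, then aggregate by a weighted mean (a stand-in
    for the memory-module-based weighting described in the abstract)."""
    # Distance of every ROI to every cluster center.
    d = np.linalg.norm(roi_features[:, None, :] - cluster_centers[None], axis=2)
    weights = cluster_scores[d.argmin(axis=1)]   # score of the nearest cluster
    weights = weights / (weights.sum() + 1e-12)  # normalize within the bag
    return weights @ roi_features                # bag-level representation

# Toy usage: a bag of 6 ROIs with 4-dim features and 3 clusters.
rng = np.random.default_rng(0)
bag = weighted_bag_pool(rng.normal(size=(6, 4)),
                        rng.normal(size=(3, 4)),
                        np.array([0.9, 0.2, 0.5]))
print(bag.shape)  # (4,)
```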
For label noise removal, some approaches address the label noise issue as outlier detection in an unsupervised manner. Xia et al. @cite_48 removed outlier images by using the reconstruction errors of an autoencoder. CleanNet @cite_32 used a fraction of manually verified data to transfer the knowledge of label noise to other categories. For noise-robust model design, Patrini et al. @cite_29 proposed to train DNN models with a loss correction framework that is insensitive to class-dependent label noise. Sukhbaatar et al. @cite_11 developed an extra label-flip layer that matches the noisy label distribution and absorbs the noise. For curriculum learning @cite_7 , MentorNet @cite_45 designed an additional network to weight training examples and encourage the model to focus more on clean samples. CurriculumNet @cite_6 measured the distribution density of images in their feature space and ranked them for model training.
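To make the loss-correction idea concrete, here is a minimal sketch of forward correction with a known class-conditional noise transition matrix T. In practice T must be estimated, and this NumPy code is only an illustrative approximation of the cited framework, not its reference implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def forward_corrected_loss(logits, noisy_labels, T):
    """Forward loss correction: push the clean-class posterior through
    the noise transition matrix T (T[i, j] = P(noisy=j | clean=i))
    before computing cross-entropy against the observed noisy labels."""
    p_clean = softmax(logits)   # model's estimate of the clean posterior
    p_noisy = p_clean @ T       # implied distribution over noisy labels
    n = logits.shape[0]
    return -np.log(p_noisy[np.arange(n), noisy_labels] + 1e-12).mean()

# Toy usage: 3 classes with 20% symmetric label noise.
T = np.full((3, 3), 0.1) + np.eye(3) * 0.7
logits = np.array([[2.0, 0.1, -1.0], [0.0, 1.5, 0.2]])
print(forward_corrected_loss(logits, np.array([0, 1]), T))
```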
{ "abstract": [ "", "We study the problem of automatically removing outliers from noisy data, with application for removing outlier images from an image collection. We address this problem by utilizing the reconstruction errors of an autoencoder. We observe that when data are reconstructed from low-dimensional representations, the inliers and the outliers can be well separated according to their reconstruction errors. Based on this basic observation, we gradually inject discriminative information in the learning process of an autoencoder to make the inliers and the outliers more separable. Experiments on a variety of image datasets validate our approach.", "We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They simply amount to at most a matrix inversion and multiplication, provided that we know the probability of each class being corrupted into another. We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large scale dataset of clothing images employing a diversity of architectures &#x2014; stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers &#x2014; demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise.", "In this paper, we study the problem of learning image classification models with label noise. Existing approaches depending on human supervision are generally not scalable as manually identifying correct or incorrect labels is time-consuming, whereas approaches not relying on human supervision are scalable but less effective. To reduce the amount of human supervision for label noise cleaning, we introduce CleanNet, a joint neural embedding network, which only requires a fraction of the classes being manually verified to provide the knowledge of label noise that can be transferred to other classes. We further integrate CleanNet and conventional convolutional neural network classifier into one framework for image classification learning. We demonstrate the effectiveness of the proposed algorithm on both of the label noise detection task and the image classification on noisy data task on several large-scale datasets. Experimental results show that CleanNet can reduce label noise detection error rate on held-out classes where no human supervision available by 41.5 compared to current weakly supervised methods. It also achieves 47 of the performance gain of verifying all images with only 3.2 images verified on an image classification task. Source code and dataset will be available at kuanghuei.github.io CleanNetProject.", "We present a simple yet efficient approach capable of training deep neural networks on large-scale weakly-supervised web images, which are crawled raw from the Internet by using text queries, without any human annotation. We develop a principled learning strategy by leveraging curriculum learning, with the goal of handling a massive amount of noisy labels and data imbalance effectively. 
We design a new learning curriculum by measuring the complexity of data using its distribution density in a feature space, and rank the complexity in an unsupervised manner. This allows for an efficient implementation of curriculum learning on large-scale web images, resulting in a high-performance CNN model, where the negative impact of noisy labels is reduced substantially. Importantly, we show by experiments that those images with highly noisy labels can surprisingly improve the generalization capability of the model, by serving as a manner of regularization. Our approaches obtain state-of-the-art performance on four benchmarks: WebVision, ImageNet, Clothing-1M and Food-101. With an ensemble of multiple models, we achieved a top-5 error rate of 5.2% on the WebVision challenge [18] for 1000-category classification. This result was the top performance by a wide margin, outperforming second place by a nearly 50% relative error rate. Code and models are available at: https://github.com/MalongTech/CurriculumNet.", "Recent studies have discovered that deep networks are capable of memorizing the entire data even when the labels are completely random. Since deep models are trained on big data where labels are often noisy, the ability to overfit noise can lead to poor performance. To overcome the overfitting on corrupted training data, we propose a novel technique to regularize deep networks in the data dimension. This is achieved by learning a neural network called MentorNet to supervise the training of the base network, namely, StudentNet. Our work is inspired by curriculum learning and advances the theory by learning a curriculum from data by neural networks. We demonstrate the efficacy of MentorNet on several benchmarks. Comprehensive experiments show that it is able to significantly improve the generalization performance of the state-of-the-art deep networks on corrupted training data.", "" ], "cite_N": [ "@cite_7", "@cite_48", "@cite_29", "@cite_32", "@cite_6", "@cite_45", "@cite_11" ], "mid": [ "", "2204904589", "2964292098", "2962762068", "2887842788", "2775306753", "2973562770" ] }
0
1906.12028
2971620038
Learning from web data has attracted lots of research interest in recent years. However, crawled web images usually contain two types of noise, label noise and background noise, which introduce extra difficulties in utilizing them effectively. Most existing methods either rely on human supervision or ignore the background noise. In this paper, we propose a novel method, which is capable of handling these two types of noise together, without the supervision of clean images in the training stage. Particularly, we formulate our method under the framework of multi-instance learning by grouping ROIs (i.e., images and their region proposals) from the same category into bags. ROIs in each bag are assigned different weights based on the representative discriminative scores of their nearest clusters, in which the clusters and their scores are obtained via our designed memory module. Our memory module can be naturally integrated with the classification module, leading to an end-to-end trainable system. Extensive experiments on four benchmark datasets demonstrate the effectiveness of our method.
More recently, memory networks have been employed for one-shot learning @cite_41 @cite_2 , few-shot learning @cite_44 , and semi-supervised learning @cite_25 . Specifically, Kaiser et al. @cite_2 designed a memory-augmented network capable of life-long one-shot learning. By adding an abstract memory module, CMN @cite_44 encoded videos into fixed-size features via a multi-saliency embedding algorithm. MA-DNN @cite_25 leveraged the assimilation-accommodation interaction in memory networks for semi-supervised learning.
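All of these systems build on a soft key-value memory read. Here is a minimal sketch of that primitive; the dimensions, cosine scoring, and temperature are illustrative choices, not taken from any of the cited architectures:

```python
import numpy as np

def memory_read(query, keys, values, temperature=1.0):
    """Soft key-value memory read: attend over memory keys with cosine
    similarity and return the similarity-weighted mixture of values."""
    q = query / (np.linalg.norm(query) + 1e-12)
    k = keys / (np.linalg.norm(keys, axis=1, keepdims=True) + 1e-12)
    sims = k @ q / temperature
    weights = np.exp(sims - sims.max())   # softmax over memory slots
    weights /= weights.sum()
    return weights @ values

# Toy usage: a memory with 4 slots, 8-dim keys and 3-dim values.
rng = np.random.default_rng(0)
keys, values = rng.normal(size=(4, 8)), rng.normal(size=(4, 3))
print(memory_read(rng.normal(size=8), keys, values))
```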
{ "abstract": [ "", "In this paper, we propose a new memory network structure for few-shot video classification by making the following contributions. First, we propose a compound memory network (CMN) structure under the key-value memory network paradigm, in which each key memory involves multiple constituent keys. These constituent keys work collaboratively for training, which enables the CMN to obtain an optimal video representation in a larger space. Second, we introduce a multi-saliency embedding algorithm which encodes a variable-length video sequence into a fixed-size matrix representation by discovering multiple saliencies of interest. For example, given a video of car auction, some people are interested in the car, while others are interested in the auction activities. Third, we design an abstract memory on top of the constituent keys. The abstract memory and constituent keys form a layered structure, which makes the CMN more efficient and capable of being scaled, while also retaining the representation capability of the multiple keys. We compare CMN with several state-of-the-art baselines on a new few-shot video classification dataset and show the effectiveness of our approach.", "We consider the semi-supervised multi-class classification problem of learning from sparse labelled and abundant unlabelled training data. To address this problem, existing semi-supervised deep learning methods often rely on the up-to-date “network-in-training” to formulate the semi-supervised learning objective. This ignores both the discriminative feature representation and the model inference uncertainty revealed by the network in the preceding learning iterations, referred to as the memory of model learning. In this work, we propose a novel Memory-Assisted Deep Neural Network (MA-DNN) capable of exploiting the memory of model learning to enable semi-supervised learning. Specifically, we introduce a memory mechanism into the network training process as an assimilation-accommodation interaction between the network and an external memory module. Experiments demonstrate the advantages of the proposed MA-DNN model over the state-of-the-art semi-supervised deep learning methods on three image classification benchmark datasets: SVHN, CIFAR10, and CIFAR100.", "Despite recent advances, memory-augmented deep neural networks are still limited when it comes to life-long and one-shot learning, especially in remembering rare events. We present a large-scale life-long memory module for use in deep learning. The module exploits fast nearest-neighbor algorithms for efficiency and thus scales to large memory sizes. Except for the nearest-neighbor query, the module is fully differentiable and trained end-to-end with no extra supervision. It operates in a life-long manner, i.e., without the need to reset it during training. @PARASPLIT Our memory module can be easily added to any part of a supervised neural network. To show its versatility we add it to a number of networks, from simple convolutional ones tested on image classification to deep sequence-to-sequence and recurrent-convolutional models. In all cases, the enhanced network gains the ability to remember and do life-long one-shot learning. Our module remembers training examples shown many thousands of steps in the past and it can successfully generalize from them. 
We set new state-of-the-art for one-shot learning on the Omniglot dataset and demonstrate, for the first time, life-long one-shot learning in recurrent neural networks on a large-scale machine translation task." ], "cite_N": [ "@cite_41", "@cite_44", "@cite_25", "@cite_2" ], "mid": [ "", "2894873912", "2895771689", "2583010282" ] }
0
1906.12176
2956045289
CNNs have excelled at performing place recognition over time, particularly when the neural network is optimized for localization in the current environmental conditions. In this paper we investigate the concept of feature map filtering, where, rather than using all the activations within a convolutional tensor, only the most useful activations are used. Since specific feature maps encode different visual features, the objective is to remove feature maps that detract from the ability to recognize a location across appearance changes. Our key innovation is to filter the feature maps in an early convolutional layer, but then continue to run the network and extract a feature vector using a later layer in the same network. By filtering early visual features and extracting a feature vector from a higher, more viewpoint-invariant later layer, we demonstrate improved condition and viewpoint invariance. Our approach requires image pairs for training from the deployment environment, but we show that state-of-the-art performance can regularly be achieved with as little as a single training image pair. An exhaustive experimental analysis is performed to determine the full scope of causality between early layer filtering and late layer extraction. For validity, we use three datasets: Oxford RobotCar, Nordland, and Gardens Point, achieving overall superior performance to NetVLAD. The work provides a number of new avenues for exploring CNN optimizations, without full re-training.
The recent successes of deep learning in image classification @cite_21 and object recognition @cite_26 have encouraged the application of neural networks in place recognition. In early work, the pre-trained AlexNet @cite_15 network was used to produce a feature vector out of the Conv3 layer @cite_16 @cite_27 . Rather than simply using a pre-trained network, NetVLAD learns visual place recognition end-to-end. In NetVLAD, a triplet loss is used to find the optimal VLAD encoding to match scenes across both viewpoint and condition variations @cite_8 . LoST uses the semantic CNN RefineNet @cite_25 to select salient keypoints within the width-by-height dimensions of a convolutional tensor @cite_12 . In related work, these keypoints have been found by observing the activations out of a late convolutional layer @cite_0 . The aforementioned examples involve improving a pre-trained neural network for place recognition, either by re-training or by selecting the most useful components out of the network activations.
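The systems above differ in how descriptors are built and trained, but they share a simple matching backbone: describe each image by pooled activations of one convolutional layer, then retrieve the reference place with the closest descriptor. A minimal sketch of that backbone follows; the global max pooling and cosine scoring are illustrative choices, not the cited methods themselves:

```python
import numpy as np

def describe(conv_tensor):
    """Collapse an (H, W, C) activation tensor into a C-dim descriptor
    via global max pooling, then L2-normalize it."""
    d = conv_tensor.reshape(-1, conv_tensor.shape[-1]).max(axis=0)
    return d / (np.linalg.norm(d) + 1e-12)

def best_match(query_desc, reference_descs):
    """Index of the reference place with the highest cosine similarity."""
    return int(np.argmax(reference_descs @ query_desc))

# Toy usage: 5 reference places with fake 7x7x256 activations.
rng = np.random.default_rng(1)
refs = np.stack([describe(rng.random((7, 7, 256))) for _ in range(5)])
print(best_match(describe(rng.random((7, 7, 256))), refs))
```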
{ "abstract": [ "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.", "We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks.", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.", "Recently, image representations derived from Convolutional Neural Networks (CNNs) have been demonstrated to achieve impressive performance on a wide variety of tasks, including place recognition. 
In this paper, we take a step deeper into the internal structure of CNNs and propose novel CNN-based image features for place recognition by identifying salient regions and creating their regional representations directly from the convolutional layer activations. A range of experiments is conducted on challenging datasets with varied conditions and viewpoints. These reveal superior precision-recall characteristics and robustness against both viewpoint and appearance variations for the proposed approach over the state of the art. By analyzing the feature encoding process of our approach, we provide insights into what makes an image representation robust against external variations.", "Place recognition has long been an incompletely solved problem in that all approaches involve significant compromises. Current methods address many but never all of the critical challenges of place recognition – viewpoint-invariance, condition-invariance and minimizing training requirements. Here we present an approach that adapts state-of-the-art object proposal techniques to identify potential landmarks within an image for place recognition. We use the astonishing power of convolutional neural network features to identify matching landmark proposals between images to perform place recognition over extreme appearance and viewpoint variations. Our system does not require any form of training; all components are generic enough to be used off-the-shelf. We present a range of challenging experiments in varied viewpoint and environmental conditions. We demonstrate superior performance to current state-of-the-art techniques. Furthermore, by building on existing and widely used recognition frameworks, this approach provides a highly compatible place recognition system with the potential for easy integration of other techniques such as object detection and semantic scene interpretation.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.", "After the incredible success of deep learning in the computer vision domain, there has been much interest in applying Convolutional Network (ConvNet) features in robotic fields such as visual navigation and SLAM. Unfortunately, there are fundamental differences and challenges involved. Computer vision datasets are very different in character to robotic camera data, real-time performance is essential, and performance priorities can be different.
This paper comprehensively evaluates and compares the utility of three state-of-the-art ConvNets on the problems of particular relevance to navigation for robots: viewpoint-invariance and condition-invariance, and for the first time enables real-time place recognition performance using ConvNets with large maps by integrating a variety of existing (locality-sensitive hashing) and novel (semantic search space partitioning) optimization techniques. We present extensive experiments on four real-world datasets cultivated to evaluate each of the specific challenges in place recognition. The results demonstrate that speed-ups of two orders of magnitude can be achieved with minimal accuracy degradation, enabling real-time performance. We confirm that networks trained for semantic place categorization also perform better at (specific) place recognition when faced with severe appearance changes and provide a reference for which networks and layers are optimal for different aspects of the place recognition problem.", "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4% on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.", "Human visual scene understanding is so remarkable that we are able to recognize a revisited place when entering it from the opposite direction it was first visited, even in the presence of extreme variations in appearance. This capability is especially apparent during driving: a human driver can recognize where they are when travelling in the reverse direction along a route for the first time, without having to turn back and look. The difficulty of this problem exceeds any addressed in past appearance- and viewpoint-invariant visual place recognition (VPR) research, in part because large parts of the scene are not commonly observable from opposite directions. Consequently, as shown in this paper, the precision-recall performance of current state-of-the-art viewpoint- and appearance-invariant VPR techniques is orders of magnitude below what would be usable in a closed-loop system. Current engineered solutions predominantly rely on panoramic camera or LIDAR sensing setups; an eminently suitable engineering solution but one that is clearly very different to how humans navigate, which also has implications for how naturally humans could interact and communicate with the navigation system.
In this paper we develop a suite of novel semantic- and appearance-based techniques to enable for the first time high performance place recognition in this challenging scenario. We first propose a novel Local Semantic Tensor (LoST) descriptor of images using the convolutional feature maps from a state-of-the-art dense semantic segmentation network. Then, to verify the spatial semantic arrangement of the top matching candidates, we develop a novel approach for mining semantically-salient keypoint correspondences." ], "cite_N": [ "@cite_26", "@cite_8", "@cite_21", "@cite_0", "@cite_27", "@cite_15", "@cite_16", "@cite_25", "@cite_12" ], "mid": [ "2952122856", "2620629206", "2117539524", "2744874208", "1162411702", "2163605009", "2951399172", "2563705555", "2916040547" ] }
Filter Early, Match Late: Improving Network-Based Visual Place Recognition
Convolutional neural networks have demonstrated impressive performance on computer vision tasks [1], [2], including visual place recognition [3], [4]. Recently, researchers have investigated optimizing and improving pre-trained CNNs, by either extracting salient features [5], [6], or by 'pruning' the network [7], [8]. Network pruning is typically used to increase the speed of forward-pass computation; however, our previous work has provided a proof of concept that a type of pruning, dubbed "feature map filtering", can also improve the place recognition performance of a pre-trained CNN [9]. In feature map filtering, specific feature maps are removed, based on their suitability to identify the correct matching location across a changing environmental appearance. In this work, we propose that an early convolutional layer can be filtered to improve the matching utility of feature vectors extracted from a network's later layers. By performing this early layer filtering, simple visual features (e.g. textures and contours) that detract from a network's utility for place recognition across changing environmental conditions are removed. Crafting a feature vector out of a later layer is beneficial, as research [3], [10] has shown that later CNN layers are more invariant to viewpoint changes. We verify the ability to handle viewpoint variations using the Gardens Point Walking dataset, and condition variations using the Oxford RobotCar dataset (matching from night to day) and the Nordland dataset (matching from summer to winter). We summarize the contributions of this work:
• We propose a novel method of performing feature map filtering (or pruning) on early convolutional layers, while extracting features for place recognition out of later convolutional layers.
• To determine the selection of feature maps to remove, we have developed a triplet loss calibration procedure which uses training image pairs to remove feature maps that show consistent detriment to the ability to localize in the current environment. We demonstrate experimentally that state-of-the-art performance can be achieved with as little as a single training image pair.
• We provide a thorough experimental evaluation of the effects of filtering CNN feature maps for a pre-trained neural network, exhaustively testing all combinations in the layers Conv2 to Conv5. We also include a set of experiments filtering Conv2 and using the first fully connected layer as a feature vector.
Our results also reveal the inner workings of neural networks: a neural network can have a portion of its feature maps completely removed and yet a holistic feature vector can be extracted out of a higher convolutional layer. We also provide a visualization of the activations within a higher layer of the filtered network. The paper proceeds as follows. In Section II, we review the feature map pruning literature and discuss the application of neural networks in visual place recognition. Section III presents our methodology, describing the calibration procedure. Section IV details the setup of our three experimental datasets, and Section V discusses the performance of filtering different convolutional layers on these datasets. Section VI summarizes this work and provides suggestions for future work.
III. PROPOSED APPROACH
A pre-trained neural network, which was trained on a diverse set of images, will learn internal representations of a wide range of different visual features.
However, in visual place recognition, perceptual aliasing is a common problem: certain visual features make a scene visually similar to a previously observed scene from a different location. If a pre-trained network is selectively filtered for the expected environment, then visual features that contribute to perceptual aliasing can be removed, leaving the feature maps that encode visual features which can suitably match between two appearances of the same location. We use a short calibration method to prepare our feature map filter, as described in the following sub-sections.
A. Early Feature Filtering, Late Feature Extraction
In our previous work on feature map filtering [9], we filtered the feature map stack while extracting the feature vector from the same layer. While we demonstrated improved place recognition performance, this approach was not capable of optimizing for the extraction of visual features in higher convolutional layers. We hypothesize that filtering an early convolutional layer will remove distracting visual features, while crafting a feature vector out of a later layer has been shown [10] to have improved viewpoint robustness. Our improved approach filters the feature map space within a CNN, except that the network is allowed to continue running after filtering. Optimizing the filter is now dependent on the triplet loss over the features extracted out of a higher convolutional layer. This also adds the concept of feedback to a neural network, by modulating early visual features with respect to higher level features.
B. Deep Learnt Feature Extraction and Triplet Loss
A feature vector is extracted out of a width by height by channel (W × H × C) convolutional tensor. To improve viewpoint robustness and increase the processing speed, dimensionality reduction is performed using spatial pooling [21]. As per our previous work [9], we again use pyramid spatial pooling and convert each W × H feature map into a vector of dimension 5, containing the maximum activation across the entire feature map plus the maximum activation in each quadrant of the feature map. For all our experiments we use the pre-trained network HybridNet [21]. HybridNet was trained with place recognition in mind, resulting in a well-performing pre-trained network with a fast forward-pass processing speed. The triplet loss [4] compares the feature distance between a positive pair of images relative to one or more negative pairs of images. In this case, the positive pair are two images taken from the same physical location but at different times, while the negative pairs are images taken at a similar time but at varying locations. We use one 'hard' negative pair; here, the second reference image is a fixed number of frames ahead of the current frame, a distance slightly larger than the ground truth tolerance. We then have four 'soft' negatives, which are random images elsewhere in the reference dataset. Including a fixed, hard negative reduces variance in the filter calibration. As per literature best practice [4], [22], we use the L2 distance as our optimization metric. Figure 2 shows an overview of our proposed approach.
C. Filtering Method
We use a type of greedy algorithm [23] to determine which subset of the feature map stack suits the current environmental conditions. Our variant of the greedy algorithm finds the worst performing feature map (with respect to the triplet loss) at each iteration of the algorithm.
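The descriptors that this selection loop scores are the pyramid-pooled vectors from Sect. III-B. A minimal sketch of that pooling (illustrative code, not the authors' implementation):

```python
import numpy as np

def pyramid_pool(fmap):
    """Pool one H x W feature map into the 5-dim vector of Sect. III-B:
    the global maximum plus the maximum of each quadrant."""
    h2, w2 = fmap.shape[0] // 2, fmap.shape[1] // 2
    return np.array([
        fmap.max(),
        fmap[:h2, :w2].max(), fmap[:h2, w2:].max(),
        fmap[h2:, :w2].max(), fmap[h2:, w2:].max(),
    ])

def pool_tensor(tensor):
    """Apply pyramid pooling per channel of an H x W x C tensor,
    yielding a feature vector of length 5 * C."""
    return np.concatenate([pyramid_pool(tensor[:, :, c])
                           for c in range(tensor.shape[2])])

# Toy usage: a fake 13 x 13 x 256 convolutional tensor -> 1280-dim vector.
vec = pool_tensor(np.random.default_rng(0).random((13, 13, 256)))
print(vec.shape)  # (1280,)
```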
Normally the greedy algorithm would terminate when a local minimum of the triplet loss is reached; however, to guarantee that the global minimum is found, we continue iterating until half the original feature map stack has been removed. We store the filter selection at each iteration and search for the global minimum across all iterations. Additionally, we implement a batch filtering extension to the aforementioned algorithm. In batch filtering, the four worst feature maps are discovered in each iteration (based on the triplet loss) and removed before the next iteration. We can safely add this approximation because of the global minimum search. If removing four maps at once prevents the loss function from being convex, the best selection can still be found due to the global search. Adding batch filtering improves the computation speed of calibration by a factor of four. The decision to terminate the search once half the maps are removed is a heuristically determined trade-off between calibration processing time and localization benefits. Determining the worst performing feature map is based on the triplet loss score out of a higher network layer. Specifically, each feature map is individually removed and the network continues to run further into the forward pass. The triplet loss is then calculated based on the feature vectors extracted out of a higher network layer. As mentioned previously, we apply a maximum spatial pooling operation on the raw Conv ReLU activations. The purpose of this is to reduce the dimensionality of the feature vector, and to ensure that the filtering process focuses on strong activations. For each pair of images in the triplet set, the L2 distance between that pair is calculated as:

$$D(q_i^j, r_i^j) = \sqrt{\sum_{k=1}^{M} \left(q_i^j(k) - r_i^j(k)\right)^2} \qquad (1)$$

where $M$ is the dimension of the filtered query feature vector $q_i^j$. The equation above is repeated for the five negative pairs, and the distances of the five negative pairs are then averaged. The triplet loss is the difference between the averaged negative distance and the positive-pair distance, for each feature map $j$ being removed:

$$D(j) = \frac{1}{K}\sum_{k=1}^{K} D(r_i^j, n_k^j) - D(q_i^j, r_i^j) \qquad \forall j \qquad (2)$$

where $r_i^j$ denotes the filtered reference feature vector of the current location and $n_k^j$ denotes the $k$-th negative; $j$ denotes the index of the currently filtered feature map. In our experiments, $K$ is set to 5, since the set of negatives consists of one fixed reference image and four randomly selected reference images. We then find the maximum distance:

$$\mathit{maxval} = \max_{1 \le j \le N} D(j) \qquad (3)$$

$$\mathit{worstFmap} = \operatorname{arg\,max}_{1 \le j \le N} D(j) \qquad (4)$$

where $N$ is the number of remaining feature maps. The index of the maximum distance identifies the feature map whose removal achieves the greatest gap between the averaged negative distance and the distance between the images from the same location. To implement batch filtering, the current worst map is removed from $D$ and Equation (4) is repeated until the four worst feature map ids are collected. At the end of each iteration, the weights and biases in the earlier convolutional layer are set to zero for each weight inside the feature maps selected to be filtered. We then return the new, partially zeroed CNN to the next iteration of the algorithm. These iterations continue until half the feature maps are removed. The global maximum triplet loss score is then found, and the selection of filtered feature maps at the maximum loss score is the final set of filtered maps for that calibration image.
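The following compact sketch shows the selection loop implied by Equations (1)-(4) together with the batch-filtering extension. The fake_descriptors helper is a purely illustrative stand-in for zeroing the chosen early-layer maps, rerunning the network, and pooling a later layer; it is not part of the paper's code:

```python
import numpy as np

def triplet_margin(q, r, negs):
    """Eqs. (1)-(2): mean L2 distance from the reference to the negatives
    minus the positive-pair distance; a larger margin means a lower loss."""
    d_pos = np.linalg.norm(q - r)
    d_neg = np.mean([np.linalg.norm(r - n) for n in negs])
    return d_neg - d_pos

def greedy_filter(descriptors, n_maps, batch=4, remove_frac=0.5):
    """Greedy feature-map filtering: repeatedly drop the maps whose
    removal most increases the margin (Eqs. (3)-(4)), stop once half the
    stack is removed, then pick the globally best iteration."""
    removed, history = set(), []
    while len(removed) < int(n_maps * remove_frac):
        scores = {j: triplet_margin(*descriptors(removed | {j}))
                  for j in range(n_maps) if j not in removed}
        # Batch filtering: remove the `batch` worst maps per iteration.
        removed.update(sorted(scores, key=scores.get, reverse=True)[:batch])
        history.append((frozenset(removed),
                        triplet_margin(*descriptors(removed))))
    return max(history, key=lambda t: t[1])[0]  # global search over iterations

# Toy usage with random descriptors standing in for rerunning a CNN.
rng = np.random.default_rng(0)
def fake_descriptors(removed):
    q = rng.normal(size=16)
    return q, q + 0.1 * rng.normal(size=16), [rng.normal(size=16) for _ in range(5)]
print(sorted(greedy_filter(fake_descriptors, n_maps=12)))
```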
Finally, for improved robustness and to prevent outliers, we use multiple calibration images. The choice of filtered feature maps is stored for all images, and after the calibration procedure is finished, the number of times a particular feature map is removed is summed across all the calibration image sets. The final filtered maps are those feature maps that were chosen to be removed in at least 66% of the calibration sets. This threshold was heuristically determined, using place recognition experimentation on a range of different thresholds. With this threshold, on average, approximately 25% of earlier layer feature maps are removed after filtering. We chose this criterion based on the objective of finding feature maps that are consistently poor for localization, rather than feature maps that are only inefficient for a single image. This approach reduces the risk of overfitting the filter to the calibration data.
D. Place Recognition Filter Verification Algorithm
To evaluate the performance of the calibrated feature map filter, we use a single-frame place recognition algorithm. To apply the filter, every convolutional weight in a filtered feature map is set to zero. This new network is then run in the forward direction to produce convolutional activations in higher network layers. We again apply spatial pooling to the convolutional activations, producing a feature vector of length five times the number of feature maps. The feature vectors from the reference and query traverses are compared using the cosine distance metric. Although the Euclidean distance is the training distance metric, our experiments revealed that it is advantageous to train the filter with the Euclidean distance but perform place recognition with the cosine similarity metric; in early experiments, training with the cosine distance instead reduced the resulting place recognition performance. The resultant difference vector is then normalized to the range 0.001 to 0.999, where 0.001 denotes the worst match and 0.999 the best match. We then apply the logarithm operator to every element of the difference vector. Taking the logarithm amplifies the difference between the best match and other, perceptually aliased matches [24]. The place recognition quality score is calculated using the method originally proposed in SeqSLAM [25], where the quality score is the ratio between the score at the best matching template and the next largest score outside a window around the best matching template. A set of precision and recall values is calculated by varying a quality score threshold. For compact viewing, we display the localization performance using the maximum F1 score metric, where the F1 score is the harmonic mean of precision and recall.
IV. EXPERIMENTAL METHOD
We demonstrate our approach on three benchmark datasets, which have been extensively tested in recent literature [10], [26], [27]. The datasets are Oxford RobotCar, Nordland, and Gardens Point Walking. Each dataset is briefly described in the sections below.
Oxford RobotCar - RobotCar was recorded over a year across different times of day, seasons and routes [28]. For our training set, we use 50 image sets (a positive image, an anchor and five negative images) extracted at an approximate frame rate of one frame every two seconds. Using a low frame rate ensures that the individual images show some diversity between them.
Therefore, the calibration set has a duration of approximately 100 seconds, which is a realistic and practical calibration duration for a real-world application. We also experiment with a smaller number of calibration image sets, to observe the effects of using fewer calibration images. For our test set, we use 1600 frames extracted out of the dataset, which corresponds to approximately two kilometers through Oxford. There are no training images present in the test set. The reference dataset was recorded on an overcast day (2014-12-09-13-21-02), while the query dataset is at nighttime on the following day (2014-12-10-18-10-50). We use a ground truth tolerance of 30 meters, consistent with recent publications [10], [24].
Nordland - The Nordland dataset [29] is recorded from a train travelling for 728 km through Norway across four different seasons. The training set again consists of 50 images, with a recording frame rate of 0.2 frames per second. The resultant calibration duration is 250 seconds; a longer real-world duration was heuristically chosen to account for the significantly larger real-world distance of the Nordland dataset (compared to Oxford RobotCar or Gardens Point Walking). For the experimental dataset, we use the Winter route as the reference dataset and the Summer traverse as the recognition route, using a 2000 image subset of the original videos. In our previous work [9] we used the Summer images as the reference set and the Winter images as the query; we flipped the order because we found matching from Summer to Winter to be more challenging. For the ground truth we compare the query traverse frame number to the matching database frame number, with a ground-truth tolerance of 10 frames, since the two traverses are aligned frame-by-frame. Again, the test set contains no images from the training set.
Gardens Point Walking - This dataset was recorded at the QUT university campus in Brisbane and consists of two traverses during the day and one at night, with 200 images per traverse [3]. One of the day traverses is viewed from the left-hand side of the walkways, while the second day and the night traverse were both recorded from the right-hand side. We train our filter on the comparison between the left-hand side at daytime and the right-hand side at nighttime, using just 5 calibration images. We then use 194 images as the evaluation set and a ground truth tolerance of 3 frames.
V. RESULTS
In this section, a detailed analysis is performed on the performance of feature map filtering in visual place recognition. The results are shown using the maximum F1 score metric, and we compare our early layer filter approach to three benchmarks. First, we compare against filtering the same layer as the feature vector is extracted from. To ensure a fair comparison, the same triplet loss method is used, including the five negative images. The second benchmark is the localization performance without any filtering at all. Finally, we also compare against pre-trained NetVLAD (trained on Pittsburgh 30k) [4]. NetVLAD normally outputs results as a Recall@N metric; we convert this to an equivalent F1 score by assuming a precision of 100% and using the Recall@1 value as the recall score. Figure 3 provides a summary of the overall place recognition performance across all three datasets. Overall, removing feature maps in the same layer as the feature vector is extracted from yields a higher maximum F1 score than both NetVLAD and the same network without any filtering.
Filtering the feature maps in an earlier convolutional layer produces a further improvement to the average place recognition performance.
A. Oxford RobotCar
Early layer filtering generally improves localization on the Oxford RobotCar dataset (see Figures 4 to 6). If Conv3 features are used for localization, then whether an earlier layer or the current layer is filtered is largely irrelevant; however, whichever method is used results in a significant improvement in localizing with these features. When Conv2 is filtered and Conv4 features are used, the localization experiment results in a maximum F1 score improvement of 0.8, compared to filtering on the same convolutional layer (Conv4). In Figure 7, we varied the number of calibration images used when training the filter on the Conv2 layer, from as few as 1 calibration image up to 50 calibration images. We determined that the localization performance improves gradually, and even calibrating with five images in the query environment raises the place recognition performance above both NetVLAD and HybridNet without any filtering. This is particularly apparent with Conv3 features, which normally are not a suitable choice for a localization system. As a general rule, the more calibration images, the lower the risk of over-fitting the filter to the calibration data. These results indicate that even if only a single calibration image is available, our approach can provide an improvement to localization. This also indicates that there are visual features which are a detriment to place recognition across all variations in the remainder of the Oxford RobotCar dataset (from night to day), such that a single image of the environment is sufficient to remove many of these poor visual features.
Fig. 7. Effect of varying the number of calibration images on the Oxford RobotCar dataset. In this experiment, we filter feature maps from the Conv2 layer and extract a feature vector out of one of the later layers. Increasing the number of calibration images provides a small improvement to the localization ability of the network. Even if just 5 calibration images are available, the filtering approach still beats both the baseline without filtering and NetVLAD. With Conv3 features and a single calibration image, the maximum F1 score is higher than NetVLAD's.
B. Nordland
The striking result from these experiments (Figures 8 to 10) is the magnitude of the improvement added by filtering, which is much greater than on the other two datasets. We hypothesize that this railway dataset can be easily encoded using a set of calibration images. The environmental appearance of both the summer and winter traverses changes little over the dataset, unlike the Oxford RobotCar dataset, where street lighting makes the environmental change more dynamic. The choice of layer to filter is largely immaterial on this dataset. Because there are no viewpoint variations on this dataset, the max-pooling operations between layers make little difference to the localization performance. Therefore, once the distracting visual features are removed from any layer in the network, the choice of layer to extract a feature vector from becomes largely irrelevant.
C. Gardens Point Walking
To test our theory that early layer filtering is advantageous for viewpoint-variant datasets, we used the left-hand and right-hand Gardens Point Walking traverses. As expected, the higher Conv5 layer achieved the highest localization performance, attaining a maximum F1 score of 0.73 if Conv3 is filtered first (see Figures 11 to 13).
There is a notable gap between filtering Conv3 and filtering Conv5, with an improvement in F1 score of 0.7. The improvement in F1 score using just 5 calibration images indicates that the early layer filtering process is particularly useful when moderate viewpoint variations are present. NetVLAD performs well on this dataset and beats any of our filters. NetVLAD is designed with viewpoint invariance in mind, by virtue of the learnt VLAD clustering and the use of features out of the final convolutional layer. The gap between our approach and NetVLAD becomes small when Conv3 is filtered and the feature vector is produced from Conv5, using just five calibration images.
D. Fully Connected Features
We performed a final experiment to consider using feature map filtering to optimize a feature vector formed from the first fully-connected layer. We directly used the activations within the ReLU layer after the first fully-connected layer as the feature vector. We filtered Conv2 using the exact same triplet loss method on the three datasets, and show the results in Figure 14 below. Notice how the result with a random set of filtered maps is worse than the baseline performance. This result shows that feature map filtering is beneficial because of the objective function, and not because of any inherent benefit of reducing the dimensionality of the network.
Fig. 14. Extracting features out of the first fully connected layer.
E. Visualization of Feature Map Filtering
By plotting the maximum activation coordinates onto a heat map, we can observe the change in activations after the network is filtered. As shown in Figure 15, the filtering process affects the spatial position of the maximum activations. We found that the magnitudes of these maximum activations still differ; however, the locations of the maximum activations within the spatial region of the feature maps become more consistent between environments.
VI. DISCUSSION AND CONCLUSION
In this paper, we presented an early filtering, late matching approach to improving visual place recognition in appearance-changing environments. We showed that CNNs tend to activate in response to features with little utility for appearance-invariant place recognition, and showed that by applying a calibrated feature map filter, these distracting features are removed from the localization feature vectors. Our results indicate that filtering an earlier layer of the network generally results in better performance than filtering the same layer that the feature vector is extracted from. We also provide a case where we filter the Conv2 layer and extract features out of the first fully connected layer, demonstrating the versatility of early layer filtering. The experimental results show that a network layer can be severely pruned and yet continue to be run in the forward direction with coherent and effective activations in a later layer. Our approach also does not re-train after pruning, unlike much previous work in the space [7], [8], [17], [18]. Therefore this research shows that, while later layers are directly impacted by the complete removal of early features, the removal of up to 50% of these early features does not cause a catastrophic collapse of activation strength in later layers. Note that we did find that removing more than 50% of the feature maps in an early layer dramatically increased the risk of localization instability, as the later activations experience significantly reduced activation strength.
Also, removing too many feature maps during calibration risks overfitting to the training data. Our results show that a small number of feature maps can be selectively pruned from an early convolutional layer to optimize localization in the current environment. The approach is also practical from a training perspective: our results show that state-of-the-art performance can be achieved even with a single training image pair. The work discussed here could be improved by making feature map filtering end-to-end, with the filters learnt by back-propagation. A hard filtering assignment is normally not differentiable; however, a soft filtering approach could be applied when training the filter. Further work will also investigate the use of feature map filtering to improve object detection and image classification. If an early layer can be filtered for the benefit of a later convolutional layer, or even a fully-connected layer, then it stands to reason that a filter could be learnt to optimize the final classifier output.
4,055
1811.12099
2903071347
The main reason for the standardization of network protocols, like QUIC, is to ensure interoperability between implementations, which is a challenging task. Manual tests are currently used to test the different existing implementations for interoperability, but given the complex nature of network protocols, it is hard to cover all possible edge cases. State-of-the-art automated software testing techniques, such as Symbolic Execution (SymEx), have proven themselves capable of analyzing complex real-world software and finding hard-to-detect bugs. We present a SymEx-based method for finding interoperability issues in QUIC implementations, and explore its merit in a case study that analyzes the interoperability of picoquic and QUANT. We find that, while SymEx is able to analyze deep interactions between different implementations and uncovers several bugs, in order to enable efficient interoperability testing, implementations need to provide additional information about their current protocol state.
Formal methods have long been used to analyze network protocols @cite_14 @cite_4 @cite_13 @cite_5 @cite_25 , often with a focus on security. However, even if the formal analysis of a network protocol has successfully proven a property, be it related to correctness or security, it is by no means guaranteed that this property will also hold for an implementation of said protocol.
{ "abstract": [ "The authors present a detailed study of four formal methods (T-, U-, D-, and W-methods) for generating test sequences for protocols. Applications of these methods to the NBS Class 4 Transport Protocol are discussed. An estimation of fault coverage of four protocol-test-sequence generation techniques using Monte Carlo simulation is also presented. The ability of a test sequence to decide whether a protocol implementation conforms to its specification heavily relies on the range of faults that it can capture. Conformance is defined at two levels, namely, weak and strong conformance. This study shows that a test sequence produced by T-method has a poor fault detection capability, whereas test sequences produced by U-, D-, and W-methods have comparable (superior to that for T-method) fault coverage on several classes of randomly generated machines used in this study. Also, some problems with a straightforward application of the four protocol-test-sequence generation methods to real-world communication protocols are pointed out. >", "", "This paper explores the use of Spin for the verification of cryptographic protocol security properties. A general method is proposed to build a Promela model of the protocol and of the intruder capabilities. The method is illustrated showing the modeling of a classical case study, i.e. the Needham-Schroeder Public Key Authentication Protocol. Using the model so built, Spin can find a known attack on the protocol, and it correctly validates the fixed version of the protocol.", "The Secure Sockets Layer (SSL) protocol is analyzed using a finite-state enumeration tool called Murϕ. The analysis is presented using a sequence of incremental approximations to the SSL 3.0 handshake protocol. Each simplified protocol is \"model-checked\" using Murϕ, with the next protocol in the sequence obtained by correcting errors that Murϕ finds automatically. This process identifies the main shortcomings in SSL 2.0 that led to the design of SSL 3.0, as well as a few anomalies in the protocol that is used to resume a session in SSL 3.0. In addition to some insight into SSL, this study demonstrates the feasibility of using formal methods to analyze commercial protocols.", "Many system errors do not emerge unless some intricate sequence of events occurs. In practice, this means that most systems have errors that only trigger after days or weeks of execution. Model checking [4] is an effective way to find such subtle errors. It takes a simplified description of the code and exhaustively tests it on all inputs, using techniques to explore vast state spaces efficiently. Unfortunately, while model checking systems code would be wonderful, it is almost never done in practice: building models is just too hard. It can take significantly more time to write a model than it did to write the code. Furthermore, by checking an abstraction of the code rather than the code itself, it is easy to miss errors.The paper's first contribution is a new model checker, CMC, which checks C and C++ implementations directly, eliminating the need for a separate abstract description of the system behavior. This has two major advantages: it reduces the effort to use model checking, and it reduces missed errors as well as time-wasting false error reports resulting from inconsistencies between the abstract description and the actual implementation. 
In addition, changes in the implementation can be checked immediately without updating a high-level description.The paper's second contribution is demonstrating that CMC works well on real code by applying it to three implementations of the Ad-hoc On-demand Distance Vector (AODV) networking protocol [7]. We found 34 distinct errors (roughly one bug per 328 lines of code), including a bug in the AODV specification itself. Given our experience building systems, it appears that the approach will work well in other contexts, and especially well for other networking protocols." ], "cite_N": [ "@cite_14", "@cite_4", "@cite_5", "@cite_13", "@cite_25" ], "mid": [ "2121954581", "", "1503887377", "2049224403", "2117009500" ] }
Interoperability-Guided Testing of QUIC Implementations using Symbolic Execution
The emergence of new, modern protocols for the Internet promises a solution to long-standing issues that can only be solved by changing core parts of the current protocol stack. Such new protocols and their implementations must meet the highest requirements: They will have to reliably function at similar levels of maturity as what they aim to replace. This includes aspects such as reliability, security, performance and, prominently, interoperability between implementations. Ensuring interoperability is the main reason for standardizing QUIC as a protocol, and the IETF standardization process goes to great lengths, such as requiring multiple independent implementations, to make sure this is achievable. Thus, better methods and tools that assist with the difficult challenge of interoperability testing are highly desirable. Automated testing techniques, such as Symbolic Execution (SymEx), have proven themselves to be capable of analyzing complex real-world software, usually with a focus on finding low-level safety violations [4], and SymEx has also proven its worth in the networking domain in various other ways [7, 14, 16-18, 22, 24, 25]. This paper explores the potential of SymEx for checking the interoperability of QUIC implementations. It does so by presenting a SymEx-based method to detect interoperability issues, and demonstrates its potential in a case study of two existing QUIC implementations, picoquic and QUANT. We discover that, while our method is able to successfully analyze nontrivial interactions between different implementations, implementations need to disclose more protocol-level information to truly enable deep semantic interoperability testing.

Key Contributions and Outline

The key contributions of this paper are as follows:
• We describe a method that uses Symbolic Execution (SymEx) to test QUIC implementations for interoperability, and discuss how additional information from implementations about their current protocol state could be leveraged for semantically deeper testing.
• We then present our case study in which we symbolically test picoquic and QUANT for interoperability, and discuss the abstraction layers that are necessary to enable SymEx for QUIC implementations.
• The final key contribution is the evaluation of our implementation, testing picoquic and QUANT, in which we report on the performance of our method as well as on defects we discovered.
We begin by giving background on SymEx in Sect. 2, followed by a discussion of related work in Sect. 3. We then present our method in Sect. 4, and describe its implementation and the setup of the case study in Sect. 5. This is followed by an evaluation of our results in Sect. 6, before we briefly discuss future work in Sect. 7 and conclude in Sect. 8.

[Figure 1: Symbolic Execution (SymEx) of a small example program. The program is:

    1 bool ok = true;
    2 if(x < 5) ok = false;
    3 if(x >= 100) ok = false;
    4 return ok;

The figure depicts the tree of paths explored for it, ending in four leaves "return ok" with the path constraints {x < 5, x ≥ 100}, {x < 5, x < 100}, {x ≥ 5, x ≥ 100}, and {x ≥ 5, x < 100}. Constraints encountered in branching statements are recorded in the path constraints of the corresponding explored paths. By checking new branching conditions for satisfiability on each path, exactly all reachable paths through the program are explored.]

SYMBOLIC EXECUTION (SYMEX)

Given a program that takes some input (e.g., command line arguments, files, network packets, etc.), SymEx systematically explores the program by executing all reachable paths.
It does so by assigning symbolic values instead of concrete ones to its input, which allows the SymEx engine to fork execution at a branch statement (i.e., an if) when both branches are feasible. If this is the case, the condition that caused the fork (i.e., the condition inside the if statement) is remembered on the execution path following the true-branch as an additional constraint. On the other execution path, which follows the false-branch, the negation of the condition is remembered as a constraint instead. To determine reachability under the current constraints, an SMT solver, such as Z3 [5], is queried. SMT solvers are the backbone of every SymEx engine: their performance and completeness directly influence the efficiency of the symbolic analysis, and they ensure that only feasible paths are explored. Continuing in this fashion, a SymEx engine will explore all reachable paths through the program. Whenever a path terminates, either regularly or because an error was encountered, the engine will query the SMT solver using the collected path constraints to get concrete values for each symbolic input value. These values are then recorded in the form of a test case, which can be run again later to exercise the same path through the program. If a bug was encountered, the generated test case will be able to reproduce the taken path for further debugging and introspection. Figure 1 shows a small example program that performs operations depending on the value of a symbolic input variable x. The program contains two conditional branches that have to be traversed before the return in line 4 is reached. Figure 1 also shows all paths explored by SymEx. In the beginning, x is unconstrained, but, as SymEx progresses, a path for each side of the first branch is explored. For each side, a corresponding constraint (either x < 5 or x ≥ 5) is added to the path constraints. When the second branch is reached, only three paths need to be explored further: The constraint set {x < 5, x ≥ 100} is not satisfiable, and therefore this path will never be reachable during execution. In the end, SymEx will query the SMT solver for concrete values for x for each path to generate a suite of concrete test cases that cover all reachable paths of the program.
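To make this workflow concrete, the following is a minimal sketch of how the Figure 1 example could be handed to a SymEx engine such as KLEE; klee_make_symbolic is KLEE's actual interface for marking input as symbolic, while the surrounding driver is our own illustration:

    #include <klee/klee.h>
    #include <stdbool.h>

    bool check(int x) {
        bool ok = true;            /* line 1 of the example */
        if (x < 5) ok = false;     /* line 2: forks into x < 5 and x >= 5 */
        if (x >= 100) ok = false;  /* line 3: forks only where satisfiable */
        return ok;                 /* line 4: reached on every feasible path */
    }

    int main(void) {
        int x;
        /* Mark x as symbolic; KLEE then explores all feasible paths through
         * check() and emits one concrete test case per path. */
        klee_make_symbolic(&x, sizeof(x), "x");
        return check(x) ? 0 : 1;
    }

Compiled to LLVM bitcode and run under KLEE, this driver would yield the three feasible paths discussed above, together with concrete values for x such as 0, 50, and 150.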
METHOD OUTLINE

SymEx engines such as KLEE [3], which our case study utilizes, usually expect their input to be a program. However, protocol implementations are naturally libraries, and as such lack an implicit singular entry point. Although ways to analyze libraries directly have been proposed [15], they suffer from a lack of insight into what constitutes a sensible use of the library. Instead, we propose to analyze programs that utilize the libraries and execute different test scenarios. One way to choose the test scenarios would be to use existing applications that already implement real-world application logic. This is currently difficult to do for QUIC, as there are only very few applications built on top of QUIC, and is further complicated by their use of only a small set of different QUIC implementations. Instead, we follow the current best practice in compliance testing by designing test scenarios based on primitives defined in the QUIC standard. Unlike common, concrete compliance testing suites, we formulate symbolic testing scenarios that perform large families of related tests in one go. These scenarios describe the involved endpoints (e.g., clients and servers) and the communication that takes place between them, for example, which connections are established, which streams are opened, what is sent on those streams, and so on. Such scenarios can be defined in both high-level and low-level terms. A more low-level scenario describes individual packets and effects such as loss or reordering instead of focusing on connections and streams. Independently of the test scenarios, we need to define what we categorize as actual errors, so that the SymEx engine can detect which paths exhibit erroneous behavior. We present two categories of errors here, one focused on interoperability, and one focused on robustness.

Testing Interoperability

Generally speaking, whenever there is a conflict between what the communication partners believe the state of their connection to be, an interoperability violation exists. In the case of networked programs, it is important to quantify the belief state of each endpoint in a way that is neither too constrained (e.g., if the server believes that a data connection is open, but the client has already sent a shutdown request, there is no conflict), nor too open (otherwise error detection becomes impossible). Such conflicts can cause the communication to continue without exhibiting low-level errors, while the result of the execution nevertheless differs from what was expected. For example, if the amount of application data sent by one endpoint differs from the amount of data received by the other after a finished transmission, this is an error, as the two endpoints hold different beliefs about the correct state of the connection. To be able to detect such bugs, it is necessary to have a way to extract the current belief state of each endpoint in a form that can be compared to that of the other endpoints (see the sketch at the end of this section). Here, standardization can help: A definition of what exactly is part of the (belief) state of a QUIC connection could be used by implementations to provide this information to analysis tools. Such information could then be used by testing and verification tools to great effect, enabling stronger and more semantically meaningful analyses.

Testing Robustness

Robustness can be defined as the ability of an implementation to deal correctly with unexpected events, such as packet loss, reordering, or packets crafted with malicious intent. Here, errors usually manifest in the form of, e.g., out-of-bound memory accesses, use-after-free violations, assertion errors, etc. When using a SymEx engine, the engine provides the capability to test for such violations out-of-the-box, already providing valuable testing feedback without needing to define additional error conditions.

Generality of the Method

This method is, at its core, protocol-independent, and can be applied to protocols other than QUIC. However, its application to QUIC shows the effort required to implement it for non-trivial, real-world protocols, as well as its suitability for such protocols. The question in this case is scalability: While it is usually straightforward to apply any method to simple examples, we are interested in whether the method scales to implementations of complex protocols, such as QUIC, and in the requirements such protocols impose on automated testing.
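To illustrate the interoperability errors defined above, the following minimal sketch compares hypothetical belief states of two endpoints after a finished transmission; the struct fields and the check are our own illustration, not part of any existing QUIC API:

    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical belief state an endpoint could expose to analysis tools. */
    struct belief_state {
        int connection_open;          /* does the endpoint consider the connection live? */
        uint64_t app_bytes_sent;      /* application data handed to the peer */
        uint64_t app_bytes_received;  /* application data received from the peer */
    };

    /* After a finished transmission, diverging views on the amount of
     * application data exchanged constitute an interoperability violation. */
    void check_interop(const struct belief_state *client,
                       const struct belief_state *server) {
        assert(client->app_bytes_sent == server->app_bytes_received);
        assert(server->app_bytes_sent == client->app_bytes_received);
    }

Note that a stricter check, such as requiring equality of connection_open at all times, would be too constrained in the sense discussed above, since one endpoint may legitimately learn about a shutdown before the other.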
CASE STUDY: PICOQUIC AND QUANT

For our case study we implemented our method for picoquic 1 and QUANT 2. These implementations were chosen because they are written in C, and could therefore be analyzed by KLEE, our SymEx engine of choice, out-of-the-box. It has successfully been shown that SymEx can also be applied to programs written in other languages, like C++ [8], so this is not a limitation of the general approach. We defined multiple test scenarios, and developed simple clients for each library that execute the defined scenarios. An additional challenge is that the KLEE SymEx engine [3] only works on single programs, which led us to implement a single program that instantiates all communication partners and advances them in tandem for each test scenario. This means that one endpoint (e.g., client or server) is executed until it can make no more progress (i.e., it blocks waiting for a response), at which point execution switches to the next endpoint. This continues until either the scenario is finished, or one endpoint reports an error. In the following sections we describe our test scenarios and present which library-specific adaptations were necessary, as well as which library-independent abstractions we implemented that can be re-used for other libraries in the future.

Test Scenarios

We decided upon three test scenarios that exercise some of the core features of QUIC: In the first scenario, a client establishes a connection with a server, then closes it again. In the second scenario, we establish a connection just as before, but the client also opens a stream and sends a simple HTTP request (GET /index.html), which the server then closes without responding. Finally, the third scenario builds upon the second one, but the server also responds with a one-byte response. We define as interoperability issues any case in which a run ends without the underlying scenario being fulfilled, e.g., because a connection could not be established, or because one of the endpoints timed out during the process. For each library we implemented a frontend which provides functions that create a client or a server for one of the scenarios, a function that advances a client or server (executing it until it has reached the next stage in the scenario or is blocked on network input), and a function that checks whether a client or server is finished with the scenario. In our evaluation, we focused on the scenarios being executed with a picoquic client communicating with a QUANT server, but our implementation also supports the other cases (QUANT client and picoquic server, or both from the same library).
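Taken together, the frontend functions and the lockstep execution described above could be driven by a loop along the following lines; this is a sketch under assumed names (endpoint, scenario_create_client, endpoint_advance, endpoint_done), not the actual harness code:

    /* Hypothetical frontend interface, implemented once per library. */
    struct endpoint;
    struct endpoint *scenario_create_client(int scenario);
    struct endpoint *scenario_create_server(int scenario);
    int endpoint_advance(struct endpoint *e); /* run until blocked; <0 on error */
    int endpoint_done(struct endpoint *e);    /* final scenario stage reached? */

    /* Advance client and server in tandem until the scenario is fulfilled
     * or one endpoint reports an error. */
    int run_scenario(int scenario) {
        struct endpoint *client = scenario_create_client(scenario);
        struct endpoint *server = scenario_create_server(scenario);
        while (!(endpoint_done(client) && endpoint_done(server))) {
            if (endpoint_advance(client) < 0) return -1; /* client error */
            if (endpoint_advance(server) < 0) return -1; /* server error */
        }
        return 0; /* scenario completed */
    }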
Library-Independent Abstractions

In order to symbolically execute the test scenarios, we had to implement abstractions for various functionalities, such as blocking and non-blocking network operations, as well as cryptographic operations. Figure 2 shows an overview of the test setup, including the layers we replaced with abstractions, such as communication via UDP.

UNIX Sockets. To enable KLEE to correctly route network data, we explicitly modeled the network environment by providing simple custom implementations of functions such as socket, connect, sendto, and so forth. Note that we implemented only those parts of the POSIX socket API necessary for executing the two implementations, as the whole API surface covers extensive functionality. These parts were straightforward to implement: we used a simple linked-list structure for sent packets, tracked additional information per socket, and implemented only UDP functionality.

Symbolic Values. We also used our abstraction to model some of the properties of UDP-based communication, such as unreliability. To model packet drops, we used a symbolic variable that decides whether or not to drop each packet. As a result, for each packet we explore a path in which that packet was lost. Additionally, we implemented the possibility to make certain bytes of a sent packet symbolic instead of simply delivering the packet. This allows testing the receiver of each packet with regard to robustness, within the current state of the communication. While this is enough for QUIC implementations that rely on blocking communication, such as picoquic, others rely on asynchronous event notifications. For the case of QUANT, this is provided by libev.
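As an illustration of how the socket abstraction and the symbolic packet drops fit together, a sendto replacement could queue packets and fork one packet-loss path per send; klee_make_symbolic is KLEE's real API, while queue_packet stands in for the linked-list bookkeeping:

    #include <klee/klee.h>
    #include <stdbool.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Hypothetical helper that appends the packet to the per-socket
     * linked list from which the receiving endpoint later reads. */
    void queue_packet(int fd, const void *buf, size_t len,
                      const struct sockaddr *dst, socklen_t dstlen);

    ssize_t sendto(int fd, const void *buf, size_t len, int flags,
                   const struct sockaddr *dst, socklen_t dstlen) {
        (void)flags;
        bool drop;
        /* Fork execution: one path delivers the packet, one loses it. */
        klee_make_symbolic(&drop, sizeof(drop), "drop_packet");
        if (drop)
            return (ssize_t)len; /* UDP: a lost packet looks like a successful send */
        queue_packet(fd, buf, len, dst, dstlen);
        return (ssize_t)len;
    }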
Libev. Libev is a library that provides an event loop for asynchronous applications. We implemented a mock version of libev that fulfilled our requirements: it is easy to integrate into QUANT and our final test scenario binaries, and simple to evaluate with KLEE.

OpenSSL. SymEx is not able to reverse constraints that are based on cryptographic operations (encryption, decryption, hashing, etc.), as otherwise the underlying cryptography would be broken. As QUIC heavily relies on cryptographic operations, we needed to make these operations transparent, for which we implemented an OpenSSL abstraction that always performs null-encryption. This means we implemented most of the functions used by picoquic and QUANT in a very bare-bones fashion, often as nothing more than a no-op, and implemented encryption and decryption essentially as a memcpy. With regard to hash functions, we decided to use actual implementations of these hashes instead of, e.g., hashing all values to the same hash value. This had certain implications for our evaluation: On one hand, it makes our implementation more correct, as different messages will correctly hash to different values. On the other hand, whenever our SymEx engine had to reverse the result of such a hash function, it would not be able to do so, possibly preventing the exploration of certain parts of the libraries.

Library-Independence. All of these mocks are implemented independently of the QUIC library under test, and are reusable for future interoperability tests. Thus, our work lays the foundation for testing a larger set of implementations.

Picoquic

For picoquic, a frontend that can execute our test scenarios was straightforward to implement, due to the fact that an example client and an example server were available. We replaced blocking reads in the client and server with points at which execution would return to the test harness, so the next communication partner would be able to make progress. This was made easy by the fact that the picoquic API itself only prepares packets for sending, and leaves the actual sending to the application. This means that we could implement the communication handover inside of our frontend library, and did not need to implement it inside of picoquic itself, requiring no changes to the library.

QUANT

The changes needed for QUANT were more extensive, as QUANT internally uses a libev-based event loop, which we needed to intercept in order to be able to return execution to the test harness when the event loop would block waiting for new data. To do so, in addition to implementing a simple variant of libev as described before, we modified the top-level API functions of QUANT. These functions expected the underlying event loop to block; we changed them to instead return control to the test harness whenever entering the event loop would block. Additionally, QUANT made use of global variables, which led to incorrect behavior when, e.g., trying to instantiate a QUANT server and client in the same binary. To circumvent this, we performed a simple renaming of all defined symbols on the LLVM IR of QUANT. This prefixes all functions, such as q_connect, with a prefix of our choice, resulting in, e.g., client__q_connect and server__q_connect. Since this also renames all global variables, it allows us to test QUANT clients and servers in the same binary. As our case study contains only a single QUANT instance, it did not strictly require this additional renaming; however, since global variables are a common feature in programs, it is necessary that our approach also supports this case.
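A minimal sketch of such a renaming pass, expressed with the LLVM-C API, could look as follows; this is illustrative only, and notably must skip declarations so that external symbols such as memcpy remain intact:

    #include <llvm-c/Core.h>
    #include <stdio.h>
    #include <string.h>

    /* Prefix one symbol, e.g., q_connect -> client__q_connect. */
    static void prefix_symbol(LLVMValueRef v, const char *prefix) {
        size_t len;
        const char *name = LLVMGetValueName2(v, &len);
        char buf[512];
        snprintf(buf, sizeof(buf), "%s%.*s", prefix, (int)len, name);
        LLVMSetValueName2(v, buf, strlen(buf));
    }

    /* Rename all defined functions and global variables of a module. */
    void prefix_module(LLVMModuleRef m, const char *prefix) {
        for (LLVMValueRef f = LLVMGetFirstFunction(m); f; f = LLVMGetNextFunction(f))
            if (!LLVMIsDeclaration(f)) /* keep external symbols intact */
                prefix_symbol(f, prefix);
        for (LLVMValueRef g = LLVMGetFirstGlobal(m); g; g = LLVMGetNextGlobal(g))
            if (!LLVMIsDeclaration(g))
                prefix_symbol(g, prefix);
    }

Running prefix_module once with "client__" and once with "server__" on two copies of the bitcode would yield two instances whose global state no longer collides.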
EVALUATION

For our evaluation we considered six different combinations of scenarios and symbolic input. All configurations were executed in KLEE with Z3 [5] as the underlying SMT solver, with a time limit of 8 hours and a memory limit of 32 GB, on a system with two E5-2643 v4 processors providing a total of 12 physical cores and 256 GB main memory. We additionally added timeouts of 10 seconds per instruction and per query, to prevent the analysis from being stuck on overly hard queries. We chose picoquic and QUANT for our case study as both are written in C, which is supported by KLEE. Both of these libraries also implement the newest version of the QUIC standard at the time of writing (draft 14).

Configurations

We tested six configurations and provide the results of their symbolic execution in Table 1.

Sym-stream. This configuration combines all three described scenarios. We added symbolic input that chooses which of these to execute, resulting in the execution of all three, as SymEx explores all possible paths. This is the same as executing all three scenarios concretely without further symbolic values. This configuration terminated in about a minute after exploring all three reachable paths through the test binary, and reported two bugs. The first error is an interoperability bug that we originally found during the development of our implementation. This bug occurs in the second scenario, when the stream is closed by the server without sending any data. In this case, the QUANT server silently closes the stream on its end, not notifying the client. The client then times out and closes the connection prematurely. The second error occurs because certain resources are freed which might still be in use inside of libev. This bug was discovered because our libev abstraction touched the freed value, which KLEE detected. In practice, this kind of bug is hard to catch: As it occurred in a shared library, concrete execution with a tool like ASAN would not detect it, and this is exactly the kind of bug that can cause rare, seemingly random crashes. We reported both bugs in the QUANT library, and both were verified and later fixed. This configuration also gives a good baseline for instruction and branch coverage, as the other configurations explore the third scenario, which covers the most API surface, with different symbolic values. The values for instruction and branch coverage include dead code, and thus their absolute values need to be treated with care. However, they can be used to compare against the other configurations.

Sym-version. This configuration is built upon the third scenario (connection establishment, new stream, response), but makes the version proposed by the picoquic client to the server symbolic. We chose this configuration because setting the proposed version is an option of the picoquic library. This configuration terminates after only 25 minutes, with most of the time spent inside the solver. The reason is that this configuration generated, early on, constraints that were not solvable by the SMT solver within the given timeout, thus terminating all paths prematurely. Nevertheless, this configuration also found an error that prevented the establishment of a connection. This error occurs when the proposed version is set to 0xbabababa. As the current QUIC draft reserves all versions of the form 0x?a?a?a?a for version negotiation, it seems plausible that this version by itself could not lead to a successfully established connection. We categorize this bug as very mild, as it is obviously only a small API problem.

Sym-drop. For this configuration we symbolically dropped every packet. This is the first configuration that needed more than a few hundred MB of memory, and also the first that explored a large number of paths through the program. We see that very little time was spent solving constraints, which makes sense, since no symbolic data was actually touched by either of the libraries (either a packet was delivered as-is or it was dropped). Most interestingly, this is also the first configuration that found a bug which only occurred after multiple exchanged packets, and would not easily be found during manual testing. The reported test case drops the 4th, 5th and 7th packets exchanged between the two endpoints, which triggers a segfault due to a null-pointer dereference in the QUANT server when the 9th packet is received. We verified that this bug also occurs when running concretely with regular OpenSSL instead of our abstraction. This is a robustness bug, but it might also be an interoperability bug, as other implementations might not trigger it.

Sym-mod-X. In these configurations, the first X bytes of every sent packet are made symbolic, in order to test the robustness of the receiving endpoint. This category includes the two configurations that reached the highest instruction and branch coverages, but it also includes the configuration that achieved the lowest coverages. A trend can be seen here: More symbolic bytes cause more work for the SymEx engine due to state explosion, resulting in more time spent inside the SMT solver and thus slower progress overall. However, the run that achieved the lowest coverage uncovered an additional bug in QUANT's packet receiving code. The generated test case triggers the bug by replacing the first 10 bytes of the first packet sent by picoquic with the concrete values [0xff, 0x01, 0x01, 0x01, 0x01, 0x67, 0xff, 0xff, 0xff, 0xff], which leads to a null-pointer dereference in the server. We verified this bug as well while running without our OpenSSL abstraction.
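The sym-mod-X mutation itself could be realized along the following lines; the scratch-buffer indirection reflects that KLEE makes complete objects symbolic, and the helper name and bound are illustrative:

    #include <klee/klee.h>
    #include <stdint.h>
    #include <string.h>

    /* Replace the first x bytes of an outgoing packet with symbolic data,
     * so the receiver is exercised with every possible prefix. */
    void make_prefix_symbolic(uint8_t *pkt, size_t pktlen, size_t x) {
        uint8_t sym[10]; /* illustrative bound, matching sym-mod-10 */
        if (x > pktlen) x = pktlen;
        if (x > sizeof(sym)) x = sizeof(sym);
        /* KLEE symbolizes whole objects, hence the scratch buffer. */
        klee_make_symbolic(sym, sizeof(sym), "packet_prefix");
        memcpy(pkt, sym, x);
    }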
FUTURE WORK

While our case study shows the usefulness of automated testing techniques such as SymEx for analyzing QUIC implementations, there is still much that can be done. A first and important step is the definition of the kinds of belief state QUIC implementations should be able to report on. In a second step, such a model can then be used to test implementations for state divergence regarding the belief states of the different endpoints. Most of the effort to achieve this should lie in defining a common ground for the definition of the belief state. We expect that extracting the belief state from implementations will then require manageable effort, since implementations must already keep track of the state of each connection. This could be realized in the form of standardized testing and verification interfaces for protocol implementations, which would enable high levels of accessibility for new analysis approaches. This in turn would lead to higher-quality implementations, increasing stability, robustness and performance in a field where all of these are important. Our test scenarios only dropped packets or made some of the bytes symbolic, but did not take the specific structure of QUIC packets into account. Here, a layer that reads the packets that are sent and performs symbolic mutations based on the semantics of the protocol, e.g., symbolic ACK numbers, could lead to more thorough and scalable testing. The need for such a method becomes obvious when looking at the sym-mod-10 configuration, which already caused a visible slowdown of the SymEx engine due to state explosion.
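Such a protocol-aware mutation layer might, for instance, replace an ACK field with a constrained symbolic value. The field offset and fixed-width encoding below are purely illustrative (real QUIC uses variable-length encodings), while klee_make_symbolic and klee_assume are KLEE's actual primitives:

    #include <klee/klee.h>
    #include <stdint.h>
    #include <string.h>

    /* Make the ACK number of an outgoing packet symbolic, but constrain it
     * to plausible values so exploration stays focused. */
    void mutate_ack(uint8_t *pkt, size_t ack_off, uint64_t largest_sent) {
        uint64_t ack;
        klee_make_symbolic(&ack, sizeof(ack), "ack_number");
        klee_assume(ack <= largest_sent); /* only acknowledge sent packets */
        memcpy(pkt + ack_off, &ack, sizeof(ack)); /* fixed width, for illustration */
    }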
Furthermore, to analyze more parts of protocol implementations, additional test scenarios that exercise so-far uncovered protocol functionality are required. One way to achieve this would be to create more test scenarios based on the QUIC standard. However, it might also be possible to automatically derive test scenarios, either from a model of the standard, or from the implementations themselves. For this, knowing which API call caused which state change could help in choosing possible next API calls. To extend test scenarios to more than two endpoints, it might be favorable to utilize SymEx techniques that target distributed systems, such as KleeNet [17,18]. While doing so, it might also become relevant to investigate symbolic time, since behavior in network protocols is often dependent on timing, most notably due to timeouts.

CONCLUSION

We presented an interoperability-guided method to test QUIC implementations and demonstrated its potential in a case study. Our method consists of testing implementations in pre-defined scenarios that are enriched with additional symbolic input, such as packet drops and symbolic modifications. In our case study we showed that, in order to symbolically execute and test implementations, the underlying libraries must be abstracted in a way that is sensible for testing: on the one hand, kernel code such as UNIX sockets could otherwise not be executed and analyzed at all; on the other hand, abstractions are needed to, e.g., make encryption transparent in order to enable any analysis in the first place. We were able to uncover several bugs with varying levels of severity. While two were simple API issues that could easily be found through manual testing, two of the other three would be hard to find without some kind of automated testing approach, as they occur only in very specific situations: The right packets in a long chain of packets have to be dropped, or a very specific first packet has to be sent. The last bug was only detected due to our abstraction of libev, but does not necessarily require SymEx to uncover. In summary, most of these bugs are robustness bugs. To detect deeper semantic interoperability bugs, support in implementations that provides information about the current belief state of endpoints is required. We appeal to the authors of QUIC implementations, as well as to the members of the IETF working group, to develop a common understanding of what information makes up the belief state of a QUIC connection, and to extend implementations with ways to report this information for the sake of deep semantic interoperability testing.
4,508
1811.12099
2903071347
The main reason for the standardization of network protocols, like QUIC, is to ensure interoperability between implementations, which is a challenging task. Manual tests are currently used to test the different existing implementations for interoperability, but given the complex nature of network protocols, it is hard to cover all possible edge cases. State-of-the-art automated software testing techniques, such as Symbolic Execution (SymEx), have proven themselves capable of analyzing complex real-world software and finding hard-to-detect bugs. We present a SymEx-based method for finding interoperability issues in QUIC implementations, and explore its merit in a case study that analyzes the interoperability of picoquic and QUANT. We find that SymEx is able to analyze deep interactions between different implementations and uncovers several bugs; however, to enable efficient interoperability testing, implementations need to provide additional information about their current protocol state.
Programs have also been analyzed with formal methods, such as SymEx, to test for obvious problems like memory safety and assertion violations @cite_1 @cite_11 and for less easily checked properties, such as liveness violations @cite_6 and authentication bypass flaws in firmware binaries @cite_3 . One of the main problems encountered when formally analyzing real-world code is the penchant of the state space to grow infeasibly large---a problem also known as state explosion. Many different approaches to tame the state explosion problem inherent in SymEx have been proposed in the past: state merging @cite_18 , targeted search strategies @cite_0 and pruning of provably equivalent paths @cite_15 , to name a few.
{ "abstract": [ "Symbolic execution has proven to be a practical technique for building automated test case generation and bug finding tools. Nevertheless, due to state explosion, these tools still struggle to achieve scalability. Given a program, one way to reduce the number of states that the tools need to explore is to merge states obtained on different paths. Alas, doing so increases the size of symbolic path conditions (thereby stressing the underlying constraint solver) and interferes with optimizations of the exploration process (also referred to as search strategies). The net effect is that state merging may actually lower performance rather than increase it. We present a way to automatically choose when and how to merge states such that the performance of symbolic execution is significantly increased. First, we present query count estimation, a method for statically estimating the impact that each symbolic variable has on solver queries that follow a potential merge point; states are then merged only when doing so promises to be advantageous. Second, we present dynamic state merging, a technique for merging states that interacts favorably with search strategies in automated test case generation and bug finding tools. Experiments on the 96 GNU Coreutils show that our approach consistently achieves several orders of magnitude speedup over previously published results. Our code and experimental data are publicly available at http: cloud9.epfl.ch.", "We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage -- on average over 90 per tool (median: over 94 ) -- and significantly beat the coverage of the developers' own hand-written test suite. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100 coverage on 31 of them. We also used KLEE as a bug finding tool, applying it to 452 applications (over 430K total lines of code), where it found 56 serious bugs, including three in COREUTILS that had been missed for over 15 years. Finally, we used KLEE to crosscheck purportedly identical BUSYBOX and COREUTILS utilities, finding functional correctness errors and a myriad of inconsistencies.", "", "Liveness violation bugs are notoriously hard to detect, especially due to the difficulty inherent in applying formal methods to real-world programs. We present a generic and practically useful liveness property which defines a program as being live as long as it will eventually either consume more input or terminate. We show that this property naturally maps to many different kinds of real-world programs.", "In this paper, we study the problem of automatically finding program executions that reach a particular target line. This problem arises in many debugging scenarios; for example, a developer may want to confirm that a bug reported by a static analysis tool on a particular line is a true positive. 
We propose two new directed symbolic execution strategies that aim to solve this problem: shortest-distance symbolic execution (SDSE) uses a distance metric in an interprocedural control flow graph to guide symbolic execution toward a particular target; and call-chain-backward symbolic execution (CCBSE) iteratively runs forward symbolic execution, starting in the function containing the target line, and then jumping backward up the call chain until it finds a feasible path from the start of the program. We also propose a hybrid strategy, Mix-CCBSE, which alternates CCBSE with another (forward) search strategy. We compare these three with several existing strategies from the literature on a suite of six GNU Coreutils programs. We find that SDSE performs extremely well in many cases but may fail badly. CCBSE also performs quite well, but imposes additional overhead that sometimes makes it slower than SDSE. Considering all our benchmarks together, Mix-CCBSE performed best on average, combining to good effect the features of its constituent components.", "Recent work has used variations of symbolic execution to automatically generate high-coverage test inputs [3, 4, 7, 8, 14]. Such tools have demonstrated their ability to find very subtle errors. However, one challenge they all face is how to effectively handle the exponential number of paths in checked code. This paper presents a new technique for reducing the number of traversed code paths by discarding those that must have side-effects identical to some previously explored path. Our results on a mix of open source applications and device drivers show that this (sound) optimization reduces the numbers of paths traversed by several orders of magnitude, often achieving program coverage far out of reach for a standard constraint-based execution system.", "The challenges---and great promise---of modern symbolic execution techniques, and the tools to help implement them." ], "cite_N": [ "@cite_18", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_15", "@cite_11" ], "mid": [ "1979693894", "1710734607", "2091939272", "2883123122", "116894366", "1497028280", "2107147876" ] }
Interoperability-Guided Testing of QUIC Implementations using Symbolic Execution
The emergence of new, modern protocols for the Internet promises a solution to long-standing issues that can only be solved by changing core parts of the current protocol stack. Such new protocols and their implementations must meet the highest requirements: They will have to reliably function at similar levels of maturity as what they aim to replace. This includes aspects such as reliability, security, performance and, prominently, interoperability between implementations. Ensuring interoperability is the main reason for standardizing QUIC as a protocol, and the IETF standardization process goes to great lengths, such as requiring multiple independent implementations, to make sure this is achievable. Thus, better methods and tools that assist with the difficult challenge of interoperability testing are highly desirable. Automated testing techniques, such as Symbolic Execution (SymEx), have proven themselves to be capable of analyzing complex real world software, usually focused on finding low-level safety violations [4], and SymEx has also proven its worth in the networking domain in various other ways [7, 14, 16-18, 22, 24, 25]. This paper explores the potential of SymEx for checking the interoperability of QUIC implementations. It does so by presenting a SymEx-based method to detect interoperability issues, and demonstrates its potential in a case study of two existing QUIC implementations, picoquic and QUANT. We discover that, while our method is able to successfully analyze nontrivial interactions between different implementations, implementations need to disclose more protocol-level information to truly enable deep semantic interoperability testing. Key Contributions and Outline The key contributions of this paper are as follows: • We describe a method that uses Symbolic Execution (SymEx) to test QUIC implementations for interoperability, and discuss how additional information from implementations about their current protocol state could be leveraged for semantically deeper testing. • We then present our case study in which we symbolically test picoquic and QUANT for interoperability, and discuss the abstraction layers that are necessary to enable SymEx for QUIC implementations. • The final key contribution is the evaluation of our implementation, testing picoquic and QUANT, in which we report on the performance of our method as well as on defects we discovered. We begin by giving background on SymEx in Sect. 2, followed by a discussion of related work in Sect. 3. We then present our method in Sect. 4, and describe its implementation and the setup of the case study in Sect. 5. This is followed by an evaluation of our results in Sect. 6, before we shortly discuss future work in Sect. 7 and conclude in Sect. 8 if(x < 5) { } if(x >= 100) {x < 5} if(x >= 100) {x ≥ 5} return ok {x < 5, x ≥ 100} return ok {x < 5, x < 100} return ok {x ≥ 5, x ≥ 100} return ok {x ≥ 5, x < 100} 2 if(x < 5) ok = false; 1 bool ok = true; 3 if(x >= 100) ok = false; 4 return ok; Figure 1: Symbolic Execution (SymEx) of a small example program. Constraints encountered in branching statements are recorded in the path constraints of the corresponding explored paths. By checking new branching conditions for satisfiability on each path, exactly all reachable paths through the program are explored. SYMBOLIC EXECUTION (SYMEX) Given a program that takes some input (e.g., command line arguments, files, network packets, etc.), SymEx systematically explores the program by executing all reachable paths. 
It does so by assigning symbolic values instead of concrete ones to its input, which allows the SymEx engine to fork execution at a branch-statement (i.e., if) when both branches are feasible. If this is the case, the condition that caused the fork (i.e., the condition inside the if statement) is remembered on the execution path following the true-branch as an additional constraint. On the other execution path, which follows the false-branch, the negation of the condition is remembered as a constraint instead. To determine the reachability given the current constraints, an SMT solver, such as Z3 [5], is queried. SMT solvers are the backbone of every SymEx engine, and their performance and completeness directly influence the efficiency of the symbolic analysis, and they ensure that only feasible paths are explored. Continuing in this fashion a SymEx engine will explore all reachable paths through the program. Whenever a path terminates, either regularly or because an error was encountered, the engine will query the SMT solver using the collected path constraints to get concrete values for each symbolic input value. These values will then be recorded in the form of a test case, which can then be run again later to exercise the same path through the program. If a bug was encountered, the generated test case will be able to reproduce the taken path for further debugging and introspection. Figure 1 shows a small example program that performs operations depending on the value of a symbolic input variable x. The program contains two conditional branches that have to be traversed before the return in line 4 is reached. On the right, all paths explored by SymEx are shown. In the beginning, x is unconstrained, but, as SymEx progresses, a path for each side of the first branch is explored. For each side, a corresponding constraint (either x < 5 or x ≥ 5) is added to the path constraints. When the second branch is reached, only three paths need to be explored further: The constraint set {x < 5, x ≥ 100} is not satisfiable, and therefore this path will never be reachable during execution. In the end, SymEx will query the SMT solver for concrete values for x for each path to generate a suite of concrete test cases that cover all reachable paths of the program. METHOD OUTLINE SymEx engines such as KLEE [3], which our cases study utilizes, usually expect their input to be a program. However, protocol implementations are naturally libraries, and as such lack an implicit singular entry point. Although ways to analyze libraries directly have been proposed [15], they suffer from a lack of insight into what constitutes a sensible use of the library. Instead, we propose to analyze programs that utilize the libraries and execute different test scenarios. One way to choose the test scenarios for this would be to use existing applications that already implement real-world application logic. This is currently difficult to do for QUIC, as there are only very few applications built on top of QUIC, and is further complicated by their use of only a small set of different QUIC implementations. Instead, we follow the current best-practice in compliance testing by designing test scenarios based on primitives defined in the QUIC standard. Unlike common, concrete compliance testing suites, we formulate symbolic testing scenarios that perform large families of related tests in one go. 
These describe the involved endpoints (e.g., clients and servers) and the communication that takes place between them, for example, which connections are established, which streams are opened, what is sent on those streams, and so on. Such scenarios can be defined in both high-level as well as low-level terms. A more low-level scenario describes individual packets and effects such as loss or reordering instead of focusing on connections and streams. Independently of the test scenarios, we need to define what we categorize as actual errors, so that the SymEx engine can actually detect which paths exhibit erroneous behavior. We present two categories of errors here, one focused on interoperability, and one focused on robustness. Testing Interoperability Generally speaking, whenever there is a conflict between what the communication partners believe the state of their connection to be, an interoperability violation exists. In the case of networked programs, it is important to quantify the belief state of each endpoint in a way that is neither too constrained (e.g., if the server believes that a data connection is open, but the client has already sent a shutdown request, there is no conflict), nor too open (otherwise error detection becomes impossible). Such issues can cause the communication to continue without exhibiting low-level errors, but the result of the execution to differ from what was expected. For example, if the amount of application data sent by one endpoint differs from the amount of data received by the other after a finished transmission, this is an error, as the two endpoints hold different beliefs about the correct state of the connection. To be able to detect such bugs, it is necessary to have a way to extract the current belief state of each endpoint in a way that can be compared to that of the other endpoints. Here, standardization can help: A definition of what exactly is part of the (belief) state of a QUIC connection could be used by implementations to provide this information to analysis tools. Such information could then be used by testing and verification tools to great effect, enabling stronger and more semantically meaningful analyses. Testing Robustness Robustness can be defined as the ability of an implementation to deal correctly with unexpected events, such as packet loss, reordering or packets crafted with malicious intent. Here, errors usually manifest in the form of, e.g., out-of-bound memory accesses, useafter-free violations, assertion errors, etc. When using a SymEx engine, the engine will provide the capability to test for such violations out-of-the-box, already providing valuable testing feedback without needing to define additional error conditions. Generality of the Method This method is, in its core, protocol independent, and can be applied to other protocols than QUIC. However, its application to QUIC shows the effort required to implement it for non-trivial, real world protocols, as well as its suitability for such protocols. The question in this case is scalability: While it is usually straight-forward to apply any method to simple examples, we are interested in whether the method scales to implementations of complex protocols, such as QUIC, and also to see the requirements for such protocols in regards to automated testing. CASE STUDY: PICOQUIC AND QUANT For our case study we implemented our method for picoquic 1 and QUANT 2 . 
These implementations were chosen because they are written in C, and could therefore be analyzed by KLEE, our SymEx engine of choice, out-of-the-box. It has successfully been shown that SymEx can also be applied to programs written in other languages, like C++ [8], so this is not a limitation of the general approach. We defined multiple test scenarios, and developed simple clients for each library that execute the defined scenarios. An additional challenge is that the KLEE SymEx engine [3] only works on single programs, which caused us to implement a single program that instantiates all communication partners and advances them in tandem for each test scenario. This means that one endpoint (e.g., client or server) is executed until it can make no more progress (i.e., it blocks waiting for a response) at which point execution switches to the next endpoint. This continues until either the scenario is finished, or one endpoint reports an error. In the following sections we describe our test scenarios and present which library-specific adaptations were necessary, as well as which library-independent abstractions we implemented that can be re-used for other libraries in the future. Test Scenarios We decided upon three test scenarios that exercise some of the core features of QUIC: In the first scenario, a client establishes a connection with a server, then closes it again. In the second scenario, we establish a connection just as before, but the client also opens a stream and sends a simple HTTP request (GET /index.html), which the server then closes without responding. Finally, the third scenario builds upon the second one, but the server also responds with a one-byte response. We define as interoperability issues any case in which a run ends without the underlying scenario being fulfilled, e.g., because a connection could not be established, or because one of the endpoints timed out during the process. For each library we implemented a frontend which provides functions that create a client or a server for one of the scenarios, a function that advances a client or server (executing it until it has reached the next stage in the scenario or is blocked on network input), and a function that checks whether a client or server is finished with the scenario. In our evaluation, we focused on the scenarios being executed with a picoquic client communicating with a QUANT server, but our implementation also supports the other cases (QUANT client and picoquic server or both from the same library). Library-Independent Abstractions In order symbolically execute the test scenarios, we had to implement abstractions for various functionalities, such as blocking and non-blocking network operations, as well as cryptographic operations. Figure 2 shows an overview of the test setup, including the layers we replaced with abstractions, such as communication via UDP. UNIX Sockets. To enable KLEE to correctly route network data, we explicitly modeled the network environment by providing simple custom implementations of functions such as socket, connect, sendto, and so forth. Note that we implemented only those parts of the POSIX socket API necessary for executing the two implementations, as the whole API surface covers extensive functionality. These parts where straightforward to implement, and we used a simple linked-list structure for sent packets and otherwise tracked additional information per socket, and only implemented UDP functionality. Symbolic Values. 
We also used our abstraction to model some of the properties of UDP-based communication, such as unreliability. To model packet drops, we used a symbolic variable that decides whether or not to drop each packet. The result of this is that we will for each packet explore a path in which this packet was lost. Additionally, we also implemented the possibility to make certain bytes of a sent packet symbolic instead of simply delivering the packet. This allows testing the receiver of each packet with regards to robustness, within the current state of the communication. While this is enough for QUIC implementations that rely on blocking communication, such as picoquic, others rely on asynchronous event notifications. For the case of QUANT, this is provided by libev. Libev. Libev is a library that provides an event loop for asynchronous applications. We implemented a mock version of libev that fulfilled our requirements of being easy to integrate into QUANT and our final test scenario binaries, as well as being simple to evaluate with KLEE. OpenSSL. SymEx is not able to reverse constraints that are based on cryptographic operations (encryption, decryption, hashing, etc.), as otherwise the underlying cryptography would be broken. As QUIC heavily relies on cryptographic operations, we needed to make these operations transparent, for which we implemented an OpenSSL abstraction that always performs null-encryption. This means we implemented most of the functions used by picoquic and QUANT in a very bare-bones fashion, often nothing more than a no-op, and implemented encryption and decryption basically as a memcpy. With regards to hash-functions, we decided to use actual implementations of these hashes instead of, e.g., hashing all values to the same hash value. This had certain implications for our evaluation: On one hand, it makes our implementation more correct, as different messages will correctly hash to different values. On the other hand, whenever our SymEx engine had to reverse the result of such a hash-function, it would not be able to do so, possibly preventing the exploration of certain parts of the libraries. Library-Independence. All of these mocks are implemented independently of the QUIC library under test, and are reusable for future interoperability tests. Thus, our work lays the foundation for testing a larger set of implementations. Picoquic For picoquic, a frontend that can execute our test scenarios was straightforward to implement, due to the fact that an example client and an example server were available. We replaced blocking reads in the client and server with points at which execution would return to the test harness, so the next communication partner would be able to make progress. This was made easy by the fact that the picoquic API itself only prepares packets for sending, and leaves the actual sending to the application. This means that we could implement the communication handover inside of our frontend library, and did not need to implement it inside of picoquic itself, requiring no changes to the library. QUANT The changes needed for QUANT were more extensive, as QUANT internally uses a libev-based event loop, which we needed to intercept in order to be able to return execution to the test harness when the event loop would block waiting for new data. To do so, in addition to implementing a simple variant of libev as described before, we modified the top-level API functions of QUANT. 
These expected blocking behavior of the underlying event loop, but instead we changed them to return control to the test harness when entering the event loop would block. Additionally, QUANT made use of global variables, which lead to corrupt behavior when, e.g., trying to instantiate a QUANT server and client in the same binary. To circumvent this, we performed a simple renaming of all defined symbols on the LLVM IR of QUANT. This prefixes all functions, such as q_connect, with a prefix of our choice, resulting in, e.g., client__q_connect and server__q_connect. Since this also renamed all global variables, this allowed us to test QUANT clients and servers in the same binary. As only a single QUANT instance is contained, our case study did not require this additional renaming. However, since global variables are a common feature in programs, it is necessary that this is also supported by our approach. EVALUATION For our evaluation we considered six different combinations of scenarios and symbolic input. All configurations were executed in KLEE with Z3 [5] as the underlying SMT solver, with a time limit of 8 hours and a memory limit of 32 GB on a system with two E5-2643 v4 processors providing a total of 12 physical cores and 256 GB main memory. We additionally added timeouts of 10 seconds per instruction and per query, to prevent the analysis from being stuck on too hard queries. We chose picoquic and QUANT for our case study as both are written in C, which is supported by KLEE. Both of these libraries also implement the newest version of the QUIC standard at the time of writing (draft 14). Configurations We tested six configurations and provide the results of their symbolic execution in Table 1. Sym-stream. This configuration combines all three described scenarios. We added symbolic input that chooses which of these to execute, resulting in the execution of all three, as SymEx explores all possible paths. This is the same as executing all three scenarios concretely without further symbolic values. This configuration terminated in about a minute after exploring all three reachable paths through the test binary, and reported two bugs. The first error is an interoperability bug that we originally found during the development of our implementation. This bug occurs in the second scenario, when the stream is closed by the server without sending any data. In this case, the QUANT server silently closes the stream on its end, not notifying the client. The client then times out and closes the connection prematurely. The second error occurs because certain resources are freed which might still be in use inside of libev. This bug was discovered because our libev-abstraction touched the freed value, which was discovered by KLEE. In practice, this kind of bug is hard to check: As it occurred in a shared library, concrete execution with a tool like ASAN would not detect this bug, and this is exactly the kind of bug that can cause rare, random crashes. We reported both bugs in the QUANT library, and both were verified and later fixed 34 . This configuration also gives a good baseline for instruction and branch coverage, as the other configurations explore the third scenario, which covers the most API surface, with different symbolic values. The values for instruction and branch coverage include dead code, and thus their absolute values need to be treated with care. However, they can be used to compare against the other configurations. Sym-version. 
EVALUATION

For our evaluation we considered six different combinations of scenarios and symbolic input. All configurations were executed in KLEE with Z3 [5] as the underlying SMT solver, with a time limit of 8 hours and a memory limit of 32 GB, on a system with two E5-2643 v4 processors providing a total of 12 physical cores and 256 GB of main memory. We additionally set timeouts of 10 seconds per instruction and per query to prevent the analysis from getting stuck on overly hard queries. We chose picoquic and QUANT for our case study as both are written in C, which is supported by KLEE. Both libraries also implement the newest version of the QUIC standard at the time of writing (draft 14).

Configurations

We tested six configurations and provide the results of their symbolic execution in Table 1.

Sym-stream. This configuration combines all three described scenarios. We added a symbolic input that chooses which of them to execute; since SymEx explores all possible paths, all three are executed. This is equivalent to executing all three scenarios concretely, without further symbolic values. This configuration terminated in about a minute after exploring all three reachable paths through the test binary, and reported two bugs. The first is an interoperability bug that we originally found during the development of our implementation. It occurs in the second scenario, when the stream is closed by the server without sending any data. In this case, the QUANT server silently closes the stream on its end without notifying the client; the client then times out and closes the connection prematurely. The second error occurs because certain resources are freed while they might still be in use inside libev. This bug was discovered because our libev abstraction touched the freed value, which KLEE detected. In practice, this kind of bug is hard to check for: As it occurred in a shared library, concrete execution with a tool like ASan would not detect it, and it is exactly the kind of bug that can cause rare, seemingly random crashes. We reported both bugs in the QUANT library, and both were verified and later fixed. This configuration also gives a good baseline for instruction and branch coverage, as the other configurations explore the third scenario, which covers the most API surface, with different symbolic values. The values for instruction and branch coverage include dead code, so their absolute values need to be treated with care; they can, however, be used to compare against the other configurations.

Sym-version. This configuration is built upon the third scenario (connection establishment, new stream, response), but makes the version proposed by the picoquic client to the server symbolic. We chose this configuration because setting the proposed version is an option of the picoquic library. It terminates after only 25 minutes, with most of the time spent inside the solver. The reason is that this configuration generated, early on, constraints that the SMT solver could not solve within the given timeout, thus terminating all paths early. Nevertheless, this configuration also found an error that prevented the establishment of a connection. The error occurs when the proposed version is set to 0xbabababa. As the current QUIC draft reserves all versions of the form 0x?a?a?a?a for version negotiation, it seems plausible that this version by itself could not lead to a successfully established connection. We categorize this bug as very mild, as it is only a small API problem.

Sym-drop. In this configuration, the decision whether to drop each packet was symbolic. This is the first configuration that needed more than a few hundred MB of memory, and also the first that explored a large number of paths through the program. Only little time was spent solving constraints, which makes sense, since no symbolic data was actually touched by either of the libraries: A packet was either delivered as-is or dropped. Most interestingly, this is also the first configuration that found a bug which only occurs after multiple exchanged packets and would not easily be found during manual testing. The reported test case drops the 4th, 5th and 7th packets exchanged between the two endpoints, which triggers a segfault due to a null-pointer dereference in the QUANT server when the 9th packet is received. We verified that this bug also occurs when running concretely with regular OpenSSL instead of our abstraction. This is a robustness bug, but it might also be an interoperability bug, as other implementations might not trigger it.

Sym-mod-X. In these configurations, the first X bytes of every sent packet are made symbolic, in order to test the robustness of the receiving endpoint. This category includes the two configurations that reached the highest instruction and branch coverage, but also the configuration that achieved the lowest. A trend is visible here: More symbolic bytes cause more work for the SymEx engine due to state explosion, which results in more time spent inside the SMT solver and slower overall progress. However, the run that achieved the lowest coverage uncovered an additional bug in QUANT's packet-receiving code. The generated test case triggers the bug by replacing the first 10 bytes of the first packet sent by picoquic with the concrete values [0xff, 0x01, 0x01, 0x01, 0x01, 0x67, 0xff, 0xff, 0xff, 0xff], which leads to a null-pointer dereference in the server. We verified this bug as well while running without our OpenSSL abstraction.
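Both sym-drop and sym-mod-X build on the delivery hook in our socket abstraction. The following sketch illustrates how such a hook could be implemented on top of KLEE's klee_make_symbolic intrinsic; the packet and endpoint types are hypothetical simplifications of our mock, not a real API, and the caller is assumed to ensure that sym_prefix does not exceed the packet length.

#include <klee/klee.h>
#include <stddef.h>

/* Hypothetical, simplified types from our socket mock. */
struct packet { unsigned char data[1500]; size_t len; struct packet *next; };
struct endpoint { struct packet *rx_queue; };

static void enqueue(struct endpoint *peer, struct packet *p) {
    p->next = peer->rx_queue;   /* simplified: delivery order not preserved */
    peer->rx_queue = p;
}

/* Called for every packet "sent" over the mocked UDP socket.
 * allow_drop selects sym-drop behavior; sym_prefix > 0 selects sym-mod-X. */
static void symbolic_deliver(struct endpoint *peer, struct packet *p,
                             int allow_drop, size_t sym_prefix) {
    if (allow_drop) {
        char drop;
        klee_make_symbolic(&drop, sizeof drop, "drop_packet");
        if (drop)
            return;              /* one path: the packet is silently lost */
    }
    if (sym_prefix > 0)          /* sym-mod-X: first bytes become symbolic */
        klee_make_symbolic(p->data, sym_prefix, "pkt_prefix");
    enqueue(peer, p);
}

Since KLEE forks at the branch on drop, each sent packet doubles the number of explored delivery schedules, which matches the large path counts we observed for sym-drop.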
FUTURE WORK

While our case study shows the usefulness of automated testing techniques such as SymEx for analyzing QUIC implementations, there is still much to be done. A first and important step is the definition of the kinds of belief state QUIC implementations should be able to report on. In a second step, such a model can be used to test implementations for state divergence regarding the belief states of the different endpoints. Most of the effort to achieve this lies in defining a common ground for the definition of the belief state. We expect that extracting the belief state from implementations will then require manageable effort, since implementations must already keep track of the state of each connection. This could be realized in the form of standardized testing and verification interfaces for protocol implementations, which would make new analysis approaches highly accessible. This in turn would lead to higher-quality implementations, increasing stability, robustness and performance in a field where all of these are important.

Our test scenarios only dropped packets or made some of their bytes symbolic, but did not take the specific structure of QUIC packets into account. Here, a layer that reads the packets being sent and performs symbolic mutations based on the semantics of the protocol, e.g., symbolic ACK numbers, could lead to more thorough and scalable testing. The need for such a method becomes obvious when looking at the sym-mod-10 configuration, which already caused a visible slowdown of the SymEx engine due to state explosion. Furthermore, to analyze more parts of protocol implementations, additional test scenarios that exercise so-far uncovered protocol functionality are required. One way to achieve this would be to create more test scenarios based on the QUIC standard. However, it might also be possible to automatically derive test scenarios, either from a model of the standard or from the implementations themselves. For this, knowing which API call caused which state change could help in choosing possible next API calls. To extend test scenarios to more than two endpoints, it might be favorable to utilize SymEx techniques that target distributed systems, such as KleeNet [17,18]. While doing so, it might also become relevant to investigate symbolic time, since the behavior of network protocols often depends on timing, most notably due to timeouts.

CONCLUSION

We presented an interoperability-guided method for testing QUIC implementations and demonstrated its potential in a case study. Our method consists of testing implementations in pre-defined scenarios that are enriched with additional symbolic input, such as packet drops and symbolic modifications. Our case study showed that, in order to symbolically execute and test implementations, the underlying libraries must be abstracted in a way that is sensible for testing: On the one hand, kernel functionality such as UNIX sockets cannot otherwise be executed and analyzed; on the other hand, operations such as encryption have to be made transparent to enable any meaningful analysis at all. We were able to uncover several bugs of varying severity. While two were simple API issues that could easily be found through manual testing, two of the other three would be hard to find without some kind of automated testing approach, as they occur only in very specific situations: The right packets in a long chain of packets have to be dropped, or a very specific first packet has to be sent. The last bug was only detected due to our abstraction of libev, but does not necessarily require SymEx to uncover. In summary, most of these bugs are robustness bugs. To detect deeper semantic interoperability bugs, support in implementations that provides information about the current belief state of endpoints is required.
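As a starting point for such support, a belief-state report could be as simple as a struct of connection facts that a test harness compares across endpoints after each lock-step round. The following sketch is purely hypothetical; neither the struct fields nor the function names exist in any current QUIC implementation, and the check on transferred data follows the interoperability example from our method description.

#include <assert.h>
#include <stdint.h>

/* Hypothetical belief-state report; the field set is illustrative only. */
struct quic_belief {
    int      conn_state;      /* e.g., handshaking, established, closed */
    uint64_t open_streams;    /* streams this endpoint considers open   */
    uint64_t app_bytes_sent;  /* application data sent so far           */
    uint64_t app_bytes_rcvd;  /* application data received so far       */
};

/* Each implementation would fill this in for a given connection. */
void quic_report_belief(const void *conn, struct quic_belief *out);

/* Harness-side divergence check after a finished transmission. */
static void check_divergence(const void *client_conn,
                             const void *server_conn) {
    struct quic_belief c, s;
    quic_report_belief(client_conn, &c);
    quic_report_belief(server_conn, &s);
    assert(c.app_bytes_sent == s.app_bytes_rcvd &&
           "endpoints disagree on the amount of transferred data");
    assert(c.open_streams == s.open_streams &&
           "endpoints disagree on the number of open streams");
}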
We appeal to the authors of QUIC implementations, as well as to the members of the IETF working group, to develop a common understanding of what information makes up the belief state of a QUIC connection, and to extend implementations with ways to report this information for the sake of deep semantic interoperability testing.
4,508
1811.12099
2903071347
The main reason for the standardization of network protocols, like QUIC, is to ensure interoperability between implementations, which poses a challenging task. Manual tests are currently used to test the different existing implementations for interoperability, but given the complex nature of network protocols, it is hard to cover all possible edge cases. State-of-the-art automated software testing techniques, such as Symbolic Execution (SymEx), have proven themselves capable of analyzing complex real-world software and finding hard-to-detect bugs. We present a SymEx-based method for finding interoperability issues in QUIC implementations, and explore its merit in a case study that analyzes the interoperability of picoquic and QUANT. We find that, while SymEx is able to analyze deep interactions between different implementations and uncovers several bugs, in order to enable efficient interoperability testing, implementations need to provide additional information about their current protocol state.
As the state explosion problem grows exponentially with the number of programs considered at the same time, approaches explicitly targeting distributed programs have been developed. For example, KleeNet @cite_19 @cite_10 exploits the independence of networked programs by delaying codependent path forks until messages that require the fork to be actualized are received at the respective nodes.
{ "abstract": [ "We present KleeNet, a Klee based bug hunting tool for sensor network applications before deployment. KleeNet automatically tests code for all possible inputs, ensures memory safety, and integrates well into TinyOS based application development life cycle, making it easy for developers to test their applications.", "Complex interactions and the distributed nature of wireless sensor networks make automated testing and debugging before deployment a necessity. A main challenge is to detect bugs that occur due to non-deterministic events, such as node reboots or packet duplicates. Often, these events have the potential to drive a sensor network and its applications into corner-case situations, exhibiting bugs that are hard to detect using existing testing and debugging techniques. In this paper, we present KleeNet, a debugging environment that effectively discovers such bugs before deployment. KleeNet executes unmodified sensor network applications on symbolic input and automatically injects non-deterministic failures. As a result, KleeNet generates distributed execution paths at high-coverage, including low-probability corner-case situations. As a case study, we integrated KleeNet into the Contiki OS and show its effectiveness by detecting four insidious bugs in the μIP TCP IP protocol stack. One of these bugs is critical and lead to refusal of further connections." ], "cite_N": [ "@cite_19", "@cite_10" ], "mid": [ "2158715516", "2062200967" ] }
Interoperability-Guided Testing of QUIC Implementations using Symbolic Execution
Testing protocols and programs in isolation is, however worthwhile, not sufficient on its own. Approaches have therefore been designed that test implementations for protocol compliance using many different testing and verification methodologies, ranging from fuzzing @cite_8 @cite_9 and SymEx @cite_23 to model checking @cite_2 @cite_25 . Validating that a given implementation fulfills a specification or standard does, however, require a formalized representation to be available, which effectively constitutes another implementation of the specification.
{ "abstract": [ "Fuzzing is a well-known black-box approach to the security testing of applications. Fuzzing has many advantages in terms of simplicity and effectiveness over more complex, expensive testing approaches. Unfortunately, current fuzzing tools suffer from a number of limitations, and, in particular, they provide little support for the fuzzing of stateful protocols. In this paper, we present SNOOZE, a tool for building flexible, security-oriented, network protocol fuzzers. SNOOZE implements a stateful fuzzing approach that can be used to effectively identify security flaws in network protocol implementations. SNOOZE allows a tester to describe the stateful operation of a protocol and the messages that need to be generated in each state. In addition, SNOOZE provides attack-specific fuzzing primitives that allow a tester to focus on specific vulnerability classes. We used an initial prototype of the SNOOZE tool to test programs that implement the SIP protocol, with promising results. SNOOZE supported the creation of sophisticated fuzzing scenarios that were able to expose real-world bugs in the programs analyzed.", "Security flaws existed in protocol implementations might be exploited by malicious attackers and the consequences can be very serious. Therefore, detecting vulnerabilities of network protocol implementations is becoming a hot research topic recently. However, protocol security test is a very complex, challenging and error-prone task, as constructing test packets manually or randomly are not practical. This paper presents an efficient mutation-based approach for detecting implementation flaws of network protocol. Compared with other protocol testing tools, our approach divides the procedure of protocol testing into many phases, and flexible design can cover many testing cases for the protocol implementations under testing, and could apply for testing various protocol implementations quite easily. Besides, this approach is more comprehensible that makes the protocol security test easier to carry out. To assess the usefulness of this approach, several experiments are performed on four FTP server implementations and the results showed that our approach can find flaws of protocol implementation very easily. The method is of the important application value and can improve the security of network protocols.", "Implementations of network protocols, such as DNS, DHCP and Zeroconf, are prone to flaws, security vulnerabilities and interoperability issues caused by developer mistakes and ambiguous requirements in protocol specifications. Detecting such problems is not easy because (i) many bugs manifest themselves only after prolonged operation; (ii) reasoning about semantic errors requires a machine-readable specification; and (iii) the state space of complex protocol implementations is large. This article presents a novel approach that combines symbolic execution and rule-based specifications to detect various types of flaws in network protocol implementations. The core idea behind our approach is to (1) automatically generate high-coverage test input packets for a network protocol implementation using single- and multi-packet exchange symbolic execution (targeting stateless and stateful protocols, respectively) and then (2) use these packets to detect potential violations of manual rules derived from the protocol specification, and check the interoperability of different implementations of the same network protocol. 
We present a system based on these techniques, SymbexNet, and evaluate it on multiple implementations of two network protocols: Zeroconf, a service discovery protocol, and DHCP, a network configuration protocol. SymbexNet is able to discover non-trivial bugs as well as interoperability problems, most of which have been confirmed by the developers.", "Network protocols must work. The effects of protocol specification or implementation errors range from reduced performance, to security breaches, to bringing down entire networks. However, network protocols are difficult to test due to the exponential size of the state space they define. Ideally, a protocol implementation must be validated against all possible events (packet arrivals, packet losses, timeouts, etc.) in all possible protocol states. Conventional means of testing can explore only a minute fraction of these possible combinations. This paper focuses on how to effectively find errors in large network protocol implementations using model checking, a formal verification technique. Model checking involves a systematic exploration of the possible states of a system, and is well-suited to finding intricate errors lurking deep in exponential state spaces. Its primary limitation has been the effort needed to use it on software. The primary contribution of this paper are novel techniques that allow us to model check complex, real-world, well-tested protocol implementations with reasonable effort. We have implemented these techniques in CMC, a C model checker [30] and applied the result to the Linux TCP IP implementation, finding four errors in the protocol implementation.", "Many system errors do not emerge unless some intricate sequence of events occurs. In practice, this means that most systems have errors that only trigger after days or weeks of execution. Model checking [4] is an effective way to find such subtle errors. It takes a simplified description of the code and exhaustively tests it on all inputs, using techniques to explore vast state spaces efficiently. Unfortunately, while model checking systems code would be wonderful, it is almost never done in practice: building models is just too hard. It can take significantly more time to write a model than it did to write the code. Furthermore, by checking an abstraction of the code rather than the code itself, it is easy to miss errors.The paper's first contribution is a new model checker, CMC, which checks C and C++ implementations directly, eliminating the need for a separate abstract description of the system behavior. This has two major advantages: it reduces the effort to use model checking, and it reduces missed errors as well as time-wasting false error reports resulting from inconsistencies between the abstract description and the actual implementation. In addition, changes in the implementation can be checked immediately without updating a high-level description.The paper's second contribution is demonstrating that CMC works well on real code by applying it to three implementations of the Ad-hoc On-demand Distance Vector (AODV) networking protocol [7]. We found 34 distinct errors (roughly one bug per 328 lines of code), including a bug in the AODV specification itself. Given our experience building systems, it appears that the approach will work well in other contexts, and especially well for other networking protocols." ], "cite_N": [ "@cite_8", "@cite_9", "@cite_23", "@cite_2", "@cite_25" ], "mid": [ "2129975948", "2064828481", "2117448240", "207759855", "2117009500" ] }
Interoperability-Guided Testing of QUIC Implementations using Symbolic Execution
The emergence of new, modern protocols for the Internet promises a solution to long-standing issues that can only be solved by changing core parts of the current protocol stack. Such new protocols and their implementations must meet the highest requirements: They will have to reliably function at similar levels of maturity as what they aim to replace. This includes aspects such as reliability, security, performance and, prominently, interoperability between implementations. Ensuring interoperability is the main reason for standardizing QUIC as a protocol, and the IETF standardization process goes to great lengths, such as requiring multiple independent implementations, to make sure this is achievable. Thus, better methods and tools that assist with the difficult challenge of interoperability testing are highly desirable. Automated testing techniques, such as Symbolic Execution (SymEx), have proven themselves to be capable of analyzing complex real world software, usually focused on finding low-level safety violations [4], and SymEx has also proven its worth in the networking domain in various other ways [7, 14, 16-18, 22, 24, 25]. This paper explores the potential of SymEx for checking the interoperability of QUIC implementations. It does so by presenting a SymEx-based method to detect interoperability issues, and demonstrates its potential in a case study of two existing QUIC implementations, picoquic and QUANT. We discover that, while our method is able to successfully analyze nontrivial interactions between different implementations, implementations need to disclose more protocol-level information to truly enable deep semantic interoperability testing. Key Contributions and Outline The key contributions of this paper are as follows: • We describe a method that uses Symbolic Execution (SymEx) to test QUIC implementations for interoperability, and discuss how additional information from implementations about their current protocol state could be leveraged for semantically deeper testing. • We then present our case study in which we symbolically test picoquic and QUANT for interoperability, and discuss the abstraction layers that are necessary to enable SymEx for QUIC implementations. • The final key contribution is the evaluation of our implementation, testing picoquic and QUANT, in which we report on the performance of our method as well as on defects we discovered. We begin by giving background on SymEx in Sect. 2, followed by a discussion of related work in Sect. 3. We then present our method in Sect. 4, and describe its implementation and the setup of the case study in Sect. 5. This is followed by an evaluation of our results in Sect. 6, before we shortly discuss future work in Sect. 7 and conclude in Sect. 8 if(x < 5) { } if(x >= 100) {x < 5} if(x >= 100) {x ≥ 5} return ok {x < 5, x ≥ 100} return ok {x < 5, x < 100} return ok {x ≥ 5, x ≥ 100} return ok {x ≥ 5, x < 100} 2 if(x < 5) ok = false; 1 bool ok = true; 3 if(x >= 100) ok = false; 4 return ok; Figure 1: Symbolic Execution (SymEx) of a small example program. Constraints encountered in branching statements are recorded in the path constraints of the corresponding explored paths. By checking new branching conditions for satisfiability on each path, exactly all reachable paths through the program are explored. SYMBOLIC EXECUTION (SYMEX) Given a program that takes some input (e.g., command line arguments, files, network packets, etc.), SymEx systematically explores the program by executing all reachable paths. 
It does so by assigning symbolic values instead of concrete ones to its input, which allows the SymEx engine to fork execution at a branch-statement (i.e., if) when both branches are feasible. If this is the case, the condition that caused the fork (i.e., the condition inside the if statement) is remembered on the execution path following the true-branch as an additional constraint. On the other execution path, which follows the false-branch, the negation of the condition is remembered as a constraint instead. To determine the reachability given the current constraints, an SMT solver, such as Z3 [5], is queried. SMT solvers are the backbone of every SymEx engine, and their performance and completeness directly influence the efficiency of the symbolic analysis, and they ensure that only feasible paths are explored. Continuing in this fashion a SymEx engine will explore all reachable paths through the program. Whenever a path terminates, either regularly or because an error was encountered, the engine will query the SMT solver using the collected path constraints to get concrete values for each symbolic input value. These values will then be recorded in the form of a test case, which can then be run again later to exercise the same path through the program. If a bug was encountered, the generated test case will be able to reproduce the taken path for further debugging and introspection. Figure 1 shows a small example program that performs operations depending on the value of a symbolic input variable x. The program contains two conditional branches that have to be traversed before the return in line 4 is reached. On the right, all paths explored by SymEx are shown. In the beginning, x is unconstrained, but, as SymEx progresses, a path for each side of the first branch is explored. For each side, a corresponding constraint (either x < 5 or x ≥ 5) is added to the path constraints. When the second branch is reached, only three paths need to be explored further: The constraint set {x < 5, x ≥ 100} is not satisfiable, and therefore this path will never be reachable during execution. In the end, SymEx will query the SMT solver for concrete values for x for each path to generate a suite of concrete test cases that cover all reachable paths of the program. METHOD OUTLINE SymEx engines such as KLEE [3], which our cases study utilizes, usually expect their input to be a program. However, protocol implementations are naturally libraries, and as such lack an implicit singular entry point. Although ways to analyze libraries directly have been proposed [15], they suffer from a lack of insight into what constitutes a sensible use of the library. Instead, we propose to analyze programs that utilize the libraries and execute different test scenarios. One way to choose the test scenarios for this would be to use existing applications that already implement real-world application logic. This is currently difficult to do for QUIC, as there are only very few applications built on top of QUIC, and is further complicated by their use of only a small set of different QUIC implementations. Instead, we follow the current best-practice in compliance testing by designing test scenarios based on primitives defined in the QUIC standard. Unlike common, concrete compliance testing suites, we formulate symbolic testing scenarios that perform large families of related tests in one go. 
These describe the involved endpoints (e.g., clients and servers) and the communication that takes place between them, for example, which connections are established, which streams are opened, what is sent on those streams, and so on. Such scenarios can be defined in both high-level as well as low-level terms. A more low-level scenario describes individual packets and effects such as loss or reordering instead of focusing on connections and streams. Independently of the test scenarios, we need to define what we categorize as actual errors, so that the SymEx engine can actually detect which paths exhibit erroneous behavior. We present two categories of errors here, one focused on interoperability, and one focused on robustness. Testing Interoperability Generally speaking, whenever there is a conflict between what the communication partners believe the state of their connection to be, an interoperability violation exists. In the case of networked programs, it is important to quantify the belief state of each endpoint in a way that is neither too constrained (e.g., if the server believes that a data connection is open, but the client has already sent a shutdown request, there is no conflict), nor too open (otherwise error detection becomes impossible). Such issues can cause the communication to continue without exhibiting low-level errors, but the result of the execution to differ from what was expected. For example, if the amount of application data sent by one endpoint differs from the amount of data received by the other after a finished transmission, this is an error, as the two endpoints hold different beliefs about the correct state of the connection. To be able to detect such bugs, it is necessary to have a way to extract the current belief state of each endpoint in a way that can be compared to that of the other endpoints. Here, standardization can help: A definition of what exactly is part of the (belief) state of a QUIC connection could be used by implementations to provide this information to analysis tools. Such information could then be used by testing and verification tools to great effect, enabling stronger and more semantically meaningful analyses. Testing Robustness Robustness can be defined as the ability of an implementation to deal correctly with unexpected events, such as packet loss, reordering or packets crafted with malicious intent. Here, errors usually manifest in the form of, e.g., out-of-bound memory accesses, useafter-free violations, assertion errors, etc. When using a SymEx engine, the engine will provide the capability to test for such violations out-of-the-box, already providing valuable testing feedback without needing to define additional error conditions. Generality of the Method This method is, in its core, protocol independent, and can be applied to other protocols than QUIC. However, its application to QUIC shows the effort required to implement it for non-trivial, real world protocols, as well as its suitability for such protocols. The question in this case is scalability: While it is usually straight-forward to apply any method to simple examples, we are interested in whether the method scales to implementations of complex protocols, such as QUIC, and also to see the requirements for such protocols in regards to automated testing. CASE STUDY: PICOQUIC AND QUANT For our case study we implemented our method for picoquic 1 and QUANT 2 . 
Testing Robustness
Robustness can be defined as the ability of an implementation to deal correctly with unexpected events, such as packet loss, reordering, or packets crafted with malicious intent. Here, errors usually manifest in the form of, e.g., out-of-bounds memory accesses, use-after-free violations, assertion errors, etc. When using a SymEx engine, the engine provides the capability to test for such violations out of the box, already providing valuable testing feedback without the need to define additional error conditions.

Generality of the Method
This method is, at its core, protocol-independent and can be applied to protocols other than QUIC. However, its application to QUIC shows the effort required to implement it for non-trivial, real-world protocols, as well as its suitability for such protocols. The question in this case is scalability: while it is usually straightforward to apply any method to simple examples, we are interested in whether the method scales to implementations of complex protocols, such as QUIC, and also in the requirements such protocols pose for automated testing.

CASE STUDY: PICOQUIC AND QUANT
For our case study we implemented our method for picoquic and QUANT. These implementations were chosen because they are written in C, and could therefore be analyzed by KLEE, our SymEx engine of choice, out of the box. It has been shown that SymEx can also be applied to programs written in other languages, like C++ [8], so this is not a limitation of the general approach. We defined multiple test scenarios, and developed simple clients for each library that execute the defined scenarios. An additional challenge is that the KLEE SymEx engine [3] only works on single programs, which led us to implement, for each test scenario, a single program that instantiates all communication partners and advances them in tandem. This means that one endpoint (e.g., client or server) is executed until it can make no more progress (i.e., it blocks waiting for a response), at which point execution switches to the next endpoint. This continues until either the scenario is finished, or one endpoint reports an error. In the following sections we describe our test scenarios and present which library-specific adaptations were necessary, as well as which library-independent abstractions we implemented that can be re-used for other libraries in the future.

Test Scenarios
We decided upon three test scenarios that exercise some of the core features of QUIC: in the first scenario, a client establishes a connection with a server, then closes it again. In the second scenario, we establish a connection just as before, but the client also opens a stream and sends a simple HTTP request (GET /index.html), which the server then closes without responding. Finally, the third scenario builds upon the second one, but the server also responds with a one-byte response. We define as interoperability issues any case in which a run ends without the underlying scenario being fulfilled, e.g., because a connection could not be established, or because one of the endpoints timed out during the process. For each library we implemented a frontend which provides functions that create a client or a server for one of the scenarios, a function that advances a client or server (executing it until it has reached the next stage in the scenario or is blocked on network input), and a function that checks whether a client or server is finished with the scenario (a sketch of such a frontend interface follows below). In our evaluation, we focused on the scenarios being executed with a picoquic client communicating with a QUANT server, but our implementation also supports the other cases (QUANT client and picoquic server, or both endpoints from the same library).
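The following is a minimal sketch of what such a per-library frontend interface could look like; the vtable-style struct and all names are our own illustration of the description above, not the actual API of our implementation.

    #include <stdbool.h>

    /* Opaque per-endpoint state owned by the library-specific frontend. */
    typedef struct endpoint endpoint;

    /* Hypothetical frontend interface, one instance per QUIC library. */
    typedef struct {
        endpoint *(*create_client)(int scenario);
        endpoint *(*create_server)(int scenario);
        /* Run the endpoint until it reaches the next stage of the scenario
         * or blocks waiting for network input. */
        void      (*advance)(endpoint *ep);
        /* Has the endpoint completed its part of the scenario? */
        bool      (*is_finished)(const endpoint *ep);
    } quic_frontend;

The test harness then simply alternates advance() calls between the endpoints until both report is_finished() or an error terminates the path.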
Library-Independent Abstractions
In order to symbolically execute the test scenarios, we had to implement abstractions for various functionalities, such as blocking and non-blocking network operations, as well as cryptographic operations. Figure 2 shows an overview of the test setup, including the layers we replaced with abstractions, such as communication via UDP.

UNIX Sockets. To enable KLEE to correctly route network data, we explicitly modeled the network environment by providing simple custom implementations of functions such as socket, connect, sendto, and so forth. Note that we implemented only those parts of the POSIX socket API that are necessary for executing the two implementations, as the whole API surface covers extensive functionality. These parts were straightforward to implement: we used a simple linked-list structure for sent packets, tracked additional information per socket, and implemented only UDP functionality.

Symbolic Values. We also used our abstraction to model some of the properties of UDP-based communication, such as unreliability. To model packet drops, we used a symbolic variable that decides whether or not to drop each packet. As a result, for each packet we explore one path in which that packet was lost (a sketch of this mechanism is given at the end of this subsection). Additionally, we implemented the possibility to make certain bytes of a sent packet symbolic instead of simply delivering the packet unchanged. This allows testing the receiver of each packet with regard to robustness, within the current state of the communication. While this is enough for QUIC implementations that rely on blocking communication, such as picoquic, others rely on asynchronous event notifications. For QUANT, these are provided by libev.

Libev. Libev is a library that provides an event loop for asynchronous applications. We implemented a mock version of libev that fulfills our requirements: it is easy to integrate into QUANT and our final test scenario binaries, and it is simple to evaluate with KLEE.

OpenSSL. SymEx is not able to reverse constraints that are based on cryptographic operations (encryption, decryption, hashing, etc.), as otherwise the underlying cryptography would be broken. As QUIC relies heavily on cryptographic operations, we needed to make these operations transparent, for which we implemented an OpenSSL abstraction that always performs null-encryption. This means we implemented most of the functions used by picoquic and QUANT in a very bare-bones fashion, often as nothing more than a no-op, and implemented encryption and decryption essentially as a memcpy. With regard to hash functions, we decided to use actual implementations instead of, e.g., hashing all values to the same hash value. This had certain implications for our evaluation: on one hand, it makes our implementation more correct, as different messages will correctly hash to different values. On the other hand, whenever our SymEx engine had to reverse the result of such a hash function, it would not be able to do so, possibly preventing the exploration of certain parts of the libraries.

Library-Independence. All of these mocks are implemented independently of the QUIC library under test, and are reusable for future interoperability tests. Thus, our work lays the foundation for testing a larger set of implementations.
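To illustrate the packet-drop mechanism described above, the following sketch shows how a sendto-style mock could consult a fresh symbolic flag per packet. The queue structure and all names are our own assumptions about such a mock, not code taken from our implementation.

    #include <klee/klee.h>
    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    /* One queued UDP datagram in the mocked network. */
    typedef struct packet {
        struct packet *next;
        size_t         len;
        unsigned char  data[1500];
    } packet;

    /* Linked list of in-flight packets; a matching recvfrom mock
     * would pop from queue_head. */
    static packet *queue_head, *queue_tail;

    /* Mocked send path: KLEE forks one path where the packet is delivered
     * and one path where it is silently dropped. */
    static void mock_send(const void *buf, size_t len) {
        bool drop;
        klee_make_symbolic(&drop, sizeof(drop), "drop");
        if (drop)
            return;  /* this path models the packet being lost */

        packet *p = calloc(1, sizeof(*p));
        if (!p) return;
        p->len = len < sizeof(p->data) ? len : sizeof(p->data);
        memcpy(p->data, buf, p->len);
        if (queue_tail) queue_tail->next = p; else queue_head = p;
        queue_tail = p;
    }

Because drop is a fresh symbolic per call, a scenario with n packets yields up to 2^n delivery patterns, which is exactly the state-explosion trade-off discussed in the evaluation.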
Picoquic
For picoquic, a frontend that can execute our test scenarios was straightforward to implement, because an example client and an example server were available. We replaced blocking reads in the client and server with points at which execution returns to the test harness, so that the next communication partner can make progress. This was made easy by the fact that the picoquic API itself only prepares packets for sending and leaves the actual sending to the application. This means that we could implement the communication handover inside our frontend library and did not need to implement it inside picoquic itself, requiring no changes to the library.

QUANT
The changes needed for QUANT were more extensive, as QUANT internally uses a libev-based event loop, which we needed to intercept in order to return execution to the test harness when the event loop would otherwise block waiting for new data. To do so, in addition to implementing a simple variant of libev as described before, we modified the top-level API functions of QUANT. These expected blocking behavior from the underlying event loop; we changed them to return control to the test harness whenever entering the event loop would block. Additionally, QUANT makes use of global variables, which led to incorrect behavior when, e.g., trying to instantiate a QUANT server and a QUANT client in the same binary. To circumvent this, we performed a simple renaming of all defined symbols on the LLVM IR of QUANT. This prefixes all functions, such as q_connect, with a prefix of our choice, resulting in, e.g., client__q_connect and server__q_connect. Since this also renames all global variables, it allows us to test QUANT clients and servers in the same binary. As our evaluated configurations contain only a single QUANT instance, the case study itself did not strictly require this additional renaming; however, since global variables are a common feature in programs, supporting them in this way is necessary for the general applicability of our approach.
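A symbol-prefixing pass of this kind can be written against the LLVM-C API. The following is a rough sketch under the assumption that a loaded module is prefixed in place; declarations of external functions (libc etc.) must be skipped so they still resolve at link time. This is an illustration, not our actual tooling.

    #include <llvm-c/Core.h>
    #include <stdio.h>
    #include <string.h>

    /* Prefix every defined function and global in `mod`, e.g. with "client__". */
    static void prefix_symbols(LLVMModuleRef mod, const char *prefix) {
        char buf[256];
        for (LLVMValueRef f = LLVMGetFirstFunction(mod); f;
             f = LLVMGetNextFunction(f)) {
            if (LLVMIsDeclaration(f))
                continue;  /* leave external declarations alone */
            size_t len;
            const char *name = LLVMGetValueName2(f, &len);
            snprintf(buf, sizeof(buf), "%s%.*s", prefix, (int)len, name);
            LLVMSetValueName2(f, buf, strlen(buf));
        }
        for (LLVMValueRef g = LLVMGetFirstGlobal(mod); g;
             g = LLVMGetNextGlobal(g)) {
            if (LLVMIsDeclaration(g))
                continue;
            size_t len;
            const char *name = LLVMGetValueName2(g, &len);
            snprintf(buf, sizeof(buf), "%s%.*s", prefix, (int)len, name);
            LLVMSetValueName2(g, buf, strlen(buf));
        }
    }

Linking a client__-prefixed and a server__-prefixed copy of the library into one binary then gives each endpoint its own set of globals.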
EVALUATION
For our evaluation we considered six different combinations of scenarios and symbolic input. All configurations were executed in KLEE with Z3 [5] as the underlying SMT solver, with a time limit of 8 hours and a memory limit of 32 GB, on a system with two E5-2643 v4 processors providing a total of 12 physical cores and 256 GB of main memory. We additionally set timeouts of 10 seconds per instruction and per solver query, to prevent the analysis from getting stuck on overly hard queries. We chose picoquic and QUANT for our case study as both are written in C, which is supported by KLEE. Both libraries also implement the newest version of the QUIC standard at the time of writing (draft 14).

Configurations
We tested six configurations and provide the results of their symbolic execution in Table 1.

Sym-stream. This configuration combines all three described scenarios. We added a symbolic input that chooses which of them to execute, resulting in the execution of all three, as SymEx explores all possible paths. This is equivalent to executing all three scenarios concretely without further symbolic values. This configuration terminated in about a minute after exploring all three reachable paths through the test binary, and reported two bugs. The first error is an interoperability bug that we originally found during the development of our implementation. This bug occurs in the second scenario, when the stream is closed by the server without sending any data. In this case, the QUANT server silently closes the stream on its end without notifying the client. The client then times out and closes the connection prematurely. The second error occurs because certain resources are freed while they might still be in use inside libev. This bug was discovered because our libev abstraction touched the freed value, which KLEE detected. In practice, this kind of bug is hard to check for: as it occurs in a shared library, concrete execution with a tool like ASAN would not detect it, and this is exactly the kind of bug that can cause rare, random crashes. We reported both bugs in the QUANT library, and both were verified and later fixed. This configuration also gives a good baseline for instruction and branch coverage, as the other configurations explore the third scenario, which covers the most API surface, with different symbolic values. The values for instruction and branch coverage include dead code, and thus their absolute values need to be treated with care; however, they can be used to compare against the other configurations.

Sym-version. This configuration is built upon the third scenario (connection establishment, new stream, response), but makes the version proposed by the picoquic client to the server symbolic. We chose this configuration because setting the proposed version is an option of the picoquic library. This configuration terminates after only 25 minutes, with most of the time spent inside the solver. The reason is that this configuration generated, early on, constraints that the SMT solver could not solve within the given timeout, thus terminating all paths prematurely. Nevertheless, this configuration also found an error that prevented the establishment of a connection. This error occurs when the proposed version is set to 0xbabababa. As the current QUIC draft reserves all versions of the form 0x?a?a?a?a for version negotiation, it seems plausible that this version by itself could not lead to a successfully established connection. We categorize this bug as very mild, as it is obviously only a small API problem.

Sym-drop. For this configuration we symbolically dropped every packet. This is the first configuration that needed more than a few hundred MB of memory, and also the first that explored a large number of paths through the program. Only little time was spent solving constraints, which makes sense: no symbolic data was actually touched by either library (either a packet was delivered as-is or it was dropped). Most interestingly, this is also the first configuration that found a bug which only occurred after multiple exchanged packets and would not easily be found during manual testing. The reported test case drops the 4th, 5th and 7th packets exchanged between the two endpoints, which triggers a segfault due to a null-pointer dereference in the QUANT server when the 9th packet is received. We verified that this bug also occurs when running concretely with regular OpenSSL instead of our abstraction. This is a robustness bug, but it might also be an interoperability bug, as other implementations might not trigger it.

Sym-mod-X. In these configurations, the first X bytes of every sent packet are made symbolic, in order to test the robustness of the receiving endpoint. This category includes the two configurations that reached the highest instruction and branch coverages, but it also includes the configuration that achieved the lowest coverages. A trend can be seen here: more symbolic bytes cause more work for the SymEx engine due to state explosion, resulting in more time spent inside the SMT solver and slower progress overall. However, the run that achieved the lowest coverage uncovered an additional bug in QUANT's packet-receiving code. The generated test case triggers the bug by replacing the first 10 bytes of the first packet sent by picoquic with the concrete values [0xff, 0x01, 0x01, 0x01, 0x01, 0x67, 0xff, 0xff, 0xff, 0xff], which leads to a null-pointer dereference in the server. We verified this bug as well while running without our OpenSSL abstraction.
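A sym-mod-X style mutation can be sketched as follows. Since KLEE's klee_make_symbolic expects to cover a complete object, a fresh symbolic buffer is copied over the packet prefix; the function name and the fixed prefix size are our own illustrative choices.

    #include <klee/klee.h>
    #include <stdint.h>
    #include <string.h>

    #define SYM_PREFIX 10  /* the X in sym-mod-X */

    /* Overwrite the first X bytes of an outgoing packet with symbolic data,
     * so the receiver is exercised with all possible prefixes at this
     * point of the communication. */
    static void symbolize_prefix(uint8_t *pkt, size_t len) {
        uint8_t sym[SYM_PREFIX];
        klee_make_symbolic(sym, sizeof(sym), "pkt_prefix");
        size_t n = len < sizeof(sym) ? len : sizeof(sym);
        memcpy(pkt, sym, n);
    }

The concrete 10-byte counterexample reported above is exactly the kind of test case KLEE derives from the path constraints accumulated over such a symbolic prefix.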
FUTURE WORK
While our case study shows the usefulness of automated testing techniques such as SymEx for analyzing QUIC implementations, there is still much that can be done. A first and important step is the definition of the kinds of belief state QUIC implementations should be able to report. In a second step, such a model can then be used to test implementations for state divergence regarding the belief states of the different endpoints. Most of the effort to achieve this lies in defining a common ground for the definition of the belief state. We expect that extracting the belief state from implementations will then require only manageable effort, since implementations must already keep track of the state of each connection. This could be realized in the form of standardized testing and verification interfaces for protocol implementations, which would make new analysis approaches highly accessible. This in turn would lead to higher-quality implementations, increasing stability, robustness, and performance in a field where all of these are important. Our test scenarios only dropped packets or made some of their bytes symbolic, but did not take the specific structure of QUIC packets into account. Here, a layer that reads the packets that are sent and performs symbolic mutations based on the semantics of the protocol, e.g., symbolic ACK numbers, could lead to more thorough and scalable testing. The need for such a method becomes obvious when looking at the sym-mod-10 configuration, which already caused a visible slowdown of the SymEx engine due to state explosion. Furthermore, to analyze more parts of protocol implementations, additional test scenarios that exercise so-far uncovered protocol functionality are required. One way to achieve this would be to create more test scenarios based on the QUIC standard. However, it might also be possible to automatically derive test scenarios, either from a model of the standard or from the implementations themselves. For this, knowing which API call caused which state change could help choose possible next API calls. To extend test scenarios to more than two endpoints, it might be favorable to utilize SymEx techniques that target distributed systems, such as KleeNet [17, 18]. While doing so, it might also become relevant to investigate symbolic time, since behavior in network protocols often depends on timing, most notably due to timeouts.

CONCLUSION
We presented an interoperability-guided method to test QUIC implementations and demonstrated its potential in a case study. Our method consists of testing implementations in pre-defined scenarios that are enriched with additional symbolic input, such as packet drops and symbolic modifications. In our case study we showed that, in order to symbolically execute and test implementations, underlying libraries must be abstracted in a way that is sensible for testing: on one hand, kernel code such as UNIX sockets could otherwise not be executed and analyzed; on the other hand, abstractions are needed to, e.g., make encryption transparent in order to enable any analysis at all. We were able to uncover several bugs of varying severity. While two were simple API issues that could easily be found through manual testing, two of the other three would be hard to find without some kind of automated testing approach, as they occur only in very specific situations: the right packets in a long chain of packets have to be dropped, or a very specific first packet has to be sent. The last bug was only detected due to abstracting libev, but does not necessarily require SymEx to uncover. In summary, most of these bugs are robustness bugs. To detect deeper semantic interoperability bugs, support in implementations that provides information about the current belief state of endpoints is required.
We appeal to the authors of QUIC implementations, as well as to the members of the IETF working group, to develop a common understanding of what information makes up the belief state of a QUIC connection, and to extend implementations with ways to report this information for the sake of deep semantic interoperability testing.
4,508
1811.12099
2903071347
The main reason for the standardization of network protocols, like QUIC, is to ensure interoperability between implementations, which is a challenging task. Manual tests are currently used to check the different existing implementations for interoperability, but given the complex nature of network protocols, it is hard to cover all possible edge cases. State-of-the-art automated software testing techniques, such as Symbolic Execution (SymEx), have proven themselves capable of analyzing complex real-world software and finding hard-to-detect bugs. We present a SymEx-based method for finding interoperability issues in QUIC implementations, and explore its merit in a case study that analyzes the interoperability of picoquic and QUANT. We find that, while SymEx is able to analyze deep interactions between different implementations and uncovers several bugs, implementations need to provide additional information about their current protocol state to enable efficient interoperability testing.
One way to circumvent this chicken-and-egg problem is to exploit the fact that any relevant standard will have multiple implementations, which makes it possible to substitute interoperability testing for compliance testing. While it is possible that neither implementation is technically compliant with the standard, it becomes more and more improbable that the standard is captured incorrectly by many different people in exactly the same manner. Due to the inherent state-explosion problem of interoperability testing (multiple different, or even all possible, programs are considered at once), multiple approaches to specialized @cite_22 and general @cite_17 @cite_21 interoperability testing have been proposed in the past.
{ "abstract": [ "", "The increasing adoption of Software Defined Networking, and OpenFlow in particular, brings great hope for increasing extensibility and lowering costs of deploying new network functionality. A key component in these networks is the OpenFlow agent, a piece of software that a switch runs to enable remote programmatic access to its forwarding tables. While testing high-level network functionality, the correct behavior and interoperability of any OpenFlow agent are taken for granted. However, existing tools for testing agents are not exhaustive nor systematic, and only check that the agent's basic functionality works. In addition, the rapidly changing and sometimes vague OpenFlow specifications can result in multiple implementations that behave differently. This paper presents SOFT, an approach for testing the interoperability of OpenFlow switches. Our key insight is in automatically identifying the testing inputs that cause different OpenFlow agent implementations to behave inconsistently. To this end, we first symbolically execute each agent under test in isolation to derive which set of inputs causes which behavior. We then crosscheck all distinct behaviors across different agent implementations and evaluate whether a common input subset causes inconsistent behaviors. Our evaluation shows that our tool identified several inconsistencies between the publicly available Reference OpenFlow switch and Open vSwitch implementations.", "We propose PIC, a tool that helps developers search for non-interoperabilities in protocol implementations. We formulate this problem using intersection of the sets of messages that one protocol participant can send but another will reject as non-compliant. PIC leverages symbolic execution to characterize these sets and uses two novel techniques to scale to real-world implementations. First, it uses joint symbolic execution, in which receiver-side program analysis is constrained based on sender-side constraints, dramatically reducing the number of execution paths to consider. Second, it incorporates a search strategy that steers symbolic execution toward likely non-interoperabilities. We show that PIC is able to find multiple previously unknown noninteroperabilities in large and mature implementations of the SIP and SPDY (v2 through v3.1) protocols, some of which have since been fixed by the respective developers." ], "cite_N": [ "@cite_21", "@cite_22", "@cite_17" ], "mid": [ "", "2162360270", "2260681216" ] }
Interoperability-Guided Testing of QUIC Implementations using Symbolic Execution
The emergence of new, modern protocols for the Internet promises a solution to long-standing issues that can only be solved by changing core parts of the current protocol stack. Such new protocols and their implementations must meet the highest requirements: they will have to function reliably at similar levels of maturity as what they aim to replace. This includes aspects such as reliability, security, performance and, prominently, interoperability between implementations. Ensuring interoperability is the main reason for standardizing QUIC as a protocol, and the IETF standardization process goes to great lengths, such as requiring multiple independent implementations, to make sure this is achievable. Thus, better methods and tools that assist with the difficult challenge of interoperability testing are highly desirable. Automated testing techniques, such as Symbolic Execution (SymEx), have proven themselves capable of analyzing complex real-world software, usually with a focus on finding low-level safety violations [4], and SymEx has also proven its worth in the networking domain in various other ways [7, 14, 16-18, 22, 24, 25]. This paper explores the potential of SymEx for checking the interoperability of QUIC implementations. It does so by presenting a SymEx-based method to detect interoperability issues, and demonstrates its potential in a case study of two existing QUIC implementations, picoquic and QUANT. We discover that, while our method is able to successfully analyze nontrivial interactions between different implementations, implementations need to disclose more protocol-level information to truly enable deep semantic interoperability testing.

Key Contributions and Outline
The key contributions of this paper are as follows:
• We describe a method that uses Symbolic Execution (SymEx) to test QUIC implementations for interoperability, and discuss how additional information from implementations about their current protocol state could be leveraged for semantically deeper testing.
• We then present our case study in which we symbolically test picoquic and QUANT for interoperability, and discuss the abstraction layers that are necessary to enable SymEx for QUIC implementations.
• The final key contribution is the evaluation of our implementation, testing picoquic and QUANT, in which we report on the performance of our method as well as on defects we discovered.
We begin by giving background on SymEx in Sect. 2, followed by a discussion of related work in Sect. 3. We then present our method in Sect. 4, and describe its implementation and the setup of the case study in Sect. 5. This is followed by an evaluation of our results in Sect. 6, before we shortly discuss future work in Sect. 7 and conclude in Sect. 8.

[Figure 1: Symbolic Execution (SymEx) of a small example program, here reduced to its program listing:
1 bool ok = true;
2 if(x < 5) ok = false;
3 if(x >= 100) ok = false;
4 return ok;
Caption: Constraints encountered in branching statements are recorded in the path constraints of the corresponding explored paths. By checking new branching conditions for satisfiability on each path, exactly all reachable paths through the program are explored.]

SYMBOLIC EXECUTION (SYMEX)
Given a program that takes some input (e.g., command line arguments, files, network packets, etc.), SymEx systematically explores the program by executing all reachable paths.
4,508
1811.12108
2913534916
Complex image processing and computer vision systems often consist of a processing pipeline of functional modules. We intend to replace parts or all of a target pipeline with deep neural networks to achieve benefits such as increased accuracy or reduced computational requirement. To acquire a large amount of labeled data necessary to train the deep neural network, we propose a workflow that leverages the target pipeline to create a significantly larger labeled training set automatically, without prior domain knowledge of the target pipeline. We show experimentally that despite the noise introduced by automated labeling and only using a very small initially labeled data set, the trained deep neural networks can achieve similar or even better performance than the components they replace, while in some cases also reducing computational requirements.
Our work can be considered an approach to approximate computing @cite_16 @cite_8, in which a target function is approximated by a surrogate that is cheaper to compute but introduces inaccuracy. In computer vision and image processing, some level of inaccuracy is often tolerable, due to the limits of human perception and the lack of a clearly delineated "correct" answer @cite_20. Approximation can be introduced at the hardware level, such as by using approximate adder circuits (e.g. @cite_17), or at the software level by restructuring the algorithm.
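Since approximate adder circuits are given as a hardware-level example, the following is a small software sketch of one classic flavor, a lower-part-OR adder; the function name and the parameter k are illustrative, and this stands in for the circuit-level designs cited above rather than reproducing any of them.

    #include <stdint.h>

    /* Sketch of a lower-part-OR approximate adder: the low k bits are
     * OR-ed instead of added, trading accuracy in the low-order bits
     * for a shorter carry chain in hardware. */
    static uint32_t approx_add(uint32_t a, uint32_t b, unsigned k) {
        uint32_t low_mask = (k >= 32) ? 0xffffffffu : ((1u << k) - 1u);
        uint32_t high = (a & ~low_mask) + (b & ~low_mask); /* exact upper part */
        uint32_t low  = (a | b) & low_mask;                /* approximate lower part */
        return high | low;
    }

For image data, the error confined to the k low-order bits is often imperceptible, which is exactly the tolerance the paragraph above appeals to.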
{ "abstract": [ "Approximate computing has recently emerged as a promising approach to energy-efficient design of digital systems. Approximate computing relies on the ability of many systems and applications to tolerate some loss of quality or optimality in the computed result. By relaxing the need for fully precise or completely deterministic operations, approximate computing techniques allow substantially improved energy efficiency. This paper reviews recent progress in the area, including design of approximate arithmetic blocks, pertinent error and quality measures, and algorithm-level techniques for approximate computing.", "Addition is a fundamental function in arithmetic operation; several adder designs have been proposed for implementations in inexact computing. These adders show different operational profiles; some of them are approximate in nature while others rely on probabilistic features of nanoscale circuits. However, there has been a lack of appropriate metrics to evaluate the efficacy of various inexact designs. In this paper, new metrics are proposed for evaluating the reliability as well as the power efficiency of approximate and probabilistic adders. Reliability is analyzed using the so-called sequential probability transition matrices (SPTMs). Error distance (ED) is initially defined as the arithmetic distance between an erroneous output and the correct output for a given input. The mean error distance (MED) and normalized error distance (NED) are then proposed as unified figures that consider the averaging effect of multiple inputs and the normalization of multiple-bit adders. It is shown that the MED is an effective metric for measuring the implementation accuracy of a multiple-bit adder and that the NED is a nearly invariant metric independent of the size of an adder. The MED is, therefore, useful in assessing the effectiveness of an approximate or probabilistic adder implementation, while the NED is useful in characterizing the reliability of a specific design. Since inexact adders are often used for saving power, the product of power and NED is further utilized for evaluating the tradeoffs between power consumption and precision. Although illustrated using adders, the proposed metrics are potentially useful in assessing other arithmetic circuit designs for applications of inexact computing.", "Approximate computing, which refers to a class of techniques that relax the requirement of exact equivalence between the specification and implementation of a computing system, has attracted significant interest in recent years. We propose a systematic methodology, called MACACO, for the M odeling and A nalysis of C ircuits for A pproximate C omputing. The proposed methodology can be utilized to analyze how an approximate circuit behaves with reference to a conventional correct implementation, by computing metrics such as worst-case error, average-case error, error probability, and error distribution. The methodology applies to both timing-induced approximations such as voltage over-scaling or over-clocking, and functional approximations based on logic complexity reduction. The first step in MACACO is the construction of an equivalent untimed circuit that represents the behavior of the approximate circuit at a given voltage and clock period. Next, we construct a virtual error circuit that represents the error in the approximate circuit's output for any given input or input sequence. 
Finally, we apply conventional Boolean analysis techniques (SAT solvers, BDDs) and statistical techniques (Monte-Carlo simulation) in order to compute the various metrics of interest. We have applied the proposed methodology to analyze a range of approximate designs for datapath building blocks. Our results show that MACACO can help a designer to systematically evaluate the impact of approximate circuits, and to choose between different approximate implementations, thereby facilitating the adoption of such circuits for approximate computing.", "Approximate computing trades off computation quality with effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive, but even imperative. In this article, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU, and FPGA), processor components, memory technologies, and so forth, as well as programming frameworks for AC. We classify these techniques based on several key characteristics to emphasize their similarities and differences. The aim of this article is to provide insights to researchers into working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems." ], "cite_N": [ "@cite_16", "@cite_17", "@cite_20", "@cite_8" ], "mid": [ "2020217519", "1998824039", "1991735330", "2265166184" ] }
0
1811.12108
2913534916
Our work can also be seen as an application of the semi-supervised learning paradigm @cite_15 , where the learner is given both labeled and unlabeled training data. We take a bootstrapping or "self-supervised" approach @cite_6 @cite_5 , using elements of the processing pipeline as surrogate models to label the unlabeled examples. The imputed labels will contain errors, so techniques for learning from noisy labels @cite_3 @cite_1 are also relevant. Several works have shown that neural networks can be trained successfully on noisy labels (e.g. @cite_5 @cite_21 @cite_12 @cite_7 ).
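A minimal sketch of this bootstrapping workflow (illustrative only: pipeline_stage, unlabeled_loader, and model are hypothetical placeholders, and a label-noise-robust loss could be swapped in for the plain cross-entropy):

import torch
import torch.nn as nn

def impute_labels(pipeline_stage, unlabeled_loader):
    """Run a legacy pipeline stage as a surrogate labeler over unlabeled data."""
    pairs = []
    with torch.no_grad():
        for x in unlabeled_loader:
            pairs.append((x, pipeline_stage(x)))   # imputed label; may be noisy
    return pairs

def train_on_noisy(model, pairs, epochs=10, lr=1e-3):
    """Ordinary supervised training on the imputed (noisy) labels."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()                # robust losses are drop-in options
    for _ in range(epochs):
        for x, y in pairs:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()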
{ "abstract": [ "Current approaches for fine-grained recognition do the following: First, recruit experts to annotate a dataset of images, optionally also collecting more structured data in the form of part annotations and bounding boxes. Second, train a model utilizing this data. Toward the goal of solving fine-grained recognition, we introduce an alternative approach, leveraging free, noisy data from the web and simple, generic methods of recognition. This approach has benefits in both performance and scalability. We demonstrate its efficacy on four fine-grained datasets, greatly exceeding existing state of the art without the manual collection of even a single label, and furthermore show first results at scaling to more than 10,000 fine-grained categories. Quantitatively, we achieve top-1 accuracies of (92.3 , ) on CUB-200-2011, (85.4 , ) on Birdsnap, (93.4 , ) on FGVC-Aircraft, and (80.8 , ) on Stanford Dogs without using their annotated training sets. We compare our approach to an active learning approach for expanding fine-grained datasets.", "The availability of large labeled datasets has allowed Convolutional Network models to achieve impressive recognition results. However, in many settings manual annotation of the data is impractical; instead our data has noisy labels, i.e. there is some freely available label for each image which may or may not be accurate. In this paper, we explore the performance of discriminatively-trained Convnets when trained on such noisy data. We introduce an extra noise layer into the network which adapts the network outputs to match the noisy label distribution. The parameters of this noise layer can be estimated as part of the training process and involve simple modifications to current training infrastructures for deep networks. We demonstrate the approaches on several datasets, including large scale experiments on the ImageNet classification benchmark.", "We prove that the empirical risk of most well-known loss functions factors into a linear term aggregating all labels with a term that is label free, and can further be expressed by sums of the same loss. This holds true even for non-smooth, non-convex losses and in any RKHS. The first term is a (kernel) mean operator -- the focal quantity of this work -- which we characterize as the sufficient statistic for the labels. The result tightens known generalization bounds and sheds new light on their interpretation. Factorization has a direct application on weakly supervised learning. In particular, we demonstrate that algorithms like SGD and proximal methods can be adapted with minimal effort to handle weak supervision, once the mean operator has been estimated. We apply this idea to learning with asymmetric noisy labels, connecting and extending prior work. Furthermore, we show that most losses enjoy a data-dependent (by the mean operator) form of noise robustness, in contrast with known negative results.", "We study binary classification in the presence of class-conditional random noise, where the learner gets to see labels that are flipped independently with some probability, and where the flip probability depends on the class. Our goal is to devise learning algorithms that are efficient and statistically consistent with respect to commonly used utility measures. In particular, we look at a family of measures motivated by their application in domains where cost-sensitive learning is necessary (for example, when there is class imbalance). 
In contrast to most of the existing literature on consistent classification that are limited to the classical 0-1 loss, our analysis includes more general utility measures such as the AM measure (arithmetic mean of True Positive Rate and True Negative Rate). For this problem of cost-sensitive learning under class-conditional random noise, we develop two approaches that are based on suitably modifying surrogate losses. First, we provide a simple unbiased estimator of any loss, and obtain performance bounds for empirical utility maximization in the presence of i.i.d. data with noisy labels. If the loss function satisfies a simple symmetry condition, we show that using unbiased estimator leads to an efficient algorithm for empirical maximization. Second, by leveraging a reduction of risk minimization under noisy labels to classification with weighted 0-1 loss, we suggest the use of a simple weighted surrogate loss, for which we are able to obtain strong utility bounds. This approach implies that methods already used in practice, such as biased SVM and weighted logistic regression, are provably noise-tolerant. For two practically important measures in our family, we show that the proposed methods are competitive with respect to recently proposed methods for dealing with label noise in several benchmark data sets.", "This paper presents an unsupervised learning algorithm for sense disambiguation that, when trained on unannotated English text, rivals the performance of supervised techniques that require time-consuming hand annotations. The algorithm is based on two powerful constraints---that words tend to have one sense per discourse and one sense per collocation---exploited in an iterative bootstrapping procedure. Tested accuracy exceeds 96%.", "Current state-of-the-art deep learning systems for visual object recognition and detection use purely supervised training with regularization such as dropout to avoid overfitting. The performance depends critically on the amount of labeled examples, and in current practice the labels are assumed to be unambiguous and accurate. However, this assumption often does not hold; e.g. in recognition, class labels may be missing; in detection, objects in the image may not be localized; and in general, the labeling may be subjective. In this work we propose a generic way to handle noisy and incomplete labeling by augmenting the prediction objective with a notion of consistency. We consider a prediction consistent if the same prediction is made given similar percepts, where the notion of similarity is between deep network features computed from the input data. In experiments we demonstrate that our approach yields substantial robustness to label noise on several datasets. On MNIST handwritten digits, we show that our model is robust to label corruption. On the Toronto Face Database, we show that our model handles well the case of subjective labels in emotion recognition, achieving state-of-the-art results, and can also benefit from unlabeled face images with no modification to our method. On the ILSVRC2014 detection challenge data, we show that our approach extends to very deep networks, high resolution images and structured outputs, and results in improved scalable detection.", "Door lock apparatus in which a door latch mechanism is operated by inner and outer door handles coupled to a latch shaft extending through the latch mechanism. 
Handles are coupled to ends of latch shaft by coupling devices enabling door to be locked from the inside to prevent entry from the outside but can still be opened from the inside by normal operation of outside handle. Inside coupling device has limited lost-motion which is used to operate cam device to unlock the door on actuation of inner handles.", "We present an approach to utilize large amounts of web data for learning CNNs. Specifically inspired by curriculum learning, we present a two-step approach for CNN training. First, we use easy images to train an initial visual representation. We then use this initial CNN and adapt it to harder, more realistic images by leveraging the structure of data and categories. We demonstrate that our two-stage CNN outperforms a fine-tuned CNN trained on ImageNet on Pascal VOC 2012. We also demonstrate the strength of webly supervised learning by localizing objects in web images and training a R-CNN style [19] detector. It achieves the best performance on VOC 2007 where no VOC training data is used. Finally, we show our approach is quite robust to noise and performs comparably even when we use image search results from March 2013 (pre-CNN image search era)." ], "cite_N": [ "@cite_7", "@cite_21", "@cite_1", "@cite_3", "@cite_6", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "2287418003", "1866072925", "2963113424", "2803642127", "2101210369", "2121056381", "2136504847", "2124219775" ] }
0
1811.11507
2902069582
We tackle one-shot visual search by example for arbitrary object categories: Given an example image of a novel reference object, find and segment all object instances of the same category within a scene. To address this problem, we propose Siamese Mask R-CNN. It extends Mask R-CNN by a Siamese backbone encoding both reference image and scene, allowing it to target detection and segmentation towards the reference category. We use Siamese Mask R-CNN to perform one-shot instance segmentation on MS-COCO, demonstrating that it can detect and segment objects of novel categories it was not trained on, and without using mask annotations at test time. Our results highlight challenges of the one-shot setting: while transferring knowledge about instance segmentation to novel object categories not used during training works very well, targeting the detection and segmentation networks towards the reference category appears to be more difficult. Our work provides a first strong baseline for one-shot instance segmentation and will hopefully inspire further research in this relatively unexplored field.
Object detection is a classical computer vision problem @cite_57 @cite_19 @cite_33 @cite_61 . Modern work can be split broadly into two general approaches: Single-stage detectors @cite_92 @cite_70 @cite_20 @cite_43 @cite_42 are usually very fast, while multi-stage detectors @cite_34 @cite_84 @cite_11 @cite_93 perform a coarse proposal step followed by a fine-grained classification, and are usually more accurate. Most state-of-the-art systems are based on Faster R-CNN @cite_3 , a two-step object detector that generates proposals, for each of which it crops features out of the last feature map of a backbone. Feature Pyramid Networks @cite_12 are a popular extension that uses feature maps at multiple spatial resolutions to increase scale invariance.
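To illustrate the proposal-then-crop step of such two-stage detectors, here is a minimal sketch (shapes and the stride-16 assumption are illustrative) using RoIAlign from torchvision:

import torch
from torchvision.ops import roi_align

# Backbone feature map for one image: (batch, channels, H, W) at stride 16.
features = torch.randn(1, 256, 64, 64)

# One proposal in image coordinates: (batch_index, x1, y1, x2, y2).
proposals = torch.tensor([[0.0, 100.0, 120.0, 300.0, 360.0]])

# Crop a fixed-size 7x7 feature patch per proposal; classification and
# box-regression heads would then operate on these patches.
patches = roi_align(features, proposals, output_size=(7, 7), spatial_scale=1 / 16)
print(patches.shape)  # torch.Size([1, 256, 7, 7])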
{ "abstract": [ "", "", "", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "", "", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. 
The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.", "", "", "", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.", "", "" ], "cite_N": [ "@cite_61", "@cite_33", "@cite_70", "@cite_92", "@cite_42", "@cite_84", "@cite_3", "@cite_57", "@cite_19", "@cite_43", "@cite_93", "@cite_34", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "", "", "", "2193145675", "", "", "2613718673", "2031489346", "", "", "", "2102605133", "2949533892", "", "" ] }
One-Shot Instance Segmentation
Humans do not only excel at acquiring novel concepts from a small number of training examples (few-shot learning), but can also readily point to such objects (object detection) and draw their outlines (instance segmentation). In recent years, machine vision has made substantial advances in one-shot learning [38,79,24] with a strong focus on image classification in a discriminative setting. Similarly, a lot of progress has been made on object detection and instance segmentation [29,59], but both tasks are still very data-hungry and the proposed approaches perform well only for a small number of object classes, for which enough annotated examples are available. In this paper, we work towards taking the one-shot setting to real-world instance segmentation: We learn to detect and segment arbitrary object categories (not necessarily included in the training set) based on a single visual example (Fig. 1). That is, given an arbitrary query image and a single reference instance, the goal is to generate a bounding box and an instance mask for every instance in the image that is of the same object category as the reference. This type of visual search task creates new challenges for computer vision algorithms, as methods from metric and few-shot learning have to be incorporated into the notoriously hard tasks of object identification and segmentation. Our approach is based on taking ideas from metric learning (Siamese networks) and combining them with Mask R-CNN, a state-of-the-art object detection and segmentation system (Fig. 2). Our main contributions are as follows: • We present Siamese Mask R-CNN for performing one-shot instance segmentation. It extends Mask R-CNN [29] with a Siamese backbone and a matching procedure to perform visual search. • We introduce a novel one-shot visual search task, requiring object detection and instance segmentation based on a single visual example. • We establish an evaluation protocol for this task and evaluate our model on MS-COCO [44]. We show that segmenting novel object categories works well even without mask annotations at test time, while targeting the detection towards the reference category is the main challenge. • We will make code and pre-trained models available. Related work Our approach lies at the intersection of few-shot/metric learning, object detection/visual search, and instance segmentation. Each of these aspects has been studied extensively, as we review in the following. The novelty of our approach is the combination of all these aspects into a new problem. Object detection. Object detection is a classical computer vision problem [22,31,82,4]. Modern work can be split broadly into two general approaches: Single-stage detectors [47,66,67,68,43] are usually very fast, while multi-stage detectors [26,25,71,29] perform a coarse proposal step followed by a fine-grained classification, and are usually more accurate. Most state-of-the-art systems are based on Faster R-CNN [71], a two-step object detector that generates proposals, for each of which it crops features out of the last feature map of a backbone. Feature Pyramid Networks [42] are a popular extension that uses feature maps at multiple spatial resolutions to increase scale invariance. Instance segmentation. In contrast to semantic segmentation [49,55,73,60,90,9,15,48], where every pixel is classified into a category, instance segmentation additionally requires discriminating between individual object instances [27,18,28,62,19,39,63,72,5,14,23,29,45,70,37]. 
Most current state-of-the-art systems are based on Mask R-CNN [29,46,1], an extension of Faster R-CNN [71] performing joint object detection and instance segmentation. Weakly supervised object detection and segmentation. Labeled data is hard to obtain for instance-level tasks like object detection, and even more so for pixel-level tasks like segmentation [44,12,3]. Therefore, various weakly and semi-supervised approaches have been explored [32,88,57,35,92]. Weak supervision is a promising direction for annotation-heavy tasks, hence it has been explored for semantic segmentation [58,57,61,17,88,7,41], object detection [56,91,67] and instance segmentation [35,33,92]. Visual search. Visual search has a long history in perceptual psychology (reviewed, e.g., by [75]), although typically with simple visual patterns, while search for arbitrary objects in real scenes has been addressed only recently [89,87], and often using a natural language cue [87]. Few-shot learning. Few-shot learning has seen great progress over the last few years. A classic approach is based on metric learning using Siamese neural networks [8,16,36], which (due to its simplicity) is also the approach we use. The metric learning approach has seen a number of improvements in recent years [36,84,79,85,86]. Other approaches are based on generative models [38,76], ideas from information retrieval [81] or employ meta learning [24,40,52,51,53,54,74,80,69]. Few-shot segmentation. Closely related to our work is one-shot semantic segmentation of images using either an object instance as reference [78,65,20,50] or a texture [83]. However, the key difference is that these systems perform pixel-level classifications and cannot distinguish individual instances. The only work on one-shot instance segmentation we are aware of tracks an object instance across a video sequence based on a small number of annotated frames [10,11], which differs from our setup in that a single object is to be tracked, for which ground-truth annotations are available. Few-shot object detection. There is related, but not directly comparable work on few-shot object detection. Some work focuses on settings with few (more than one) annotated training images per category [13,21], while others tackle the zero-shot setting based on only a textual description of the reference [6,64]. Most closely related to our work is concurrent work based on Siamese networks for one-shot detection on an Omniglot-based dataset and for audio data [34] as well as work on fine-grained bird classification and localization in ImageNet images [77], which tend to have only one or few instances per image. In contrast, we work on potentially cluttered real-world images. One-shot object detection and instance segmentation on MS-COCO We define a one-shot object detection and instance segmentation task on MS-COCO: Given a reference image showing a close-up of an example object, find all instances of objects belonging to the same category in a separate query image, which shows an entire visual scene potentially containing many objects. To work in a one-shot setting, we split the 80 object categories in MS-COCO into background and one-shot evaluation splits, containing 60 and 20 categories, respectively. We generate four such background/evaluation splits by starting with the first, second, third or fourth category, respectively, and including every fourth category into the one-shot evaluation split. We call those splits S1-S4; they are given in Table 3 in the Appendix. 
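A tiny sketch of the split construction just described (category indices are illustrative; the authors' actual assignment is listed in their Table 3):

def make_splits(num_classes: int = 80):
    """Every fourth of the 80 MS-COCO category indices goes into one-shot split S_i."""
    splits = {}
    for i in range(1, 5):
        one_shot = [c for c in range(num_classes) if c % 4 == i - 1]      # 20 classes
        background = [c for c in range(num_classes) if c % 4 != i - 1]    # 60 classes
        splits[f"S{i}"] = (background, one_shot)
    return splits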
Note that this one-shot visual search setup differs from earlier, purely discriminative one-shot learning setups: At training time, the query images can contain objects from the one-shot evaluation categories, but they are neither selected as the reference nor are they annotated in any way. We therefore still refer to this setting as one-shot, because no label information is available for these categories during training. Conversely, at test time, the query images contain both known and novel object categories. Taken together, we consider this setup to be a realistic scenario in the real world of an autonomous agent, which would typically encounter new objects alongside the known objects and may encounter unlabeled objects multiple times before they become relevant and label information is provided (think of a household robot seeing a certain type of toy in various parts of the apartment multiple times before you instruct it to go pick it up for you). This setup also produces a number of challenges for evaluation, which we discuss in Section 5.2. Siamese Mask R-CNN The key idea behind Siamese Mask R-CNN is to detect and segment object instances based on a single visual example of some object category. Thus, it must deal with arbitrary, potentially previously unseen object categories, rather than with a fixed set of categories. We base Siamese Mask R-CNN on Mask R-CNN [29] with feature pyramid networks [42]. To adapt it to the visual search task, we turn the backbone into a Siamese network (hence the prefix Siamese), which extracts features from both the reference image and the scene and computes a pixel-wise similarity between the two. The image features and the similarity score form the input to three heads: (1) the Region Proposal Network (RPN), (2) the bounding box classification and regression head and (3) the segmentation head. (Figure 3. Sketch of the matching procedure: the reference encoding is reduced to a vector by average pooling (1) and the point-by-point absolute difference to the scene encoding is computed (2); the concatenated (3) scene encoding and reference features are reduced by a 1 × 1 convolution (4) before feeding them to the network heads.) In the following, we briefly review the key components of Mask R-CNN and then introduce our extensions. Mask R-CNN Mask R-CNN is a two-stage object detector that consists of a backbone feature extractor and multiple heads operating on these features (see Fig. 2A). We choose a ResNet50 [30] with Feature Pyramid Networks (FPN) [42] as our backbone. The heads consist of two stages. First, the region proposal network (RPN) is applied convolutionally across the image to predict possible object locations in the scene. The highest scoring region proposals are then cropped from the backbone feature maps and used as inputs for the bounding box classification (CLS) and regression (BBOX) head as well as the instance masking head (MASK). Siamese feature pyramid networks In the conventional object detection/instance segmentation setting, the set of possible categories is known in advance, so the task of the backbone is to extract useful features for the subsequent detection and segmentation stages. In contrast, in the one-shot setting the information on which objects to detect and segment is provided in the form of a reference image, which can contain an object category the system has not been trained on.
To adapt to this situation, our backbone not only extracts useful features from the scene image, but also computes a similarity metric to the reference at each possible location. To do so, we follow the basic idea of Siamese networks [36] and apply the same backbone (ResNet50 with FPN) with shared weights to extract features from both the reference and the scene. These features are then matched pixel-wise as described below. Feature matching The feature pyramid network produces image features at multiple scales, hence we perform the following matching procedure at each scale of the pyramid (Fig. 3): 1. Pool the features of the reference image over space using average pooling to obtain a vector embedding of the category to be detected and segmented. 2. At every spatial position of the scene representation, compute the absolute difference between the features of the reference and that of the scene. 3. Concatenate the scene representation and the pixel-wise distance between the two. 4. Reduce the number of features by 1 × 1 convolution. The resulting features are then used as a drop-in replacement for the original feature pyramid as they have the same dimensionality. The key difference is that they do not only encode the content of the scene image, but also its similarity to the reference image, which forms the basis for the subsequent heads to generate object proposals, classify matches vs. non-matches and generate instance masks.
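A minimal sketch of this per-level matching step (assuming 256 FPN channels; the actual implementation may differ in details):

import torch
import torch.nn as nn

class SiameseMatch(nn.Module):
    """Match one FPN level of the scene against the pooled reference encoding."""
    def __init__(self, channels: int = 256):
        super().__init__()
        # 1x1 convolution reducing [scene ; |scene - ref|] back to `channels`
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, scene: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
        emb = ref.mean(dim=(2, 3), keepdim=True)   # (1) average-pool the reference
        diff = (scene - emb).abs()                 # (2) pixel-wise absolute difference
        fused = torch.cat([scene, diff], dim=1)    # (3) concatenate
        return self.reduce(fused)                  # (4) reduce by 1x1 convolution

# One call per pyramid level; the output is a drop-in replacement feature map.
match = SiameseMatch()
out = match(torch.randn(2, 256, 64, 64), torch.randn(2, 256, 12, 12))
print(out.shape)  # torch.Size([2, 256, 64, 64])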
Head architecture We use the same region proposal network (RPN) as Mask R-CNN, changing only its inputs as described above and the way examples are generated during training (described below). We also use the same classification and bounding box regression head as Mask R-CNN, but change the classification from an 80-way class discrimination to a binary match/non-match discrimination. Similarly, for the mask branch we generate only a single instance mask instead of one per potential class. Implementation details Our system is based on the Matterport implementation of Mask R-CNN [2]. We provide all details in Appendix 1. Experiments We train Siamese Mask R-CNN jointly on object detection and instance segmentation in the visual search setting. We evaluate the trained models both on previously seen and unseen (one-shot) categories using splits of MS-COCO. Training Pre-training backbone. We pre-train the ResNet backbone on image classification on a reduced subset of ImageNet, which contains images from the 687 ImageNet categories without correspondence in MS-COCO (hence we refer to it as ImageNet-687). Pre-training on this reduced set ensures that we do not use any label information about the one-shot classes at any training stage. Training Siamese Mask R-CNN. We train the models using stochastic gradient descent with momentum for 160,000 steps with a batch size of 12 on four NVIDIA P100 GPUs in parallel. We use an initial learning rate of 0.02 and a momentum of 0.9. During the first 1,000 steps, we train only the heads. After that, we train the entire network, including the backbone and all heads, end-to-end. After 120,000 steps, we divide the learning rate by 10. Construction of mini-batches. During training, a mini-batch contains 12 sets of reference and query images. We first draw the query images at random from the training set and pre-process them in the following way: (1) we resize an image so that the longer side is 1024 px, while keeping the aspect ratio, (2) we zero-pad the smaller side of the image to be square 1024 × 1024, (3) we subtract the mean ImageNet RGB value from each pixel. Next, for each image, we generate a reference image as follows: (1) draw a random category among all categories of the background set present in the image, (2) crop a random instance of the selected category out of any image in the training set (using the bounding box annotation), and (3) resize the reference image so that its longer side is 192 px and zero-pad the shorter side to get a square image of 192 × 192. To enable a quick look-up of reference instances, we created an index that contains a list of categories present in each image.
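A minimal sketch of the query pre-processing just described (the mean-pixel values and the top-left padding placement are assumptions):

import numpy as np
from PIL import Image

IMAGENET_MEAN = np.array([123.7, 116.8, 103.9], dtype=np.float32)  # assumed values

def preprocess(img: Image.Image, target: int = 1024) -> np.ndarray:
    scale = target / max(img.size)                     # img.size is (width, height)
    w, h = (round(s * scale) for s in img.size)
    img = img.resize((w, h), Image.BILINEAR)           # (1) longer side -> target px
    canvas = np.zeros((target, target, 3), np.float32) # (2) zero-pad to a square
    canvas[:h, :w] = np.asarray(img, dtype=np.float32)
    return canvas - IMAGENET_MEAN                      # (3) subtract mean RGB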
Labels. We use only the annotations of object instances in the query image that belong to the corresponding reference category. All other objects are treated as background. Loss function. Siamese Mask R-CNN is trained on the same basic multi-task objective as Mask R-CNN: classification and bounding box loss for the RPN; classification, bounding box and mask loss for each RoI. There are a couple of differences as well. First, the classification losses consist of a binary cross-entropy of the match/non-match classification rather than an 80-way multinomial cross-entropy used for classification on MS-COCO. Second, we found that weighting the individual losses differently improved performance in the one-shot setting. Specifically, we apply the following weights to each component of the loss function: RPN classification loss: 2, RPN bounding box loss: 0.1, RoI classification loss: 2, RoI bounding box loss: 0.5 and mask loss: 1. Mask R-CNN. For comparison, we also trained the original Mask R-CNN on MS-COCO on all 80 classes for 320,000 steps using the same hyperparameters as for Siamese Mask R-CNN but without the adjustments to the loss function weights described above. Evaluation General procedure. We evaluate the performance of our model using the MS-COCO val 2017 set as a test set (it was not used for training). We do one evaluation run per class split S, using the following procedure: (Figure 4. Object scores can be thought of as posterior probabilities, i.e. the product of image evidence and category prior; thus, the optimal criterion depends on the prior, but in a one-shot setting, there is no information about the prior.) Baseline: random boxes As a very naïve baseline, we evaluate the performance of a model predicting random bounding boxes and segmentation masks. To do so, we take ground-truth bounding boxes and segmentation masks for the category of the reference image, and randomly shift the boxes around the image (assigning a random confidence value for each box between 0.8 and 1). We keep the ground-truth segmentation masks intact in the shifted boxes. Such a procedure allows us to get random predictions while keeping certain statistics of the ground-truth annotations (e.g. number of boxes per image, their sizes, etc.). Results Example-based detection and segmentation We start by showing our results on the task of object detection and instance segmentation targeted to a single class, which is given by an example. This is essentially a metric learning problem: we learn a similarity metric between image regions and the reference image. This allows the detection and segmentation heads to produce bounding boxes and instance masks for matching objects. As discussed above, this problem is harder than training an object detector for a fixed set of classes, and we therefore simplified the training and evaluation process (see Section 5.2 above). To put our one-shot results reported below in context, we first trained both Siamese Mask R-CNN as well as regular Mask R-CNN on the entire MS-COCO data set (Table 1). Our Mask R-CNN implementation performed reasonably, achieving 42.5% mAP50 on detection and 40.1% on instance segmentation. These numbers are not state-of-the-art (due to limited availability of extendable code and pre-trained models), but that doesn't change the conclusions, since we are interested in relative performance differences to Mask R-CNN and not in absolute values. Siamese Mask R-CNN achieved 35.7% mAP on detection and 33.4% on instance segmentation using the same backbone, training schedule, etc., but based on examples rather than trained on a fixed set of categories. Thus, we conclude that the proposed Siamese Mask R-CNN architecture can learn object detection and instance segmentation based on examples, but there is room for improvement, suggesting that the example-based setting is more challenging. One-shot instance segmentation Next, we report the results of evaluating Siamese Mask R-CNN in the one-shot setting. That is, we train on the background splits without using instances of one-shot evaluation splits (Section 3) as reference images. These results are shown in Table 2. The average detection mAP50 scores for the one-shot splits are around 17%, while the segmentation ones are around 15%, with some variability between splits. These values are significantly lower than those for the background splits, indicating the difficulty of the one-shot setting. The mAP50 scores for the background splits are slightly higher than those in Table 1, because the former contain only 60 categories while the latter were trained on all 80. Taken together, these results suggest that we observe a substantial degree of overfitting on the background classes used during training. This result is in contrast to earlier work on Omniglot [50] that observed good generalization beyond the background set, presumably because Omniglot contains a larger number of categories and the image statistics are simpler. Figure 5 shows examples of successful Siamese Mask R-CNN predictions for one-shot categories (i.e. categories not used during training). These examples allow us to get a feeling for the difficulty of the task: the reference inputs are quite different from the instances in the query image, sometimes they show only part of the reference object and they are never annotated with ground truth segmentation masks. To generate bounding boxes and segmentation masks, the model can use only its general knowledge about objects and their boundaries and the metric learned on the other categories to compute the visual similarity between the reference and the query instances. For instance, the bus on the right or the horse in the bottom left in Figure 5 are incomplete and the network has never been provided with ground truth bounding boxes or instance masks for either horses or buses. Nevertheless, it still finds the correct object in the query image and segments the entire object. Qualitative analysis We also show examples of failure cases in Figure 6.
The picture that emerges from both successful and failure cases is that the network produces overall very good bounding boxes and segmentation masks, but often fails at targeting them towards the correct category. We elaborate more in the next section on the challenges of the one-shot setting. False positives in the one-shot setting There is a marked drop in model performance between the background and the one-shot evaluation splits, suggesting some degree of overfitting to the background categories used during training. (Figure 7. Confusion matrix for the Siamese Mask R-CNN model using split S2 for one-shot evaluation. The element (i, j) shows the AP50 of using detections for category i and evaluating them as instances of category j. The histogram below the matrix shows the most commonly confused, or falsely predicted, categories.) If overfitting to background classes was indeed the main issue, we would expect false positives to be biased towards these categories and, in particular, towards those categories that are most frequent in the training set. This seems to be qualitatively the case (Fig. 5). In addition, we quantified this observation by computing a confusion matrix of MS-COCO categories (Fig. 7). The element (i, j) of this matrix corresponds to the AP50 value of detections obtained for reference images of category i, which are evaluated as if the reference images belonged to category j. If there were no false positives, the off-diagonal elements of the matrix would be zero. The sums of values in the columns show instances of categories that are most often falsely detected (the histogram of such sums is shown below the matrix). Among such commonly falsely predicted categories are people, cars, airplanes, clocks, and other categories that are common in the dataset.
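The confusion matrix described above could be computed along these lines (a hedged sketch; dets_for_ref is a hypothetical mapping from reference category to the COCO-format detections it produced):

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

def ap50(coco_gt: COCO, detections: list, eval_cat: int) -> float:
    """AP50 of `detections` when scored against ground truth of `eval_cat`."""
    coco_dt = coco_gt.loadRes(detections)        # standard COCO result dicts
    ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
    ev.params.catIds = [eval_cat]
    ev.evaluate(); ev.accumulate(); ev.summarize()
    return ev.stats[1]                           # stats[1] = AP at IoU 0.50

def confusion_entry(coco_gt, dets_for_ref, ref_cat: int, eval_cat: int) -> float:
    """Element (i, j): detections for reference category i, scored as category j."""
    relabeled = [dict(d, category_id=eval_cat) for d in dets_for_ref[ref_cat]]
    return ap50(coco_gt, relabeled, eval_cat)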
Effect of image clutter Previous work on synthetic data [50] found that cluttered scenes are especially challenging in the one-shot setting. This effect is also present in the current context. Both detection and segmentation scores are substantially higher when conditioning on images with a small number of total instances (Figure 8), underscoring the importance of extending the model to robustly process cluttered scenes. Discussion We introduced the task of one-shot instance segmentation and proposed a model based on combining the Mask R-CNN architecture with a metric learning approach to perform this task. There are two main problems in this task: (1) learning a good metric for one-shot detection of novel objects and (2) transferring the knowledge about bounding boxes and instance masks from known to novel object categories. Our results suggest that in the context of MS-COCO, the first part is more difficult than the second part. Overall, bounding boxes and instance masks are of high quality. The relatively weak performance of our current model appears to be caused by its difficulties in classifying if the detected object is of the same category as the reference. Our observation of a substantial amount of overfitting towards the categories used during training supports this hypothesis. Our system is not based on the latest and highest-performing object detector, but was rather driven by availability of code for existing approaches; we expect that incorporating better object detection architectures and larger backbones into our one-shot visual search framework will lead to performance improvements analogous to those reported on the fixed-category problem. However, closing the gap between the fixed-category and the one-shot visual search problems would likely require not just better components for our model, but rather conceptual changes to the model itself and to the training data. Such changes might include larger datasets with more object categories than MS-COCO or more sophisticated approaches to one-shot learning from a relatively small number of background categories. There are a couple of drawbacks to our current approach, and resolving them is likely to lead to improvements in performance. For instance, during training we currently treat all instances of the one-shot categories as background, which probably encourages the model to suppress their detection even if they match the reference well. In addition, the reference instances are sometimes hard to recognize even for humans, because they are cropped to their bounding box and lack image context, which can be an important cue for recognition. Finally, the system currently relies exclusively on comparing each object proposal to the reference image and performing a match/non-match discrimination. However, one may instead want to do an N+1-way classification, assigning each instance to one of the N already known categories or a novel, (N+1)st one, and only in the latter case rely on a similarity metric and a binary match/non-match classification. In summary, one-shot instance segmentation is a hard problem on a diverse real-world dataset like MS-COCO. It requires combining ideas from few-shot/metric learning, object detection and segmentation, and we believe it is a perfect test bed for developing truly general vision systems. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.
4,076
1811.11507
2902069582
We tackle one-shot visual search by example for arbitrary object categories: Given an example image of a novel reference object, find and segment all object instances of the same category within a scene. To address this problem, we propose Siamese Mask R-CNN. It extends Mask R-CNN by a Siamese backbone encoding both reference image and scene, allowing it to target detection and segmentation towards the reference category. We use Siamese Mask R-CNN to perform one-shot instance segmentation on MS-COCO, demonstrating that it can detect and segment objects of novel categories it was not trained on, and without using mask annotations at test time. Our results highlight challenges of the one-shot setting: while transferring knowledge about instance segmentation to novel object categories not used during training works very well, targeting the detection and segmentation networks towards the reference category appears to be more difficult. Our work provides a first strong baseline for one-shot instance segmentation and will hopefully inspire further research in this relatively unexplored field.
Visual search has a long history in perceptual psychology (reviewed, e.g., by @cite_37 ), although typically with simple visual patterns, while search for arbitrary objects in real scenes has been addressed only recently @cite_45 @cite_29 , and often using a natural language cue @cite_29 .
{ "abstract": [ "How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient ( 5 ms item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat ( 15 ms item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: 40 ms item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target.", "In this work we present In-Place Activated Batch Normalization (InPlace-ABN) - a novel approach to drastically reduce the training memory footprint of modern deep neural networks in a computationally efficient way. Our solution substitutes the conventionally used succession of BatchNorm + Activation layers with a single plugin layer, hence avoiding invasive framework surgery while providing straightforward applicability for existing deep learning frameworks. We obtain memory savings of up to 50 by dropping intermediate results and by recovering required information during the backward pass through the inversion of stored forward results, with only minor increase (0.8-2 ) in computation time. Also, we demonstrate how frequently used checkpointing approaches can be made computationally as efficient as InPlace-ABN. In our experiments on image classification, we demonstrate on-par results on ImageNet-1k with state-of-the-art approaches. On the memory-demanding task of semantic segmentation, we report results for COCO-Stuff, Cityscapes and Mapillary Vistas, obtaining new state-of-the-art results on the latter without additional training data but in a single-scale and -model scenario. Code can be found at this https URL .", "To determine whether categorical search is guided we had subjects search for teddy bear targets either with a target preview (specific condition) or without (categorical condition). Distractors were random realistic objects. Although subjects searched longer and made more eye movements in the categorical condition, targets were fixated far sooner than was expected by chance. By varying target repetition we also determined that this categorical guidance was not due to guidance from specific previously viewed targets. We conclude that search is guided to categorically-defined targets, and that this guidance uses a categorical model composed of features common to the target class." ], "cite_N": [ "@cite_29", "@cite_37", "@cite_45" ], "mid": [ "2058499946", "2772805599", "2122951813" ] }
One-Shot Instance Segmentation
Humans do not only excel at acquiring novel concepts from a small number of training examples (few-shot learning), but can also readily point to such objects (object detection) and draw their outlines (instance segmentation). In recent years, machine vision has made substantial advances in one-shot learning [38,79,24] with a strong focus on image classification in a discriminative setting. Similarly, a lot of progress has been made on object detection and instance segmentation [29,59], but both tasks are still very data-hungry and the proposed approaches perform well only for a small number of object classes, for which enough annotated examples are available. In this paper, we work towards taking the one-shot setting to real-world instance segmentation: We learn to detect and segment arbitrary object categories (not necessarily included in the training set) based on a single visual example (Fig. 1). That is, given an arbitrary query image and a single reference instance, the goal is to generate a bounding box and an instance mask for every instance in the image that is of the same object category as the reference. This type of visual search task creates new challenges for computer vision algorithms, as methods from metric and few-shot learning have to be incorporated into the notoriously hard tasks of object identification and segmentation. Our approach is based on taking ideas from metric learning (Siamese networks) and combining them with Mask R-CNN, a state-of-the-art object detection and segmentation system (Fig. 2). Our main contributions are as follows: • We present Siamese Mask R-CNN for performing one-shot instance segmentation. It extends Mask R-CNN [29] with a Siamese backbone and a matching procedure to perform visual search. • We introduce a novel one-shot visual search task, requiring object detection and instance segmentation based on a single visual example. • We establish an evaluation protocol for this task and evaluate our model on MS-COCO [44]. We show that segmenting novel object categories works well even without mask annotations at test time, while targeting the detection towards the reference category is the main challenge. • We will make code and pre-trained models available. Related work Our approach lies at the intersection of few-shot/metric learning, object detection/visual search, and instance segmentation. Each of these aspects has been studied extensively, as we review in the following. The novelty of our approach is the combination of all these aspects into a new problem. Object detection. Object detection is a classical computer vision problem [22,31,82,4]. Modern work can be split broadly into two general approaches: Single-stage detectors [47,66,67,68,43] are usually very fast, while multi-stage detectors [26,25,71,29] perform a coarse proposal step followed by a fine-grained classification, and are usually more accurate. Most state-of-the-art systems are based on Faster R-CNN [71], a two-step object detector that generates proposals, for each of which it crops features out of the last feature map of a backbone. Feature Pyramid Networks [42] are a popular extension that uses feature maps at multiple spatial resolutions to increase scale invariance. Instance segmentation. In contrast to semantic segmentation [49,55,73,60,90,9,15,48], where every pixel is classified into a category, instance segmentation additionally requires discriminating between individual object instances [27,18,28,62,19,39,63,72,5,14,23,29,45,70,37]. 
Most current state-of-the-art systems are based on Mask R-CNN [29,46,1], an extension of Faster R-CNN [71] performing joint object detection and instance segmentation. Weakly supervised object detection and segmentation. Labeled data is hard to obtain for instance-level tasks like object detection, and even more so for pixel-level tasks like segmentation [44,12,3]. Therefore, various weakly and semi-supervised approaches have been explored [32,88,57,35,92]. Weak supervision is a promising direction for annotation-heavy tasks, hence it has been explored for semantic segmentation [58,57,61,17,88,7,41], object detection [56,91,67] and instance segmentation [35,33,92]. Visual search. Visual search has a long history in perceptual psychology (reviewed, e.g., by [75]), although typically with simple visual patterns, while search for arbitrary objects in real scenes has been addressed only recently [89,87], and often using a natural language cue [87]. Few-shot learning. Few-shot learning has seen great progress over the last few years. A classic approach is based on metric learning using Siamese neural networks [8,16,36], which (due to its simplicity) is also the approach we use. The metric learning approach has seen a number of improvements in recent years [36,84,79,85,86]. Other approaches are based on generative models [38,76], ideas from information retrieval [81] or employ meta learning [24,40,52,51,53,54,74,80,69]. Few-shot segmentation. Closely related to our work is one-shot semantic segmentation of images using either an object instance as reference [78,65,20,50] or a texture [83]. However, the key difference is that these systems perform pixel-level classifications and cannot distinguish individual instances. The only work on one-shot instance segmentation we are aware of tracks an object instance across a video sequence based on a small number of annotated frames [10,11], which differs from our setup in that a single object is to be tracked, for which ground-truth annotations are available. Few-shot object detection. There is related, but not directly comparable work on few-shot object detection. Some work focuses on settings with few (more than one) annotated training images per category [13,21], while others tackle the zero-shot setting based on only a textual description of the reference [6,64]. Most closely related to our work is concurrent work based on Siamese networks for one-shot detection on an Omniglot-based dataset and for audio data [34] as well as work on fine-grained bird classification and localization in ImageNet images [77], which tend to have only one or few instances per image. In contrast, we work on potentially cluttered real-world images. One-shot object detection and instance segmentation on MS-COCO We define a one-shot object detection and instance segmentation task on MS-COCO: Given a reference image showing a close-up of an example object, find all instances of objects belonging to the same category in a separate query image, which shows an entire visual scene potentially containing many objects. To work in a one-shot setting, we split the 80 object categories in MS-COCO into background and one-shot evaluation splits, containing 60 and 20 categories, respectively. We generate four such background/evaluation splits by starting with the first, second, third or fourth category, respectively, and including every fourth category into the one-shot evaluation split. We call those splits S1-S4; they are given in Table 3 in the Appendix. 
Note that this one-shot visual search setup differs from earlier, purely discriminative one-shot learning setups: At training time, the query images can contain objects from the one-shot evaluation categories, but they are neither selected as the reference nor are they annotated in any way. We therefore still refer to this setting as one-shot, because no label information is available for these categories during training. Conversely, at test time, the query images contain both known and novel object categories. Taken together, we consider this setup to be a realistic scenario in the real world of an autonomous agent, which would typically encounter new objects alongside the known objects and may encounter unlabeled objects multiple times before they become relevant and label information is provided (think of a household robot seeing a certain type of toy in various parts of the apartment multiple times before you instruct it to go pick it up for you). This setup also produces a number of challenges for evaluation, which we discuss in Section 5.2. Siamese Mask R-CNN The key idea behind Siamese Mask R-CNN is to detect and segment object instances based on a single visual example of some object category. Thus, it must deal with arbitrary, potentially previously unseen object categories, rather than with a fixed set of categories. We base Siamese Mask R-CNN on Mask R-CNN [29] with feature pyramid networks [42]. To adapt it to the visual search task, we turn the backbone into a Siamese network (hence the prefix Siamese), which extracts features from both the reference image and the scene and computes a pixel-wise similarity between the two. The image features and the similarity score form the input to three heads: (1) the Region Proposal Network (RPN), (2) the bounding box classification and regression head and (3) the segmentation head. (Figure 3. Sketch of the matching procedure: the reference encoding is reduced to a vector by average pooling (1) and the point-by-point absolute difference to the scene encoding is computed (2); the concatenated (3) scene encoding and reference features are reduced by a 1 × 1 convolution (4) before feeding them to the network heads.) In the following, we briefly review the key components of Mask R-CNN and then introduce our extensions. Mask R-CNN Mask R-CNN is a two-stage object detector that consists of a backbone feature extractor and multiple heads operating on these features (see Fig. 2A). We choose a ResNet50 [30] with Feature Pyramid Networks (FPN) [42] as our backbone. The heads consist of two stages. First, the region proposal network (RPN) is applied convolutionally across the image to predict possible object locations in the scene. The highest scoring region proposals are then cropped from the backbone feature maps and used as inputs for the bounding box classification (CLS) and regression (BBOX) head as well as the instance masking head (MASK). Siamese feature pyramid networks In the conventional object detection/instance segmentation setting, the set of possible categories is known in advance, so the task of the backbone is to extract useful features for the subsequent detection and segmentation stages. In contrast, in the one-shot setting the information on which objects to detect and segment is provided in the form of a reference image, which can contain an object category the system has not been trained on.
To adapt to this situation, our backbone not only extracts useful features from the scene image but also computes a similarity metric to the reference at each possible location. To do so, we follow the basic idea of Siamese networks [36] and apply the same backbone (ResNet50 with FPN) with shared weights to extract features from both the reference and the scene. These features are then matched pixel-wise as described below.

Feature matching

The feature pyramid network produces image features at multiple scales, hence we perform the following matching procedure at each scale of the pyramid (Fig. 3):

1. Pool the features of the reference image over space using average pooling to obtain a vector embedding of the category to be detected and segmented.
2. At every spatial position of the scene representation, compute the absolute difference between the features of the reference and those of the scene.
3. Concatenate the scene representation and the pixel-wise distance between the two.
4. Reduce the number of features by a 1 × 1 convolution.

The resulting features are then used as a drop-in replacement for the original feature pyramid, as they have the same dimensionality. The key difference is that they do not only encode the content of the scene image, but also its similarity to the reference image, which forms the basis for the subsequent heads to generate object proposals, classify matches vs. non-matches, and generate instance masks.
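As an illustration, here is a minimal PyTorch sketch of this matching step at a single pyramid level. It is not the authors' code: the module and argument names are made up for this example, and the channel width of 256 is an assumption (the common FPN default).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureMatching(nn.Module):
    """Sketch of the four-step matching procedure at one pyramid level."""

    def __init__(self, channels=256):  # 256 assumed: the usual FPN width
        super().__init__()
        # Step 4: 1x1 convolution reducing the concatenated features back
        # to the original pyramid dimensionality (drop-in replacement).
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, scene, reference):
        # scene:     (B, C, H, W) features of the query image
        # reference: (B, C, h, w) features of the reference crop
        # Step 1: average-pool the reference into a category embedding.
        emb = F.adaptive_avg_pool2d(reference, 1)   # (B, C, 1, 1)
        # Step 2: point-wise absolute difference, broadcast over space.
        diff = (scene - emb).abs()                  # (B, C, H, W)
        # Step 3: concatenate scene features and the difference map.
        merged = torch.cat([scene, diff], dim=1)    # (B, 2C, H, W)
        # Step 4: reduce the channel count.
        return self.reduce(merged)
```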
Head architecture

We use the same region proposal network (RPN) as Mask R-CNN, changing only its inputs as described above and the way examples are generated during training (described below). We also use the same classification and bounding box regression head as Mask R-CNN, but change the classification from an 80-way class discrimination to a binary match/non-match discrimination. Similarly, for the mask branch we generate only a single instance mask instead of one per potential class.

Implementation details

Our system is based on the Matterport implementation of Mask R-CNN [2]. We provide all details in Appendix 1.

Experiments

We train Siamese Mask R-CNN jointly on object detection and instance segmentation in the visual search setting. We evaluate the trained models both on previously seen and unseen (one-shot) categories using splits of MS-COCO.

Training

Pre-training backbone. We pre-train the ResNet backbone on image classification on a reduced subset of ImageNet, which contains images from the 687 ImageNet categories without correspondence in MS-COCO; hence we refer to it as ImageNet-687. Pre-training on this reduced set ensures that we do not use any label information about the one-shot classes at any training stage.

Training Siamese Mask R-CNN. We train the models using stochastic gradient descent with momentum for 160,000 steps with a batch size of 12 on four NVIDIA P100 GPUs in parallel. We use an initial learning rate of 0.02 and a momentum of 0.9. During the first 1,000 steps, we train only the heads. After that, we train the entire network, including the backbone and all heads, end-to-end. After 120,000 steps, we divide the learning rate by 10.

Construction of mini-batches. During training, a mini-batch contains 12 sets of reference and query images. We first draw the query images at random from the training set and pre-process them in the following way: (1) we resize an image so that the longer side is 1024 px, while keeping the aspect ratio, (2) we zero-pad the smaller side of the image to get a square 1024 × 1024 image, (3) we subtract the mean ImageNet RGB value from each pixel. Next, for each image, we generate a reference image as follows: (1) draw a random category among all categories of the background set present in the image, (2) crop a random instance of the selected category out of any image in the training set (using the bounding box annotation), and (3) resize the reference image so that its longer side is 192 px and zero-pad the shorter side to get a square 192 × 192 image. To enable a quick look-up of reference instances, we created an index that contains a list of categories present in each image.

Labels. We use only the annotations of object instances in the query image that belong to the corresponding reference category. All other objects are treated as background.

Loss function. Siamese Mask R-CNN is trained on the same basic multi-task objective as Mask R-CNN: classification and bounding box loss for the RPN; classification, bounding box and mask loss for each RoI. There are a couple of differences as well. First, the classification losses consist of a binary cross-entropy for the match/non-match classification rather than the 80-way multinomial cross-entropy used for classification on MS-COCO. Second, we found that weighting the individual losses differently improved performance in the one-shot setting. Specifically, we apply the following weights to each component of the loss function: RPN classification loss: 2, RPN bounding box loss: 0.1, RoI classification loss: 2, RoI bounding box loss: 0.5, and mask loss: 1.
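In code, this weighting amounts to a fixed weighted sum of the standard Mask R-CNN loss components. The sketch below only encodes the weights stated above; the loss terms themselves would come from the underlying Mask R-CNN implementation, and the key names are illustrative rather than the identifiers of any particular code base.

```python
# Loss weights from the text; keys are illustrative names, not the
# identifiers used in any particular Mask R-CNN implementation.
LOSS_WEIGHTS = {
    "rpn_class_loss": 2.0,
    "rpn_bbox_loss": 0.1,
    "roi_class_loss": 2.0,  # binary match/non-match cross-entropy
    "roi_bbox_loss": 0.5,
    "mask_loss": 1.0,
}

def total_loss(losses):
    """Combine per-component losses (a dict of scalars) into the training objective."""
    return sum(LOSS_WEIGHTS[name] * value for name, value in losses.items())
```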
Mask R-CNN. For comparison, we also trained the original Mask R-CNN on MS-COCO on all 80 classes for 320,000 steps, using the same hyperparameters as for Siamese Mask R-CNN but without the adjustments to the loss function weights described above.

Evaluation

General procedure. We evaluate the performance of our model using the MS-COCO val 2017 set as a test set (it was not used for training). We do one evaluation run per class split S, using the following procedure:

Figure 4. Object scores can be thought of as posterior probabilities, i.e. the product of image evidence and category prior. Thus, the optimal criterion depends on the prior, but in a one-shot setting, there is no information about the prior.

Baseline: random boxes. As a very naïve baseline, we evaluate the performance of a model predicting random bounding boxes and segmentation masks. To do so, we take the ground-truth bounding boxes and segmentation masks for the category of the reference image and randomly shift the boxes around the image (assigning each box a random confidence value between 0.8 and 1). We keep the ground-truth segmentation masks intact in the shifted boxes. This procedure yields random predictions while preserving certain statistics of the ground-truth annotations (e.g. the number of boxes per image and their sizes).
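A minimal sketch of this baseline is given below. It assumes square 1024 × 1024 images and (y1, x1, y2, x2) box coordinates, and the exact shifting scheme used in the paper may differ in detail.

```python
import numpy as np

def random_box_baseline(gt_boxes, image_size=1024, rng=None):
    """Shift each ground-truth box of the reference category to a random
    location (keeping its size) and assign a confidence in [0.8, 1)."""
    rng = rng or np.random.default_rng()
    shifted, scores = [], []
    for y1, x1, y2, x2 in gt_boxes:
        h, w = y2 - y1, x2 - x1
        ny = rng.integers(0, image_size - h + 1)  # random top-left corner,
        nx = rng.integers(0, image_size - w + 1)  # chosen so the box fits
        shifted.append((ny, nx, ny + h, nx + w))
        scores.append(rng.uniform(0.8, 1.0))
    return shifted, scores
```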
Results

Example-based detection and segmentation

We start by showing our results on the task of object detection and instance segmentation targeted to a single class, which is given by an example. This is essentially a metric learning problem: we learn a similarity metric between image regions and the reference image. This allows the detection and segmentation heads to produce bounding boxes and instance masks for matching objects. As discussed above, this problem is harder than training an object detector for a fixed set of classes, and we therefore simplified the training and evaluation process (see Section 5.2 above).

To put our one-shot results reported below in context, we first trained both Siamese Mask R-CNN and regular Mask R-CNN on the entire MS-COCO data set (Table 1). Our Mask R-CNN implementation performed reasonably, achieving 42.5% mAP50 on detection and 40.1% on instance segmentation. These numbers are not state-of-the-art (due to limited availability of extendable code and pre-trained models), but this does not change our conclusions, since we are interested in performance relative to Mask R-CNN rather than in absolute values. Siamese Mask R-CNN achieved 35.7% mAP50 on detection and 33.4% on instance segmentation using the same backbone, training schedule, etc., but based on examples rather than trained on a fixed set of categories. Thus, we conclude that the proposed Siamese Mask R-CNN architecture can learn object detection and instance segmentation based on examples, but there is room for improvement, suggesting that the example-based setting is more challenging.

One-shot instance segmentation

Next, we report the results of evaluating Siamese Mask R-CNN in the one-shot setting. That is, we train on the background splits without using instances of the one-shot evaluation splits (Section 3) as reference images. These results are shown in Table 2. The average detection mAP50 scores for the one-shot splits are around 17%, while the segmentation ones are around 15%, with some variability between splits. These values are significantly lower than those for the background splits, indicating the difficulty of the one-shot setting. The mAP50 scores for the background splits are slightly higher than those in Table 1, because the former contain only 60 categories while the latter were trained on all 80. Taken together, these results suggest a substantial degree of overfitting on the background classes used during training. This result is in contrast to earlier work on Omniglot [50] that observed good generalization beyond the background set, presumably because Omniglot contains a larger number of categories and its image statistics are simpler.

Figure 5 shows examples of successful Siamese Mask R-CNN predictions for one-shot categories (i.e. categories not used during training). These examples give a feeling for the difficulty of the task: the reference inputs are quite different from the instances in the query image, sometimes they show only part of the reference object, and they are never annotated with ground-truth segmentation masks. To generate bounding boxes and segmentation masks, the model can use only its general knowledge about objects and their boundaries, and the metric learned on the other categories, to compute the visual similarity between the reference and the query instances. For instance, the bus on the right and the horse in the bottom left of Figure 5 are incomplete, and the network has never been provided with ground-truth bounding boxes or instance masks for either horses or buses. Nevertheless, it still finds the correct object in the query image and segments the entire object.

Qualitative analysis

We also show examples of failure cases in Figure 6. The picture that emerges from both successful and failure cases is that the network produces overall very good bounding boxes and segmentation masks, but often fails at targeting them towards the correct category. We elaborate on the challenges of the one-shot setting in the next section.

False positives in the one-shot setting

There is a marked drop in model performance between the background and the one-shot evaluation splits, suggesting some degree of overfitting to the background categories used during training.

Figure 7. Confusion matrix for the Siamese Mask R-CNN model using split S2 for one-shot evaluation. The element (i, j) shows the AP50 of using detections for category i and evaluating them as instances of category j. The histogram below the matrix shows the most commonly confused (or falsely predicted) categories.

If overfitting to background classes were indeed the main issue, we would expect false positives to be biased towards these categories and, in particular, towards those categories that are most frequent in the training set. This seems to be qualitatively the case (Fig. 5). In addition, we quantified this observation by computing a confusion matrix of MS-COCO categories (Fig. 7). The element (i, j) of this matrix corresponds to the AP50 value of detections obtained for reference images of category i, evaluated as if the reference images belonged to category j. If there were no false positives, the off-diagonal elements of the matrix would be zero. The column sums indicate the categories that are most often falsely detected (the histogram of these sums is shown below the matrix). Among the most commonly falsely predicted categories are people, cars, airplanes, clocks, and other categories that are common in the dataset.
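This confusion analysis can be sketched as follows; evaluate_ap50 stands in for an MS-COCO-style AP50 evaluation routine and is not part of the paper's code:

```python
import numpy as np

def confusion_matrix(categories, detections, ground_truth, evaluate_ap50):
    """Entry (i, j): AP50 of detections produced for reference category i,
    scored against the ground-truth instances of category j."""
    n = len(categories)
    conf = np.zeros((n, n))
    for i, ref_cat in enumerate(categories):
        for j, eval_cat in enumerate(categories):
            conf[i, j] = evaluate_ap50(detections[ref_cat], ground_truth[eval_cat])
    # Column sums minus the diagonal highlight categories that are
    # most often falsely predicted.
    falsely_predicted = conf.sum(axis=0) - np.diag(conf)
    return conf, falsely_predicted
```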
Effect of image clutter

Previous work on synthetic data [50] found that cluttered scenes are especially challenging in the one-shot setting. This effect is also present in the current context: both detection and segmentation scores are substantially higher when conditioning on images with a small number of total instances (Figure 8), underscoring the importance of extending the model to robustly process cluttered scenes.

Figure 8. mAP50 scores on the test set (detection and segmentation).

Discussion

We introduced the task of one-shot instance segmentation and proposed a model that combines the Mask R-CNN architecture with a metric learning approach to perform this task. There are two main problems in this task: (1) learning a good metric for one-shot detection of novel objects and (2) transferring the knowledge about bounding boxes and instance masks from known to novel object categories. Our results suggest that in the context of MS-COCO, the first part is more difficult than the second. Overall, bounding boxes and instance masks are of high quality. The relatively weak performance of our current model appears to be caused by its difficulties in classifying whether the detected object is of the same category as the reference. Our observation of a substantial amount of overfitting towards the categories used during training supports this hypothesis.

Our system is not based on the latest and highest-performing object detector, but was rather driven by the availability of code for existing approaches; we expect that incorporating better object detection architectures and larger backbones into our one-shot visual search framework will lead to performance improvements analogous to those reported on the fixed-category problem. However, closing the gap between the fixed-category and the one-shot visual search problems would likely require not just better components for our model, but rather conceptual changes to the model itself and to the training data. Such changes might include larger datasets with more object categories than MS-COCO or more sophisticated approaches to one-shot learning from a relatively small number of background categories.

There are a couple of drawbacks to our current approach, and resolving them is likely to lead to improvements in performance. For instance, during training we currently treat all instances of the one-shot categories as background, which probably encourages the model to suppress their detection even if they match the reference well. In addition, the reference instances are sometimes hard to recognize even for humans, because they are cropped to their bounding box and lack image context, which can be an important cue for recognition. Finally, the system currently relies exclusively on comparing each object proposal to the reference image and performing a match/non-match discrimination. However, one may instead want to do an (N+1)-way classification, assigning each instance to one of the N already known categories or a novel (N+1)st one, and only in the latter case rely on a similarity metric and a binary match/non-match classification.

In summary, one-shot instance segmentation is a hard problem on a diverse real-world dataset like MS-COCO. It requires combining ideas from few-shot/metric learning, object detection and segmentation, and we believe it is a perfect test bed for developing truly general vision systems.

The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.
4,076
1811.11507
2902069582
We tackle one-shot visual search by example for arbitrary object categories: Given an example image of a novel reference object, find and segment all object instances of the same category within a scene. To address this problem, we propose Siamese Mask R-CNN. It extends Mask R-CNN by a Siamese backbone encoding both reference image and scene, allowing it to target detection and segmentation towards the reference category. We use Siamese Mask R-CNN to perform one-shot instance segmentation on MS-COCO, demonstrating that it can detect and segment objects of novel categories it was not trained on, and without using mask annotations at test time. Our results highlight challenges of the one-shot setting: while transferring knowledge about instance segmentation to novel object categories not used during training works very well, targeting the detection and segmentation networks towards the reference category appears to be more difficult. Our work provides a first strong baseline for one-shot instance segmentation and will hopefully inspire further research in this relatively unexplored field.
Few-shot learning has seen great progress in recent years. A classic approach is based on metric learning using Siamese neural networks @cite_22 @cite_58 @cite_90 , which, due to its simplicity, is also the approach we use. The metric learning approach has seen a number of improvements in recent years @cite_56 @cite_89 @cite_51 @cite_60 @cite_15 . Other approaches are based on generative models @cite_27 @cite_68 or ideas from information retrieval @cite_32 , or employ meta learning @cite_4 @cite_28 @cite_95 @cite_21 @cite_0 @cite_38 @cite_52 @cite_8 @cite_6 .
{ "abstract": [ "This paper describes the development of an algorithm for verification of signatures written on a touch-sensitive pad. The signature verification algorithm is based on an artificial neural network. The novel network presented here, called a “Siamese” time delay neural network, consists of two identical networks joined at their output. During training the network learns to measure the similarity between pairs of signatures. When used for verification, only one half of the Siamese network is evaluated. The output of this half network is the feature vector for the input signature. Verification consists of comparing this feature vector with a stored feature vector for the signer. Signatures closer than a chosen threshold to this stored representation are accepted, all other signatures are rejected as forgeries. System performance is illustrated with experiments performed in the laboratory.", "", "", "", "We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.", "", "", "", "", "", "", "Few-shot learning refers to understanding new concepts from only a few examples. We propose an information retrieval-inspired approach for this problem that is motivated by the increased importance of maximally leveraging all the available information in this low-data regime. We define a training objective that aims to extract as much information as possible from each training batch by effectively optimizing over all relative orderings of the batch points simultaneously. In particular, we view each batch point as a query' that ranks the remaining ones based on its predicted relevance to them and we define a model within the framework of structured prediction to optimize mean Average Precision over these rankings. Our method achieves impressive results on the standard few-shot classification benchmarks while is also capable of few-shot retrieval.", "", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. 
Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html ).", "", "People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms—for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world’s alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several “visual Turing tests” probing the model’s creative generalization abilities, which in many cases are indistinguishable from human behavior.", "", "", "", "" ], "cite_N": [ "@cite_22", "@cite_15", "@cite_58", "@cite_38", "@cite_4", "@cite_60", "@cite_8", "@cite_21", "@cite_52", "@cite_68", "@cite_28", "@cite_32", "@cite_6", "@cite_56", "@cite_95", "@cite_27", "@cite_90", "@cite_89", "@cite_0", "@cite_51" ], "mid": [ "2171590421", "", "", "", "2604763608", "", "", "", "", "", "", "2734621483", "", "2088049833", "", "2194321275", "", "", "", "" ] }
One-Shot Instance Segmentation
Humans not only excel at acquiring novel concepts from a small number of training examples (few-shot learning), but can also readily point to such objects (object detection) and draw their outlines (instance segmentation). In recent years, machine vision has made substantial advances in one-shot learning [38,79,24], with a strong focus on image classification in a discriminative setting. Similarly, a lot of progress has been made on object detection and instance segmentation [29,59], but both tasks are still very data-hungry, and the proposed approaches perform well only for a small number of object classes for which enough annotated examples are available. In this paper, we work towards taking the one-shot setting to real-world instance segmentation: We learn to detect and segment arbitrary object categories (not necessarily included in the training set) based on a single visual example (Fig. 1). That is, given an arbitrary query image and a single reference instance, the goal is to generate a bounding box and an instance mask for every instance in the image that is of the same object category as the reference. This type of visual search task creates new challenges for computer vision algorithms, as methods from metric and few-shot learning have to be incorporated into the notoriously hard tasks of object identification and segmentation. Our approach takes ideas from metric learning (Siamese networks) and combines them with Mask R-CNN, a state-of-the-art object detection and segmentation system (Fig. 2).

Our main contributions are as follows:

• We present Siamese Mask R-CNN for performing one-shot instance segmentation. It extends Mask R-CNN [29] with a Siamese backbone and a matching procedure to perform visual search.
• We introduce a novel one-shot visual search task, requiring object detection and instance segmentation based on a single visual example.
• We establish an evaluation protocol for this task and evaluate our model on MS-COCO [44]. We show that segmenting novel object categories works well even without mask annotations at test time, while targeting the detection towards the reference category is the main challenge.
• We will make code and pre-trained models available.

Related work

Our approach lies at the intersection of few-shot/metric learning, object detection/visual search, and instance segmentation. Each of these aspects has been studied extensively, as we review in the following. The novelty of our approach is the combination of all these aspects into a new problem.

Object detection. Object detection is a classical computer vision problem [22,31,82,4]. Modern work can be split broadly into two general approaches: Single-stage detectors [47,66,67,68,43] are usually very fast, while multi-stage detectors [26,25,71,29] perform a coarse proposal step followed by a fine-grained classification and are usually more accurate. Most state-of-the-art systems are based on Faster R-CNN [71], a two-step object detector that generates proposals, for each of which it crops features out of the last feature map of a backbone. Feature Pyramid Networks [42] are a popular extension that uses feature maps at multiple spatial resolutions to increase scale invariance.

Instance segmentation. In contrast to semantic segmentation [49,55,73,60,90,9,15,48], where every pixel is classified into a category, instance segmentation additionally requires discriminating between individual object instances [27,18,28,62,19,39,63,72,5,14,23,29,45,70,37].
There is related, but not directly comparable work on few-shot object detection. Some work focuses on settings with a few (more than one) annotated training images per category @cite_16 @cite_83 , while others tackle the zero-shot setting based on only a textual description of the reference @cite_40 @cite_75 . Most closely related to our work is concurrent work based on Siamese networks for one-shot detection on an Omniglot-based dataset and for audio data @cite_86 , as well as work on fine-grained bird classification and localization in ImageNet images @cite_47 , both of which tend to have only one or few instances per image. In contrast, we work on potentially cluttered real-world images.
{ "abstract": [ "We introduce and tackle the problem of zero-shot object detection (ZSD), which aims to detect object classes which are not observed during training. We work with a challenging set of object classes, not restricting ourselves to similar and or fine-grained categories as in prior works on zero-shot classification. We present a principled approach by first adapting visual-semantic embeddings for ZSD. We then discuss the problems associated with selecting a background class and motivate two background-aware approaches for learning robust detectors. One of these models uses a fixed background class and the other is based on iterative latent assignments. We also outline the challenge associated with using a limited number of training classes and propose a solution based on dense sampling of the semantic label space using auxiliary data with a large number of categories. We propose novel splits of two standard detection datasets – MSCOCO and VisualGenome, and present extensive empirical results in both the traditional and generalized zero-shot settings to highlight the benefits of the proposed methods. We provide useful insights into the algorithm and conclude by posing some open questions to encourage further research.", "", "We consider the task of weakly supervised one-shot detection. In this task, we attempt to perform a detection task over a set of unseen classes, when training only using weak binary labels that indicate the existence of a class instance in a given example. The model is conditioned on a single exemplar of an unseen class and a target example that may or may not contain an instance of the same class as the exemplar. A similarity map is computed by using a Siamese neural network to map the exemplar and regions of the target example to a latent representation space and then computing cosine similarity scores between representations. An attention mechanism weights different regions in the target example, and enables learning of the one-shot detection task using the weaker labels alone. The model can be applied to detection tasks from different domains, including computer vision object detection. We evaluate our attention Siamese networks on a one-shot detection task from the audio domain, where it detects audio keywords in spoken utterances. Our model considerably outperforms a baseline approach and yields a 42.6 average precision for detection across 10 unseen classes. Moreover, architectural developments from computer vision object detection models such as a region proposal network can be incorporated into the model architecture, and results show that performance is expected to improve by doing so.", "Distance metric learning (DML) has been successfully applied to object classification, both in the standard regime of rich training data and in the few-shot scenario, where each category is represented by only a few examples. In this work, we propose a new method for DML that simultaneously learns the backbone network parameters, the embedding space, and the multi-modal distribution of each of the training categories in that space, in a single end-to-end training process. Our approach outperforms state-of-the-art methods for DML-based object classification on a variety of standard fine-grained datasets. Furthermore, we demonstrate the effectiveness of our approach on the problem of few-shot object detection, by incorporating the proposed DML architecture as a classification head into a standard object detection model. 
We achieve the best results on the ImageNet-LOC dataset compared to strong baselines, when only a few training examples are available. We also offer the community a new episodic benchmark based on the ImageNet dataset for the few-shot object detection task.", "Recent advances in object detection are mainly driven by deep learning with large-scale detection benchmarks. However, the fully-annotated training set is often limited for a target detection task, which may deteriorate the performance of deep detectors. To address this challenge, we propose a novel low-shot transfer detector (LSTD) in this paper, where we leverage rich source-domain knowledge to construct an effective target-domain detector with very few training examples. The main contributions are described as follows. First, we design a flexible deep architecture of LSTD to alleviate transfer difficulties in low-shot detection. This architecture can integrate the advantages of both SSD and Faster RCNN in a unified deep framework. Second, we introduce a novel regularized transfer learning framework for low-shot detection, where the transfer knowledge (TK) and background depression (BD) regularizations are proposed to leverage object knowledge respectively from source and target domains, in order to further enhance fine-tuning with a few target images. Finally, we examine our LSTD on a number of challenging low-shot detection experiments, where LSTD outperforms other state-of-the-art approaches. The results demonstrate that LSTD is a preferable deep detector for low-shot scenarios.", "" ], "cite_N": [ "@cite_40", "@cite_83", "@cite_86", "@cite_47", "@cite_16", "@cite_75" ], "mid": [ "2963936013", "", "2782826317", "2808388233", "2788210750", "" ] }
One-Shot Instance Segmentation
Humans do not only excel at acquiring novel concepts from a small number of training examples (few-shot learning), but can also readily point to such objects (object detection) and draw their outlines (instance segmentation). In recent years, machine vision has made substantial advances in one-shot learning [38,79,24] with a strong focus on image classification in a discriminative setting. Similarly, a lot of progress has been made on object detection and instance segmentation [29,59], but both tasks are still very data-hungry and the proposed approaches perform well only for a small number of object classes, for which enough annotated examples are available. In this paper, we work towards taking the one-shot setting to real-world instance segmentation: We learn to detect and segment arbitrary object categories (not necessarily included in the training set) based on a single visual example ( Fig. 1). That is, given an arbitrary query image and a single reference instance, the goal is to generate a bounding box and an instance mask for every instance in the image that is of the same object category as the reference. This type of visual search task creates new challenges for computer vision algorithms, as methods from metric and few-shot learning have to be incorporated into the notoriously hard tasks of object identification and segmentation. Our approach is based on taking ideas from metric learning (Siamese networks) and combining them with Mask R-CNN, a state-of-the-art object detection and segmentation system (Fig. 2). Our main contributions are as follows: • We present Siamese Mask R-CNN for performing oneshot instance segmentation. It extends Mask R-CNN [29] with a Siamese backbone and a matching procedure to perform visual search. • We introduce a novel one-shot visual search task, requiring object detection and instance segmentation based on a single visual example. • We establish an evaluation protocol for this task and evaluate our model on MS-COCO [44]. We show that segmenting novel object categories works well even without mask annotations at test time, while targeting the detection towards the reference category is the main challenge. • We will make code and pre-trained models available. Related work Our approach lies at the intersection of few-shot/metric learning, object detection/visual search, and instance segmentation. Each of these aspects has been studied extensively, as we review in the following. The novelty of our approach is the combination of all these aspects into a new problem. Object detection. Object detection is a classical computer vision problem [22,31,82,4]. Modern work can be split broadly into two general approaches: Single stage detectors [47,66,67,68,43] are usually very fast, while multistage detectors [26,25,71,29] perform a coarse proposal step followed by a fine-grained classification, and are usually more accurate. Most state-of-the-art systems are based on Faster R-CNN [71], a two-step object detector that generates proposals, for each of which it crops features out of the last feature map of a backbone. Feature Pyramid Networks [42] are a popular extension that uses feature maps at multiple spatial resolutions to increase scale invariance. Instance segmentation. In contrast to semantic segmentation [49,55,73,60,90,9,15,48], where every pixel is classified into a category, instance segmentation additionally requires to discriminate between individual object instances [27,18,28,62,19,39,63,72,5,14,23,29,45,70,37]. 
Most current state-of-the-art systems are based on Mask R-CNN [29,46,1], an extension of Faster R-CNN [71] performing joint object detection and instance segmentation.

Weakly supervised object detection and segmentation. Labeled data is hard to obtain for instance-level tasks like object detection, and even more so for pixel-level tasks like segmentation [44,12,3]. Therefore, various weakly and semi-supervised approaches have been explored [32,88,57,35,92]. Weak supervision is a promising direction for annotation-heavy tasks, hence it has been explored for semantic segmentation [58,57,61,17,88,7,41], object detection [56,91,67] and instance segmentation [35,33,92].

Visual search. Visual search has a long history in perceptual psychology (reviewed, e.g., by [75]), although typically with simple visual patterns, while search for arbitrary objects in real scenes has been addressed only recently [89,87], often using a natural language cue [87].

Few-shot learning. Few-shot learning has seen great progress over the last years. A classic approach is based on metric learning using Siamese neural networks [8,16,36], which, due to its simplicity, is also the approach we use. The metric learning approach has seen a number of improvements in recent years [36,84,79,85,86]. Other approaches are based on generative models [38,76] or ideas from information retrieval [81], or employ meta learning [24,40,52,51,53,54,74,80,69].

Few-shot segmentation. Closely related to our work is one-shot semantic segmentation of images using either an object instance as reference [78,65,20,50] or a texture [83]. However, the key difference is that these systems perform pixel-level classification and cannot distinguish individual instances. The only work on one-shot instance segmentation we are aware of tracks an object instance across a video sequence based on a small number of annotated frames [10,11], which differs from our setup in that a single object is to be tracked, for which ground-truth annotations are available.

Few-shot object detection. There is related, but not directly comparable, work on few-shot object detection. Some work focuses on settings with few (more than one) annotated training images per category [13,21], while others tackle the zero-shot setting based on only a textual description of the reference [6,64]. Most closely related to our work is concurrent work based on Siamese networks for one-shot detection on an Omniglot-based dataset and for audio data [34], as well as work on fine-grained bird classification and localization in ImageNet images [77], which tend to have only one or a few instances per image. In contrast, we work on potentially cluttered real-world images.

One-shot object detection and instance segmentation on MS-COCO

We define a one-shot object detection and instance segmentation task on MS-COCO: given a reference image showing a close-up of an example object, find all instances of objects belonging to the same category in a separate query image, which shows an entire visual scene potentially containing many objects. To work in a one-shot setting, we split the 80 object categories in MS-COCO into background and one-shot evaluation splits, containing 60 and 20 categories, respectively. We generate four such background/evaluation splits by starting with the first, second, third, or fourth category, respectively, and including every fourth category in the one-shot evaluation split. We call these splits S1-S4; they are given in Table 3 in the Appendix.
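As an illustration, this split construction can be sketched in a few lines of Python; `coco_categories` stands for the ordered list of the 80 MS-COCO category names (the names used here are illustrative, not from our code):

```python
def make_splits(coco_categories, num_splits=4):
    """Build the four background/one-shot splits (S1-S4): split k puts every
    fourth category, starting at index k, into the one-shot evaluation set."""
    splits = []
    for start in range(num_splits):
        one_shot = [c for i, c in enumerate(coco_categories) if i % num_splits == start]
        background = [c for i, c in enumerate(coco_categories) if i % num_splits != start]
        splits.append({"one_shot": one_shot, "background": background})
    return splits
```

With 80 categories this yields 20 one-shot and 60 background categories per split, matching the numbers above.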
Note that this one-shot visual search setup differs from earlier, purely discriminative one-shot learning setups: at training time, the query images can contain objects from the one-shot evaluation categories, but these are neither selected as references nor annotated in any way. We therefore still refer to this setting as one-shot, because no label information is available for these categories during training. Conversely, at test time, the query images contain both known and novel object categories. Taken together, we consider this setup a realistic scenario for an autonomous agent in the real world, which would typically encounter new objects alongside known objects and may encounter unlabeled objects multiple times before they become relevant and label information is provided (think of a household robot seeing a certain type of toy in various parts of the apartment multiple times before you instruct it to go pick it up for you). This setup also produces a number of challenges for evaluation, which we discuss in Section 5.2.

Siamese Mask R-CNN

The key idea behind Siamese Mask R-CNN is to detect and segment object instances based on a single visual example of some object category. Thus, it must deal with arbitrary, potentially previously unseen object categories, rather than with a fixed set of categories. We base Siamese Mask R-CNN on Mask R-CNN [29] with feature pyramid networks [42]. To adapt it to the visual search task, we turn the backbone into a Siamese network (hence the prefix "Siamese"), which extracts features from both the reference image and the scene and computes a pixel-wise similarity between the two. The image features and the similarity score form the input to three heads: (1) the Region Proposal Network (RPN), (2) the bounding box classification and regression head, and (3) the segmentation head. In the following, we briefly review the key components of Mask R-CNN and then introduce our extensions.

Figure 3. Sketch of the matching procedure. The reference encoding is reduced to a vector by average pooling (1), and the point-by-point absolute difference to the scene encoding is computed (2). The concatenated (3) scene encoding and reference features are reduced by a 1 × 1 convolution (4) before being fed to the network heads.

Mask R-CNN

Mask R-CNN is a two-stage object detector that consists of a backbone feature extractor and multiple heads operating on these features (see Fig. 2A). We choose a ResNet50 [30] with Feature Pyramid Networks (FPN) [42] as our backbone. The heads consist of two stages. First, the region proposal network (RPN) is applied convolutionally across the image to predict possible object locations in the scene. The highest-scoring region proposals are then cropped from the backbone feature maps and used as inputs for the bounding box classification (CLS) and regression (BBOX) head as well as the instance masking head (MASK).

Siamese feature pyramid networks

In the conventional object detection/instance segmentation setting, the set of possible categories is known in advance, so the task of the backbone is to extract useful features for the subsequent detection and segmentation stages. In contrast, in the one-shot setting, the information about which objects to detect and segment is provided in the form of a reference image, which can contain an object category the system has not been trained on. To adapt to this situation, our backbone does not only extract useful features from the scene image, but also computes a similarity metric to the reference at each possible location. To do so, we follow the basic idea of Siamese networks [36] and apply the same backbone (ResNet50 with FPN) with shared weights to extract features from both the reference and the scene. These features are then matched pixel-wise as described below.

Feature matching

The feature pyramid network produces image features at multiple scales; hence we perform the following matching procedure at each scale of the pyramid (Fig. 3):
1. Pool the features of the reference image over space using average pooling to obtain a vector embedding of the category to be detected and segmented.
2. At every spatial position of the scene representation, compute the absolute difference between the features of the reference and those of the scene.
3. Concatenate the scene representation and the pixel-wise distance between the two.
4. Reduce the number of features by a 1 × 1 convolution.
The resulting features are then used as a drop-in replacement for the original feature pyramid, as they have the same dimensionality. The key difference is that they do not only encode the content of the scene image, but also its similarity to the reference image, which forms the basis for the subsequent heads to generate object proposals, classify matches vs. non-matches, and generate instance masks.
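For illustration, this matching step maps directly onto a few tensor operations. The following is a minimal PyTorch sketch of a single pyramid level; it is our illustration, not our actual implementation (which builds on the Matterport Mask R-CNN code base in Keras/TensorFlow), and the class and argument names are ours:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureMatching(nn.Module):
    """Pixel-wise matching of reference and scene features at one FPN level."""

    def __init__(self, channels):
        super().__init__()
        # Step 4: a 1x1 convolution reduces 2C channels back to C
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, scene, reference):
        # scene: (B, C, H, W) feature map; reference: (B, C, h, w) feature map
        # Step 1: average-pool the reference to a single embedding vector
        ref_vec = F.adaptive_avg_pool2d(reference, output_size=1)  # (B, C, 1, 1)
        # Step 2: absolute difference at every scene position (broadcasting)
        diff = (scene - ref_vec).abs()                             # (B, C, H, W)
        # Step 3: concatenate scene features and pixel-wise distances
        merged = torch.cat([scene, diff], dim=1)                   # (B, 2C, H, W)
        # Step 4: reduce back to the original channel count
        return self.reduce(merged)                                 # (B, C, H, W)
```

The output has the same shape as the scene features, which is what allows it to serve as a drop-in replacement for the original feature pyramid.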
Head architecture

We use the same region proposal network (RPN) as Mask R-CNN, changing only its inputs as described above and the way examples are generated during training (described below). We also use the same classification and bounding box regression head as Mask R-CNN, but change the classification from an 80-way class discrimination to a binary match/non-match discrimination. Similarly, the mask branch generates only a single instance mask instead of one per potential class.

Implementation details

Our system is based on the Matterport implementation of Mask R-CNN [2]. We provide all details in Appendix 1.

Experiments

We train Siamese Mask R-CNN jointly on object detection and instance segmentation in the visual search setting. We evaluate the trained models both on previously seen and unseen (one-shot) categories using splits of MS-COCO.

Training

Pre-training the backbone. We pre-train the ResNet backbone on image classification on a reduced subset of ImageNet, which contains images from the 687 ImageNet categories without correspondence in MS-COCO; hence we refer to it as ImageNet-687. Pre-training on this reduced set ensures that we do not use any label information about the one-shot classes at any training stage.

Training Siamese Mask R-CNN. We train the models using stochastic gradient descent with momentum for 160,000 steps with a batch size of 12 on four NVIDIA P100 GPUs in parallel. We use an initial learning rate of 0.02 and a momentum of 0.9. During the first 1,000 steps, we train only the heads. After that, we train the entire network, including the backbone and all heads, end-to-end. After 120,000 steps, we divide the learning rate by 10.

Construction of mini-batches. During training, a mini-batch contains 12 sets of reference and query images. We first draw the query images at random from the training set and pre-process them in the following way: (1) we resize an image so that the longer side is 1024 px while keeping the aspect ratio, (2) we zero-pad the smaller side of the image to obtain a square 1024 × 1024 image, and (3) we subtract the mean ImageNet RGB value from each pixel. Next, for each image, we generate a reference image as follows: (1) draw a random category among all categories of the background set present in the image, (2) crop a random instance of the selected category out of any image in the training set (using the bounding box annotation), and (3) resize the reference image so that its longer side is 192 px and zero-pad the shorter side to obtain a square 192 × 192 image. To enable a quick look-up of reference instances, we created an index that contains a list of categories present in each image.
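For illustration, this pre-processing can be sketched as follows. This is a simplified sketch, not our actual data pipeline: the mean-pixel constant shown is the one used in the Matterport code base (an assumption here), the helper names are ours, and the mean is subtracted before padding so that the padded border stays exactly zero.

```python
import cv2
import numpy as np

# Mean ImageNet RGB value; this particular constant is taken from the
# Matterport Mask R-CNN code base (an assumption, not stated in the text).
MEAN_PIXEL = np.array([123.7, 116.8, 103.9], dtype=np.float32)

def resize_and_pad(image, target):
    """Resize an RGB image (H, W, 3) so its longer side equals `target`
    (keeping the aspect ratio), then zero-pad to a square target x target."""
    h, w = image.shape[:2]
    scale = target / max(h, w)
    resized = cv2.resize(image, (int(round(w * scale)), int(round(h * scale))))
    padded = np.zeros((target, target, 3), dtype=resized.dtype)
    padded[:resized.shape[0], :resized.shape[1]] = resized
    return padded

def preprocess_query(image):
    """Query images: subtract the mean pixel, resize, and pad to 1024 x 1024."""
    return resize_and_pad(image.astype(np.float32) - MEAN_PIXEL, 1024)

def preprocess_reference(image, bbox):
    """References: crop an annotated instance (bbox = (x, y, w, h)),
    then resize and pad to 192 x 192."""
    x, y, w, h = [int(v) for v in bbox]
    crop = image[y:y + h, x:x + w]
    return resize_and_pad(crop.astype(np.float32), 192)
```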
Labels. We use only the annotations of object instances in the query image that belong to the corresponding reference category. All other objects are treated as background.

Loss function. Siamese Mask R-CNN is trained on the same basic multi-task objective as Mask R-CNN: classification and bounding box loss for the RPN; classification, bounding box, and mask loss for each RoI. There are a couple of differences as well. First, the classification losses consist of a binary cross-entropy for the match/non-match classification rather than the 80-way multinomial cross-entropy used for classification on MS-COCO. Second, we found that weighting the individual losses differently improved performance in the one-shot setting. Specifically, we apply the following weights to the components of the loss function: RPN classification loss: 2, RPN bounding box loss: 0.1, RoI classification loss: 2, RoI bounding box loss: 0.5, and mask loss: 1.
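For illustration, the weighted objective amounts to a weighted sum of the five loss terms; the key names below are illustrative, not identifiers from our code:

```python
# Loss weights as stated above; the key names are illustrative.
LOSS_WEIGHTS = {
    "rpn_class": 2.0,   # RPN object/background cross-entropy
    "rpn_bbox": 0.1,    # RPN box regression
    "roi_class": 2.0,   # binary match/non-match cross-entropy per RoI
    "roi_bbox": 0.5,    # RoI box regression
    "mask": 1.0,        # per-pixel instance mask loss
}

def total_loss(losses):
    """Combine the five loss terms with the one-shot weights.
    `losses` maps the names above to scalar loss values (e.g. tensors)."""
    return sum(LOSS_WEIGHTS[name] * value for name, value in losses.items())
```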
Mask R-CNN. For comparison, we also trained the original Mask R-CNN on MS-COCO on all 80 classes for 320,000 steps, using the same hyper-parameters as for Siamese Mask R-CNN but without the adjustments to the loss function weights described above.

Evaluation

General procedure. We evaluate the performance of our model using the MS-COCO val 2017 set as a test set (it was not used for training). We do one evaluation run per class split S.

Figure 4. Object scores can be thought of as posterior probabilities, i.e. the product of image evidence and a category prior. Thus, the optimal criterion depends on the prior, but in a one-shot setting there is no information about the prior.

Baseline: random boxes. As a very naïve baseline, we evaluate the performance of a model predicting random bounding boxes and segmentation masks. To do so, we take the ground-truth bounding boxes and segmentation masks for the category of the reference image and randomly shift the boxes around the image (assigning each box a random confidence value between 0.8 and 1). We keep the ground-truth segmentation masks intact in the shifted boxes. This procedure yields random predictions while preserving certain statistics of the ground-truth annotations (e.g. the number of boxes per image and their sizes).
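A simplified sketch of this baseline is given below; it assumes (x, y, w, h) box coordinates, and any details beyond those stated above (the function name, the output layout) are illustrative:

```python
import random

def random_box_baseline(gt_boxes, image_width, image_height):
    """Shift each ground-truth box of the reference category to a random
    location, keeping its size (and its segmentation mask), and assign a
    random confidence value between 0.8 and 1."""
    predictions = []
    for (_, _, w, h) in gt_boxes:
        x = random.uniform(0, max(0, image_width - w))
        y = random.uniform(0, max(0, image_height - h))
        score = random.uniform(0.8, 1.0)
        predictions.append({"box": (x, y, w, h), "score": score})
    return predictions
```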
Results

Example-based detection and segmentation. We start by showing our results on the task of object detection and instance segmentation targeted to a single class, which is given by an example. This is essentially a metric learning problem: we learn a similarity metric between image regions and the reference image, which allows the detection and segmentation heads to produce bounding boxes and instance masks for matching objects. As discussed above, this problem is harder than training an object detector for a fixed set of classes, and we therefore simplified the training and evaluation process (see Section 5.2 above).

To put our one-shot results reported below in context, we first trained both Siamese Mask R-CNN and regular Mask R-CNN on the entire MS-COCO dataset (Table 1). Our Mask R-CNN implementation performed reasonably, achieving 42.5% mAP50 on detection and 40.1% on instance segmentation. These numbers are not state-of-the-art (due to limited availability of extendable code and pre-trained models), but that does not change the conclusions, since we are interested in performance differences relative to Mask R-CNN rather than in absolute values. Siamese Mask R-CNN achieved 35.7% mAP on detection and 33.4% on instance segmentation using the same backbone, training schedule, etc., but based on examples rather than trained on a fixed set of categories. Thus, we conclude that the proposed Siamese Mask R-CNN architecture can learn object detection and instance segmentation based on examples, but there is room for improvement, suggesting that the example-based setting is more challenging.

One-shot instance segmentation. Next, we report the results of evaluating Siamese Mask R-CNN in the one-shot setting. That is, we train on the background splits without using instances of the one-shot evaluation splits (Section 3) as reference images. These results are shown in Table 2. The average detection mAP50 scores for the one-shot splits are around 17%, while the segmentation scores are around 15%, with some variability between splits. These values are significantly lower than those for the background splits, indicating the difficulty of the one-shot setting. The mAP50 scores for the background splits are slightly higher than those in Table 1 because the former contain only 60 categories, while the latter were trained on all 80. Taken together, these results suggest that we observe a substantial degree of overfitting on the background classes used during training. This result is in contrast to earlier work on Omniglot [50] that observed good generalization beyond the background set, presumably because Omniglot contains a larger number of categories and its image statistics are simpler.

Figure 5 shows examples of successful Siamese Mask R-CNN predictions for one-shot categories (i.e. categories not used during training). These examples give a feeling for the difficulty of the task: the reference inputs are quite different from the instances in the query image, sometimes they show only part of the reference object, and they are never annotated with ground-truth segmentation masks. To generate bounding boxes and segmentation masks, the model can use only its general knowledge about objects and their boundaries, and the metric learned on the other categories to compute the visual similarity between the reference and the query instances. For instance, the bus on the right and the horse in the bottom left of Figure 5 are incomplete, and the network has never been provided with ground-truth bounding boxes or instance masks for either horses or buses. Nevertheless, it still finds the correct object in the query image and segments the entire object.

Qualitative analysis. We also show examples of failure cases in Figure 6. The picture that emerges from both successful and failure cases is that the network produces overall very good bounding boxes and segmentation masks, but often fails at targeting them towards the correct category. We elaborate on the challenges of the one-shot setting in the next section.

False positives in the one-shot setting. There is a marked drop in model performance between the background and the one-shot evaluation splits, suggesting some degree of overfitting to the background categories used during training. If overfitting to background classes were indeed the main issue, we would expect false positives to be biased towards these categories and, in particular, towards those categories that are most frequent in the training set. This seems to be qualitatively the case (Fig. 5). In addition, we quantified this observation by computing a confusion matrix of MS-COCO categories (Fig. 7). The element (i, j) of this matrix corresponds to the AP50 value of detections obtained for reference images of category i, evaluated as if the reference images belonged to category j. If there were no false positives, the off-diagonal elements of the matrix would be zero. The column sums show which categories are most often falsely detected (a histogram of these sums is shown below the matrix). Among the most commonly falsely predicted categories are people, cars, airplanes, clocks, and other categories that are common in the dataset.

Figure 7. Confusion matrix for the Siamese Mask R-CNN model using split S2 for one-shot evaluation. The element (i, j) shows the AP50 of using detections for category i and evaluating them as instances of category j. The histogram below the matrix shows the most commonly confused (or falsely predicted) categories.
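For illustration, the confusion matrix can be computed by re-scoring each set of detections against every category's ground truth. In the sketch below, `average_precision_50` stands for a standard AP@0.5 routine (e.g. one built on pycocotools), and the data layout is illustrative:

```python
import numpy as np

def confusion_matrix(categories, detections_by_ref, ground_truth_by_cat,
                     average_precision_50):
    """Entry (i, j): AP50 of detections produced with references of category i,
    evaluated against ground-truth instances of category j. With no false
    positives, all off-diagonal entries would be zero."""
    n = len(categories)
    cm = np.zeros((n, n))
    for i, ref_cat in enumerate(categories):
        for j, eval_cat in enumerate(categories):
            cm[i, j] = average_precision_50(detections_by_ref[ref_cat],
                                            ground_truth_by_cat[eval_cat])
    # Column sums minus the diagonal highlight commonly falsely predicted categories.
    false_positive_score = cm.sum(axis=0) - np.diag(cm)
    return cm, false_positive_score
```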
Effect of image clutter. Previous work on synthetic data [50] found that cluttered scenes are especially challenging in the one-shot setting. This effect is also present in the current context: both detection and segmentation scores are substantially higher when conditioning on images with a small number of total instances (Figure 8), underscoring the importance of extending the model to robustly process cluttered scenes.

Figure 8. mAP50 score on the test set (detection and segmentation).

Discussion

We introduced the task of one-shot instance segmentation and proposed a model based on combining the Mask R-CNN architecture with a metric learning approach to perform this task. There are two main problems in this task: (1) learning a good metric for one-shot detection of novel objects, and (2) transferring the knowledge about bounding boxes and instance masks from known to novel object categories. Our results suggest that, in the context of MS-COCO, the first part is more difficult than the second. Overall, bounding boxes and instance masks are of high quality. The relatively weak performance of our current model appears to be caused by its difficulties in classifying whether the detected object is of the same category as the reference. Our observation of a substantial amount of overfitting towards the categories used during training supports this hypothesis.

Our system is not based on the latest and highest-performing object detector, but was rather driven by the availability of code for existing approaches; we expect that incorporating better object detection architectures and larger backbones into our one-shot visual search framework will lead to performance improvements analogous to those reported on the fixed-category problem. However, closing the gap between the fixed-category and the one-shot visual search problems would likely require not just better components for our model, but rather conceptual changes to the model itself and to the training data. Such changes might include larger datasets with more object categories than MS-COCO, or more sophisticated approaches to one-shot learning from a relatively small number of background categories.

There are a couple of drawbacks to our current approach, and resolving them is likely to lead to improvements in performance. For instance, during training we currently treat all instances of the one-shot categories as background, which probably encourages the model to suppress their detection even if they match the reference well. In addition, the reference instances are sometimes hard to recognize even for humans, because they are cropped to their bounding box and lack image context, which can be an important cue for recognition. Finally, the system currently relies exclusively on comparing each object proposal to the reference image and performing a match/non-match discrimination. However, one may instead want to do an N+1-way classification, assigning each instance to one of the N already known categories or to a novel, (N+1)st one, and only in the latter case rely on a similarity metric and a binary match/non-match classification.

In summary, one-shot instance segmentation is a hard problem on a diverse real-world dataset like MS-COCO. It requires combining ideas from few-shot/metric learning, object detection, and segmentation, and we believe it is a perfect test bed for developing truly general vision systems.
4,076